Compare commits


276 Commits

github-actions[bot]
b720568cf3 flake.lock: Update
Flake lock file updates:

• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/0f76631' (2026-04-09)
  → 'github:NixOS/nixpkgs/1304392' (2026-04-11)
2026-04-12 00:39:02 +00:00
Florian Preinstorfer
61c9ae81e4 Remove old migrations for the debian package
Those were required to streamline new installs with updates before 0.27.
Since 0.29 will not allow direct upgrades from <0.27, we might as well
remove them.
2026-04-11 20:35:15 +02:00
Kristoffer Dalby
1f9635c2ec ci: restrict test generator to .go files
The integration test generator scanned all files under integration/
with ripgrep, matching func Test* patterns in README.md code examples
(TestMyScenario, TestRouteAdvertisementBasic). Add --type go to limit
the search to Go source files.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
fd1074160e CHANGELOG: document user-facing changes from #3180
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
d66d3a4269 oidc: add confirmation page for node registration
Render an interstitial showing device hostname, OS, and machine-key
fingerprint before finalising OIDC registration. The user must POST
to /register/confirm/{auth_id} with a CSRF double-submit cookie.
Removes the TODO at oidc.go:201.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
d5a4e6e36a debug: route statsviz through tsweb.Protected
Build the statsviz Server directly and wrap its Index/Ws handlers in
tsweb.Protected instead of calling statsviz.Register on the raw mux
which bypasses AllowDebugAccess.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
8c6cb05ab4 noise: pass context to sshActionFollowUp
Select on ctx.Done() alongside auth.WaitForAuth() so the goroutine
exits promptly when the client disconnects instead of parking until
cache eviction.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
42b8c779a0 hscontrol: limit /verify request body size
Wrap req.Body with io.LimitReader bounded to 4 KiB before
io.ReadAll. The DERP verify payload is a few hundred bytes.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
a3c4ad2ca3 types: omit secret fields from JSON marshalling
Add json:"-" to PostgresConfig.Pass, OIDCConfig.ClientSecret, and
CLIConfig.APIKey so they are excluded from json.Marshal output
(e.g. the /debug/config endpoint).
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
0641771128 db: guard UsePreAuthKey with WHERE used=false
Add a row-level check so concurrent registrations with the same
single-use key cannot both succeed. Skip the call on
re-registration where the key is already marked used (#2830).
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
f7d8bb8b3f app: remove gRPC reflection from remote server
Reflection is a streaming RPC and bypasses the unary auth
interceptor on the remote (TCP) gRPC server. Remove it there;
the unix-socket server retains it for local debugging.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
adb9467f60 oidc: validate state parameter length in callback
getCookieName sliced value[:6] unconditionally; a short state query
parameter caused a panic recovered by chi middleware. Reject states
shorter than cookieNamePrefixLen with 400.
2026-04-10 14:09:57 +01:00
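The fix amounts to a length guard before the slice. A sketch, with the constant value 6 taken from the value[:6] slice the commit mentions:

```go
package main

import (
	"errors"
	"fmt"
)

const cookieNamePrefixLen = 6

// getCookieName sketches the fixed behaviour: reject short state
// values instead of panicking on an out-of-range slice expression.
func getCookieName(state string) (string, error) {
	if len(state) < cookieNamePrefixLen {
		return "", errors.New("state parameter too short")
	}
	return state[:cookieNamePrefixLen], nil
}

func main() {
	if _, err := getCookieName("ab"); err != nil {
		fmt.Println("rejected:", err) // handler responds 400
	}
	name, _ := getCookieName("abcdef123")
	fmt.Println(name) // abcdef
}
```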
Kristoffer Dalby
41d70fe87b auth: check machine key on tailscaled-restart fast path
The #2862 restart path returned nodeToRegisterResponse after a
NodeKey-only lookup without verifying MachineKey. Add the same
check handleLogout already performs.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
99767cf805 hscontrol: validate machine key and bind src/dst in SSH check handler
SSHActionHandler now verifies that the Noise session's machine key
matches the dst node before proceeding. The (src, dst) pair is
captured at hold-and-delegate time via a new SSHCheckBinding on
AuthRequest so sshActionFollowUp can verify the follow-up URL
matches. The OIDC non-registration callback requires the
authenticated user to own the src node before approving.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
0d4f2293ff state: replace zcache with bounded LRU for auth cache
Replace zcache with golang-lru/v2/expirable for both the state auth
cache and the OIDC state cache. Add tuning.register_cache_max_entries
(default 1024) to cap the number of pending registration entries.

Introduce types.RegistrationData to replace caching a full *Node;
only the fields the registration callback path reads are retained.
Remove the dead HSDatabase.regCache field. Drop zgo.at/zcache/v2
from go.mod.
2026-04-10 14:09:57 +01:00
Kristoffer Dalby
3587225a88 mapper: fix phantom updateSentPeers on disconnected nodes
When send() is called on a node with zero active connections
(disconnected but kept for rapid reconnection), it returns nil
(success). handleNodeChange then calls updateSentPeers, recording
peers as delivered when no client received the data.

This corrupts lastSentPeers: future computePeerDiff calculations
produce wrong results because they compare against phantom state.
After reconnection, the node's initial map resets lastSentPeers,
but any changes processed during the disconnect window leave
stale entries that cause asymmetric peer visibility.

Return errNoActiveConnections from send() when there are no
connections. handleNodeChange treats this as a no-op (the change
was generated but not deliverable) and skips updateSentPeers,
keeping lastSentPeers consistent with what clients actually
received.
2026-04-10 13:18:56 +01:00
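The sentinel-error pattern can be sketched as follows; nodeConn and the callback are stand-ins for the real batcher state, only the control flow matches the commit:

```go
package main

import (
	"errors"
	"fmt"
)

var errNoActiveConnections = errors.New("no active connections")

type nodeConn struct{ active int }

func (c *nodeConn) send() error {
	if c.active == 0 {
		// Change generated but not deliverable: signal it explicitly
		// instead of returning nil (which looked like success).
		return errNoActiveConnections
	}
	return nil
}

// handleNodeChange records peers as sent only when a client actually
// received the data, keeping lastSentPeers consistent.
func handleNodeChange(c *nodeConn, updateSentPeers func()) error {
	err := c.send()
	if errors.Is(err, errNoActiveConnections) {
		return nil // no-op: skip updateSentPeers
	}
	if err != nil {
		return err
	}
	updateSentPeers()
	return nil
}

func main() {
	sent := false
	_ = handleNodeChange(&nodeConn{active: 0}, func() { sent = true })
	fmt.Println(sent) // false: disconnected node records nothing
}
```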
Kristoffer Dalby
9371b4ee28 mapper: fix empty Peers list not clearing client peer state
When a FullUpdate produces zero visible peers (e.g., a restrictive
policy isolates a node), the MapResponse has Peers: [] (empty
non-nil). The Tailscale client only processes Peers as a full
replacement when len(Peers) > 0 (controlclient/map.go:462), so an
empty list is silently ignored and stale peers persist.

This triggers when a FullUpdate() replaces a pending PolicyChange()
in the batcher. The PolicyChange would have used computePeerDiff to
send explicit PeersRemoved, but the FullUpdate goes through
buildFromChange which sets Peers: [] that the client ignores.

When a full update produces zero peers, compute the peer diff
against lastSentPeers and add explicit PeersRemoved entries so the
client correctly clears its stale peer state.
2026-04-10 13:18:56 +01:00
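The diff step can be sketched as set subtraction over peer IDs; the map-based representation here is illustrative, not the real lastSentPeers type:

```go
package main

import "fmt"

// peersRemoved returns the IDs present in lastSent but absent from the
// currently visible set. When a full update has zero visible peers,
// these become explicit PeersRemoved entries, since the client ignores
// an empty Peers list.
func peersRemoved(lastSent, visible map[int]bool) []int {
	var removed []int
	for id := range lastSent {
		if !visible[id] {
			removed = append(removed, id)
		}
	}
	return removed
}

func main() {
	last := map[int]bool{1: true, 2: true}
	fmt.Println(len(peersRemoved(last, map[int]bool{}))) // 2: both peers removed
}
```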
Kristoffer Dalby
cef5338cfe types/change: panic on Merge with conflicting TargetNode values
Merging two changes targeted at different nodes is not supported
because the result can only carry one TargetNode. The second
target's content would be silently misrouted.

Add a panic guard that catches this at the Merge call site rather
than allowing silent data loss. In production, Merge is only called
with broadcast changes (TargetNode=0) so the guard acts as
insurance against future misuse.
2026-04-10 13:18:56 +01:00
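The guard can be sketched like this; Change here is a stand-in carrying only the TargetNode field the guard inspects:

```go
package main

import "fmt"

type Change struct{ TargetNode uint64 }

// Merge panics when both changes target different non-zero nodes,
// since the merged result can only carry one TargetNode. Merging with
// a broadcast change (TargetNode == 0) is always allowed.
func (c Change) Merge(other Change) Change {
	if c.TargetNode != 0 && other.TargetNode != 0 && c.TargetNode != other.TargetNode {
		panic("change.Merge: conflicting TargetNode values")
	}
	if other.TargetNode != 0 {
		c.TargetNode = other.TargetNode
	}
	return c
}

func main() {
	defer func() { fmt.Println("recovered:", recover() != nil) }()
	Change{TargetNode: 1}.Merge(Change{TargetNode: 2}) // panics
}
```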
Kristoffer Dalby
3529fe0da1 types: fix OIDC identifier path traversal dropping subject
url.JoinPath resolves path-traversal segments like '..' and '.',
which silently drops the OIDC subject from the identifier. For
example, Iss='https://example.com' with Sub='..' produces
'https://example.com' — the subject is lost entirely. This causes
distinct OIDC users to receive colliding identifiers.

Replace url.JoinPath with simple string concatenation using a slash
separator. This preserves the subject literally regardless of its
content. url.PathEscape does not help because dots are valid URL
path characters and are not escaped.
2026-04-10 13:18:56 +01:00
Kristoffer Dalby
4064f13bda types: fix nil panics in Owner() and TailscaleUserID() for orphaned nodes
Owner() on a non-tagged node with nil User returns an invalid
UserView that panics when Name() is called. Add a guard to return
an empty UserView{} when the user is not valid.

TailscaleUserID() calls UserID().Get() without checking Valid()
first, which panics on orphaned nodes (no tags, no UserID). Add a
validity check to return 0 for this invalid state.

Callers should check Owner().Valid() before accessing fields.
2026-04-10 13:18:56 +01:00
Kristoffer Dalby
3037e5eee0 db: fix slice aliasing in migration tag merge
The migration at db.go:680 appends validated tags to existing tags
using append(existingTags, validatedTags...) where existingTags
aliases node.Tags. When node.Tags has spare capacity, append writes
into the shared backing array, and the subsequent slices.Sort
corrupts the original.

Clone existingTags before appending to prevent aliasing.
2026-04-10 13:18:56 +01:00
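The bug and the fix can be demonstrated in isolation; the function names are illustrative, only the append/sort shape matches the migration:

```go
package main

import (
	"fmt"
	"slices"
)

// mergeTagsBuggy reproduces the aliasing bug: when existing has spare
// capacity, append writes into its backing array and the sort then
// reorders the caller's slice.
func mergeTagsBuggy(existing []string, validated ...string) []string {
	merged := append(existing, validated...)
	slices.Sort(merged)
	return merged
}

// mergeTags is the fix: clone before appending, so the original
// backing array is never shared with the merged result.
func mergeTags(existing []string, validated ...string) []string {
	merged := append(slices.Clone(existing), validated...)
	slices.Sort(merged)
	return merged
}

func main() {
	tags := make([]string, 2, 4) // spare capacity, like node.Tags
	copy(tags, []string{"tag:web", "tag:db"})
	mergeTagsBuggy(tags, "tag:ci")
	fmt.Println(tags) // [tag:ci tag:db], original corrupted

	tags = make([]string, 2, 4)
	copy(tags, []string{"tag:web", "tag:db"})
	mergeTags(tags, "tag:ci")
	fmt.Println(tags) // [tag:web tag:db], original intact
}
```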
Kristoffer Dalby
82bb4331f5 state: fix routesChanged mutating input Hostinfo
routesChanged aliases newHI.RoutableIPs into a local variable then
sorts it in place, which mutates the caller's Hostinfo data. The
Hostinfo is subsequently stored on the node, so the mutation
propagates but the input contract is violated.

Clone the slice before sorting to avoid mutating the input.
2026-04-10 13:18:56 +01:00
Kristoffer Dalby
2a2d5c869a types/change: fix slice aliasing in Change.Merge
Merge copies the receiver by value, but the slice headers share the
backing array with the original. When append has spare capacity, it
writes through to the original's memory, and uniqueNodeIDs then
sorts that shared data in place.

Replace append with slices.Concat which always allocates a fresh
backing array, preventing mutation of the receiver's slices.
2026-04-10 13:18:56 +01:00
Kristoffer Dalby
157e3a30fc AGENTS.md: trim to behavioural guidance, drop deprecated sub-agent
Procedural content moves to cmd/hi/README.md and integration/README.md.
Stale references (poll.go:420, mapper/tail.go, notifier/,
quality-control-enforcer, validateAndNormalizeTags) are corrected or
removed.
2026-04-10 12:30:07 +01:00
Kristoffer Dalby
70b622fc68 docs: expand cmd/hi and integration READMEs
Move integration-test runbook and authoring guide into the component
READMEs so the content sits next to the code it describes.
2026-04-10 12:30:07 +01:00
Kristoffer Dalby
742878d172 all: regenerate generated files for new tool versions
The nix dev shell refresh in 758fef9b pulled in protoc-gen-go-grpc
v1.6.1 and newer tailscale.com/cmd/{viewer,cloner}, so rerunning
`make generate` updates the version header comments in the three
affected generated files. No semantic changes.
2026-04-09 18:42:25 +01:00
Kristoffer Dalby
2109674467 nix: update flake inputs and dev shell tool versions
Refresh flake.lock (nixpkgs 2026-03-08 -> 2026-04-09) and bump the
tool pins that live directly in flake.nix:

  * golangci-lint 2.9.0 -> 2.11.4
  * protoc-gen-grpc-gateway 2.27.7 -> 2.28.0 (keeps the dev-shell
    code-gen tool in sync with the grpc-gateway Go module)
  * protobuf-language-server pinned commit bumped to ab4c128

Also replace nodePackages.prettier with the top-level prettier
attribute. nodePackages was removed from nixpkgs in the update and
the dev shell would otherwise fail to evaluate with:

    error: nodePackages has been removed because it was unmaintainable
           within nixpkgs

`nix flake check --all-systems` and `nix build .#headscale` both
pass, and `golangci-lint 2.11.4` reports no new issues on the tree.
2026-04-09 18:42:25 +01:00
Kristoffer Dalby
36a73f8c22 all: update Go dependencies
Routine bump of direct Go dependencies. Notable updates:

  * tailscale.com v1.94.1 -> v1.96.5 (gvisor bumped in lockstep to
    match upstream tailscale go.mod)
  * modernc.org/sqlite v1.44.3 -> v1.48.2, modernc.org/libc v1.67.6
    -> v1.70.0 (updated together as required by the fragile libc
    dependency noted in #2188)
  * google.golang.org/grpc v1.78.0 -> v1.80.0
  * grpc-ecosystem/grpc-gateway/v2 v2.27.7 -> v2.28.0
  * tailscale/hujson, tailscale/squibble, tailscale/tailsql
  * golang.org/x/{crypto,net,sync,oauth2,exp,sys,text,time,term,mod,tools}
  * rs/zerolog, samber/lo, sasha-s/go-deadlock, coreos/go-oidc/v3,
    creachadair/command, go-json-experiment/json, pterm/pterm

Update the nix vendorHash to match the new go.sum. Regenerating capver
against tailscale v1.96.5 produces no diff: v1.96.0 was already
captured in 442fcdbd and the capability version has not changed in
the patch series.

All unit tests and `golangci-lint run --new-from-rev=main` are clean.
2026-04-09 18:42:25 +01:00
Kristoffer Dalby
e40dbe3b28 Dockerfile: bump tailscale DERPer builder to Go 1.26.2
Tailscale main now requires go >= 1.26.2, so building the HEAD derper
image against golang:1.26.1-alpine fails with:

    go: go.mod requires go >= 1.26.2 (running go 1.26.1; GOTOOLCHAIN=local)

Bump Dockerfile.derper to match the earlier fix for Dockerfile.tailscale-HEAD
in 6390fcee so TestDERPVerifyEndpoint can build the derper container
again. This test is the only consumer of Dockerfile.derper, which is why
the failure was scoped to that single integration job.
2026-04-09 18:42:25 +01:00
Jacky
7c756b8201 db: scope DestroyUser to only delete the target user's pre-auth keys
DestroyUser called ListPreAuthKeys(tx) which returns ALL pre-auth keys
across all users, then deleted every one of them. This caused deleting
any single user to wipe out pre-auth keys for every other user.

Extract a ListPreAuthKeysByUser function (consistent with the existing
ListNodesByUser pattern) and use it in DestroyUser to scope key deletion
to the user being destroyed.

Add unit test (table-driven in TestDestroyUserErrors) and integration
test to prevent regression.

Fixes #3154

Co-authored-by: Kristoffer Dalby <kristoffer@dalby.cc>
2026-04-09 08:30:21 +01:00
Kristoffer Dalby
6ae182696f state: fix policy change race in UpdateNodeFromMapRequest
When UpdateNodeFromMapRequest and SetNodeTags race on persistNodeToDB,
the first caller to run updatePolicyManagerNodes detects the tag change
and returns a PolicyChange. The second caller finds no change and falls
back to NodeAdded.

If UpdateNodeFromMapRequest wins the race, it checked
policyChange.IsFull() which is always false for PolicyChange (only sets
IncludePolicy and RequiresRuntimePeerComputation). This caused the
PolicyChange to be dropped, so affected clients never received
PeersRemoved and the stale peer remained in their NetMap indefinitely.

Fix: check !policyChange.IsEmpty() instead, which correctly detects
any non-trivial policy change including PolicyChange().

This fixes the root cause of
TestACLTagPropagation/multiple-tags-partial-removal flaking at ~20% on CI.

Updates #3125
2026-04-08 14:32:08 +01:00
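The gate change can be sketched with a cut-down change type; the two flags here are stand-ins for the fields the commit names:

```go
package main

import "fmt"

// change is a stand-in: a PolicyChange sets only IncludePolicy, so
// IsFull() is false while IsEmpty() is also false.
type change struct {
	FullUpdate    bool
	IncludePolicy bool
}

func (c change) IsFull() bool  { return c.FullUpdate }
func (c change) IsEmpty() bool { return !c.FullUpdate && !c.IncludePolicy }

func main() {
	policyChange := change{IncludePolicy: true}

	// Buggy gate: IsFull() is false for PolicyChange, so it was dropped.
	fmt.Println(policyChange.IsFull()) // false

	// Fixed gate: any non-trivial change propagates.
	fmt.Println(!policyChange.IsEmpty()) // true
}
```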
Kristoffer Dalby
ccddeceeec state: fix GORM not persisting user_id=NULL on tagged node conversion
GORM's struct-based Updates() silently skips nil pointer fields.
When SetNodeTags sets node.UserID = nil to transfer ownership to tags,
the in-memory NodeStore is correct but the database retains the old
user_id value. This causes tagged nodes to remain associated with the
original user in the database, preventing user deletion and risking
ON DELETE CASCADE destroying tagged nodes.

Add Select("*") before Omit() on all three node persistence paths
to force GORM to include all fields in the UPDATE statement, including
nil pointers. This is the same pattern already used in db/ip.go for
IPv4/IPv6 nil handling, and is documented GORM behavior:

  db.Select("*").Omit("excluded").Updates(struct)

The three affected paths are:
- persistNodeToDB: used by SetNodeTags and MapRequest updates
- applyAuthNodeUpdate: used by re-authentication with --advertise-tags
- HandleNodeFromPreAuthKey: used by PAK re-registration

Fixes #3161
2026-04-08 14:32:08 +01:00
Kristoffer Dalby
580dcad683 hscontrol: add tests for SetTags user_id database persistence
Add four tests that verify the tags-as-identity ownership transition
correctly persists to the database when converting a user-owned node
to a tagged node via SetTags:

- TestSetTags_ClearsUserIDInDatabase: verifies user_id is NULL in DB
- TestSetTags_NodeDisappearsFromUserListing: verifies ListNodes by user
- TestSetTags_NodeStoreAndDBConsistency: verifies in-memory and DB agree
- TestSetTags_UserDeletionDoesNotCascadeToTaggedNode: verifies user
  deletion does not cascade-delete tagged nodes

Three of these tests currently fail because GORM's struct-based
Updates() silently skips nil pointer fields, so user_id is never
written as NULL to the database after SetNodeTags clears it in memory.

Updates #3161
2026-04-08 14:32:08 +01:00
Kristoffer Dalby
442fcdbd33 capver: regenerate for tailscale v1.96
go generate ./hscontrol/capver/...

Adds v1.96 (capVer 133) to tailscaleToCapVer and capVerToTailscaleVer,
rolls the 10-version support window forward so MinSupportedCapabilityVersion
is now 109 (v1.78), and refreshes the test fixture accordingly.
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
380f531342 state: trigger PolicyChange on every Connect and Disconnect
Connect and Disconnect previously only appended a PolicyChange when
the affected node was a subnet router (routeChange) or the database
persist returned a full change. For every other node the peers just
received a small PeerChangedPatch{Online: ...} and no filter rules
were recomputed. That was too narrow: a node going offline or coming
online can affect policy compilation in ways beyond subnet routes.

TestGrantCapRelay Phase 4 exposed this. When the cap/relay target node
went down with `tailscale down`, headscale only sent an Online=false
patch, peers never got a recomputed netmap, and their cached
PeerRelay allocation stayed populated until the 120s assertion
timeout. With a PolicyChange queued on Disconnect, peers immediately
receive a full netmap on relay loss and clear PeerRelay as expected;
the symmetric change on Connect lets Phase 5 re-publish the policy
when the relay comes back.

Drop the now-unused routeChange return from the Disconnect gate.

Updates #2180
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
51eed414b4 integration: fix ACL tests for address-family-specific resolve
Address-based aliases (Prefix, Host) now resolve to exactly the literal
prefix and do not expand to include the matching node's other IP
addresses. This means an IPv4-only host definition only produces IPv4
filter rules, and an IPv6-only definition only produces IPv6 rules.

Update TestACLDevice1CanAccessDevice2 and TestACLNamedHostsCanReach to
track which addresses each test case covers via test1Addr/test2Addr/
test3Addr fields and only assert connectivity for that family.
Previously the tests assumed all address families would work regardless
of how the policy aliases were defined, which was true only when
address-based aliases auto-expanded to include all of a node's IPs.

The group test case (identity-based) keeps using IPv4 since tags, users,
groups, autogroups and the wildcard still resolve to both IPv4 and IPv6.

Updates #2180
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
e638cbc9b9 integration/tsic: accept via peer-relay in non-direct ping check
When WithPingUntilDirect(false) is set, the Ping helper should accept
any indirect path, but the substring check only matched "via DERP" and
"via relay". Tailscale peer relay pings output

    pong from ... via peer-relay(ip:port:vni:N) in Nms

which does not contain the "via relay" substring and was therefore
rejected as errTailscalePingNotDERP. TestGrantCapRelay Phase 4 never
passed because of this: even when the data plane was healthy the
helper returned an error.

Commit abe1a3e7 attempted to fix this by adding "via relay" alongside
"via DERP" but missed the "peer-" prefix used by peer relay output.

Add an explicit "via peer-relay" substring check so peer relay pongs
are accepted alongside DERP and plain relay pongs.

Updates #2180
2026-04-08 13:00:22 +01:00
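The classification after the fix can be sketched as a substring check; the helper name and sample output lines are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// isIndirect accepts DERP, plain relay, and peer relay pongs as
// indirect paths. Note "via peer-relay" does not contain the
// substring "via relay", which is why the extra check is needed.
func isIndirect(pingOutput string) bool {
	return strings.Contains(pingOutput, "via DERP") ||
		strings.Contains(pingOutput, "via relay") ||
		strings.Contains(pingOutput, "via peer-relay")
}

func main() {
	out := "pong from node (100.64.0.2) via peer-relay(10.0.0.9:41641:vni:7) in 12ms"
	fmt.Println(isIndirect(out)) // true
}
```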
Kristoffer Dalby
6390fcee79 Dockerfile: bump tailscale HEAD builder to Go 1.26.2
Tailscale main now requires go >= 1.26.2, so building the HEAD image
against golang:1.26.1-alpine fails with:

    go: go.mod requires go >= 1.26.2 (running go 1.26.1; GOTOOLCHAIN=local)

Bump the base image to golang:1.26.2-alpine so `go run ./cmd/hi run`
can build the HEAD container locally again.
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
b52f8cb52f CHANGELOG: document node.expiry and oidc.expiry deprecation
Updates #1711
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
ff29af63f6 servertest: use memnet networking and add WithNodeExpiry option
Replace httptest (real TCP sockets) with tailscale.com/net/memnet
so all connections stay in-process. Wire the client's tsdial.Dialer
to the server's memnet.Network via SetSystemDialerForTest,
preserving the full Noise protocol path.

Also update servertest to use the new Node.Ephemeral.InactivityTimeout
config path introduced in the types refactor, and add WithNodeExpiry
server option for testing default node key expiry behaviour.

Updates #1711
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
7e8930c507 hscontrol: add tests for default node key expiry
Add tests covering the core expiry scenarios:
- Untagged auth key with zero expiry gets configured default
- Tagged nodes ignore node.expiry
- node.expiry=0 disables default (backwards compatible)
- Client-requested expiry takes precedence
- Re-registration refreshes the default expiry

Updates #1711
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
6337a3dbc4 state: apply default node key expiry on registration
Use the node.expiry config to apply a default expiry to non-tagged
nodes when the client does not request a specific expiry. This covers
all registration paths: new node creation, re-authentication, and
pre-auth key re-registration.

Tagged nodes remain exempt and never expire.

Fixes #1711
2026-04-08 13:00:22 +01:00
Kristoffer Dalby
4d0b273b90 types: add node.expiry config, deprecate oidc.expiry
Introduce a structured NodeConfig that replaces the flat
EphemeralNodeInactivityTimeout field with a nested Node section.

Add node.expiry config (default: no expiry) as the unified default key
expiry for all non-tagged nodes regardless of registration method.

Remove oidc.expiry entirely — node.expiry now applies to OIDC nodes
the same as all other registration methods. Using oidc.expiry in the
config is a hard error. determineNodeExpiry() returns nil (no expiry)
unless use_expiry_from_token is enabled, letting state.go apply the
node.expiry default uniformly.

The old ephemeral_node_inactivity_timeout key is preserved for
backwards compatibility.

Updates #1711
2026-04-08 13:00:22 +01:00
Florian Preinstorfer
23a5f1b628 Use pymdownx.magiclink with its default configuration
The docs contain bare links that are not rendered as links without it.
2026-04-02 21:24:27 +02:00
Florian Preinstorfer
44600550c6 Fix invisible selected menu item
A light background with white primary font makes the selected menu entry
unreadable.
2026-04-02 21:24:27 +02:00
Kristoffer Dalby
835db974b5 testdata: strip unused fields from all test data files (23MB -> 4MB)
Strip fields not consumed by any test from all 594 HuJSON test data files:

grant_results/ (248 files, 21MB -> 1.8MB):
  - Remove: timestamp, propagation_wait_seconds, input.policy_file,
    input.grants_section, input.api_endpoint, input.api_method,
    topology.nodes.mts_name, topology.nodes.socket, topology.nodes.user_id,
    captures.commands, captures.packet_filter_matches, captures.whois
  - V14-V16, V26-V36: keep stripped netmap (Peers.Name/AllowedIPs/PrimaryRoutes
    + PacketFilterRules) for via_compat_test.go compatibility
  - V17-V25: strip netmap (old topology, incompatible with via_compat harness)

acl_results/ (215 files, 1.4MB -> 1.2MB):
  - Remove: timestamp, propagation_wait_seconds, input.policy_file,
    input.api_endpoint, input.api_response_code, entire topology section
    (parsed by Go struct but completely ignored — nodes are hardcoded)

routes_results/ (92 files, unchanged — topology is actively used):
  - Remove: timestamp, propagation_wait_seconds, input.policy_file,
    input.api_endpoint, input.api_response_code

ssh_results/ (39 files, unchanged — minimal to begin with):
  - Remove: policy_file
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
30dce30a9d testdata: convert .json to .hujson with header comments
Rename all 594 test data files from .json to .hujson and add
descriptive header comments to each file documenting what policy
rules are under test and what outcome is expected.

Update test loaders in all 5 _test.go files to parse HuJSON via
hujson.Parse/Standardize/Pack before json.Unmarshal.

Add cross-dependency warning to via_compat_test.go documenting
that GRANT-V29/V30/V31/V36 are shared with TestGrantsCompat.

Add .gitignore exemption for testdata HuJSON files.
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
f693cc0851 CHANGELOG: document grants support for 0.29.0
Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
abd2b15db5 policy/v2: clean up dead error variables, stale TODO, and test skip reasons
Remove unused error variables (ErrGrantViaNotSupported,
ErrGrantEmptySources, ErrGrantEmptyDestinations, ErrGrantViaOnlyTag)
and the stale TODO for via implementation. Update compat test skip
reasons to reflect that user:*@passkey wildcard is a known unsupported
feature, not a pending implementation.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
b762e4c350 integration: remove exit node via grant tests
Remove TestGrantViaExitNodeSteering and TestGrantViaMixedSteering.
Exit node traffic forwarding through via grants cannot be validated
with curl/traceroute in Docker containers because Tailscale exit nodes
strip locally-connected subnets from their forwarding filter.

The correctness of via exit steering is validated by:
- Golden MapResponse comparison (TestViaGrantMapCompat with GRANT-V31
  and GRANT-V36) comparing full netmap output against Tailscale SaaS
- Filter rule compatibility (TestGrantsCompat with GRANT-V14 through
  GRANT-V36) comparing per-node PacketFilter rules against Tailscale SaaS
- TestGrantViaSubnetSteering (kept) validates via subnet steering with
  actual curl/traceroute through Docker, which works for subnet routes

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
c36cedc32f policy/v2: fix via grants in BuildPeerMap, MatchersForNode, and ViaRoutesForPeer
Use per-node compilation path for via grants in BuildPeerMap and
MatchersForNode to ensure via-granted nodes appear in peer maps. Fix
ViaRoutesForPeer golden test route inference to correctly resolve via
grant effects.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
6a55f7d731 policy/v2: add via exit steering golden captures and tests
Add golden test data for via exit route steering and fix via exit
grant compilation to match Tailscale SaaS behavior. Includes
MapResponse golden tests for via grant route steering verification.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
bca6e6334d integration: add custom subnet support and fix exit node tests
Add NetworkSpec struct with optional Subnet field to ScenarioSpec.Networks.
When Subnet is set, the Docker network is created with that specific CIDR
instead of Docker's auto-assigned RFC1918 range.

Fix all exit node integration tests to use curl + traceroute. Tailscale
exit nodes strip locally-connected subnets from their forwarding filter
(shrinkDefaultRoute + localInterfaceRoutes), so exit nodes cannot
forward to IPs on their Docker network via the default route alone.
This is by design: exit nodes provide internet access, not LAN access.
To also get LAN access, the subnet must be explicitly advertised as a
route — matching real-world Tailscale deployment requirements.

- TestSubnetRouterMultiNetworkExitNode: advertise usernet1 subnet
  alongside exit route, upgraded from ping to curl + traceroute
- TestGrantViaExitNodeSteering: usernet1 subnet in via grants and
  auto-approvers alongside autogroup:internet
- TestGrantViaMixedSteering: externet subnet in auto-approvers and
  route advertisement for exit traffic

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
0431039f2a servertest: add regression tests for via grant filter rules
Add three tests that verify control plane behavior for grant policies:

- TestGrantViaSubnetFilterRules: verifies the router's PacketFilter
  contains destination rules for via-steered subnets. Without per-node
  filter compilation for via grants, these rules were missing and the
  router would drop forwarded traffic.

- TestGrantViaExitNodeFilterRules: same verification for exit nodes
  with via grants steering autogroup:internet traffic.

- TestGrantIPv6OnlyPrefixACL: verifies that address-based aliases
  (Prefix, Host) resolve to exactly the literal prefix and do not
  expand to include the matching node's other IP addresses. An
  IPv6-only host definition produces only IPv6 filter rules.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
ccd284c0a5 policy/v2: use per-node filter compilation for via grants
Via grants compile filter rules that depend on the node's route state
(SubnetRoutes, ExitRoutes). Without per-node compilation, these rules
were only included in the global filter path which explicitly skips via
grants (compileFilterRules skips grants with non-empty Via fields).

Add a needsPerNodeFilter flag that is true when the policy uses either
autogroup:self or via grants. filterForNodeLocked now uses this flag
instead of usesAutogroupSelf alone, ensuring via grant rules are
compiled per-node through compileFilterRulesForNode/compileViaGrant.

The filter cache also needs to account for route-dependent compilation:

- nodesHavePolicyAffectingChanges now treats route changes as
  policy-affecting when needsPerNodeFilter is true, so SetNodes
  triggers updateLocked and clears caches through the normal flow.

- invalidateGlobalPolicyCache now clears compiledFilterRulesMap
  (the unreduced per-node cache) alongside filterRulesMap when
  needsPerNodeFilter is true and routes changed.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
9db5fb6393 integration: fix error message assertion for invalid ACL action
Action.UnmarshalJSON produces the format
'action="unknown-action" is not supported: invalid ACL action',
not the reversed format the test expected.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
3ca4ff8f3f state,servertest: add grant control plane tests and fix via route ReduceRoutes filtering
Add servertest grant policy control plane tests covering basic grants,
via grants, and cap grants. Fix ReduceRoutes in State to apply route
reduction to non-via routes first, then append via-included routes,
preventing via grant routes from being incorrectly filtered.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
5cd5e5de69 policy/v2: add unit tests for ViaRoutesForPeer
Test via route computation for viewer-peer pairs: self-steering returns
empty, viewer not in source returns empty, peer without advertised
destination returns empty, peer with/without via tag populates
Include/Exclude respectively, mixed prefix and autogroup:internet
destinations, and exit route steering.

7 subtests covering all code paths in ViaRoutesForPeer.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
08d26e541c policy/v2: add unit tests for grant filter compilation helpers
Test companionCapGrantRules, sourcesHaveWildcard, sourcesHaveDangerAll,
srcIPsWithRoutes, the FilterAllowAll fix for grant-only policies,
compileViaGrant, compileGrantWithAutogroupSelf grant paths, and
destinationsToNetPortRange autogroup:internet skipping.

51 subtests across 8 test functions covering all grant-specific code
paths in filter.go that previously had no test coverage.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
d243adaedd types,mapper,integration: enable Taildrive and add cap/drive grant lifecycle test
Add NodeAttrsTaildriveShare and NodeAttrsTaildriveAccess to the node
capability map, enabling Taildrive file sharing when granted via
policy. Add integration test verifying the full cap/drive grant
lifecycle.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
9b1a6b6c05 integration: add cap/relay grant peer relay lifecycle test
Add ConnectToNetwork to the TailscaleClient interface for
multi-network test scenarios and implement peer relay ping support.
Use these to test that cap/relay grants correctly enable peer-to-peer
relay connections between tagged nodes.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
8573ff9158 policy/v2: fix grant-only policies returning FilterAllowAll
compileFilterRules checked only pol.ACLs == nil to decide whether
to return FilterAllowAll (permit-any). Policies that use only Grants
(no ACLs) had nil ACLs, so the function short-circuited before
compiling any CapGrant rules. This meant cap/relay, cap/drive, and
any other App-based grant capabilities were silently ignored.

Check both ACLs and Grants are empty before returning FilterAllowAll.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
a739862c65 integration: add via grant route steering tests
Add integration tests validating that via grants correctly steer
routes to designated nodes per client group:

- TestGrantViaSubnetSteering: two routers advertise the same
  subnet, via grants steer each client group to a specific router.
  Verifies per-client route visibility, curl reachability, and
  traceroute path.

- TestGrantViaExitNodeSteering: two exit nodes, via grants steer
  each client group to a designated exit node. Verifies exit
  routes are withdrawn from non-designated nodes and the client
  rejects setting a non-designated exit node.

- TestGrantViaMixedSteering: cross-steering where subnet routes
  and exit routes go to different servers per client group.
  Verifies subnet traffic uses the subnet-designated server while
  exit traffic uses the exit-designated server.

Also add autogroupp helper for constructing AutoGroup aliases in
grant policy configurations.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
8358017dcf policy/v2,state,mapper: implement per-viewer via route steering
Via grants steer routes to specific nodes per viewer. Until now,
all clients saw the same routes for each peer because route
assembly was viewer-independent. This implements per-viewer route
visibility so that via-designated peers serve routes only to
matching viewers, while non-designated peers have those routes
withdrawn.

Add ViaRouteResult type (Include/Exclude prefix lists) and
ViaRoutesForPeer to the PolicyManager interface. The v2
implementation iterates via grants, resolves sources against the
viewer, matches destinations against the peer's advertised routes
(both subnet and exit), and categorizes prefixes by whether the
peer has the via tag.

Add RoutesForPeer to State which composes global primary election,
via Include/Exclude filtering, exit routes, and ACL reduction.
When no via grants exist, it falls back to existing behavior.

Update the mapper to call RoutesForPeer per-peer instead of using
a single route function for all peers. The route function now
returns all routes (subnet + exit), and TailNode filters exit
routes out of the PrimaryRoutes field for HA tracking.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
28be15f8ea policy/v2: handle autogroup:internet in via grant compilation
compileViaGrant only handled *Prefix destinations, skipping
*AutoGroup entirely. This meant via grants with
dst=[autogroup:internet] produced no filter rules even when the
node was an exit node with approved exit routes.

Switch the destination loop from a type assertion to a type switch
that handles both *Prefix (subnet routes) and *AutoGroup (exit
routes via autogroup:internet). Also check ExitRoutes() in
addition to SubnetRoutes() so the function doesn't bail early
when a node only has exit routes.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
687cf0882f policy/v2: implement autogroup:danger-all support
Add autogroup:danger-all as a valid source alias that matches ALL IP
addresses including non-Tailscale addresses. When used as a source,
it resolves to 0.0.0.0/0 + ::/0 internally but produces SrcIPs: ["*"]
in filter rules. When used as a destination, it is rejected with an
error matching Tailscale SaaS behavior.

Key changes:
- Add AutoGroupDangerAll constant and validation
- Add sourcesHaveDangerAll() helper and hasDangerAll parameter to
  srcIPsWithRoutes() across all compilation paths
- Add ErrAutogroupDangerAllDst for destination rejection
- Remove 3 AUTOGROUP_DANGER_ALL skip entries (K6, K7, K8)

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
4f040dead2 policy/v2: implement grant validation rules matching Tailscale SaaS
Implement comprehensive grant validation: accept empty sources/destinations (they produce no rules), and validate grant ip/app field requirements, capability name format, autogroup constraints, via tag existence, and default route CIDR restrictions.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
54db47badc policy/v2: implement via route compilation for grants
Compile grants with "via" field into FilterRules that are placed only
on nodes matching the via tag and actually advertising the destination
subnets. Key behavior:

- Filter rules go exclusively to via-nodes with matching approved routes
- Destination subnets not advertised by the via node are silently dropped
- App-only via grants (no ip field) produce no packet filter rules
- Via grants are skipped in the global compileFilterRules since they
  are node-specific

Reduces grant compat test skips from 41 to 30 (11 newly passing).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
0e3acdd8ec policy/v2: implement CapGrant compilation with companion capabilities
Compile grant app fields into CapGrant FilterRules matching Tailscale
SaaS behavior. Key changes:

- Generate CapGrant rules in compileFilterRules and
  compileGrantWithAutogroupSelf, with node-specific /32 and /128
  Dsts for autogroup:self grants
- Add reversed companion rules for drive→drive-sharer and
  relay→relay-target capabilities, ordered by original cap name
- Narrow broad CapGrant Dsts to node-specific prefixes in
  ReduceFilterRules via new reduceCapGrantRule helper
- Skip merging CapGrant rules in mergeFilterRules to preserve
  per-capability structure
- Remove ip+app mutual exclusivity validation (Tailscale accepts both)
- Add semantic JSON comparison for RawMessage types and netip.Prefix
  comparators in test infrastructure

Reduces grant compat test skips from 99 to 41 (58 newly passing).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
ebe0f4078d policy/v2: preserve non-wildcard source IPs alongside wildcard ranges
When an ACL source list contains a wildcard (*) alongside explicit
sources (tags, groups, hosts, etc.), Tailscale preserves the individual
IPs from non-wildcard sources in SrcIPs alongside the merged wildcard
CGNAT ranges. Previously, headscale's IPSetBuilder would merge all
sources into a single set, absorbing the explicit IPs into the wildcard
range.

Track non-wildcard resolved addresses separately during source
resolution, then append their individual IP strings to the output
when a wildcard is also present. This fixes the remaining 5 ACL
compat test failures (K01 and M06 subtests).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
dda35847b0 policy/v2: reorder ACL self grants to match Tailscale rule ordering
When an ACL has non-autogroup destinations (groups, users, tags, hosts)
alongside autogroup:self, emit non-self grants before self grants to
match Tailscale's filter rule ordering. ACLs with only autogroup
destinations (self + member) preserve the policy-defined order.

This fixes ACL-A17, ACL-SF07, and ACL-SF11 compat test failures.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
f95b254ea9 policy/v2: exclude exit routes from ReduceFilterRules
Add exit route check in ReduceFilterRules to prevent exit nodes from receiving packet filter rules for destinations that only overlap via exit routes. Remove resolved SUBNET_ROUTE_FILTER_RULES grant skip entries and update error message formatting for grant validation.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
e05f45cfb1 policy/v2: use approved node routes in wildcard SrcIPs
Per Tailscale documentation, the wildcard (*) source includes "any
approved subnets" — the actually-advertised-and-approved routes from
nodes, not the autoApprover policy prefixes.

Change Asterix.resolve() to return just the base CGNAT+ULA set, and
add approved subnet routes as separate SrcIPs entries in the filter
compilation path. This preserves individual route prefixes that would
otherwise be merged by IPSet (e.g., 10.0.0.0/8 absorbing 10.33.0.0/16).

Also swap rule ordering in compileGrantWithAutogroupSelf() to emit
non-self destination rules before autogroup:self rules, matching the
Tailscale FilterRule wire format ordering.

Remove the unused AutoApproverPolicy.prefixes() method.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
995ed0187c policy/v2: add advertised routes to compat test topologies
Add routable_ips and approved_routes fields to the node topology
definitions in all golden test files. These represent the subnet
routes actually advertised by nodes on the Tailscale SaaS network
during data capture:

  Routes topology (92 files, 6 router nodes):
    big-router:     10.0.0.0/8
    subnet-router:  10.33.0.0/16
    ha-router1:     192.168.1.0/24
    ha-router2:     192.168.1.0/24
    multi-router:   172.16.0.0/24
    exit-node:      0.0.0.0/0, ::/0

  ACL topology (199 files, 1 router node):
    subnet-router:  10.33.0.0/16

  Grants topology (203 files, 1 router node):
    subnet-router:  10.33.0.0/16

The route assignments were deduced from the golden data by analyzing
which router nodes receive FilterRules for which destination CIDRs
across all test files, and cross-referenced with the MTS setup
script (setup_grant_nodes.sh).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
927ce418d2 policy/v2: use bare IPs in autogroup:self DstPorts
Use ip.String() instead of netip.PrefixFrom(ip, ip.BitLen()).String()
when building DstPorts for autogroup:self destinations. This produces
bare IPs like "100.90.199.68" instead of CIDR notation like
"100.90.199.68/32", matching the Tailscale FilterRule wire format.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
93d79d8da9 policy: include IPv6 in identity-based alias resolution
AppendToIPSet now adds both IPv4 and IPv6 addresses for nodes, matching Tailscale's FilterRule wire format where identity-based aliases (tags, users, groups, autogroups) resolve to both address families. Update ReduceFilterRules test expectations to include IPv6 entries.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
500442c8f1 policy/v2: convert routes compat tests to data-driven format with Tailscale SaaS captures
Replace 8,286 lines of inline Go test expectations with 92 JSON golden files captured from Tailscale SaaS. The data-driven test driver validates route filtering, auto-approval, HA routing, and exit node behavior against real Tailscale output.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
2fb71690e8 policy/v2: convert ACL compat tests to data-driven format with Tailscale SaaS captures
Replace 9,937 lines of inline Go test expectations with 215 JSON golden files captured from Tailscale SaaS. The new data-driven test driver compares headscale's filter compilation output against real Tailscale behavior for each node in an 8-node topology.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
9f7aa55689 policy/v2: refactor alias resolution to use ResolvedAddresses
Introduce ResolvedAddresses type for structured IP set results. Refactor all Alias.Resolve() methods to return ResolvedAddresses instead of raw IPSets. Restrict identity-based aliases to matching address families, fix nil dereferences in partial resolution paths, and update test expectations for the new IP format (bare IPs, IP ranges instead of CIDR prefixes).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
0fa9dcaff8 policy/v2: add data-driven grants compatibility test with Tailscale SaaS captures
Rename tailscale_compat_test.go to tailscale_acl_compat_test.go to make room for the grants compat test. Add 237 GRANT-*.json golden test files captured from Tailscale SaaS and a data-driven test driver that compares headscale's grant filter compilation against real Tailscale behavior.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
f74ea5b8ed hscontrol/policy/v2: add Grant policy format support
Add support for the Grant policy format as an alternative to ACL format,
following Tailscale's policy v2 specification. Grants provide a more
structured way to define network access rules with explicit separation
of IP-based and capability-based permissions.

Key changes:

- Add Grant struct with Sources, Destinations, InternetProtocols (ip),
  and App (capabilities) fields
- Add ProtocolPort type for unmarshaling protocol:port strings
- Add Grant validation in Policy.validate() to enforce:
  - Mutual exclusivity of ip and app fields
  - Required ip or app field presence
  - Non-empty sources and destinations
- Refactor compileFilterRules to support both ACLs and Grants
- Convert ACLs to Grants internally via aclToGrants() for unified
  processing
- Extract destinationsToNetPortRange() helper for cleaner code
- Rename parseProtocol() to toIANAProtocolNumbers() for clarity
- Add ProtocolNumberToName mapping for reverse lookups

The Grant format allows policies to be written using either the legacy
ACL format or the new Grant format. ACLs are converted to Grants
internally, ensuring backward compatibility while enabling the new
format's benefits.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
53b8a81d48 servertest: support tagged pre-auth keys in test clients
WithTags was defined but never passed through to CreatePreAuthKey.
Fix NewClient to use CreateTaggedPreAuthKey when tags are specified,
enabling tests that need tagged nodes (e.g. via grant steering).

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
15c1cfd778 types: include ExitRoutes in HasNetworkChanges
When exit routes are approved, SubnetRoutes remains empty because exit
routes (0.0.0.0/0, ::/0) are classified separately. Without checking
ExitRoutes, the PolicyManager cache is not invalidated on exit route
approval, causing stale filter rules that lack via grant entries for
autogroup:internet destinations.

Updates #2180
2026-04-01 14:10:42 +01:00
Kristoffer Dalby
a76b4bd46c ci: switch integration tests to ARM runners
Switch all integration test jobs (build, build-postgres, test
template) from ubuntu-latest (x86_64) to ubuntu-24.04-arm (aarch64).

ARM runners on GitHub Actions are free for public repos and tend
to have more consistent performance characteristics than the
shared x86_64 pool. This should reduce flakiness caused by
resource contention on congested runners.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
a9a2001ae7 integration: scale remaining hardcoded timeouts and replace pingAllHelper
Apply CI-aware scaling to all remaining hardcoded timeouts:

- requireAllClientsOfflineStaged: scale the three internal stage
  timeouts (15s/20s/60s) with ScaledTimeout.
- validateReloginComplete: scale requireAllClientsOnline (120s)
  and requireAllClientsNetInfoAndDERP (3min) calls.
- WaitForTailscaleSyncPerUser callers in acl_test.go (3 sites, 60s).
- WaitForRunning callers in tags_test.go (10 sites): switch to
  PeerSyncTimeout() to match convention.
- WaitForRunning/WaitForPeers direct callers in route_test.go.
- requireAllClientsOnline callers in general_test.go and
  auth_key_test.go.

Replace pingAllHelper with assertPingAll/assertPingAllWithCollect:

- Wraps pings in EventuallyWithT so transient docker exec timeouts
  are retried instead of immediately failing the test.
- Timeout scales with the ping matrix size (2s per ping budget for
  2 full sweeps) so large tests get proportionally more time.
- Uses CollectT correctly, fixing the broken EventuallyWithT usage
  in TestEphemeral where the old t.Errorf bypassed CollectT.
- Follows the established assert*/assertWithCollect naming.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
acb8cfc7ee integration: make docker execute and ping timeouts CI-aware
The default docker execute timeout (10s) is the root cause of
"dockertest command timed out" errors across many integration tests
on CI. On congested GitHub Actions runners, docker exec latency
alone can consume 2-5 seconds of this budget before the command
even starts inside the container.

Replace the hardcoded 10s constant with a function that returns
20s on CI, doubling the budget for all container commands
(tailscale status, headscale CLI, curl, etc.).

Similarly, scale the default tailscale ping timeout from 200ms to
400ms on CI. This doubles the per-attempt budget and the docker
exec timeout for pings (from 200ms*5=1s to 400ms*5=2s), giving
more headroom for docker exec overhead.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
f1e5f1346d integration/acl: add tag verification step to TestACLTagPropagationPortSpecific
TestACLTagPropagationPortSpecific failed twice on CI because it jumped
from SetNodeTags directly to checking curl, without first verifying the
tag change was applied on the server. This races against server-side
processing.

Add a tag verification step (matching TestACLTagPropagation's pattern)
and bump the Step 4 timeout from 60s to 90s since port-specific filter
changes require both endpoints to process the new PacketFilter from
the MapResponse while the WireGuard tunnel stays up.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
210f58f62e integration: use CI-scaled timeouts for all EventuallyWithT assertions
Wrap all 329 hardcoded EventuallyWithT timeouts across 12 test files
with integrationutil.ScaledTimeout(), which applies a 2x multiplier
on CI runners. This addresses the systemic issue where hardcoded
timeouts that work locally are insufficient under CI resource
contention.

Variable-based timeouts (propagationTime, assertTimeout in
route_test.go and totalWaitTime in auth_oidc_test.go) are wrapped
at their definition site so all downstream usages benefit.

The retry intervals (second duration parameter) are intentionally
NOT scaled, as they control polling frequency, not total wait time.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
a147b0cd87 integration/acl: use CurlFailFast for all negative curl assertions
Replace Curl() with CurlFailFast() in all negative curl assertions
(where the test expects the connection to fail). CurlFailFast uses
1 retry and 2s max time instead of 3 retries and 5s max, which
avoids wasting time on unnecessary retries when we expect the
connection to be blocked.

This affects 21 call sites across 9 test functions:

- TestACLAllowUser80Dst
- TestACLDenyAllPort80
- TestACLAllowUserDst
- TestACLAllowStarDst
- TestACLNamedHostsCanReach
- TestACLDevice1CanAccessDevice2
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestACLAutogroupSelf
- TestACLPolicyPropagationOverTime

Where possible, the inline Curl+Error pattern is replaced with the
assertCurlFailWithCollect helper introduced in the previous commit.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
a7edcf3b0f integration: add CI-scaled timeouts and curl helpers for flaky ACL tests
Add ScaledTimeout to scale EventuallyWithT timeouts by 2x on CI,
consistent with the existing PeerSyncTimeout (60s/120s) and
dockertestMaxWait (300s/600s) conventions.

Add assertCurlSuccessWithCollect and assertCurlFailWithCollect helpers
following the existing *WithCollect naming convention.
assertCurlFailWithCollect uses CurlFailFast internally for aggressive
timeouts, avoiding wasted retries when expecting blocked connections.

Apply these to the three flakiest ACL tests:

- TestACLTagPropagation: swap NetMap and curl verification order so
  the fast NetMap check (confirms MapResponse arrived) runs before
  the slower curl check. Use curl helpers and scaled timeouts.

- TestACLTagPropagationPortSpecific: use curl helpers and scaled
  timeouts.

- TestACLHostsInNetMapTable: scale the 10s EventuallyWithT timeout.

Updates #3125
2026-03-31 22:06:25 +02:00
Kristoffer Dalby
fda72ad1a3 Update main.md
Co-authored-by: nblock <nblock@users.noreply.github.com>
2026-03-31 13:36:31 +02:00
Kristoffer Dalby
dfaf120f2a docs: add development builds install page
Move the container image and binary download details from the README
into a dedicated documentation page at setup/install/main. This gives
development builds a proper home in the docs site alongside the other
install methods. The README now links to the docs page instead.
2026-03-31 13:36:31 +02:00
Kristoffer Dalby
e171d30179 ci: add build workflow for main branch
Build and push multi-arch container images (linux/amd64, linux/arm64)
to GHCR and Docker Hub on every push to main that changes Go or Nix
files. Images are tagged as main-<short-sha> using ko with the same
distroless base image as release builds.

Cross-compiled binaries for linux and darwin (amd64, arm64) are
uploaded as workflow artifacts. The README links to these via
nightly.link for stable download URLs.
2026-03-31 13:36:31 +02:00
Kristoffer Dalby
0c6b9f5348 goreleaser: remove unused ts2019 build tag
The ts2019 build tag is no longer used. Remove it from the
goreleaser build configuration.
2026-03-31 13:36:31 +02:00
Florian Preinstorfer
f3512d50df Switch to mkdocs-materialx
The project mkdocs-material is in maintenance-only mode and their
successor is not ready yet.

Use the modern, refreshed theme and drop the pymdownx.magiclink
extension.
2026-03-25 22:30:03 +01:00
Florian Preinstorfer
efd83da14e Explicitly mention that a headscale username should *not* end with @
See: #3149
2026-03-20 19:44:33 +01:00
Tanayk07
568baf3d02 fix: align banner right-side border to consistent 64-char width 2026-03-19 07:08:35 +01:00
Tanayk07
5105033224 feat: add prominent warning banner for non-standard IP prefixes
Add a highly visible ASCII-art warning banner that is printed at
startup when the configured IP prefixes fall outside the standard
Tailscale CGNAT (100.64.0.0/10) or ULA (fd7a:115c:a1e0::/48) ranges.

The warning fires once even if both v4 and v6 are non-standard, and
the warnBanner() function is reusable for other critical configuration
warnings in the future.

Also updates config-example.yaml to clarify that subsets of the
default ranges are fine, but ranges outside CGNAT/ULA are not.

Closes #3055
2026-03-19 07:08:35 +01:00
Kristoffer Dalby
3d53f97c82 hscontrol/servertest: fix test expectations for eventual consistency
Three corrections to issue tests that had wrong assumptions about
when data becomes available:

1. initial_map_should_include_peer_online_status: use WaitForCondition
   instead of checking the initial netmap. Online status is set by
   Connect() which sends a PeerChange patch after the initial
   RegisterResponse, so it may not be present immediately.

2. disco_key_should_propagate_to_peers: use WaitForCondition. The
   DiscoKey is sent in the first MapRequest (not RegisterRequest),
   so peers may not see it until a subsequent map update.

3. approved_route_without_announcement: invert the test expectation.
   Tailscale uses a strict advertise-then-approve model -- routes are
   only distributed when the node advertises them (Hostinfo.RoutableIPs)
   AND they are approved. An approval without advertisement is a dormant
   pre-approval. The test now asserts the route does NOT appear in
   AllowedIPs, matching upstream Tailscale semantics.

Also fix TestClient.Reconnect to clear the cached netmap and drain
pending updates before re-registering. Without this, WaitForPeers
returned immediately based on the old session's stale data.
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
1053fbb16b hscontrol/state: fix online status reset during re-registration
Two fixes to how online status is handled during registration:

1. Re-registration (applyAuthNodeUpdate, HandleNodeFromPreAuthKey) no
   longer resets IsOnline to false. Online status is managed exclusively
   by Connect()/Disconnect() in the poll session lifecycle. The reset
   caused a false offline blip: the auth handler's change notification
   triggered a map regeneration showing the node as offline to peers,
   even though Connect() would set it back to true moments later.

2. New node creation (createAndSaveNewNode) now explicitly sets
   IsOnline=false instead of leaving it nil. This ensures peers always
   receive a known online status rather than an ambiguous nil/unknown.
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
b09af3846b hscontrol/poll,state: fix grace period disconnect TOCTOU race
When a node disconnects, serveLongPoll defers a cleanup that starts a
grace period goroutine. This goroutine polls batcher.IsConnected() and,
if the node has not reconnected within ~10 seconds, calls
state.Disconnect() to mark it offline. A TOCTOU race exists: the node
can reconnect (calling Connect()) between the IsConnected check and
the Disconnect() call, causing the stale Disconnect() to overwrite
the new session's online status.

Fix with a monotonic per-node generation counter:

- State.Connect() increments the counter and returns the current
  generation alongside the change list.
- State.Disconnect() accepts the generation from the caller and
  rejects the call if a newer generation exists, making stale
  disconnects from old sessions a no-op.
- serveLongPoll captures the generation at Connect() time and passes
  it to Disconnect() in the deferred cleanup.
- RemoveNode's return value is now checked: if another session already
  owns the batcher slot (reconnect happened), the old session skips
  the grace period entirely.

Update batcher_test.go to track per-node connect generations and
pass them through to Disconnect(), matching production behavior.

Fixes the following test failures:
- server_state_online_after_reconnect_within_grace
- update_history_no_false_offline
- nodestore_correct_after_rapid_reconnect
- rapid_reconnect_peer_never_sees_offline
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
00c41b6422 hscontrol/servertest: add race, stress, and poll race tests
Add three test files designed to stress the control plane under
concurrent and adversarial conditions:

- race_test.go: 14 tests exercising concurrent mutations, session
  replacement, batcher contention, NodeStore access, and map response
  delivery during disconnect. All pass the Go race detector.

- poll_race_test.go: 8 tests targeting the poll.go grace period
  interleaving. These confirm a logical TOCTOU race: when a node
  disconnects and reconnects within the grace period, the old
  session's deferred Disconnect() can overwrite the new session's
  Connect(), leaving IsOnline=false despite an active poll session.

- stress_test.go: sustained churn, rapid mutations, rolling
  replacement, data integrity checks under load, and verification
  that rapid reconnects do not leak false-offline notifications.

Known failing tests (grace period TOCTOU race):
- server_state_online_after_reconnect_within_grace
- update_history_no_false_offline
- rapid_reconnect_peer_never_sees_offline
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
ab4e205ce7 hscontrol/servertest: expand issue tests to 24 scenarios, surface 4 issues
Split TestIssues into 7 focused test functions to stay under cyclomatic
complexity limits while testing more aggressively.

Issues surfaced (4 failing tests):

1. initial_map_should_include_peer_online_status: Initial MapResponse
   has Online=nil for peers. Online status only arrives later via
   PeersChangedPatch.

2. disco_key_should_propagate_to_peers: DiscoPublicKey set by client
   is not visible to peers. Peers see zero disco key.

3. approved_route_without_announcement_is_visible: Server-side route
   approval without client-side announcement silently produces empty
   SubnetRoutes (intersection of empty announced + approved = empty).

4. nodestore_correct_after_rapid_reconnect: After 5 rapid reconnect
   cycles, NodeStore reports node as offline despite having an active
   poll session. The connect/disconnect grace period interleaving
   leaves IsOnline in an incorrect state.

Passing tests (20) verify:
- IP uniqueness across 10 nodes
- IP stability across reconnect
- New peers have addresses immediately
- Node rename propagates to peers
- Node delete removes from all peer lists
- Hostinfo changes (OS field) propagate
- NodeStore/DB consistency after route mutations
- Grace period timing (8-20s window)
- Ephemeral node deletion (not just offline)
- 10-node simultaneous connect convergence
- Rapid sequential node additions
- Reconnect produces complete map
- Cross-user visibility with default policy
- Same-user multiple nodes get distinct IDs
- Same-hostname nodes get unique GivenNames
- Policy change during connect still converges
- DERP region references are valid
- User profiles present for self and peers
- Self-update arrives after route approval
- Route advertisement stored as AnnouncedRoutes
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
f87b08676d hscontrol/servertest: add policy, route, ephemeral, and content tests
Extend the servertest harness with:
- TestClient.Direct() accessor for advanced operations
- TestClient.WaitForPeerCount and WaitForCondition helpers
- TestHarness.ChangePolicy for ACL policy testing
- AssertDERPMapPresent and AssertSelfHasAddresses

New test suites:
- content_test.go: self node, DERP map, peer properties, user profiles,
  update history monotonicity, and endpoint update propagation
- policy_test.go: default allow-all, explicit policy, policy triggers
  updates on all nodes, multiple policy changes, multi-user mesh
- ephemeral_test.go: ephemeral connect, cleanup after disconnect,
  mixed ephemeral/regular, reconnect prevents cleanup
- routes_test.go: addresses in AllowedIPs, route advertise and approve,
  advertised routes via hostinfo, CGNAT range validation

Also fix node_departs test to use WaitForCondition instead of
assert.Eventually, and convert concurrent_join_and_leave to
interleaved_join_and_leave with grace-period-tolerant assertions.
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
ca7362e9aa hscontrol/servertest: add control plane lifecycle and consistency tests
Add three test files exercising the servertest harness:

- lifecycle_test.go: connection, disconnection, reconnection, session
  replacement, and mesh formation at various sizes.
- consistency_test.go: symmetric visibility, consistent peer state,
  address presence, concurrent join/leave convergence.
- weather_test.go: rapid reconnects, flapping stability, reconnect
  with various delays, concurrent reconnects, and scale tests.

All tests use table-driven patterns with subtests.
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
0288614bdf hscontrol: add servertest harness for in-process control plane testing
Add a new hscontrol/servertest package that provides a test harness
for exercising the full Headscale control protocol in-process, using
Tailscale's controlclient.Direct as the client.

The harness consists of:
- TestServer: wraps a Headscale instance with an httptest.Server
- TestClient: wraps controlclient.Direct with NetworkMap tracking
- TestHarness: orchestrates N clients against a single server
- Assertion helpers for mesh completeness, visibility, and consistency

Export minimal accessor methods on Headscale (HTTPHandler, NoisePublicKey,
GetState, SetServerURL, StartBatcher, StartEphemeralGC) so the servertest
package can construct a working server from outside the hscontrol package.

This enables fast, deterministic tests of connection lifecycle, update
propagation, and network weather scenarios without Docker.
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
82c7efccf8 mapper/batcher: serialize per-node work to prevent out-of-order delivery
processBatchedChanges queued each pending change for a node as a
separate work item. Since multiple workers pull from the same channel,
two changes for the same node could be processed concurrently by
different workers. This caused two problems:

1. MapResponses delivered out of order — a later change could finish
   generating before an earlier one, so the client sees stale state.
2. updateSentPeers and computePeerDiff race against each other —
   updateSentPeers does Clear() + Store() which is not atomic relative
   to a concurrent Range() in computePeerDiff.

Bundle all pending changes for a node into a single work item so one
worker processes them sequentially. Add a per-node workMu that
serializes processing across consecutive batch ticks, preventing a
second worker from starting tick N+1 while tick N is still in progress.

Fixes #3140
2026-03-19 07:05:58 +01:00
Kristoffer Dalby
81b871c9b5 integration/acl: replace custom entrypoints with WithPackages
Replace inline WithDockerEntrypoint shell scripts in
TestACLTagPropagation and TestACLTagPropagationPortSpecific with
the standard WithPackages and WithWebserver options.

The custom entrypoints used fragile fixed sleeps and lacked the
robust network/cert readiness waits that buildEntrypoint provides.

Updates #3139
2026-03-16 03:57:05 -07:00
Kristoffer Dalby
e5ebe3205a integration: standardize test infrastructure options
Make embedded DERP server and TLS the default configuration for all
integration tests, replacing the per-test opt-in model that led to
inconsistent and flaky test behavior.

Infrastructure changes:
- DefaultConfigEnv() includes embedded DERP server settings
- New() auto-generates a proper CA + server TLS certificate pair
- CA cert is installed into container trust stores and returned by
  GetCert() so clients and internal tools (curl) trust the server
- CreateCertificate() now returns (caCert, cert, key) instead of
  discarding the CA certificate
- Add WithPublicDERP() and WithoutTLS() opt-out options
- Remove WithTLS(), WithEmbeddedDERPServerOnly(), and WithDERPAsIP()
  since all their behavior is now the default or unnecessary

Test cleanup:
- Remove all redundant WithTLS/WithEmbeddedDERPServerOnly/WithDERPAsIP
  calls from test files
- Give every test a unique WithTestName by parameterizing aclScenario,
  sshScenario, and derpServerScenario helpers
- Add WithTestName to tests that were missing it
- Document all non-standard options with inline comments explaining
  why each is needed

Updates #3139
2026-03-16 03:57:05 -07:00
Kristoffer Dalby
87b8507ac9 mapper/batcher: replace connected map with per-node disconnectedAt
The Batcher's connected field (*xsync.Map[types.NodeID, *time.Time])
encoded three states via pointer semantics:

  - nil value:    node is connected
  - non-nil time: node disconnected at that timestamp
  - key missing:  node was never seen

This was error-prone (nil meaning 'connected' inverts Go idioms),
redundant with b.nodes + hasActiveConnections(), and required keeping
two parallel maps in sync. It also contained a bug in RemoveNode where
new(time.Time) was used instead of &now, producing a zero time.

Replace the separate connected map with a disconnectedAt field on
multiChannelNodeConn (atomic.Pointer[time.Time]), tracked directly
on the object that already manages the node's connections.

Changes:
  - Add disconnectedAt field and helpers (markConnected, markDisconnected,
    isConnected, offlineDuration) to multiChannelNodeConn
  - Remove the connected field from Batcher
  - Simplify IsConnected from two map lookups to one
  - Simplify ConnectedMap and Debug from two-map iteration to one
  - Rewrite cleanupOfflineNodes to scan b.nodes directly
  - Remove the markDisconnectedIfNoConns helper
  - Update all tests and benchmarks

Fixes #3141
2026-03-16 02:22:56 -07:00
Kristoffer Dalby
60317064fd mapper/batcher: serialize per-node work to prevent out-of-order delivery
processBatchedChanges queued each pending change for a node as a
separate work item. Since multiple workers pull from the same channel,
two changes for the same node could be processed concurrently by
different workers. This caused two problems:

1. MapResponses delivered out of order — a later change could finish
   generating before an earlier one, so the client sees stale state.
2. updateSentPeers and computePeerDiff race against each other —
   updateSentPeers does Clear() + Store() which is not atomic relative
   to a concurrent Range() in computePeerDiff.

Bundle all pending changes for a node into a single work item so one
worker processes them sequentially. Add a per-node workMu that
serializes processing across consecutive batch ticks, preventing a
second worker from starting tick N+1 while tick N is still in progress.

Fixes #3140
2026-03-16 02:22:46 -07:00
Juan Font
4d427cfe2a noise: limit request body size to prevent unauthenticated OOM
The Noise handshake accepts any machine key without checking
registration, so all endpoints behind the Noise router are reachable
without credentials. Three handlers used io.ReadAll without size
limits, allowing an attacker to OOM-kill the server.

Fix:
- Add http.MaxBytesReader middleware (1 MiB) on the Noise router.
- Replace io.ReadAll + json.Unmarshal with json.NewDecoder in
  PollNetMapHandler and RegistrationHandler.
- Stop reading the body in NotImplementedHandler entirely.
2026-03-16 09:28:31 +01:00
Kristoffer Dalby
afd3a6acbc mapper/batcher: remove disabled X-prefixed test functions
Remove XTestBatcherChannelClosingRace (~95 lines) and
XTestBatcherScalability (~515 lines). These were disabled by
prefixing with X (making them invisible to go test) and served
as dead code. The functionality they covered is exercised by the
active test suite.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
feaf85bfbc mapper/batcher: clean up test constants and output
L8: Rename SCREAMING_SNAKE_CASE test constants to idiomatic Go
camelCase. Remove highLoad* and extremeLoad* constants that were
only referenced by disabled (X-prefixed) tests.

L10: Fix misleading assert message that said "1337" while checking
for region ID 999.

L12: Remove emoji from test log output to avoid encoding issues
in CI environments.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
86e279869e mapper/batcher: minor production code cleanup
L1: Replace crypto/rand with an atomic counter for generating
connection IDs. These identifiers are process-local and do not need
cryptographic randomness; a monotonic counter is cheaper and
produces shorter, sortable IDs.

L5: Use getActiveConnectionCount() in Debug() instead of directly
locking the mutex and reading the connections slice. This avoids
bypassing the accessor that already exists for this purpose.

L6: Extract the hardcoded 15*time.Minute cleanup threshold into
the named constant offlineNodeCleanupThreshold.

L7: Inline the trivial addWork wrapper; AddWork now calls addToBatch
directly.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
7881f65358 mapper: extract node connection types to node_conn.go
Move connectionEntry, multiChannelNodeConn, generateConnectionID, and
all their methods from batcher.go into a dedicated file. This reduces
batcher.go from ~1170 lines to ~800 and separates per-node connection
management from batcher orchestration.

Pure move — no logic changes.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
2d549e579f mapper/batcher: add regression tests for M1, M3, M7 fixes
- TestBatcher_CloseBeforeStart_DoesNotHang: verifies Close() before
  Start() returns promptly now that done is initialized in NewBatcher.

- TestBatcher_QueueWorkAfterClose_DoesNotHang: verifies queueWork
  returns via the done channel after Close(), even without Start().

- TestIsConnected_FalseAfterAddNodeFailure: verifies IsConnected
  returns false after AddNode fails and removes the last connection.

- TestRemoveConnectionAtIndex_NilsTrailingSlot: verifies the backing
  array slot is nil-ed after removal to avoid retaining pointers.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
50e8b21471 mapper/batcher: fix pointer retention, done-channel init, and connected-map races
M7: Nil out trailing *connectionEntry pointers in the backing array
after slice removal in removeConnectionAtIndexLocked and send().
Without this, the GC cannot collect removed entries until the slice
is reallocated.

M1: Initialize the done channel in NewBatcher instead of Start().
Previously, calling Close() or queueWork before Start() would select
on a nil channel, blocking forever. Moving the make() to the
constructor ensures the channel is always usable.

M2: Move b.connected.Delete and b.totalNodes decrement inside the
Compute callback in cleanupOfflineNodes. Previously these ran after
the Compute returned, allowing a concurrent AddNode to reconnect
between the delete and the bookkeeping update, which would wipe the
fresh connected state.

M3: Call markDisconnectedIfNoConns on AddNode error paths. Previously,
when initial map generation or send timed out, the connection was
removed but b.connected retained its old nil (= connected) value,
making IsConnected return true for a node with zero connections.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
8e26651f2c mapper/batcher: add regression tests for timer leak and Close lifecycle
Add four unit tests guarding fixes introduced in recent commits:

- TestConnectionEntry_SendFastPath_TimerStopped: verifies the
  time.NewTimer fix (H1) does not leak goroutines after many
  fast-path sends on a buffered channel.

- TestBatcher_CloseWaitsForWorkers: verifies Close() blocks until all
  worker goroutines exit (H3), preventing sends on torn-down channels.

- TestBatcher_CloseThenStartIsNoop: verifies the one-shot lifecycle
  contract; Start() after Close() must not spawn new goroutines.

- TestBatcher_CloseStopsTicker: verifies Close() stops the internal
  ticker to prevent resource leaks.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
57a38b5678 mapper/batcher: reduce hot-path log verbosity
Remove Caller(), channel pointer formatting (fmt.Sprintf("%p",...)),
and mutex timing from send(), addConnection(), and
removeConnectionByChannel(). Move per-broadcast summary and
no-connection logs from Debug to Trace. Remove per-connection
"attempting"/"succeeded" logs entirely; keep Warn for failures.

These methods run on every MapResponse delivery, so the savings
compound quickly under load.

Updates #2545
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
051a38a4c4 mapper/batcher: track worker goroutines and stop ticker on Close
Close() previously closed the done channel and returned immediately,
without waiting for worker goroutines to exit. This caused goroutine
leaks in tests and allowed workers to race with connection teardown.
The ticker was also never stopped, leaking its internal goroutine.

Add a sync.WaitGroup to track the doWork goroutine and every worker
it spawns. Close() now calls wg.Wait() after signalling shutdown,
ensuring all goroutines have exited before tearing down connections.
Also stop the ticker to prevent resource leaks.

Document that a Batcher must not be reused after Close().
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
3276bda0c0 mapper/batcher: replace time.After with NewTimer to avoid timer leak
connectionEntry.send() is on the hot path: called once per connection
per broadcast tick. time.After allocates a timer that sits in the
runtime timer heap until it fires (50 ms), even when the channel send
succeeds immediately. At 1000 connected nodes, every tick leaks 1000
timers into the heap, creating continuous GC pressure.

Replace with time.NewTimer + defer timer.Stop() so the timer is
removed from the heap as soon as the fast-path send completes.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
ebc57d9a38 integration/acl: fix TestACLPolicyPropagationOverTime infrastructure
Add embedded DERP server, TLS, and netfilter=off to match the
infrastructure configuration used by all other ACL integration tests.

Without these options, the test fails intermittently because traffic
routes through external DERP relays and iptables initialization fails
in Docker containers.

Updates #3139
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
2058343ad6 mapper: remove Batcher interface, rename to Batcher struct
Remove the Batcher interface since there is only one implementation.
Rename LockFreeBatcher to Batcher and merge batcher_lockfree.go into
batcher.go.

Drop type assertions in debug.go now that mapBatcher is a concrete
*mapper.Batcher pointer.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
9b24a39943 mapper/batcher: add scale benchmarks
Add benchmarks that systematically test node counts from 100 to
50,000 to identify scaling limits and validate performance under
load.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
3ebe4d99c1 mapper/batcher: reduce lock contention with two-phase send
Rewrite multiChannelNodeConn.send() to use a two-phase approach:
1. RLock: snapshot connections slice (cheap pointer copy)
2. Unlock: send to all connections (50ms timeouts happen here)
3. Lock: remove failed connections by pointer identity

Previously, send() held the write lock for the entire duration of
sending to all connections. With N stale connections each timing out
at 50ms, this blocked addConnection/removeConnection for N*50ms.
The two-phase approach holds the lock only for O(N) pointer
operations, not for N*50ms I/O waits.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
da33795e79 mapper/batcher: fix race conditions in cleanup and lookups
Replace the two-phase Load-check-Delete in cleanupOfflineNodes with
xsync.Map.Compute() for atomic check-and-delete. This prevents the
TOCTOU race where a node reconnects between the hasActiveConnections
check and the Delete call.

Add nil guards on all b.nodes.Load() and b.nodes.Range() call sites
to prevent nil pointer panics from concurrent cleanup races.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
57070680a5 mapper/batcher: restructure internals for correctness
Move per-node pending changes from a shared xsync.Map on the batcher
into multiChannelNodeConn, protected by a dedicated mutex. The new
appendPending/drainPending methods provide atomic append and drain
operations, eliminating data races in addToBatch and
processBatchedChanges.

Add sync.Once to multiChannelNodeConn.close() to make it idempotent,
preventing panics from concurrent close calls on the same channel.

Add started atomic.Bool to guard Start() against being called
multiple times, preventing orphaned goroutines.

Add comprehensive concurrency tests validating these changes.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
21e02e5d1f mapper/batcher: add unit tests and benchmarks
Add comprehensive unit tests for the LockFreeBatcher covering
AddNode/RemoveNode lifecycle, addToBatch routing (broadcast, targeted,
full update), processBatchedChanges deduplication, cleanup of offline
nodes, close/shutdown behavior, IsConnected state tracking, and
connected map consistency.

Add benchmarks for connection entry send, multi-channel send and
broadcast, peer diff computation, sentPeers updates, addToBatch at
various scales (10/100/1000 nodes), processBatchedChanges, broadcast
delivery, IsConnected lookups, connected map enumeration, connection
churn, and concurrent send+churn scenarios.

Widen setupBatcherWithTestData to accept testing.TB so benchmarks can
reuse the same database-backed test setup as unit tests.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
2f94b80e70 go.mod: add stress tool dependency
Add golang.org/x/tools/cmd/stress as a tool dependency for running
tests under repeated stress to surface flaky failures.

Update flake vendorHash for the new go.mod dependencies.
2026-03-14 02:52:28 -07:00
Kristoffer Dalby
3e0a96ec3a all: fix test flakiness and improve test infrastructure
Buffer the AuthRequest verdict channel to prevent a race where the
sender blocks indefinitely if the receiver has already timed out, and
increase the auth followup test timeout from 100ms to 5s to prevent
spurious failures under load.

Skip postgres-backed tests when the postgres server is unavailable
instead of calling t.Fatal, which was preventing the rest of the test
suite from running.

Add TestMain to db, types, and policy/v2 packages to chdir to the
source directory before running tests. This ensures relative testdata/
paths resolve correctly when the test binary is executed from an
arbitrary working directory (e.g., via "go tool stress").
2026-03-14 02:52:28 -07:00
DM
fffc58b5d0 poll: fix poll test linter violations 2026-03-12 01:27:34 -07:00
DM
4aca9d6568 poll: stop stale map sessions through an explicit teardown hook
When stale-send cleanup prunes a connection from the batcher, the old serveLongPoll session needs an explicit stop signal. Pass a stop hook into AddNode and trigger it when that connection is removed, so the session exits through its normal cancel path instead of relying on channel closure from the batcher side.
2026-03-12 01:27:34 -07:00
DM
3daf45e88a mapper: close stale map channels after send timeouts
When the batcher timed out sending to a node, it removed the channel from multiChannelNodeConn but left the old serveLongPoll goroutine running on that channel. That left a live stale session behind: it no longer received new updates, but it could still keep the stream open and block shutdown.

Close the pruned channel when stale-send cleanup removes it so the old map session exits after draining any buffered update.
2026-03-12 01:27:34 -07:00
DM
b81d6c734d mapper: handle RemoveNode after channel cleanup
A connection can already be removed from multiChannelNodeConn by the stale-send cleanup path before serveLongPoll reaches its deferred RemoveNode call. In that case RemoveNode used to return early on "channel not found" and never updated the node's connected state.

Drop that early return so RemoveNode still checks whether any active connections remain and marks the node disconnected when the last one is gone.
2026-03-12 01:27:34 -07:00
Kristoffer Dalby
c5ef1d3bb9 nix: upgrade dev shell to Python 3.14
Update mdformat and related packages from python313Packages to
python314Packages. All four packages (mdformat, mdformat-footnote,
mdformat-frontmatter, mdformat-mkdocs) are available in the updated
nixpkgs.

Updates #1261
2026-03-11 03:18:14 -07:00
Kristoffer Dalby
542cdb2cb2 all: update Go to 1.26.1
Bump Go version from 1.26.0 to 1.26.1 across go.mod, Dockerfiles,
and the integration test runner fallback defaults.

Updates #1261
2026-03-11 03:18:14 -07:00
Kristoffer Dalby
5e33259550 nix: update flake inputs
Update nixpkgs from 2026-02-15 (ac055f38) to 2026-03-08 (608d0cad).

Updates #1261
2026-03-11 03:18:14 -07:00
Kristoffer Dalby
65880ecb58 nix: disable external DERP URL fetch in VM test
Explicitly set derp.urls to an empty list in the NixOS VM test,
matching the upstream nixpkgs test. The VMs have no internet
access, so fetching the default Tailscale DERP map would silently
fail and add unnecessary timeout delay to the test run.
2026-03-06 05:18:44 -08:00
Kristoffer Dalby
37c6a9e3a6 nix: sync module options and descriptions with upstream nixpkgs
Add missing typed options from the upstream nixpkgs module:
- configFile: read-only option exposing the generated config path
  for composability with other NixOS modules
- dns.split: split DNS configuration with proper type checking
- dns.extra_records: typed submodule with name/type/value validation

Sync descriptions and assertions with upstream:
- Use Tailscale doc link for override_local_dns description
- Remove redundant requirement note from nameservers.global
- Match upstream assertion message wording and expression style

Update systemd script to reference cfg.configFile instead of a
local let-binding, matching the upstream pattern.
2026-03-06 05:18:44 -08:00
DM
8423af2732 Swap favicon for updated version 2026-03-03 05:59:40 +01:00
Florian Preinstorfer
9baa795ddb Update docs for auth-id changes
- Replace "headscale nodes register" with "headscale auth register"
- Update from registration key to Auth ID
- Fix API example to register a node
2026-03-01 13:38:22 +01:00
Florian Preinstorfer
acddd73183 Reformat docs with mdformat 2026-03-01 09:24:52 +01:00
Florian Preinstorfer
47307d19cf Switch to mdformat to format docs
- Use mdformat and mdformat-mkdocs to format docs
- Add mdformat to Makefile and pre-commit-config
- Prettier ignores docs/
2026-03-01 09:24:52 +01:00
Kristoffer Dalby
5c449db125 ci: regenerate test-integration.yaml for TestSSHLocalpart
Updates #3049
2026-02-28 05:14:11 -08:00
Kristoffer Dalby
2be94ce19a integration: add TestSSHLocalpart integration test
Add end-to-end integration test that validates localpart:*@domain
SSH user mapping with real Tailscale clients. The test sets up an
SSH policy with localpart entries and verifies that users can SSH
into tagged servers using their email local-part as the username.

Updates #3049
2026-02-28 05:14:11 -08:00
Kristoffer Dalby
6c59d3e601 policy/v2: add SSH compatibility testdata from Tailscale SaaS
Add 39 test fixtures captured from Tailscale SaaS API responses
to validate SSH policy compilation parity. Each JSON file contains
the SSH policy section and expected compiled SSHRule arrays for 5
test nodes (3 user-owned, 2 tagged).

Test series: SSH-A (basic), SSH-B (specific sources), SSH-C
(destination combos), SSH-D (localpart), SSH-E (edge cases),
SSH-F (multi-rule), SSH-G (acceptEnv).

The data-driven TestSSHDataCompat harness uses cmp.Diff with
principal order tolerance but strict rule ordering (first-match-wins
semantics require exact order).

Updates #3049
2026-02-28 05:14:11 -08:00
Kristoffer Dalby
0acf09bdd2 policy/v2: add localpart:*@domain SSH user compilation
Add support for localpart:*@<domain> entries in SSH policy users.
When a user SSHes into a target, their email local-part becomes the
OS username (e.g. alice@example.com → OS user alice).

Type system (types.go):
- SSHUser.IsLocalpart() and ParseLocalpart() for validation
- SSHUsers.LocalpartEntries(), NormalUsers(), ContainsLocalpart()
- Enforces format: localpart:*@<domain> (wildcard-only)
- UserWildcard.Resolve for user:*@domain SSH source aliases
- acceptEnv passthrough for SSH rules

Compilation (filter.go):
- resolveLocalparts: pure function mapping users to local-parts
  by email domain. No node walking, easy to test.
- groupSourcesByUser: single walk producing per-user principals
  with sorted user IDs, and tagged principals separately.
- ipSetToPrincipals: shared helper replacing 6 inline copies.
- selfPrincipalsForNode: self-access using pre-computed byUser.

The approach separates data gathering from rule assembly. Localpart
rules are interleaved per source user to match Tailscale SaaS
first-match-wins ordering.

Updates #3049
2026-02-28 05:14:11 -08:00
QEDeD
414d3bbbd8 Fix typo in comment about fsnotify behavior
Correct loose (opposite of tight) to lose (opposite of keep).
2026-02-27 15:23:06 +01:00
Stefan Bethke
0f12e414a6 Explain one approach to update OIDC provider info
See #3112
2026-02-27 10:06:50 +01:00
Stefan Bethke
df339cd290 Add a link to Authentik's integration guide 2026-02-27 09:56:18 +01:00
DM
610c1daa4d types: avoid NodeView clone in CanAccess
NodeView.CanAccess called node2.AsStruct() on every check. In peer-map construction we run CanAccess in O(n^2) pair scans (often twice per pair), so that per-call clone multiplied into large heap churn.
2026-02-26 19:15:07 -08:00
Kristoffer Dalby
84adda226b doc: add CHANGELOG entries for SSH check and auth commands
Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
0f97294665 ci: regenerate integration test workflow
Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
3db0a483ed integration: add SSH check mode tests
Add ReadLog method to headscale integration container for log
inspection. Split SSH check mode tests into CLI and OIDC variants
and add comprehensive test coverage:

- TestSSHOneUserToOneCheckModeCLI: basic check mode with CLI approval
- TestSSHOneUserToOneCheckModeOIDC: check mode with OIDC approval
- TestSSHCheckModeUnapprovedTimeout: rejection on cache expiry
- TestSSHCheckModeCheckPeriodCLI: session expiry and re-auth
- TestSSHCheckModeAutoApprove: auto-approval within check period
- TestSSHCheckModeNegativeCLI: explicit rejection via CLI

Update existing integration tests to use headscale auth register.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
7bab8da366 state, policy, noise: implement SSH check period auto-approval
Add SSH check period tracking so that recently authenticated users
are auto-approved without requiring manual intervention each time.

Introduce SSHCheckPeriod type with validation (min 1m, max 168h,
"always" for every request) and encode the compiled check period
as URL query parameters in the HoldAndDelegate URL.

The SSHActionHandler checks recorded auth times before creating a
new HoldAndDelegate flow. Auth timestamps are stored in-memory:
- Default period (no explicit checkPeriod): auth covers any
  destination, keyed by source node with Dst=0 sentinel
- Explicit period: auth covers only that specific destination,
  keyed by (source, destination) pair

Auth times are cleared on policy changes.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
48cc98b787 hscontrol, cli: add auth register and approve commands
Implement AuthRegister and AuthApprove gRPC handlers and add
corresponding CLI commands (headscale auth register, approve, reject)
for managing pending auth requests including SSH check approvals.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
61a14bb0e4 gen: regenerate from auth proto changes
Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
dc0e52a960 proto: add AuthRegister and AuthApprove RPCs
Add gRPC service definitions for managing auth requests:
AuthRegister to register interactive auth sessions and
AuthApprove/AuthReject to approve or deny pending requests
(used for SSH check mode).

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
107c2f2f70 policy, noise: implement SSH check action
Implement the SSH "check" action which requires additional
verification before allowing SSH access. The policy compiler generates
a HoldAndDelegate URL that the Tailscale client calls back to
headscale. The SSHActionHandler creates an auth session and waits for
approval via the generalised auth flow.

Sort check (HoldAndDelegate) rules before accept rules to match
Tailscale's first-match-wins evaluation order.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
4a7e1475c0 templates: generalise auth templates for web and OIDC
Extract shared HTML/CSS design into a common template and create
generalised auth success and web auth templates that work for both
node registration and SSH check authentication flows.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
cb3b6949ea auth: generalise auth flow and introduce AuthVerdict
Generalise the registration pipeline to a more general auth pipeline
supporting both node registrations and SSH check auth requests.
Rename RegistrationID to AuthID, unexport AuthRequest fields, and
introduce AuthVerdict to unify the auth finish API.

Add the urlParam generic helper for extracting typed URL parameters
from chi routes, used by the new auth request handler.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
30338441c1 app: switch from gorilla to chi mux
Replace gorilla/mux with go-chi/chi as the HTTP router and add a
custom zerolog-based request logger to replace chi's default
stdlib-based middleware.Logger, consistent with the rest of the
application.

Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
25ccb5a161 build: update golangci-lint and gopls in flake
Updates #1850
2026-02-25 21:28:05 +01:00
Kristoffer Dalby
8048f10d13 hscontrol/state: extract findExistingNodeForPAK to reduce complexity
Extract the existing-node lookup logic from HandleNodeFromPreAuthKey
into a separate method. This reduces the cyclomatic complexity from
32 to 28, below the gocyclo limit of 30.

Updates #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
be4fd9ff2d integration: fix tag tests for tagged nodes with nil user_id
Tagged nodes no longer have user_id set, so ListNodes(user) cannot
find them. Update integration tests to use ListNodes() (all nodes)
when looking up tagged nodes.

Add a findNode helper to locate nodes by predicate from an
unfiltered list, used in ACL tests that have multiple nodes per
scenario.

Updates #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
1e4fc3f179 hscontrol: add tests for deleting users with tagged nodes
Test the tagged-node-survives-user-deletion scenario at two layers:

DB layer (users_test.go):
- success_user_only_has_tagged_nodes: tagged nodes with nil
  user_id do not block user deletion and survive it
- error_user_has_tagged_and_owned_nodes: user-owned nodes
  still block deletion even when tagged nodes coexist

App layer (grpcv1_test.go):
- TestDeleteUser_TaggedNodeSurvives: full registration flow
  with tagged PreAuthKey verifies nil UserID after registration,
  absence from nodesByUser index, user deletion succeeds, and
  tagged node remains in global node list

Also update auth_tags_test.go assertions to expect nil UserID
on tagged nodes, consistent with the new invariant.

Updates #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
894e6946dc hscontrol/types: regenerate types_view.go
make generate

Updates #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
75e56df9e4 hscontrol: enforce that tagged nodes never have user_id
Tagged nodes are owned by their tags, not a user. Enforce this
invariant at every write path:

- createAndSaveNewNode: do not set UserID for tagged PreAuthKey
  registration; clear UserID when advertise-tags are applied
  during OIDC/CLI registration
- SetNodeTags: clear UserID/User when tags are assigned
- processReauthTags: clear UserID/User when tags are applied
  during re-authentication
- validateNodeOwnership: reject tagged nodes with non-nil UserID
- NodeStore: skip nodesByUser indexing for tagged nodes since
  they have no owning user
- HandleNodeFromPreAuthKey: add fallback lookup for tagged PAK
  re-registration (tagged nodes indexed under UserID(0)); guard
  against nil User deref for tagged nodes in different-user check

Since tagged nodes now have user_id = NULL, ListNodesByUser
will not return them and DestroyUser naturally allows deleting
users whose nodes have all been tagged. The ON DELETE CASCADE
FK cannot reach tagged nodes through a NULL foreign key.

Also tone down shouty comments throughout state.go.

Fixes #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
52d454d0c8 hscontrol/db: add migration to clear user_id on tagged nodes
Tagged nodes are owned by their tags, not a user. Previously
user_id was kept as "created by" tracking, but this prevents
deleting users whose nodes have all been tagged, and the
ON DELETE CASCADE FK would destroy the tagged nodes.

Add a migration that sets user_id = NULL on all existing tagged
nodes. Subsequent commits enforce this invariant at write time.

Updates #3077
2026-02-20 21:51:00 +01:00
Kristoffer Dalby
f20bd0cf08 node: implement disable key expiry via CLI and API
Add --disable flag to "headscale nodes expire" CLI command and
disable_expiry field handling in the gRPC API to allow disabling
key expiry for nodes. When disabled, the node's expiry is set to
NULL and IsExpired() returns false.

The CLI follows the new grpcRunE/RunE/printOutput patterns
introduced in the recent CLI refactor.

Also fix NodeSetExpiry to persist directly to the database instead
of going through persistNodeToDB which omits the expiry field.

Fixes #2681

Co-authored-by: Marco Santos <me@marcopsantos.com>
2026-02-20 21:49:55 +01:00
Kristoffer Dalby
a8f7fedced proto: add disable_expiry field to ExpireNodeRequest
Add bool disable_expiry field (field 3) to ExpireNodeRequest proto
and regenerate all protobuf, gRPC gateway, and OpenAPI files.

Fixes #2681

Co-authored-by: Marco Santos <me@marcopsantos.com>
2026-02-20 21:49:55 +01:00
Kristoffer Dalby
b668c7a596 policy/v2: add policy unmarshal tests for bracketed IPv6
Add end-to-end test cases to TestUnmarshalPolicy that verify bracketed
IPv6 addresses are correctly parsed through the full policy pipeline
(JSON unmarshal -> splitDestinationAndPort -> parseAlias -> parsePortRange)
and survive JSON round-trips.

Cover single port, multiple ports, wildcard port, CIDR prefix, port
range, bracketed IPv4, and hostname rejection.

Updates #2754
2026-02-20 21:49:21 +01:00
Kristoffer Dalby
49744cd467 policy/v2: accept RFC 3986 bracketed IPv6 in ACL destinations
Headscale rejects IPv6 addresses with square brackets in ACL policy
destinations (e.g. "[fd7a:115c:a1e0::87e1]:80,443"), while Tailscale
SaaS accepts them. The root cause is that splitDestinationAndPort uses
strings.LastIndex(":") which leaves brackets on the destination string,
and netip.ParseAddr does not accept brackets.

Add a bracket-handling branch at the top of splitDestinationAndPort that
uses net.SplitHostPort for RFC 3986 parsing when input starts with "[".
The extracted host is validated with netip.ParseAddr/ParsePrefix to
ensure brackets are only accepted around IP addresses and CIDR prefixes,
not hostnames or other alias types like tags and groups.

Fixes #2754
2026-02-20 21:49:21 +01:00
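The bracket-handling branch described above can be sketched with the standard library. This is a simplified standalone version (the real splitDestinationAndPort handles more alias types and multi-port lists): net.SplitHostPort strips the brackets, and netip.ParseAddr/ParsePrefix then rejects anything that is not an IP or CIDR prefix.

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
	"strings"
)

// splitDestAndPort is a sketch of the fix: RFC 3986 bracketed input
// goes through net.SplitHostPort; everything else keeps the old
// last-colon split.
func splitDestAndPort(input string) (host, port string, err error) {
	if strings.HasPrefix(input, "[") {
		// e.g. "[fd7a:115c:a1e0::87e1]:80" -> host without brackets
		host, port, err = net.SplitHostPort(input)
		if err != nil {
			return "", "", err
		}
		// Brackets are only valid around IPs and CIDR prefixes,
		// never hostnames, tags, or groups.
		if _, aerr := netip.ParseAddr(host); aerr != nil {
			if _, perr := netip.ParsePrefix(host); perr != nil {
				return "", "", fmt.Errorf("brackets require an IP or prefix: %q", host)
			}
		}
		return host, port, nil
	}
	// Old behaviour: split on the last colon (ambiguous for bare IPv6).
	i := strings.LastIndex(input, ":")
	if i < 0 {
		return "", "", fmt.Errorf("no port in %q", input)
	}
	return input[:i], input[i+1:], nil
}

func main() {
	h, p, _ := splitDestAndPort("[fd7a:115c:a1e0::87e1]:80")
	fmt.Println(h, p) // fd7a:115c:a1e0::87e1 80
}
```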
Brandon Sprague
a0d6802d5b Fix minor formatting issue in FAQ
This fixes a small issue I noticed while reading the docs: the two 'scenarios' listed in the scaling section render as a single numbered list of five items instead of the intended two items with their descriptions.
2026-02-20 16:59:14 +01:00
Kristoffer Dalby
13ebea192c cmd/headscale/cli: remove nil resp guards and unexport HasMachineOutputFlag
Remove dead if-resp-nil checks in tagCmd and approveRoutesCmd; gRPC
returns either an error or a valid response, never (nil, nil).

Rename HasMachineOutputFlag to hasMachineOutputFlag since it has a
single internal caller in root.go.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
af777f44f4 cmd/headscale/cli: extract bypassDatabase helper and simplify policy file reads
Add bypassDatabase() to consolidate the repeated LoadServerConfig +
NewHeadscaleDatabase pattern in getPolicy and setPolicy.

Replace os.Open + io.ReadAll with os.ReadFile in setPolicy and
checkPolicy, removing the manual file-handle management.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
7460bec767 cmd/headscale/cli: move errMissingParameter and Error type to their users
Move errMissingParameter from users.go to utils.go alongside the
other shared sentinel errors; the variable is referenced by
api_key.go and preauthkeys.go.

Move the Error constant-error type from debug.go to mockoidc.go,
its only consumer.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
ca321d3c13 cmd/headscale/cli: use HeadscaleDateTimeFormat and util.Base10 consistently
Replace five hardcoded "2006-01-02 15:04:05" strings with the
HeadscaleDateTimeFormat constant already defined in utils.go.
Replace two literal 10 base arguments to strconv.FormatUint with
util.Base10 to match the convention in api_key.go and nodes.go.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
2765fd397f cmd/headscale/cli: drop dead flag-read error checks
Flags registered on a cobra.Command cannot fail to read at runtime;
GetString/GetUint64/GetStringSlice only error when the flag name is
unknown. The error-handling blocks for these calls are unreachable
dead code.

Adopt the value, _ := pattern already used in api_key.go,
preauthkeys.go and users.go, removing ~40 lines of dead code from
nodes.go and debug.go.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
d72a06c6c6 cmd/headscale/cli: remove legacy namespace and machine aliases
The --namespace flag on nodes list/register and debug create-node was
never wired to the --user flag, so its value was silently ignored.
Remove it along with the deprecateNamespaceMessage constant.

Also remove the namespace/ns command aliases on users and
machine/machines aliases on nodes, which have been deprecated since
the naming changes in 0.23.0.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
e816397d54 cmd/headscale/cli: remove no-op Args functions from serveCmd and dumpConfigCmd
These functions unconditionally return nil, which is the default cobra
behavior when Args is not set.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
22fccae125 cmd/headscale/cli: deduplicate expiration parsing and api-key flag validation
Add expirationFromFlag helper that parses the --expiration flag into a
timestamppb.Timestamp, replacing identical duration-parsing blocks in
api_key.go and preauthkeys.go.

Add apiKeyIDOrPrefix helper to validate the mutually-exclusive --id and
--prefix flags, replacing the duplicated switch block in expireAPIKeyCmd
and deleteAPIKeyCmd.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
6c08b49d63 cmd/headscale/cli: add confirmAction helper for force/prompt patterns
Centralise the repeated force-flag-check + YesNo-prompt logic into a
single confirmAction(cmd, prompt) helper. Callers still decide what
to return on decline (error, message, or nil).
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
7b7b270126 cmd/headscale/cli: add mustMarkRequired helper for init-time flag validation
Replace three inconsistent MarkFlagRequired error-handling styles
(stdlib log.Fatal, zerolog log.Fatal, silently discarded) with a
single mustMarkRequired helper that panics on programmer error.

Also fixes a bug where renameNodeCmd.MarkFlagRequired("new-name")
targeted the wrong command (should be renameUserCmd), making the
--new-name flag effectively never required on "headscale users rename".
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
d6c39e65a5 cmd/headscale/cli: add printListOutput to centralise table-vs-JSON branching
Add a helper that checks the --output flag and either serialises as
JSON/YAML or invokes a table-rendering callback. This removes the
repeated format, _ := cmd.Flags().GetString("output") + if-branch from
the five list commands.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
8891ec9835 cmd/headscale/cli: remove deprecated output, SuccessOutput, ErrorOutput
All callers now use formatOutput/printOutput (non-exiting) with
RunE error returns, so the old os.Exit-based helpers are dead code.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
095106f498 cmd/headscale/cli: convert remaining commands to RunE
Convert the 10 commands that were still using Run with
ErrorOutput/SuccessOutput or log.Fatal/os.Exit:

- backfillNodeIPsCmd: use grpcRunE-style manual connection with
  error returns; simplify the confirm/force logic
- getPolicy, setPolicy, checkPolicy: replace ErrorOutput with
  fmt.Errorf returns in both the bypass-gRPC and gRPC paths
- serveCmd, configTestCmd: replace log.Fatal with error returns
- mockOidcCmd: replace log.Error+os.Exit with error return
- versionCmd, generatePrivateKeyCmd: replace SuccessOutput with
  printOutput
- dumpConfigCmd: return the error instead of swallowing it
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
e4fe216e45 cmd/headscale/cli: switch to RunE with grpcRunE and error returns
Rename grpcRun to grpcRunE: the inner closure now returns error
and the wrapper returns a cobra RunE-compatible function.

Change newHeadscaleCLIWithConfig to return an error instead of
calling log.Fatal/os.Exit, making connection failures propagate
through the normal error path.

Add formatOutput (returns error) and printOutput (writes to stdout)
as non-exiting replacements for the old output/SuccessOutput pair.
Extract output format string literals into package-level constants.
Mark the old ErrorOutput, SuccessOutput and output helpers as
deprecated; they remain temporarily for the unconverted commands.

Convert all 22 grpcRunE commands from Run+ErrorOutput+SuccessOutput
to RunE+fmt.Errorf+printOutput. Change usernameAndIDFromFlag to
return an error instead of calling ErrorOutput directly.

Update backfillNodeIPsCmd and policy.go callers of
newHeadscaleCLIWithConfig for the new 5-return signature while
keeping their Run-based pattern for now.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
e6546b2cea cmd/headscale/cli: silence cobra error/usage output and centralise error formatting
Set SilenceErrors and SilenceUsage on the root command so that
cobra never prints usage text for runtime errors. A SetFlagErrorFunc
callback re-enables usage output specifically for flag-parsing
errors (the kubectl pattern).

Add printError to utils.go and switch Execute() to ExecuteC() so
the returned error is formatted as JSON/YAML when --output requests
machine-readable output.
2026-02-20 11:42:07 +01:00
Kristoffer Dalby
aae2f7de71 cmd/headscale/cli: add grpcRun wrapper for gRPC client lifecycle
Add a grpcRun helper that wraps cobra RunFuncs, injecting a ready
gRPC client and context. The connection lifecycle (cancel, close)
is managed by the wrapper, eliminating the duplicated 3-line
boilerplate (newHeadscaleCLIWithConfig + defer cancel + defer
conn.Close) from 22 command handlers across 7 files.

Three call sites are intentionally left unconverted:
- backfillNodeIPsCmd: creates the client only after user confirmation
- getPolicy/setPolicy: conditionally use gRPC vs direct DB access
2026-02-20 11:42:07 +01:00
Florian Preinstorfer
cfb308b4a7 Add FAQ entry to migrate back to default IP prefixes 2026-02-19 17:16:40 +01:00
Florian Preinstorfer
4bb0241257 Require to update from one version to the next 2026-02-19 17:16:40 +01:00
Florian Preinstorfer
513544cc11 Simplify upgrade snippet with a link to the upgrade guide
Remove some duplicated text.
2026-02-19 17:16:40 +01:00
Florian Preinstorfer
d556df1c36 Extend upgrade guide with backup instructions 2026-02-19 17:16:40 +01:00
Kristoffer Dalby
d15ec28799 ci: pin Docker to v28 to avoid v29 breaking changes
Docker 29 (shipped with runner-images 20260209.23.1) breaks docker
build via Go client libraries (broken pipe writing build context)
and docker load/save with certain tarball formats. Add Docker's
official apt repository and install docker-ce 28.5.x in all CI
jobs that interact with Docker.

See https://github.com/actions/runner-images/issues/13474

Updates #3058
2026-02-19 08:21:23 +01:00
Kristoffer Dalby
eccf64eb58 all: fix staticcheck SA4006 in types_test.go
Call new(users["name"]) directly instead of first extracting the value
to intermediate variables, since staticcheck does not recognise the
Go 1.26 new(value) syntax as a use of those variables.

Updates #3058
2026-02-19 08:21:23 +01:00
Kristoffer Dalby
43afeedde2 all: apply golangci-lint 2.9.0 fixes
Fix issues found by the upgraded golangci-lint:
- wsl_v5: add required whitespace in CLI files
- staticcheck SA4006: replace new(var.Field) with &localVar
  pattern since staticcheck does not recognize Go 1.26
  new(value) as a use of the variable
- staticcheck SA5011: use t.Fatal instead of t.Error for
  nil guard checks so execution stops
- unused: remove dead ptrTo helper function
2026-02-19 08:21:23 +01:00
Kristoffer Dalby
73613d7f53 db: fix database_versions table creation for PostgreSQL
Use GORM AutoMigrate instead of raw SQL to create the
database_versions table, since PostgreSQL does not support the
datetime type used in the raw SQL (it requires timestamp).
2026-02-19 08:21:23 +01:00
Kristoffer Dalby
30d18575be CHANGELOG: document strict version upgrade path 2026-02-19 08:21:23 +01:00
Kristoffer Dalby
70f8141abd all: upgrade from Go 1.26rc2 to Go 1.26.0 2026-02-19 08:21:23 +01:00
Kristoffer Dalby
82958835ce db: enforce strict version upgrade path
Add a version check that runs before database migrations to ensure
users do not skip minor versions or downgrade. This protects database
migrations and allows future cleanup of old migration code.

Rules enforced:
- Same minor version: always allowed (patch changes either way)
- Single minor upgrade (e.g. 0.27 -> 0.28): allowed
- Multi-minor upgrade (e.g. 0.25 -> 0.28): blocked with guidance
- Any minor downgrade: blocked
- Major version change: blocked
- Dev builds: warn but allow, preserve stored version

The version is stored in a purpose-built database_versions table
after migrations succeed. The table is created with raw SQL before
gormigrate runs to avoid circular dependencies.

Updates #3058
2026-02-19 08:21:23 +01:00
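The upgrade rules listed above can be sketched as a single comparison function. This is a hypothetical simplification (the real check also handles dev builds and persists the version in the database_versions table):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// allowedUpgrade applies the rules from the commit message to two
// "major.minor[.patch]" version strings.
func allowedUpgrade(stored, current string) error {
	parse := func(v string) (maj, min int, err error) {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		if len(parts) < 2 {
			return 0, 0, fmt.Errorf("malformed version %q", v)
		}
		maj, err = strconv.Atoi(parts[0])
		if err != nil {
			return 0, 0, err
		}
		min, err = strconv.Atoi(parts[1])
		return maj, min, err
	}
	sMaj, sMin, err := parse(stored)
	if err != nil {
		return err
	}
	cMaj, cMin, err := parse(current)
	if err != nil {
		return err
	}
	switch {
	case cMaj != sMaj:
		return fmt.Errorf("major version change %s -> %s is blocked", stored, current)
	case cMin < sMin:
		return fmt.Errorf("downgrade %s -> %s is blocked", stored, current)
	case cMin > sMin+1:
		return fmt.Errorf("upgrade %s -> %s skips minor versions; upgrade one minor at a time", stored, current)
	}
	return nil // same minor, or a single minor upgrade
}

func main() {
	fmt.Println(allowedUpgrade("0.27.1", "0.28.0"))        // allowed
	fmt.Println(allowedUpgrade("0.25.0", "0.28.0") != nil) // blocked
}
```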
Kristoffer Dalby
9c3a3c5837 flake: upgrade golangci-lint to 2.9.0 and update nixpkgs 2026-02-19 08:21:23 +01:00
Florian Preinstorfer
faf55f5e8f Document how to use the provider identifier in the policy 2026-02-18 10:24:05 +01:00
Florian Preinstorfer
e3323b65e5 Describe how to set username instead of SPN for Kanidm 2026-02-18 10:24:05 +01:00
Florian Preinstorfer
8f60b819ec Refresh update path 2026-02-16 15:22:46 +01:00
Florian Preinstorfer
c29bcd2eaf Release planning happens in milestones 2026-02-16 15:22:46 +01:00
Florian Preinstorfer
890a044ef6 Add more UIs 2026-02-16 15:22:46 +01:00
Florian Preinstorfer
8028fa5483 No longer consider autogroup:self experimental 2026-02-16 15:22:46 +01:00
Kristoffer Dalby
a7f981e30e github: fix needs-more-info label race condition
Replace tiangolo/issue-manager with custom logic that distinguishes
bot comments from human responses. The issue-manager action treated
all comments equally, so the bot's own instruction comment would
trigger label removal on the next scheduled run.

Split into two jobs:
- remove-label-on-response: triggers on issue_comment from non-bot
  users, removes the needs-more-info label immediately
- close-stale: runs on daily schedule, uses nushell to iterate open
  needs-more-info issues, checks for human comments after the label
  was added, and closes after 3 days with no response
2026-02-15 19:42:47 +01:00
Kristoffer Dalby
e0d8c3c877 github: fix needs-more-info label race condition
Remove the `issues: labeled` trigger from the timer workflow.

When both workflows triggered on label addition, the comment workflow
would post the bot comment, and by the time the timer workflow ran,
issue-manager would see "a comment was added after the label" and
immediately remove the label due to `remove_label_on_comment: true`.

The timer workflow now only runs on:
- Daily cron (to close stale issues)
- issue_comment (to remove label when humans respond)
- workflow_dispatch (for manual testing)
2026-02-09 10:03:12 +01:00
Kristoffer Dalby
c1b468f9f4 github: update issue template contact links
Reorder contact links to show Discord first, then documentation.
Update Discord invite link and docs URL to current values.
2026-02-09 09:51:28 +01:00
Kristoffer Dalby
900f4b7b75 github: add support-request automation workflow
Add workflow that automatically closes issues labeled as
support-request with a message directing users to Discord
for configuration and support questions.

The workflow:
- Triggers when support-request label is added
- Posts a comment explaining this tracker is for bugs/features
- Links to documentation and Discord
- Closes the issue as "not planned"
2026-02-09 09:51:28 +01:00
Kristoffer Dalby
64f23136a2 github: add needs-more-info automation workflow
Add GitHub Actions automation that helps manage issues requiring
additional information from reporters:

- Post an instruction comment when 'needs-more-info' label is added,
  requesting environment details, debug logs from multiple nodes,
  configuration files, and proper formatting
- Automatically remove the label when anyone comments
- Close the issue after 3 days if no response is provided
- Exempt needs-more-info labeled issues from the stale bot

The instruction comment includes guidance on:
- Required environment and debug information
- Collecting logs from both connecting and connected-to nodes
- Proper redaction rules (replace consistently, never remove IPs)
- Formatting requirements for attachments and Markdown
- Encouragement to discuss on Discord before filing issues
2026-02-09 09:51:28 +01:00
Kristoffer Dalby
0f6d312ada all: upgrade to Go 1.26rc2 and modernize codebase
This commit upgrades the codebase from Go 1.25.5 to Go 1.26rc2 and
adopts new language features.

Toolchain updates:
- go.mod: go 1.25.5 → go 1.26rc2
- flake.nix: buildGo125Module → buildGo126Module, go_1_25 → go_1_26
- flake.nix: build golangci-lint from source with Go 1.26
- Dockerfile.integration: golang:1.25-trixie → golang:1.26rc2-trixie
- Dockerfile.tailscale-HEAD: golang:1.25-alpine → golang:1.26rc2-alpine
- Dockerfile.derper: golang:alpine → golang:1.26rc2-alpine
- .goreleaser.yml: go mod tidy -compat=1.25 → -compat=1.26
- cmd/hi/run.go: fallback Go version 1.25 → 1.26rc2
- .pre-commit-config.yaml: simplify golangci-lint hook entry

Code modernization using Go 1.26 features:
- Replace tsaddr.SortPrefixes with slices.SortFunc + netip.Prefix.Compare
- Replace ptr.To(x) with new(x) syntax
- Replace errors.As with errors.AsType[T]

Lint rule updates:
- Add forbidigo rules to prevent regression to old patterns
2026-02-08 12:35:23 +01:00
Kristoffer Dalby
20dff82f95 CHANGELOG: add minimum Tailscale version for 0.29.0
Update the 0.29.0 changelog entry to document the minimum
supported Tailscale client version (v1.76.0), which corresponds
to capability version 106 based on the 10-version support window.
2026-02-07 08:23:51 +01:00
Kristoffer Dalby
31c4331a91 capver: regenerate from docker tags
Signed-off-by: Kristoffer Dalby <kristoffer@dalby.cc>
2026-02-07 08:23:51 +01:00
Kristoffer Dalby
ce580f8245 all: fix golangci-lint issues (#3064) 2026-02-06 21:45:32 +01:00
Kristoffer Dalby
bfb6fd80df integration: fixup test
Signed-off-by: Kristoffer Dalby <kristoffer@dalby.cc>
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
3acce2da87 errors: rewrite errors to follow go best practices
Error strings should not start capitalised, and they should not contain
the word "error" or state that something "failed", since we already know
it is an error.

Signed-off-by: Kristoffer Dalby <kristoffer@dalby.cc>
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
4a9a329339 all: use lowercase log messages
Go style recommends that log messages and error strings should not be
capitalized (unless beginning with proper nouns or acronyms) and should
not end with punctuation.

This change normalizes all zerolog .Msg() and .Msgf() calls to start
with lowercase letters, following Go conventions and making logs more
consistent across the codebase.
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
dd16567c52 hscontrol/state,db: use zf constants for logging
Replace raw string field names with zf constants in state.go and
db/node.go for consistent, type-safe logging.

state.go changes:
- User creation, hostinfo validation, node registration
- Tag processing during reauth (processReauthTags)
- Auth path and PreAuthKey handling
- Route auto-approval and MapRequest processing

db/node.go changes:
- RegisterNodeForTest logging
- Invalid hostname replacement logging
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
e0a436cefc hscontrol/util/zlog/zf: add tag, authkey, and route constants
Add new zerolog field constants for improved logging consistency:

- Tag fields: CurrentTags, RemovedTags, RejectedTags, NewTags, OldTags,
  IsTagged, WasAuthKeyTagged
- Node fields: ExistingNodeID
- AuthKey fields: AuthKeyID, AuthKeyUsed, AuthKeyExpired, AuthKeyReusable,
  NodeKeyRotation
- Route fields: RoutesApprovedOld, RoutesApprovedNew, OldAnnouncedRoutes,
  NewAnnouncedRoutes, ApprovedRoutes, OldApprovedRoutes, NewApprovedRoutes,
  AutoApprovedRoutes, AllApprovedRoutes, RouteChanged
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
53cdeff129 hscontrol/mapper: use sub-loggers and zf constants
Add sub-logger patterns to worker(), AddNode(), RemoveNode() and
multiChannelNodeConn to eliminate repeated field calls. Use zf.*
constants for consistent field naming.

Changes in batcher_lockfree.go:
- Add wlog sub-logger in worker() with worker.id context
- Add log field to multiChannelNodeConn struct
- Initialize mc.log with node.id in newMultiChannelNodeConn()
- Add nlog sub-loggers in AddNode() and RemoveNode()
- Update all connection methods to use mc.log

Changes in batcher.go:
- Use zf.NodeID and zf.Reason in handleNodeChange()
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
7148a690d0 hscontrol/grpcv1: use EmbedObject and zf constants
Replace manual field extraction with EmbedObject for node logging
in gRPC handlers. Use zf.* constants for consistent field naming.

Changes:
- RegisterNode: use EmbedObject(node), zf.RegistrationKey, etc.
- SetTags: use EmbedObject(node)
- ExpireNode: use EmbedObject(node), zf.ExpiresAt
- RenameNode: use EmbedObject(node), zf.NewName
- SetApprovedRoutes: use zf.NodeID
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
4e73133b9f hscontrol/routes: use sub-logger and zf constants
Add sub-logger pattern to SetRoutes() to eliminate repeated node.id
field calls. Replace raw strings with zf.* constants throughout
the primary routes code for consistent field naming.

Changes:
- Add nlog sub-logger in SetRoutes() with node.id context
- Replace "prefix" with zf.Prefix
- Replace "changed" with zf.Changes
- Replace "newState" with zf.NewState
- Replace "finalState" with zf.FinalState
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
4f8724151e hscontrol/poll: use sub-logger pattern for mapSession
Replace the helper functions (logf, infof, tracef, errf) with a
zerolog sub-logger initialized in newMapSession(). The sub-logger
is pre-populated with session context (component, node, omitPeers,
stream) eliminating repeated field calls throughout the code.

Changes:
- Add log field to mapSession struct
- Initialize sub-logger with EmbedObject(node) and request context
- Remove logf/infof/tracef/errf helper functions
- Update all callers to use m.log.Level().Caller()... pattern
- Update noise.go to use sess.log instead of sess.tracef

This reduces code by ~20 lines and eliminates ~15 repeated field
calls per log statement.
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
91730e2a1d hscontrol: use EmbedObject for node logging
Replace manual Uint64("node.id")/Str("node.name") field patterns with
EmbedObject(node) which automatically includes all standard node fields
(id, name, machine key, node key, online status, tags, user).

This reduces code repetition and ensures consistent logging across:
- state.go: Connect/Disconnect, persistNodeToDB, AutoApproveRoutes
- auth.go: handleLogout, handleRegisterWithAuthKey
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
b5090a01ec cmd: use zf constants for zerolog field names
Update CLI logging to use zf.* constants instead of inline strings
for consistency with the rest of the codebase.
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
27f5641341 golangci: add forbidigo rule for zerolog field constants
Add a lint rule to enforce use of zf.* constants for zerolog field
names instead of inline string literals. This catches at lint time
any new code that doesn't follow the convention.

The rule matches common zerolog field methods (Str, Int, Bool, etc.)
and flags any usage with a string literal first argument.
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
cf3d30b6f6 types: add MarshalZerologObject to domain types
Implement zerolog.LogObjectMarshaler interface on domain types
for structured logging:

- Node: logs node.id, node.name, machine.key (short), node.key (short),
  node.is_tagged, node.expired, node.online, node.tags, user.name
- User: logs user.id, user.name, user.display, user.provider
- PreAuthKey: logs pak.id, pak.prefix (masked), pak.reusable,
  pak.ephemeral, pak.used, pak.is_tagged, pak.tags
- APIKey: logs api_key.id, api_key.prefix (masked), api_key.expiration

Security: PreAuthKey and APIKey only log masked prefixes, never full
keys or hashes. Uses zf.* constants for consistent field naming.
2026-02-06 07:40:29 +01:00
Kristoffer Dalby
58020696fe zlog: add utility package for safe and consistent logging
Add hscontrol/util/zlog package with:

- zf subpackage: field name constants for compile-time safety
- SafeHostinfo: wrapper that redacts device fingerprinting data
- SafeMapRequest: wrapper that redacts client endpoints

The zf (zerolog fields) subpackage provides short constant names
(e.g., zf.NodeID instead of inline "node.id" strings) ensuring
consistent field naming across all log statements.

Security considerations:
- SafeHostinfo never logs: OSVersion, DeviceModel, DistroName
- SafeMapRequest only logs endpoint counts, not actual IPs
2026-02-06 07:40:29 +01:00
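The SafeHostinfo idea above can be illustrated with a plain wrapper type. This is a hypothetical, zerolog-free sketch (the real wrapper implements zerolog's marshaler interface, and the field set here is abbreviated): only coarse, non-identifying fields reach the log line.

```go
package main

import "fmt"

// Hostinfo stands in for Tailscale's hostinfo type; only a few
// fields are shown here.
type Hostinfo struct {
	Hostname    string
	OS          string
	OSVersion   string // fingerprinting data: never logged
	DeviceModel string // fingerprinting data: never logged
	DistroName  string // fingerprinting data: never logged
}

// SafeHostinfo exposes only what is safe to log.
type SafeHostinfo struct{ h Hostinfo }

func (s SafeHostinfo) String() string {
	return fmt.Sprintf("hostname=%s os=%s", s.h.Hostname, s.h.OS)
}

func main() {
	hi := Hostinfo{Hostname: "node1", OS: "linux", OSVersion: "6.8.0", DeviceModel: "XPS 13"}
	fmt.Println(SafeHostinfo{hi}) // hostname=node1 os=linux
}
```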
Kristoffer Dalby
e44b402fe4 integration: update TestSubnetRouteACL for filter merging and IPProto
Update integration test expectations to match current policy behavior:

1. IPProto defaults include all four protocols (TCP, UDP, ICMPv4,
   ICMPv6) for port-range ACL rules, not just TCP and UDP.

2. Filter rules with identical SrcIPs and IPProto are now merged
   into a single rule with combined DstPorts, so the subnet router
   receives one filter rule instead of two.

Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
835b7eb960 policy: autogroup:internet does not generate packet filters
According to Tailscale SaaS behavior, autogroup:internet is handled
by exit node routing via AllowedIPs, not by packet filtering. ACL
rules with autogroup:internet as destination should produce no
filter rules for any node.

Previously, Headscale expanded autogroup:internet to public CIDR
ranges and distributed filters to exit nodes (because 0.0.0.0/0
"covers" internet destinations). This was incorrect.

Add detection for AutoGroupInternet in filter compilation to skip
filter generation for this autogroup. Update test expectations
accordingly.
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
95b1fd636e policy: fix wildcard DstPorts format and proto:icmp handling
Fix two compatibility issues discovered in Tailscale SaaS testing:

1. Wildcard DstPorts format: Headscale was expanding wildcard
   destinations to CGNAT ranges (100.64.0.0/10, fd7a:115c:a1e0::/48)
   while Tailscale uses {IP: "*"} directly. Add detection for
   wildcard (Asterix) alias type in filter compilation to use the
   correct format.

2. proto:icmp handling: The "icmp" protocol name was returning both
   ICMPv4 (1) and ICMPv6 (58), but Tailscale only returns ICMPv4.
   Users should use "ipv6-icmp" or protocol number 58 explicitly
   for IPv6 ICMP.

Update all test expectations accordingly. This significantly reduces
test file line count by replacing duplicated CGNAT range patterns
with single wildcard entries.
2026-02-05 19:29:16 +01:00
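The proto:icmp change above amounts to a name-to-number mapping. A minimal sketch, assuming illustrative names (the real code maps many more IANA protocol names): "icmp" now yields ICMPv4 only, and IPv6 ICMP requires "ipv6-icmp" or the number 58 explicitly.

```go
package main

import "fmt"

// IANA protocol numbers referenced by the commit.
const (
	protoICMPv4 = 1
	protoTCP    = 6
	protoUDP    = 17
	protoICMPv6 = 58
)

// protocolNumbers sketches the corrected mapping.
func protocolNumbers(name string) ([]int, error) {
	switch name {
	case "tcp":
		return []int{protoTCP}, nil
	case "udp":
		return []int{protoUDP}, nil
	case "icmp":
		return []int{protoICMPv4}, nil // previously also returned 58
	case "ipv6-icmp":
		return []int{protoICMPv6}, nil
	}
	return nil, fmt.Errorf("unknown protocol %q", name)
}

func main() {
	nums, _ := protocolNumbers("icmp")
	fmt.Println(nums) // [1]
}
```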
Kristoffer Dalby
834ac27779 policy/v2: add subnet routes and exit node compatibility tests
Add comprehensive test file for validating Headscale's ACL engine
behavior for subnet routes and exit nodes against documented
Tailscale SaaS behavior.

Tests cover:
- Category A: Subnet route basics (wildcard includes routes, tag-based
  ACL excludes routes)
- Category B: Exit node behavior (exit routes not in SrcIPs)
- Category F: Filter placement rules (filters on destination nodes)
- Category G: Protocol and port restrictions
- Category R: Route coverage rules
- Category O: Overlapping routes
- Category H: Edge cases (wildcard formats, CGNAT handling)
- Category T: Tag resolution (tags resolve to node IPs only)
- Category I: IPv6 specific behavior

The tests document expected Tailscale SaaS behavior with TODOs marking
areas where Headscale currently differs. This provides a baseline for
compatibility improvements.
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
4a4032a4b0 changelog: document filter rule merging
Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
29aa08df0e policy: update test expectations for merged filter rules
Update test expectations across policy tests to expect merged
FilterRule entries instead of separate ones. Tests now expect:
- Single FilterRule with combined DstPorts for same source
- Reduced matcher counts for exit node tests

Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
0b1727c337 policy: merge filter rules with identical SrcIPs and IPProto
Tailscale merges multiple ACL rules into fewer FilterRule entries
when they have identical SrcIPs and IPProto, combining their DstPorts
arrays. This change implements the same behavior in Headscale.

Add mergeFilterRules() which uses O(n) hash map lookup to merge rules
with identical keys. DstPorts are NOT deduplicated to match Tailscale
behavior.

Also fix DestsIsTheInternet() to handle merged filter rules where
TheInternet is combined with other destinations - now uses superset
check instead of equality check.

Updates #3036
2026-02-05 19:29:16 +01:00
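The O(n) merge described above can be sketched with stand-in types (the real FilterRule lives in tailscale.com/types/tailcfg): rules sharing SrcIPs and IPProto collapse into one entry whose DstPorts are concatenated, deliberately without deduplication.

```go
package main

import (
	"fmt"
	"strings"
)

// FilterRule is a minimal stand-in for the tailcfg type.
type FilterRule struct {
	SrcIPs   []string
	IPProto  []int
	DstPorts []string
}

// mergeFilterRules combines rules with identical SrcIPs and IPProto,
// using a hash map for O(n) lookup. DstPorts are not deduplicated,
// matching the described Tailscale behaviour.
func mergeFilterRules(rules []FilterRule) []FilterRule {
	index := make(map[string]int, len(rules)) // merge key -> position in out
	var out []FilterRule
	for _, r := range rules {
		key := fmt.Sprintf("%s|%v", strings.Join(r.SrcIPs, ","), r.IPProto)
		if i, ok := index[key]; ok {
			out[i].DstPorts = append(out[i].DstPorts, r.DstPorts...)
			continue
		}
		index[key] = len(out)
		out = append(out, r)
	}
	return out
}

func main() {
	merged := mergeFilterRules([]FilterRule{
		{SrcIPs: []string{"100.64.0.1/32"}, IPProto: []int{6, 17}, DstPorts: []string{"100.64.0.2:80"}},
		{SrcIPs: []string{"100.64.0.1/32"}, IPProto: []int{6, 17}, DstPorts: []string{"100.64.0.2:443"}},
	})
	fmt.Println(len(merged), merged[0].DstPorts) // 1 [100.64.0.2:80 100.64.0.2:443]
}
```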
Kristoffer Dalby
08fe2e4d6c policy: use CIDR format for autogroup:self destinations
Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
cb29cade46 docs: add compatibility test documentation
Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
f27298c759 changelog: document wildcard CGNAT range change
Add breaking change entry for the wildcard resolution change to use
CGNAT/ULA ranges instead of all IPs.
Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
8baa14ef4a policy: use CGNAT/ULA ranges for wildcard resolution
Change Asterix.Resolve() to use Tailscale's CGNAT range (100.64.0.0/10)
and ULA range (fd7a:115c:a1e0::/48) instead of all IPs (0.0.0.0/0 and
::/0).
This better matches Tailscale's security model where wildcard (*) means
"any node in the tailnet" rather than literally "any IP address on the
internet".
Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
ebdbe03639 policy: validate autogroup:self sources in ACL rules
Tailscale validates that autogroup:self destinations in ACL rules can
only be used when ALL sources are users, groups, autogroup:member, or
wildcard (*). Previously, Headscale only performed this validation for
SSH rules.
Add validateACLSrcDstCombination() to enforce that tags, autogroup:tagged,
hosts, and raw IPs cannot be used as sources with autogroup:self
destinations. Invalid policies like `tag:client → autogroup:self:*` are
now rejected at validation time, matching Tailscale behavior.
Wildcard (*) is allowed because autogroup:self evaluation narrows it
per-node to only the node's own IPs.

Updates #3036
2026-02-05 19:29:16 +01:00
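The source restriction above can be sketched as a per-source check. This is a hypothetical simplification (real policy sources are typed aliases, approximated here by their textual prefixes): with an autogroup:self destination, every source must be a user, group, autogroup:member, or the wildcard.

```go
package main

import (
	"fmt"
	"strings"
)

// validSelfSource reports whether a source may be combined with an
// autogroup:self destination.
func validSelfSource(src string) bool {
	switch {
	case src == "*", src == "autogroup:member":
		return true
	case strings.HasPrefix(src, "group:"):
		return true
	case strings.Contains(src, "@"): // user, e.g. "alice@example.com"
		return true
	}
	return false // tags, autogroup:tagged, hosts, raw IPs
}

// validateSelfRule rejects the rule if any source is invalid.
func validateSelfRule(srcs []string) error {
	for _, s := range srcs {
		if !validSelfSource(s) {
			return fmt.Errorf("source %q cannot be used with autogroup:self", s)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateSelfRule([]string{"group:admins", "*"})) // <nil>
	fmt.Println(validateSelfRule([]string{"tag:client"}) != nil) // true
}
```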
Kristoffer Dalby
f735502eae policy: add ICMP protocols to default and export constants
When ACL rules don't specify a protocol, Headscale now defaults to
[TCP, UDP, ICMP, ICMPv6] instead of just [TCP, UDP], matching
Tailscale's behavior.
Also export protocol number constants (ProtocolTCP, ProtocolUDP, etc.)
for use in external test packages, renaming the string protocol
constants to ProtoNameTCP, ProtoNameUDP, etc. to avoid conflicts.
This resolves 78 ICMP-related TODOs in the Tailscale compatibility
tests, reducing the total from 165 to 87.

Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
53d17aa321 policy: add comprehensive Tailscale ACL compatibility tests
Add extensive test coverage verifying Headscale's ACL policy behavior
matches Tailscale's coordination server. Tests cover:
- Source/destination resolution for users, groups, tags, hosts, IPs
- autogroup:member, autogroup:tagged, autogroup:self behavior
- Filter rule deduplication and merging semantics
- Multi-rule interaction patterns
- Error case validation
Key behavioral differences documented:
- Headscale creates separate filter entries per ACL rule; Tailscale
  merges rules with identical sources
- Headscale deduplicates Dsts within a rule; Tailscale does not
- Headscale does not validate autogroup:self source restrictions for
  ACL rules (only SSH rules); Tailscale rejects invalid sources
Tests are based on real Tailscale coordination server responses
captured from a test environment with 5 nodes (1 user-owned, 4 tagged).

Updates #3036
2026-02-05 19:29:16 +01:00
Kristoffer Dalby
14f833bdb9 policy: fix autogroup:self handling for tagged nodes
Skip autogroup:self destination processing for tagged nodes since they
can never match autogroup:self (which only applies to user-owned nodes).
Also reorder the IsTagged() check to short-circuit before accessing
User() to avoid potential nil pointer access on tagged nodes.

Updates #3036
2026-02-05 19:29:16 +01:00
Florian Preinstorfer
9e50071df9 Link Fosdem 2026 talk 2026-02-05 08:01:02 +01:00
Florian Preinstorfer
c907b0d323 Fix version in mkdocs 2026-02-05 08:01:02 +01:00
Kristoffer Dalby
97fa117c48 changelog: set 0.28 date
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2026-02-04 21:26:22 +01:00
Kristoffer Dalby
b5329ff0f3 flake.lock: update nixpkgs to 2026-02-03 2026-02-04 20:18:46 +01:00
Kristoffer Dalby
eac8a57bce flake.nix: update hashes for dependency changes
Update vendorHash for headscale after Go module dependency updates.
Update grpc-gateway from v2.27.4 to v2.27.7 with new source and
vendor hashes.
2026-02-04 20:18:46 +01:00
Kristoffer Dalby
44af046196 all: update Go module dependencies
Update all direct and indirect Go module dependencies to their latest
compatible versions.

Notable direct dependency updates:
- tailscale.com v1.94.0 → v1.94.1
- github.com/coreos/go-oidc/v3 v3.16.0 → v3.17.0
- github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 → v2.27.7
- github.com/puzpuzpuz/xsync/v4 v4.3.0 → v4.4.0
- golang.org/x/crypto v0.46.0 → v0.47.0
- golang.org/x/net v0.48.0 → v0.49.0
- google.golang.org/genproto updated to 2025-02-03

Notable indirect dependency updates:
- AWS SDK v2 group updated
- OpenTelemetry v1.39.0 → v1.40.0
- github.com/jackc/pgx/v5 v5.7.6 → v5.8.0
- github.com/gaissmai/bart v0.18.0 → v0.26.1

Add lockstep comment for gvisor.dev/gvisor noting it must be updated
together with tailscale.com, similar to the existing modernc.org/sqlite
comment.
2026-02-04 20:18:46 +01:00
Kristoffer Dalby
4a744f423b changelog: change api key format
Signed-off-by: Kristoffer Dalby <kristoffer@tailscale.com>
2026-02-04 20:18:46 +01:00
Kristoffer Dalby
ca75e096e6 integration: add test for tagged→user-owned conversion panic
Add TestTagsAuthKeyConvertToUserViaCLIRegister that reproduces the
exact panic from #3038: register a node with a tags-only PreAuthKey
(no user), force reauth with empty tags, then register via CLI with
a user. The mapper panics on node.Owner().Model().ID when User is nil.

The critical detail is using a tags-only PreAuthKey (User: nil). When
the key is created under a user, the node inherits the User pointer
from createAndSaveNewNode and the bug is masked.

Also add Owner() validity assertions to the existing unit test
TestTaggedNodeWithoutUserToDifferentUser to catch the nil pointer
at the unit test level.

Updates #3038
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
ce7c256d1e state: set User pointer during tagged→user-owned conversion
processReauthTags sets UserID when converting a tagged node to
user-owned, but does not set the User pointer. When the node was
registered with a tags-only PreAuthKey (User: nil), the in-memory
NodeStore cache holds a node with User=nil. The mapper's
generateUserProfiles then calls node.Owner().Model().ID, which
dereferences the nil pointer and panics.

Set node.User alongside node.UserID in processReauthTags. Also add
defensive nil checks in generateUserProfiles to gracefully handle
nodes with invalid owners rather than panicking.

Fixes #3038
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
4912ceaaf5 state: inline reauthExistingNode and convertTaggedNodeToUser
These were thin wrappers around applyAuthNodeUpdate that only added
logging. Move the logging into applyAuthNodeUpdate and call it directly
from HandleNodeFromAuthPath.

This simplifies the code structure without changing behavior.

Updates #3038
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
d7f7f2c85e state: validate tags before UpdateNode to ensure consistency
Move tag validation before the UpdateNode callback in applyAuthNodeUpdate.
Previously, tag validation happened inside the callback, and the error
check occurred after UpdateNode had already committed changes to the
NodeStore. This left the NodeStore in an inconsistent state when tags
were rejected.

Now validation happens first, and UpdateNode is only called when we know
the operation will succeed. This follows the principle that UpdateNode
should only be called when we have all information and are ready to commit.

Also extract validateRequestTags as a reusable function and use it in
createAndSaveNewNode to deduplicate the tag validation logic.

Updates #3038
Updates #3048
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
df184e5276 state: fix expiry handling during node tag conversion
Previously, expiry handling ran BEFORE processReauthTags(), using the
old tagged status to determine whether to set/clear expiry. This caused:

- Personal → Tagged: Expiry remained set (should be cleared to nil)
- Tagged → Personal: Expiry remained nil (should be set from client)

Move expiry handling after tag processing and handle all four transition
cases based on the new tagged status:

- Tagged → Personal: Set expiry from client request
- Personal → Tagged: Clear expiry (tagged nodes don't expire)
- Personal → Personal: Update expiry from client
- Tagged → Tagged: Keep existing nil expiry

Fixes #3048
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
0630fd32e5 state: refactor HandleNodeFromAuthPath for clarity
Reorganize HandleNodeFromAuthPath (~300 lines) into a cleaner structure
with named conditions and extracted helper functions.

Changes:
- Add authNodeUpdateParams struct for shared update logic
- Extract applyAuthNodeUpdate for common reauth/convert operations
- Extract reauthExistingNode and convertTaggedNodeToUser handlers
- Extract createNewNodeFromAuth for new node creation
- Use named boolean conditions (nodeExistsForSameUser, existingNodeIsTagged,
  existingNodeOwnedByOtherUser) instead of compound if conditions
- Create logger with common fields (registration_id, user.name, machine.key,
  method) to reduce log statement verbosity

Updates #3038
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
306aabbbce state: fix nil pointer panic when re-registering tagged node without user
When a node was registered with a tags-only PreAuthKey (no user
associated), the node had User=nil and UserID=nil. When attempting to
re-register this node to a different user via HandleNodeFromAuthPath,
two issues occurred:

1. The code called oldUser.Name() without checking if oldUser was valid,
   causing a nil pointer dereference panic.

2. The existing node lookup logic didn't find the tagged node because it
   searched by (machineKey, userID), but tagged nodes have no userID.
   This caused a new node to be created instead of updating the existing
   tagged node.

Fix this by restructuring HandleNodeFromAuthPath to:
1. First check if a node exists for the same user (existing behavior)
2. If not found, check if an existing TAGGED node exists with the same
   machine key (regardless of userID)
3. If a tagged node exists, UPDATE it to convert from tagged to
   user-owned (preserving the node ID)
4. Only create a new node if the existing node is user-owned by a
   different user

This ensures consistent behavior between:
- personal → tagged → personal (same node, same owner)
- tagged (no user) → personal (same node, new owner)

Add a test that reproduces the panic and conversion scenario by:
1. Creating a tags-only PreAuthKey (no user)
2. Registering a node with that key
3. Re-registering the same machine to a different user
4. Verifying the node ID stays the same (conversion, not creation)

Fixes #3038
2026-02-04 15:44:55 +01:00
Kristoffer Dalby
a09b0d1d69 policy/v2: add Caller() to log statements in compileACLWithAutogroupSelf
Both compileFilterRules and compileSSHPolicy include .Caller() on
their resolution error log statements, but compileACLWithAutogroupSelf
does not. Add .Caller() to the three log sites (source resolution
error, destination resolution error, nil destination) for consistent
debuggability across all compilation paths.

Updates #2990
2026-02-03 16:53:15 +01:00
Kristoffer Dalby
362696a5ef policy/v2: keep partial IPSet on SSH destination resolution errors
In compileSSHPolicy, when resolving other (non-autogroup:self)
destinations, the code discards the entire result on error via
`continue`. If a destination alias (e.g., a tag owned by a group
with a non-existent user) returns a partial IPSet alongside an
error, valid IPs are lost.

Both ACL compilation paths (compileFilterRules and
compileACLWithAutogroupSelf) already handle this correctly by
logging the error and using the IPSet if non-nil.

Remove the `continue` so the SSH path is consistent with the
ACL paths.

Fixes #2990
2026-02-03 16:53:15 +01:00
Kristoffer Dalby
1f32c8bf61 policy/v2: add IsTagged() guards to prevent panics on tagged nodes
Three related issues where User().ID() is called on potentially tagged
nodes without first checking IsTagged():

1. compileACLWithAutogroupSelf: the autogroup:self block at line 166
   lacks the !node.IsTagged() guard that compileSSHPolicy already has.
   If a tagged node is the compilation target, node.User().ID() may
   panic. Tagged nodes should never participate in autogroup:self.

2. compileSSHPolicy: the IsTagged() check is on the right side of &&,
   so n.User().ID() evaluates first and may panic before short-circuit
   can prevent it. Swap to !n.IsTagged() && n.User().ID() == ... to
   match the already-correct order in compileACLWithAutogroupSelf.

3. invalidateAutogroupSelfCache: calls User().ID() at ~10 sites
   without IsTagged() guards. Tagged nodes don't participate in
   autogroup:self, so they should be skipped when collecting affected
   users and during cache lookup. Tag status transitions are handled
   by using the non-tagged version's user ID.

Fixes #2990
2026-02-03 16:53:15 +01:00
Kristoffer Dalby
fb137a8fe3 policy/v2: use partial IPSet on group resolution errors in autogroup:self path
In compileACLWithAutogroupSelf, when a group contains a non-existent
user, Group.Resolve() returns a partial IPSet (with IPs from valid
users) alongside an error. The code was discarding the entire result
via `continue`, losing valid IPs. The non-autogroup-self path
(compileFilterRules) already handles this correctly by logging the
error and using the IPSet if non-empty.

Remove the `continue` on error for both source and destination
resolution, matching the existing behavior in compileFilterRules.
Also reorder the IsTagged check before User().ID() comparison
in the same-user node filter to prevent nil dereference on tagged
nodes that have no User set.

Fixes #2990
2026-02-03 16:53:15 +01:00
Kristoffer Dalby
c2f28efbd7 policy/v2: add test for issue #2990 same-user tagged device
Add test reproducing the exact scenario from issue #2990 where:
- One user (user1) in group:admin
- node1: user device (not tagged)
- node2: tagged with tag:admin, same user

The test verifies that peer visibility and packet filters are correct.

Updates #2990
2026-02-03 16:53:15 +01:00
Kristoffer Dalby
11f0d4cfdd policy/v2: include nodes with empty filters in BuildPeerMap
Previously, nodes with empty filter rules (e.g., tagged servers that are
only destinations, never sources) were skipped entirely in BuildPeerMap.
This could cause visibility issues when using autogroup:self with
multiple user groups.

Remove the len(filter) == 0 skip condition so all nodes are included in
nodeMatchers. Empty filters result in empty matchers where CanAccess()
returns false, but the node still needs to be in the map so symmetric
visibility works correctly: if node A can access node B, both should see
each other regardless of B's filter rules.

Add comprehensive tests for:
- Multi-group scenarios where autogroup:self is used by privileged users
- Nodes with empty filters remaining visible to authorized peers
- Combined access rules (autogroup:self + tags in same rule)

Updates #2990
2026-02-03 16:53:15 +01:00
Florian Preinstorfer
5d300273dc Add a tags page and describe a few common operations 2026-01-28 15:52:57 +01:00
Florian Preinstorfer
7f003ecaff Add a page to describe supported registration methods 2026-01-28 15:52:57 +01:00
Florian Preinstorfer
2695d1527e Use registration key instead of machine key 2026-01-28 15:52:57 +01:00
Florian Preinstorfer
d32f6707f7 Add missing words 2026-01-28 15:52:57 +01:00
Florian Preinstorfer
89e436f0e6 Bump year/version for mkdocs 2026-01-28 15:52:57 +01:00
Kristoffer Dalby
46daa659e2 state: omit AuthKeyID/AuthKey in node Updates to prevent FK errors
When a PreAuthKey is deleted, the database correctly sets auth_key_id
to NULL on referencing nodes via ON DELETE SET NULL. However, the
NodeStore (in-memory cache) retains the old AuthKeyID value.

When nodes send MapRequests (e.g., after tailscaled restart), GORM's
Updates() tries to persist the stale AuthKeyID, causing a foreign key
constraint error when trying to reference a deleted PreAuthKey.

Fix this by adding AuthKeyID and AuthKey to the Omit() call in all
three places where nodes are updated via GORM's Updates():
- persistNodeToDB (MapRequest processing)
- HandleNodeFromAuthPath (re-auth via web/OIDC)
- HandleNodeFromPreAuthKey (re-registration with preauth key)

This tells GORM to never touch the auth_key_id column or AuthKey
association during node updates, letting the database handle the
foreign key relationship correctly.

Added TestDeletedPreAuthKeyNotRecreatedOnNodeUpdate to verify that
deleted PreAuthKeys are not recreated when nodes send MapRequests.
2026-01-26 12:12:11 +00:00
Florian Preinstorfer
49b70db7f2 Conversion from personal to tagged node is reversible 2026-01-24 17:18:59 +01:00
Florian Preinstorfer
04b4071888 Fix node expiration success message
A node is expired when the requested expiration is either now or in the
past.
2026-01-24 15:18:12 +01:00
Florian Preinstorfer
ee127edbf7 Remove trace log for preauthkeys create
This always prints a TRC message on `preauthkeys create`. Since we don't
print anything for `apikeys create` either, we might as well remove it.
2026-01-23 08:40:09 +01:00
878 changed files with 173683 additions and 10777 deletions


@@ -1,870 +0,0 @@
---
name: headscale-integration-tester
description: Use this agent when you need to execute, analyze, or troubleshoot Headscale integration tests. This includes running specific test scenarios, investigating test failures, interpreting test artifacts, validating end-to-end functionality, or ensuring integration test quality before releases. Examples: <example>Context: User has made changes to the route management code and wants to validate the changes work correctly. user: 'I've updated the route advertisement logic in poll.go. Can you run the relevant integration tests to make sure everything still works?' assistant: 'I'll use the headscale-integration-tester agent to run the subnet routing integration tests and analyze the results.' <commentary>Since the user wants to validate route-related changes with integration tests, use the headscale-integration-tester agent to execute the appropriate tests and analyze results.</commentary></example> <example>Context: A CI pipeline integration test is failing and the user needs help understanding why. user: 'The TestSubnetRouterMultiNetwork test is failing in CI. The logs show some timing issues but I can't figure out what's wrong.' assistant: 'Let me use the headscale-integration-tester agent to analyze the test failure and examine the artifacts.' <commentary>Since this involves analyzing integration test failures and interpreting test artifacts, use the headscale-integration-tester agent to investigate the issue.</commentary></example>
color: green
---
You are a specialist Quality Assurance Engineer with deep expertise in Headscale's integration testing system. You understand the Docker-based test infrastructure, real Tailscale client interactions, and the complex timing considerations involved in end-to-end network testing.
## Integration Test System Overview
The Headscale integration test system uses Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination. The system is built around the `hi` (Headscale Integration) test runner in `cmd/hi/`.
## Critical Test Execution Knowledge
### System Requirements and Setup
```bash
# ALWAYS run this first to verify system readiness
go run ./cmd/hi doctor
```
This command verifies:
- Docker installation and daemon status
- Go environment setup
- Availability of required container images
- Sufficient disk space (critical - tests generate ~100MB logs per run)
- Network configuration
### Test Execution Patterns
**CRITICAL TIMEOUT REQUIREMENTS**:
- **NEVER use bash `timeout` command** - this can cause test failures and incomplete cleanup
- **ALWAYS use the built-in `--timeout` flag** with generous timeouts (minimum 15 minutes)
- **Increase timeout if tests ever time out** - infrastructure issues require longer timeouts
```bash
# Single test execution (recommended for development)
# ALWAYS use --timeout flag with minimum 15 minutes (900s)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork" --timeout=900s
# Database-heavy tests require PostgreSQL backend and longer timeouts
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s
# Pattern matching for related tests - use longer timeout for multiple tests
go run ./cmd/hi run "TestSubnet*" --timeout=1800s
# Long-running individual tests need extended timeouts
go run ./cmd/hi run "TestNodeOnlineStatus" --timeout=2100s # Runs for 12+ minutes
# Full test suite (CI/validation only) - very long timeout required
go test ./integration -timeout 45m
```
**Timeout Guidelines by Test Type**:
- **Basic functionality tests**: `--timeout=900s` (15 minutes minimum)
- **Route/ACL tests**: `--timeout=1200s` (20 minutes)
- **HA/failover tests**: `--timeout=1800s` (30 minutes)
- **Long-running tests**: `--timeout=2100s` (35 minutes)
- **Full test suite**: `-timeout 45m` (45 minutes)
**NEVER do this**:
```bash
# ❌ FORBIDDEN: Never use bash timeout command
timeout 300 go run ./cmd/hi run "TestName"
# ❌ FORBIDDEN: Too short timeout will cause failures
go run ./cmd/hi run "TestName" --timeout=60s
```
### Test Categories and Timing Expectations
- **Fast tests** (<2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes
**CONCURRENT EXECUTION**: Multiple tests CAN run simultaneously. Each test run gets a unique Run ID for isolation. See "Concurrent Execution and Run ID Isolation" section below.
## Test Artifacts and Log Analysis
### Artifact Structure
All test runs save comprehensive artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server error logs
├── hs-testname-abc123.stdout.log # Headscale server output logs
├── hs-testname-abc123.db # Database snapshot for post-mortem
├── hs-testname-abc123_metrics.txt # Prometheus metrics dump
├── hs-testname-abc123-mapresponses/ # Protocol-level debug data
├── ts-client-xyz789.stderr.log # Tailscale client error logs
├── ts-client-xyz789.stdout.log # Tailscale client output logs
└── ts-client-xyz789_status.json # Client network status dump
```
### Log Analysis Priority Order
When tests fail, examine artifacts in this specific order:
1. **Headscale server stderr logs** (`hs-*.stderr.log`): Look for errors, panics, database issues, policy evaluation failures
2. **Tailscale client stderr logs** (`ts-*.stderr.log`): Check for authentication failures, network connectivity issues
3. **MapResponse JSON files**: Protocol-level debugging for network map generation issues
4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information
5. **Database snapshots** (`.db` files): For data consistency and state persistence issues
## Concurrent Execution and Run ID Isolation
### Overview
The integration test system supports running multiple tests concurrently on the same Docker daemon. Each test run is isolated through a unique Run ID that ensures containers, networks, and cleanup operations don't interfere with each other.
### Run ID Format and Usage
Each test run generates a unique Run ID in the format: `YYYYMMDD-HHMMSS-{6-char-hash}`
- Example: `20260109-104215-mdjtzx`
The Run ID is used for:
- **Container naming**: `ts-{runIDShort}-{version}-{hash}` (e.g., `ts-mdjtzx-1-74-fgdyls`)
- **Docker labels**: All containers get `hi.run-id={runID}` label
- **Log directories**: `control_logs/{runID}/`
- **Cleanup isolation**: Only containers with matching run ID are cleaned up
### Container Isolation Mechanisms
1. **Unique Container Names**: Each container includes the run ID for identification
2. **Docker Labels**: `hi.run-id` and `hi.test-type` labels on all containers
3. **Dynamic Port Allocation**: All ports use `{HostPort: "0"}` to let kernel assign free ports
4. **Per-Run Networks**: Network names include scenario hash for isolation
5. **Isolated Cleanup**: `killTestContainersByRunID()` only removes containers matching the run ID
### ⚠️ CRITICAL: Never Interfere with Other Test Runs
**FORBIDDEN OPERATIONS** when other tests may be running:
```bash
# ❌ NEVER do global container cleanup while tests are running
docker rm -f $(docker ps -q --filter "name=hs-")
docker rm -f $(docker ps -q --filter "name=ts-")
# ❌ NEVER kill all test containers
# This will destroy other agents' test sessions!
# ❌ NEVER prune all Docker resources during active tests
docker system prune -f # Only safe when NO tests are running
```
**SAFE OPERATIONS**:
```bash
# ✅ Clean up only YOUR test run's containers (by run ID)
# The test runner does this automatically via cleanup functions
# ✅ Clean stale (stopped/exited) containers only
# Pre-test cleanup only removes stopped containers, not running ones
# ✅ Check what's running before cleanup
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```
### Running Concurrent Tests
```bash
# Start multiple tests in parallel - each gets unique run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
# Monitor running test suites
docker ps --filter "name=headscale-test-suite" --format "table {{.Names}}\t{{.Status}}"
```
### Agent Session Isolation Rules
When working as an agent:
1. **Your run ID is unique**: Each test you start gets its own run ID
2. **Never clean up globally**: Only use run ID-specific cleanup
3. **Check before cleanup**: Verify no other tests are running if you need to prune resources
4. **Respect other sessions**: Other agents may have tests running concurrently
5. **Log directories are isolated**: Your artifacts are in `control_logs/{your-run-id}/`
### Identifying Your Containers
Your test containers can be identified by:
- The run ID in the container name
- The `hi.run-id` Docker label
- The test suite container: `headscale-test-suite-{your-run-id}`
```bash
# List containers for a specific run ID
docker ps --filter "label=hi.run-id=20260109-104215-mdjtzx"
# Get your run ID from the test output
# Look for: "Run ID: 20260109-104215-mdjtzx"
```
## Common Failure Patterns and Root Cause Analysis
### CRITICAL MINDSET: Code Issues vs Infrastructure Issues
**⚠️ IMPORTANT**: When tests fail, it is ALMOST ALWAYS a code issue with Headscale, NOT infrastructure problems. Do not immediately blame disk space, Docker issues, or timing unless you have thoroughly investigated the actual error logs first.
### Systematic Debugging Process
1. **Read the actual error message**: Don't assume - read the stderr logs completely
2. **Check Headscale server logs first**: Most issues originate from server-side logic
3. **Verify client connectivity**: Only after ruling out server issues
4. **Check timing patterns**: Use proper `EventuallyWithT` patterns
5. **Infrastructure as last resort**: Only blame infrastructure after code analysis
### Real Failure Patterns
#### 1. Timing Issues (Common but fixable)
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // WILL FAIL
// ✅ Correct: Wait for async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Timeout Guidelines**:
- Route operations: 3-5 seconds
- Node state changes: 5-10 seconds
- Complex scenarios: 10-15 seconds
- Policy recalculation: 5-10 seconds
#### 2. NodeStore Synchronization Issues
Route advertisements must propagate through poll requests (`poll.go:420`). NodeStore updates happen at specific synchronization points after Hostinfo changes.
#### 3. Test Data Management Issues
```go
// ❌ Wrong: Assuming array ordering
require.Len(t, nodes[0].GetAvailableRoutes(), 1)
// ✅ Correct: Identify nodes by properties
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
nodeIDStr := fmt.Sprintf("%d", node.GetId())
if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
// Test the specific node that should have the route
}
}
```
#### 4. Database Backend Differences
SQLite vs PostgreSQL have different timing characteristics:
- Use `--postgres` flag for database-intensive tests
- PostgreSQL generally has more consistent timing
- Some race conditions only appear with specific backends
## Resource Management and Cleanup
### Disk Space Management
Tests consume significant disk space (~100MB per run):
```bash
# Check available space before running tests
df -h
# Clean up test artifacts periodically
rm -rf control_logs/older-timestamp-dirs/
# Clean Docker resources
docker system prune -f
docker volume prune -f
```
### Container Cleanup
- Successful tests clean up automatically
- Failed tests may leave containers running
- Manually clean if needed: `docker ps -a` and `docker rm -f <containers>`
## Advanced Debugging Techniques
### Protocol-Level Debugging
MapResponse JSON files in `control_logs/*/hs-*-mapresponses/` contain:
- Network topology as sent to clients
- Peer relationships and visibility
- Route distribution and primary route selection
- Policy evaluation results
### Database State Analysis
Use the database snapshots for post-mortem analysis:
```bash
# SQLite examination
sqlite3 control_logs/TIMESTAMP/hs-*.db
.tables
.schema nodes
SELECT * FROM nodes WHERE name LIKE '%problematic%';
```
### Performance Analysis
Prometheus metrics dumps show:
- Request latencies and error rates
- NodeStore operation timing
- Database query performance
- Memory usage patterns
## Test Development and Quality Guidelines
### Proper Test Patterns
```go
// Always use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Test condition that may take time to become true
}, timeout, interval, "descriptive failure message")
// Handle node identification correctly
var targetNode *v1.Node
for _, node := range nodes {
if node.GetName() == expectedNodeName {
targetNode = node
break
}
}
require.NotNil(t, targetNode, "should find expected node")
```
### Quality Validation Checklist
- ✅ Tests use `EventuallyWithT` for asynchronous operations
- ✅ Tests don't rely on array ordering for node identification
- ✅ Proper cleanup and resource management
- ✅ Tests handle both success and failure scenarios
- ✅ Timing assumptions are realistic for operations being tested
- ✅ Error messages are descriptive and actionable
## Real-World Test Failure Patterns from HA Debugging
### Infrastructure vs Code Issues - Detailed Examples
**INFRASTRUCTURE FAILURES (Rare but Real)**:
1. **DNS Resolution in Auth Tests**: `failed to resolve "hs-pingallbyip-jax97k": no DNS fallback candidates remain`
- **Pattern**: Client containers can't resolve headscale server hostname during logout
- **Detection**: Error messages specifically mention DNS/hostname resolution
- **Solution**: Docker networking reset, not code changes
2. **Container Creation Timeouts**: Test gets stuck during client container setup
- **Pattern**: Tests hang indefinitely at container startup phase
- **Detection**: No progress in logs for >2 minutes during initialization
- **Solution**: `docker system prune -f` and retry
3. **Docker Resource Exhaustion**: Too many concurrent tests overwhelming system
- **Pattern**: Container creation timeouts, OOM kills, slow test execution
- **Detection**: System load high, Docker daemon slow to respond
- **Solution**: Reduce number of concurrent tests, wait for completion before starting more
**CODE ISSUES (99% of failures)**:
1. **Route Approval Process Failures**: Routes not getting approved when they should be
- **Pattern**: Tests expecting approved routes but finding none
- **Detection**: `SubnetRoutes()` returns empty when `AnnouncedRoutes()` shows routes
- **Root Cause**: Auto-approval logic bugs, policy evaluation issues
2. **NodeStore Synchronization Issues**: State updates not propagating correctly
- **Pattern**: Route changes not reflected in NodeStore or Primary Routes
- **Detection**: Logs show route announcements but no tracking updates
- **Root Cause**: Missing synchronization points in `poll.go:420` area
3. **HA Failover Architecture Issues**: Routes removed when nodes go offline
- **Pattern**: `TestHASubnetRouterFailover` fails because approved routes disappear
- **Detection**: Routes available on online nodes but lost when nodes disconnect
- **Root Cause**: Conflating route approval with node connectivity
### Critical Test Environment Setup
**Pre-Test Cleanup**:
The test runner automatically handles cleanup:
- **Before test**: Removes only stale (stopped/exited) containers - does NOT affect running tests
- **After test**: Removes only containers belonging to the specific run ID
```bash
# Only clean old log directories if disk space is low
rm -rf control_logs/202507*
df -h # Verify sufficient disk space
# SAFE: Clean only stale/stopped containers (does not affect running tests)
# The test runner does this automatically via cleanupStaleTestContainers()
# ⚠️ DANGEROUS: Only use when NO tests are running
docker system prune -f
```
**Environment Verification**:
```bash
# Verify system readiness
go run ./cmd/hi doctor
# Check what tests are currently running (ALWAYS check before global cleanup)
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```
### Specific Test Categories and Known Issues
#### Route-Related Tests (Primary Focus)
```bash
# Core route functionality - these should work first
# Note: Generous timeouts are required for reliable execution
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s
go run ./cmd/hi run "TestAutoApproveMultiNetwork" --timeout=1800s
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
```
**Common Route Test Patterns**:
- Tests validate route announcement, approval, and distribution workflows
- Route state changes are asynchronous - may need `EventuallyWithT` wrappers
- Route approval must respect ACL policies - test expectations encode security requirements
- HA tests verify route persistence during node connectivity changes
#### Authentication Tests (Infrastructure-Prone)
```bash
# These tests are more prone to infrastructure issues
# Require longer timeouts due to auth flow complexity
go run ./cmd/hi run "TestAuthKeyLogoutAndReloginSameUser" --timeout=1200s
go run ./cmd/hi run "TestAuthWebFlowLogoutAndRelogin" --timeout=1200s
go run ./cmd/hi run "TestOIDCExpireNodesBasedOnTokenExpiry" --timeout=1800s
```
**Common Auth Test Infrastructure Failures**:
- DNS resolution during logout operations
- Container creation timeouts
- HTTP/2 stream errors (often symptoms, not root cause)
### Security-Critical Debugging Rules
**❌ FORBIDDEN CHANGES (Security & Test Integrity)**:
1. **Never change expected test outputs** - Tests define correct behavior contracts
   - Changing `require.Len(t, routes, 3)` to `require.Len(t, routes, 2)` because the test fails
- Modifying expected status codes, node counts, or route counts
- Removing assertions that are "inconvenient"
- **Why forbidden**: Test expectations encode business requirements and security policies
2. **Never bypass security mechanisms** - Security must never be compromised for convenience
- Using `AnnouncedRoutes()` instead of `SubnetRoutes()` in production code
- Skipping authentication or authorization checks
- **Why forbidden**: Security bypasses create vulnerabilities in production
3. **Never reduce test coverage** - Tests prevent regressions
- Removing test cases or assertions
- Commenting out "problematic" test sections
- **Why forbidden**: Reduced coverage allows bugs to slip through
**✅ ALLOWED CHANGES (Timing & Observability)**:
1. **Fix timing issues with proper async patterns**
```go
// ✅ GOOD: Add EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, expectedCount) // Keep original expectation
}, 10*time.Second, 100*time.Millisecond, "nodes should reach expected count")
```
- **Why allowed**: Fixes race conditions without changing business logic
2. **Add MORE observability and debugging**
- Additional logging statements
- More detailed error messages
- Extra assertions that verify intermediate states
- **Why allowed**: Better observability helps debug without changing behavior
3. **Improve test documentation**
- Add godoc comments explaining test purpose and business logic
- Document timing requirements and async behavior
- **Why encouraged**: Helps future maintainers understand intent
### Advanced Debugging Workflows
#### Route Tracking Debug Flow
```bash
# Run test with detailed logging and proper timeout
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s > test_output.log 2>&1
# Check route approval process
grep -E "(auto-approval|ApproveRoutesWithPolicy|PolicyManager)" test_output.log
# Check route tracking
tail -50 control_logs/*/hs-*.stderr.log | grep -E "(announced|tracking|SetNodeRoutes)"
# Check for security violations
grep -E "(AnnouncedRoutes.*SetNodeRoutes|bypass.*approval)" test_output.log
```
#### HA Failover Debug Flow
```bash
# Test HA failover specifically with adequate timeout
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
# Check route persistence during disconnect
grep -E "(Disconnect|NodeWentOffline|PrimaryRoutes)" control_logs/*/hs-*.stderr.log
# Verify routes don't disappear inappropriately
grep -E "(removing.*routes|SetNodeRoutes.*empty)" control_logs/*/hs-*.stderr.log
```
### Test Result Interpretation Guidelines
#### Success Patterns to Look For
- `"updating node routes for tracking"` in logs
- Routes appearing in `announcedRoutes` logs
- Proper `ApproveRoutesWithPolicy` calls for auto-approval
- Routes persisting through node connectivity changes (HA tests)
#### Failure Patterns to Investigate
- `SubnetRoutes()` returning empty when `AnnouncedRoutes()` has routes
- Routes disappearing when nodes go offline (HA architectural issue)
- Missing `EventuallyWithT` causing timing race conditions
- Security bypass attempts using wrong route methods
### Critical Testing Methodology
**Phase-Based Testing Approach**:
1. **Phase 1**: Core route tests (ACL, auto-approval, basic functionality)
2. **Phase 2**: HA and complex route scenarios
3. **Phase 3**: Auth tests (infrastructure-sensitive, test last)
**Per-Test Process**:
1. Clean environment before each test
2. Monitor logs for route tracking and approval messages
3. Check artifacts in `control_logs/` if test fails
4. Focus on actual error messages, not assumptions
5. Document results and patterns discovered
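The phase-based, per-test process above can be sketched as a small wrapper. The `go run ./cmd/hi run` invocation and timeout values follow the commands shown earlier; the `DRY_RUN` switch and log-file naming are assumptions for illustration:

```shell
# Sketch of the phase-based, per-test loop described above.
# Set DRY_RUN=1 to only print the commands; unset it to execute them.
run_phase() {
  phase="$1"; timeout="$2"; shift 2
  for test_name in "$@"; do
    cmd="go run ./cmd/hi run \"$test_name\" --timeout=$timeout"
    echo "[$phase] $cmd"
    if [ -z "$DRY_RUN" ]; then
      sh -c "$cmd" > "${test_name}.log" 2>&1 \
        || echo "[$phase] $test_name failed; check control_logs/"
    fi
  done
}
```

For example, `run_phase phase1 1200s TestSubnetRouteACL TestAutoApproveMultiNetwork` runs the core route tests first, keeping one log per test for the debug flows below.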
## Test Documentation and Code Quality Standards
### Adding Missing Test Documentation
When you understand a test's purpose through debugging, always add comprehensive godoc:
```go
// TestSubnetRoutes validates the complete subnet route lifecycle including
// advertisement from clients, policy-based approval, and distribution to peers.
// This test ensures that route security policies are properly enforced and that
// only approved routes are distributed to the network.
//
// The test verifies:
// - Route announcements are received and tracked
// - ACL policies control route approval correctly
// - Only approved routes appear in peer network maps
// - Route state persists correctly in the database
func TestSubnetRoutes(t *testing.T) {
// Test implementation...
}
```
**Why add documentation**: Future maintainers need to understand business logic and security requirements encoded in tests.
### Comment Guidelines - Focus on WHY, Not WHAT
```go
// ✅ GOOD: Explains reasoning and business logic
// Wait for route propagation because NodeStore updates are asynchronous
// and happen after poll requests complete processing
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Check that security policies are enforced...
}, timeout, interval, "route approval must respect ACL policies")
// ❌ BAD: Just describes what the code does
// Wait for routes
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Get routes and check length
}, timeout, interval, "checking routes")
```
**Why focus on WHY**: Helps maintainers understand architectural decisions and security requirements.
## EventuallyWithT Pattern for External Calls
### Overview
EventuallyWithT is a testing pattern for handling eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous: clients advertise routes, the server processes them, and updates propagate through the network. EventuallyWithT lets tests wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
var err error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err = headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// These assertions are all based on the ListNodes() call
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")
// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// All these assertions are based on the single Status() call
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")
// CORRECT: Variable scoping for sharing between blocks
var routeNode *v1.Node
var nodeKey key.NodePublic
// First EventuallyWithT to get the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
for _, node := range nodes {
if node.GetName() == "router" {
routeNode = node
nodeKey, _ = key.ParseNodePublicUntyped(mem.S(node.GetNodeKey()))
break
}
}
assert.NotNil(c, routeNode, "should find router node")
}, 10*time.Second, 100*time.Millisecond, "router node should exist")
// Second EventuallyWithT using the nodeKey from first block
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
peerStatus, ok := status.Peer[nodeKey]
assert.True(c, ok, "peer should exist in status")
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}, 10*time.Second, 100*time.Millisecond, "routes should be visible to client")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// First external call
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// Second unrelated external call - WRONG!
status, err := client.Status()
assert.NoError(c, err)
assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")
// INCORRECT: Nested EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
// NEVER do this!
assert.EventuallyWithT(t, func(c2 *assert.CollectT) {
status, _ := client.Status()
assert.NotNil(c2, status)
}, 5*time.Second, 100*time.Millisecond, "nested")
}, 10*time.Second, 500*time.Millisecond, "outer")
// INCORRECT: Variable scoping error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes() // This shadows outer 'nodes' variable
assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "get nodes")
// This will fail - nodes is nil because := created a new variable inside the block
require.Len(t, nodes, 2) // COMPILATION ERROR or nil pointer
// INCORRECT: Not wrapping external calls
nodes, err := headscale.ListNodes() // External call not wrapped!
require.NoError(t, err)
```
### Helper Functions for EventuallyWithT
When creating helper functions for use within EventuallyWithT:
```go
// Helper function that accepts CollectT
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, available, approved, primary int) {
assert.Len(c, node.GetAvailableRoutes(), available, "available routes for node %s", node.GetName())
assert.Len(c, node.GetApprovedRoutes(), approved, "approved routes for node %s", node.GetName())
assert.Len(c, node.GetPrimaryRoutes(), primary, "primary routes for node %s", node.GetName())
}
// Usage within EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route counts should match expected")
```
### Operations That Must NOT Be Wrapped
**CRITICAL**: The following operations are **blocking/mutating operations** that change state and MUST NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--accept-routes`)
- `headscale.ApproveRoute()` - Approves routes on server
- `headscale.CreateUser()` - Creates users
- `headscale.CreatePreAuthKey()` - Creates authentication keys
- `headscale.RegisterNode()` - Registers new nodes
- Any `client.Execute()` that modifies configuration
- Any operation that creates, updates, or deletes resources
These operations:
1. Complete synchronously or fail immediately
2. Should not be retried automatically
3. Need explicit error handling with `require.NoError()`
### Correct Pattern for Blocking Operations
```go
// CORRECT: Blocking operation NOT wrapped
status := client.MustStatus()
command := []string{"tailscale", "set", "--advertise-routes=" + expectedRoutes[string(status.Self.ID)]}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
// Then wait for the result with EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetAvailableRoutes(), expectedRoutes[string(status.Self.ID)])
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
// INCORRECT: Blocking operation wrapped (DON'T DO THIS)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
_, _, err = client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.NoError(c, err) // This might retry the command multiple times!
}, 10*time.Second, 100*time.Millisecond, "advertise routes")
```
### Assert vs Require Pattern
When working within EventuallyWithT blocks where you need to prevent panics:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
// For array bounds - use require with t to prevent panic
assert.Len(c, nodes, 6) // Test expectation
require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic")
// For nil pointer access - use require with t before dereferencing
assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation
require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic")
assert.Contains(c,
srs1PeerStatus.PrimaryRoutes.AsSlice(),
pref,
)
}, 5*time.Second, 200*time.Millisecond, "checking route state")
```
**Key Principle**:
- Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried
- Use `require` with `t` (*testing.T) for MUST conditions that prevent panics
- Within EventuallyWithT, both are available - choose based on whether failure would cause a panic
### Common Scenarios
1. **Waiting for route advertisement**:
```go
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetAvailableRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
2. **Checking client sees routes**:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// Check all peers have expected routes
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
assert.Contains(c, peerStatus.AllowedIPs, expectedPrefix)
}
}, 10*time.Second, 100*time.Millisecond, "all peers should see route")
```
3. **Sequential operations**:
```go
// First wait for node to appear
var nodeID uint64
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1)
nodeID = nodes[0].GetId()
}, 10*time.Second, 100*time.Millisecond, "node should register")
// Then perform operation
_, err := headscale.ApproveRoute(nodeID, "10.0.0.0/24")
require.NoError(t, err)
// Then wait for result
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetApprovedRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be approved")
```
## Your Core Responsibilities
1. **Test Execution Strategy**: Execute integration tests with appropriate configurations, understanding when to use `--postgres` and timing requirements for different test categories. Follow phase-based testing approach prioritizing route tests.
- **Why this priority**: Route tests are less infrastructure-sensitive and validate core security logic
2. **Systematic Test Analysis**: When tests fail, systematically examine artifacts starting with Headscale server logs, then client logs, then protocol data. Focus on CODE ISSUES first (99% of cases), not infrastructure. Use real-world failure patterns to guide investigation.
- **Why this approach**: Most failures are logic bugs, not environment issues - efficient debugging saves time
3. **Timing & Synchronization Expertise**: Understand asynchronous Headscale operations, particularly route advertisements, NodeStore synchronization at `poll.go:420`, and policy propagation. Fix timing with `EventuallyWithT` while preserving original test expectations.
- **Why preserve expectations**: Test assertions encode business requirements and security policies
- **Key Pattern**: Apply the EventuallyWithT pattern correctly for all external calls as documented above
4. **Root Cause Analysis**: Distinguish between actual code regressions (route approval logic, HA failover architecture), timing issues requiring `EventuallyWithT` patterns, and genuine infrastructure problems (DNS, Docker, container issues).
- **Why this distinction matters**: Different problem types require completely different solution approaches
- **EventuallyWithT Issues**: Often manifest as flaky tests or immediate assertion failures after async operations
5. **Security-Aware Quality Validation**: Ensure tests properly validate end-to-end functionality with realistic timing expectations and proper error handling. Never suggest security bypasses or test expectation changes. Add comprehensive godoc when you understand test business logic.
- **Why security focus**: Integration tests are the last line of defense against security regressions
- **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions
6. **Concurrent Execution Awareness**: Respect run ID isolation and never interfere with other agents' test sessions. Each test run has a unique run ID - only clean up YOUR containers (by run ID label), never perform global cleanup while tests may be running.
- **Why this matters**: Multiple agents/users may run tests concurrently on the same Docker daemon
- **Key Rule**: NEVER use global container cleanup commands - the test runner handles cleanup automatically per run ID
**CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test, never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable.
**ISOLATION PRINCIPLE**: Each test run is isolated by its unique Run ID. Never interfere with other test sessions. The system handles cleanup automatically - manual global cleanup commands are forbidden when other tests may be running.
**EventuallyWithT PRINCIPLE**: Every external call to headscale server or tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages.
**Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related.


```diff
@@ -6,8 +6,7 @@ body:
   - type: checkboxes
     attributes:
       label: Is this a support request?
-      description:
-        This issue tracker is for bugs and feature requests only. If you need
+      description: This issue tracker is for bugs and feature requests only. If you need
         help, please use ask in our Discord community
       options:
         - label: This is not a support request
@@ -15,8 +14,7 @@ body:
   - type: checkboxes
     attributes:
       label: Is there an existing issue for this?
-      description:
-        Please search to see if an issue already exists for the bug you
+      description: Please search to see if an issue already exists for the bug you
         encountered.
       options:
         - label: I have searched the existing issues
```


```diff
@@ -3,9 +3,9 @@ blank_issues_enabled: false
 # Contact links
 contact_links:
-  - name: "headscale usage documentation"
-    url: "https://github.com/juanfont/headscale/blob/main/docs"
-    about: "Find documentation about how to configure and run headscale."
   - name: "headscale Discord community"
-    url: "https://discord.gg/xGj2TuqyxY"
+    url: "https://discord.gg/c84AZQhmpx"
     about: "Please ask and answer questions about usage of headscale here."
+  - name: "headscale usage documentation"
+    url: "https://headscale.net/"
+    about: "Find documentation about how to configure and run headscale."
```


@@ -0,0 +1,80 @@
Thank you for taking the time to report this issue.
To help us investigate and resolve this, we need more information. Please provide the following:
> [!TIP]
> Most issues turn out to be configuration errors rather than bugs. We encourage you to discuss your problem in our [Discord community](https://discord.gg/c84AZQhmpx) **before** opening an issue. The community can often help identify misconfigurations quickly, saving everyone time.
## Required Information
### Environment Details
- **Headscale version**: (run `headscale version`)
- **Tailscale client version**: (run `tailscale version`)
- **Operating System**: (e.g., Ubuntu 24.04, macOS 14, Windows 11)
- **Deployment method**: (binary, Docker, Kubernetes, etc.)
- **Reverse proxy**: (if applicable: nginx, Traefik, Caddy, etc. - include configuration)
### Debug Information
Please follow our [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/) and provide:
1. **Client netmap dump** (from affected Tailscale client):
```bash
tailscale debug netmap > netmap.json
```
2. **Client status dump** (from affected Tailscale client):
```bash
tailscale status --json > status.json
```
3. **Tailscale client logs** (if experiencing client issues):
```bash
tailscale debug daemon-logs
```
> [!IMPORTANT]
> We need logs from **multiple nodes** to understand the full picture:
>
> - The node(s) initiating connections
> - The node(s) being connected to
>
> Without logs from both sides, we cannot diagnose connectivity issues.
4. **Headscale server logs** with `log.level: trace` enabled
5. **Headscale configuration** (with sensitive values redacted - see rules below)
6. **ACL/Policy configuration** (if using ACLs)
7. **Proxy/Docker configuration** (if applicable - nginx.conf, docker-compose.yml, Traefik config, etc.)
## Formatting Requirements
- **Attach long files** - Do not paste large logs or configurations inline. Use GitHub file attachments or GitHub Gists.
- **Use proper Markdown** - Format code blocks, logs, and configurations with appropriate syntax highlighting.
- **Structure your response** - Use the headings above to organize your information clearly.
## Redaction Rules
> [!CAUTION]
> **Replace, do not remove.** Removing information makes debugging impossible.
When redacting sensitive information:
- ✅ **Replace consistently** - If you change `alice@company.com` to `user1@example.com`, use `user1@example.com` everywhere (logs, config, policy, etc.)
- ✅ **Use meaningful placeholders** - `user1@example.com`, `bob@example.com`, `my-secret-key` are acceptable
- ❌ **Never remove information** - Gaps in data prevent us from correlating events across logs
- ❌ **Never redact IP addresses** - We need the actual IPs to trace network paths and identify issues
**If redaction rules are not followed, we will be unable to debug the issue and will have to close it.**
---
**Note:** This issue will be automatically closed in 3 days if no additional information is provided. Once you reply with the requested information, the `needs-more-info` label will be removed automatically.
If you need help gathering this information, please visit our [Discord community](https://discord.gg/c84AZQhmpx).


@@ -0,0 +1,15 @@
Thank you for reaching out.
This issue tracker is used for **bug reports and feature requests** only. Your question appears to be a support or configuration question rather than a bug report.
For help with setup, configuration, or general questions, please visit our [Discord community](https://discord.gg/c84AZQhmpx) where the community and maintainers can assist you in real-time.
**Before posting in Discord, please check:**
- [Documentation](https://headscale.net/)
- [FAQ](https://headscale.net/stable/faq/)
- [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/)
If after troubleshooting you determine this is actually a bug, please open a new issue with the required debug information from the troubleshooting guide.
This issue has been automatically closed.

.github/workflows/container-main.yml

@@ -0,0 +1,112 @@
---
name: Build (main)
on:
push:
branches:
- main
paths:
- "*.nix"
- "go.*"
- "**/*.go"
- ".github/workflows/container-main.yml"
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.sha }}
cancel-in-progress: true
jobs:
container:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Set commit timestamp
run: echo "SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)" >> $GITHUB_ENV
- name: Build and push to GHCR
env:
KO_DOCKER_REPO: ghcr.io/juanfont/headscale
KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
CGO_ENABLED: "0"
run: |
nix develop --command -- ko build \
--bare \
--platform=linux/amd64,linux/arm64 \
--tags=main-${GITHUB_SHA::7} \
./cmd/headscale
- name: Push to Docker Hub
env:
KO_DOCKER_REPO: headscale/headscale
KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
CGO_ENABLED: "0"
run: |
nix develop --command -- ko build \
--bare \
--platform=linux/amd64,linux/arm64 \
--tags=main-${GITHUB_SHA::7} \
./cmd/headscale
binaries:
if: github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- goos: linux
goarch: amd64
- goos: linux
goarch: arm64
- goos: darwin
goarch: amd64
- goos: darwin
goarch: arm64
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Build binary
env:
CGO_ENABLED: "0"
GOOS: ${{ matrix.goos }}
GOARCH: ${{ matrix.goarch }}
run: nix develop --command -- go build -o headscale ./cmd/headscale
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: headscale-${{ matrix.goos }}-${{ matrix.goarch }}
path: headscale


```diff
@@ -66,6 +66,7 @@ func findTests() []string {
 	}
 	args := []string{
+		"--type", "go",
 		"--regexp", "func (Test.+)\\(.*",
 		"../../integration/",
 		"--replace", "$1",
```


```diff
@@ -16,7 +16,7 @@ on:
 jobs:
   test:
-    runs-on: ubuntu-latest
+    runs-on: ubuntu-24.04-arm
     env:
       # Github does not allow us to access secrets in pull requests,
       # so this env var is used to check if we have the secret or not.
@@ -67,6 +67,24 @@ jobs:
         with:
           name: postgres-image
           path: /tmp/artifacts
+      - name: Pin Docker to v28 (avoid v29 breaking changes)
+        run: |
+          # Docker 29 breaks docker build via Go client libraries and
+          # docker load/save with certain tarball formats.
+          # Pin to Docker 28.x until our tooling is updated.
+          # https://github.com/actions/runner-images/issues/13474
+          sudo install -m 0755 -d /etc/apt/keyrings
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
+            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
+            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          sudo apt-get update -qq
+          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
+          sudo apt-get install -y --allow-downgrades \
+            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
+          sudo systemctl restart docker
+          docker version
       - name: Load Docker images, Go cache, and prepare binary
         run: |
           gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load
```


@@ -0,0 +1,28 @@
name: Needs More Info - Post Comment
on:
issues:
types: [labeled]
jobs:
post-comment:
if: >-
github.event.label.name == 'needs-more-info' &&
github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
sparse-checkout: .github/label-response/needs-more-info.md
sparse-checkout-cone-mode: false
- name: Post instruction comment
run: gh issue comment "$NUMBER" --body-file .github/label-response/needs-more-info.md
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}


@@ -0,0 +1,98 @@
name: Needs More Info - Timer
on:
schedule:
- cron: "0 0 * * *" # Daily at midnight UTC
issue_comment:
types: [created]
workflow_dispatch:
jobs:
# When a non-bot user comments on a needs-more-info issue, remove the label.
remove-label-on-response:
if: >-
github.repository == 'juanfont/headscale' &&
github.event_name == 'issue_comment' &&
github.event.comment.user.type != 'Bot' &&
contains(github.event.issue.labels.*.name, 'needs-more-info')
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- name: Remove needs-more-info label
run: gh issue edit "$NUMBER" --remove-label needs-more-info
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}
# On schedule, close issues that have had no human response for 3 days.
close-stale:
if: >-
github.repository == 'juanfont/headscale' &&
github.event_name != 'issue_comment'
runs-on: ubuntu-latest
permissions:
issues: write
steps:
- uses: hustcer/setup-nu@920172d92eb04671776f3ba69d605d3b09351c30 # v3.22
with:
version: "*"
- name: Close stale needs-more-info issues
shell: nu {0}
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
run: |
let issues = (gh issue list
--repo $env.GH_REPO
--label "needs-more-info"
--state open
--json number
| from json)
for issue in $issues {
let number = $issue.number
print $"Checking issue #($number)"
# Find when needs-more-info was last added
let events = (gh api $"repos/($env.GH_REPO)/issues/($number)/events"
--paginate | from json | flatten)
let label_event = ($events
| where event == "labeled" and label.name == "needs-more-info"
| last)
let label_added_at = ($label_event.created_at | into datetime)
# Check for non-bot comments after the label was added
let comments = (gh api $"repos/($env.GH_REPO)/issues/($number)/comments"
--paginate | from json | flatten)
let human_responses = ($comments
| where user.type != "Bot"
| where { ($in.created_at | into datetime) > $label_added_at })
if ($human_responses | length) > 0 {
print $" Human responded, removing label"
gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
continue
}
# Check if 3 days have passed
let elapsed = (date now) - $label_added_at
if $elapsed < 3day {
print $" Only ($elapsed | format duration day) elapsed, skipping"
continue
}
print $" No response for ($elapsed | format duration day), closing"
let message = [
"This issue has been automatically closed because no additional information was provided within 3 days."
""
"If you have the requested information, please open a new issue and include the debug information requested above."
""
"Thank you for your understanding."
] | str join "\n"
gh issue comment $number --repo $env.GH_REPO --body $message
gh issue close $number --repo $env.GH_REPO --reason "not planned"
gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
}


@@ -17,6 +17,25 @@ jobs:
with:
fetch-depth: 0
- name: Pin Docker to v28 (avoid v29 breaking changes)
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
with:


@@ -23,5 +23,5 @@ jobs:
since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1
-exempt-issue-labels: "no-stale-bot"
+exempt-issue-labels: "no-stale-bot,needs-more-info"
repo-token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/support-request.yml

@@ -0,0 +1,30 @@
name: Support Request - Close Issue
on:
issues:
types: [labeled]
jobs:
close-support-request:
if: >-
github.event.label.name == 'support-request' &&
github.repository == 'juanfont/headscale'
runs-on: ubuntu-latest
permissions:
issues: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
sparse-checkout: .github/label-response/support-request.md
sparse-checkout-cone-mode: false
- name: Post comment and close issue
run: |
gh issue comment "$NUMBER" --body-file .github/label-response/support-request.md
gh issue close "$NUMBER" --reason "not planned"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
GH_REPO: ${{ github.repository }}
NUMBER: ${{ github.event.issue.number }}


@@ -12,7 +12,7 @@ jobs:
# sqlite: Runs all integration tests with SQLite backend.
# postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
build:
-runs-on: ubuntu-latest
+runs-on: ubuntu-24.04-arm
outputs:
files-changed: ${{ steps.changed-files.outputs.files }}
steps:
@@ -69,6 +69,25 @@ jobs:
name: go-cache
path: go-cache.tar.gz
retention-days: 10
- name: Pin Docker to v28 (avoid v29 breaking changes)
if: steps.changed-files.outputs.files == 'true'
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Build headscale image
if: steps.changed-files.outputs.files == 'true'
run: |
@@ -100,10 +119,28 @@ jobs:
path: tailscale-head-image.tar.gz
retention-days: 10
build-postgres:
-runs-on: ubuntu-latest
+runs-on: ubuntu-24.04-arm
needs: build
if: needs.build.outputs.files-changed == 'true'
steps:
- name: Pin Docker to v28 (avoid v29 breaking changes)
run: |
# Docker 29 breaks docker build via Go client libraries and
# docker load/save with certain tarball formats.
# Pin to Docker 28.x until our tooling is updated.
# https://github.com/actions/runner-images/issues/13474
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update -qq
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
sudo apt-get install -y --allow-downgrades \
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
sudo systemctl restart docker
docker version
- name: Pull and save postgres image
run: |
docker pull postgres:latest
@@ -192,9 +229,12 @@ jobs:
- TestUpdateHostnameFromClient
- TestExpireNode
- TestSetNodeExpiryInFuture
- TestDisableNodeExpiry
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
- TestGrantCapRelay
- TestGrantCapDrive
- TestEnablingRoutes
- TestHASubnetRouterFailover
- TestSubnetRouteACL
@@ -208,6 +248,7 @@ jobs:
- TestAutoApproveMultiNetwork/webauth-user.*
- TestAutoApproveMultiNetwork/webauth-group.*
- TestSubnetRouteACLFiltering
- TestGrantViaSubnetSteering
- TestHeadscale
- TestTailscaleNodesJoiningHeadcale
- TestSSHOneUserToAll
@@ -216,6 +257,13 @@ jobs:
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
- TestSSHAutogroupSelf
- TestSSHOneUserToOneCheckModeCLI
- TestSSHOneUserToOneCheckModeOIDC
- TestSSHCheckModeUnapprovedTimeout
- TestSSHCheckModeCheckPeriodCLI
- TestSSHCheckModeAutoApprove
- TestSSHCheckModeNegativeCLI
- TestSSHLocalpart
- TestTagsAuthKeyWithTagRequestDifferentTag
- TestTagsAuthKeyWithTagNoAdvertiseFlag
- TestTagsAuthKeyWithTagCannotAddViaCLI
@@ -247,6 +295,7 @@ jobs:
- TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags
- TestTagsAuthKeyWithoutUserInheritsTags
- TestTagsAuthKeyWithoutUserRejectsAdvertisedTags
- TestTagsAuthKeyConvertToUserViaCLIRegister
uses: ./.github/workflows/integration-test-template.yml
secrets: inherit
with:

.gitignore

@@ -29,6 +29,7 @@ config*.yaml
!config-example.yaml
derp.yaml
*.hujson
!hscontrol/policy/v2/testdata/*/*.hujson
*.key
/db.sqlite
*.sqlite3


@@ -18,6 +18,7 @@ linters:
- lll
- maintidx
- makezero
- mnd
- musttag
- nestif
- nolintlint
@@ -37,6 +38,23 @@ linters:
time.Sleep is forbidden.
In tests: use assert.EventuallyWithT for polling/waiting patterns.
In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives.
# Forbid inline string literals in zerolog field methods - use zf.* constants
- pattern: '\.(Str|Int|Int8|Int16|Int32|Int64|Uint|Uint8|Uint16|Uint32|Uint64|Float32|Float64|Bool|Dur|Time|TimeDiff|Strs|Ints|Uints|Floats|Bools|Any|Interface)\("[^"]+"'
msg: >-
Use zf.* constants for zerolog field names instead of string literals.
Import "github.com/juanfont/headscale/hscontrol/util/zlog/zf" and use
constants like zf.NodeID, zf.UserName, etc. Add new constants to
hscontrol/util/zlog/zf/fields.go if needed.
# Forbid ptr.To - use Go 1.26 new(expr) instead
- pattern: 'ptr\.To\('
msg: >-
ptr.To is forbidden. Use Go 1.26's new(expr) syntax instead.
Example: ptr.To(value) → new(value)
# Forbid tsaddr.SortPrefixes - use slices.SortFunc with netip.Prefix.Compare
- pattern: 'tsaddr\.SortPrefixes'
msg: >-
tsaddr.SortPrefixes is forbidden. Use Go 1.26's netip.Prefix.Compare instead.
Example: slices.SortFunc(prefixes, netip.Prefix.Compare)
analyze-types: true
gocritic:
disabled-checks:


@@ -2,7 +2,7 @@
version: 2
before:
hooks:
-- go mod tidy -compat=1.25
+- go mod tidy -compat=1.26
- go mod vendor
release:
@@ -13,29 +13,6 @@ release:
Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
**It's best to update from one stable version to the next** (e.g., 0.24.0 → 0.25.1 → 0.26.1) in case you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the changelog above for version-specific upgrade instructions and breaking changes.
### Backup Your Database
**Always backup your database before upgrading.** Here's how to backup a SQLite database:
```bash
# Stop headscale
systemctl stop headscale
# Backup sqlite database
cp /var/lib/headscale/db.sqlite /var/lib/headscale/db.sqlite.backup
# Backup sqlite WAL/SHM files (if they exist)
cp /var/lib/headscale/db.sqlite-wal /var/lib/headscale/db.sqlite-wal.backup
cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
# Start headscale (migration will run automatically)
systemctl start headscale
```
builds:
- id: headscale
main: ./cmd/headscale
@@ -50,8 +27,6 @@ builds:
- linux_arm64
flags:
- -mod=readonly
-tags:
-- ts2019
archives:
- id: golang-cross
.mdformat.toml

@@ -0,0 +1,2 @@
[plugin.mkdocs]
align_semantic_breaks_in_lists = true


@@ -43,26 +43,20 @@ repos:
entry: prettier --write --list-different
language: system
exclude: ^docs/
-types_or:
-  [
-    javascript,
-    jsx,
-    ts,
-    tsx,
-    yaml,
-    json,
-    toml,
-    html,
-    css,
-    scss,
-    sass,
-    markdown,
-  ]
+types_or: [javascript, jsx, ts, tsx, yaml, json, toml, html, css, scss, sass, markdown]
+# mdformat for docs
+- id: mdformat
+  name: mdformat
+  entry: mdformat
+  language: system
+  types_or: [markdown]
+  files: ^docs/
# golangci-lint for Go code quality
- id: golangci-lint
name: golangci-lint
-entry: nix develop --command golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
+entry: nix develop --command -- golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
language: system
types: [go]
pass_filenames: false


@@ -1,5 +1,2 @@
.github/workflows/test-integration-v2*
-docs/about/features.md
-docs/ref/api.md
-docs/ref/configuration.md
-docs/ref/oidc.md
+docs/

AGENTS.md

File diff suppressed because it is too large


@@ -1,6 +1,109 @@
# CHANGELOG
-## 0.28.0 (202x-xx-xx)
+## 0.29.0 (202x-xx-xx)
**Minimum supported Tailscale client version: v1.76.0**
### Tailscale ACL compatibility improvements
Extensive test cases were systematically generated using Tailscale clients and the official SaaS
to understand how the packet filter should be generated. We discovered a few differences, but
overall our implementation was very close.
[#3036](https://github.com/juanfont/headscale/pull/3036)
### SSH check action
SSH rules with `"action": "check"` are now supported. When a client initiates an SSH connection to a node
with a `check` action policy, the user is prompted to authenticate via OIDC or CLI approval before access
is granted. OIDC approval requires the authenticated user to own the source node; tagged source nodes
cannot use SSH check-mode.
A new `headscale auth` CLI command group supports the approval flow:
- `headscale auth approve --auth-id <id>` approves a pending authentication request (SSH check or web auth)
- `headscale auth reject --auth-id <id>` rejects a pending authentication request
- `headscale auth register --auth-id <id> --user <user>` registers a node (replaces deprecated `headscale nodes register`)
[#1850](https://github.com/juanfont/headscale/pull/1850)
[#3180](https://github.com/juanfont/headscale/pull/3180)
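As a sketch, a check-mode rule in the policy file could look like this (illustrative only; the groups, tags, and the `checkPeriod` value are assumptions, not taken from the release notes):

```jsonc
{
  "ssh": [
    {
      // "check" prompts the user to approve via OIDC or the CLI
      // before the session is allowed; OIDC approval requires the
      // authenticated user to own the source node.
      "action": "check",
      "checkPeriod": "12h",
      "src": ["group:admins"],
      "dst": ["tag:prod"],
      "users": ["root", "autogroup:nonroot"]
    }
  ]
}
```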
### Grants
We now support [Tailscale grants](https://tailscale.com/kb/1324/grants) alongside ACLs. Grants
extend what you can express in a policy beyond packet filtering: the `app` field controls
application-level features like Taildrive file sharing and peer relay, and the `via` field steers
traffic through specific tagged subnet routers or exit nodes. The `ip` field works like an ACL rule.
Grants can be mixed with ACLs in the same policy file.
[#2180](https://github.com/juanfont/headscale/pull/2180)
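A hypothetical policy mixing the three grant fields might look like this (the groups, tags, and capability payloads are invented for illustration):

```jsonc
{
  "grants": [
    {
      "src": ["group:eng"],
      "dst": ["tag:fileserver"],
      // "ip" works like an ACL rule: plain packet filtering
      "ip": ["tcp:445"],
      // "app" controls application features such as Taildrive sharing
      "app": { "tailscale.com/cap/drive": [{ "shares": ["docs"], "access": "ro" }] }
    },
    {
      "src": ["group:eng"],
      "dst": ["192.0.2.0/24"],
      "ip": ["*"],
      // "via" steers this traffic through a specific tagged subnet router
      "via": ["tag:router-eu"]
    }
  ]
}
```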
As part of this, we added `autogroup:danger-all`. It resolves to `0.0.0.0/0` and `::/0` — all IP
addresses, including those outside the tailnet. This replaces the old behaviour where `*` matched
all IPs (see BREAKING below). The name is intentionally scary: accepting traffic from the entire
internet is a security-sensitive choice. `autogroup:danger-all` can only be used as a source.
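For example, a policy that previously relied on `*` matching every IP address now has to opt in explicitly (sketch; the tag is invented):

```jsonc
{
  "acls": [
    // "*" now means "any node in the tailnet" (CGNAT + ULA ranges only)
    { "action": "accept", "src": ["*"], "dst": ["*:*"] },
    // to accept traffic from any IP address at all, say so explicitly
    { "action": "accept", "src": ["autogroup:danger-all"], "dst": ["tag:web:443"] }
  ]
}
```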
### BREAKING
- **ACL Policy**: Wildcard (`*`) in ACL sources and destinations now resolves to Tailscale's CGNAT range (`100.64.0.0/10`) and ULA range (`fd7a:115c:a1e0::/48`) instead of all IPs (`0.0.0.0/0` and `::/0`) [#3036](https://github.com/juanfont/headscale/pull/3036)
- This better matches Tailscale's security model where `*` means "any node in the tailnet" rather than "any IP address"
- Policies that need to match all IP addresses including non-Tailscale IPs should use `autogroup:danger-all` as a source, or explicit CIDR ranges as destinations [#2180](https://github.com/juanfont/headscale/pull/2180)
- `autogroup:danger-all` can only be used as a source; it cannot be used as a destination
- **Note**: Users with non-standard IP ranges configured in `prefixes.ipv4` or `prefixes.ipv6` (which is unsupported and produces a warning) will need to explicitly specify their CIDR ranges in ACL rules instead of using `*`
- **ACL Policy**: Validate autogroup:self source restrictions matching Tailscale behavior - tags, hosts, and IPs are rejected as sources for autogroup:self destinations [#3036](https://github.com/juanfont/headscale/pull/3036)
- Policies using tags, hosts, or IP addresses as sources for autogroup:self destinations will now fail validation
- **Upgrade path**: Headscale now enforces a strict version upgrade path [#3083](https://github.com/juanfont/headscale/pull/3083)
- Skipping minor versions (e.g. 0.27 → 0.29) is blocked; upgrade one minor version at a time
- Downgrading to a previous minor version is blocked
- Patch version changes within the same minor are always allowed
- **ACL Policy**: The `proto:icmp` protocol name now only includes ICMPv4 (protocol 1), matching Tailscale behavior [#3036](https://github.com/juanfont/headscale/pull/3036)
- Previously, `proto:icmp` included both ICMPv4 and ICMPv6
- Use `proto:ipv6-icmp` or protocol number `58` explicitly for ICMPv6
- **CLI**: `headscale nodes register` is deprecated in favour of `headscale auth register --auth-id <id> --user <user>` [#1850](https://github.com/juanfont/headscale/pull/1850)
- The old command continues to work but will be removed in a future release
### Changes
- **OIDC registration**: Add a confirmation page before completing node registration, showing the device hostname and machine key fingerprint [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Debug endpoints**: Omit secret fields (`Pass`, `ClientSecret`, `APIKey`) from `/debug/config` JSON output [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Debug endpoints**: Route `statsviz` through `tsweb.Protected` [#3180](https://github.com/juanfont/headscale/pull/3180)
- Remove gRPC reflection from the remote (TCP) server [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Node Expiry**: Add `node.expiry` configuration option to set a default node key expiry for nodes registered via auth key [#3122](https://github.com/juanfont/headscale/pull/3122)
- Tagged nodes (registered with tagged pre-auth keys) are exempt from default expiry
- `oidc.expiry` has been removed; use `node.expiry` instead (applies to all registration methods including OIDC)
- `ephemeral_node_inactivity_timeout` is deprecated in favour of `node.ephemeral.inactivity_timeout`
- **SSH Policy**: Add support for `localpart:*@<domain>` in SSH rule `users` field, mapping each matching user's email local-part as their OS username [#3091](https://github.com/juanfont/headscale/pull/3091)
- **ACL Policy**: Add ICMP and IPv6-ICMP protocols to default filter rules when no protocol is specified [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Fix autogroup:self handling for tagged nodes - tagged nodes no longer incorrectly receive autogroup:self filter rules [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Use CIDR format for autogroup:self destination IPs matching Tailscale behavior [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Merge filter rules with identical SrcIPs and IPProto matching Tailscale behavior - multiple ACL rules with the same source now produce a single FilterRule with combined DstPorts [#3036](https://github.com/juanfont/headscale/pull/3036)
- Remove deprecated `--namespace` flag from `nodes list`, `nodes register`, and `debug create-node` commands (use `--user` instead) [#3093](https://github.com/juanfont/headscale/pull/3093)
- Remove deprecated `namespace`/`ns` command aliases for `users` and `machine`/`machines` aliases for `nodes` [#3093](https://github.com/juanfont/headscale/pull/3093)
- Add SSH `check` action support with OIDC and CLI-based approval flows [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add `headscale auth register`, `headscale auth approve`, and `headscale auth reject` CLI commands [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add `auth` related routes to the API. The `auth/register` endpoint now expects data as JSON [#1850](https://github.com/juanfont/headscale/pull/1850)
- Deprecate `headscale nodes register --key` in favour of `headscale auth register --auth-id` [#1850](https://github.com/juanfont/headscale/pull/1850)
- Generalise auth templates into reusable `AuthSuccess` and `AuthWeb` components [#1850](https://github.com/juanfont/headscale/pull/1850)
- Unify auth pipeline with `AuthVerdict` type, supporting registration, reauthentication, and SSH checks [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add support for policy grants with `ip`, `app`, and `via` fields [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add `autogroup:danger-all` as a source-only autogroup resolving to all IP addresses [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add capability grants for Taildrive (`cap/drive`) and peer relay (`cap/relay`) with automatic companion capabilities [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add per-viewer via route steering — grants with `via` tags control which subnet router or exit node handles traffic for each group of viewers [#2180](https://github.com/juanfont/headscale/pull/2180)
- Enable Taildrive node attributes on all nodes; actual access is controlled by `cap/drive` grants [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix exit nodes incorrectly receiving filter rules for destinations that only overlap via exit routes [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix address-based aliases (hosts, raw IPs) incorrectly expanding to include the matching node's other address family [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix identity-based aliases (tags, users, groups) resolving to IPv4 only; they now include both IPv4 and IPv6 matching Tailscale behavior [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix wildcard (`*`) source in ACLs now using actually-approved subnet routes instead of autoApprover policy prefixes [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix non-wildcard source IPs being dropped when combined with wildcard `*` in the same ACL rule [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix exit node approval not triggering filter rule recalculation for peers [#2180](https://github.com/juanfont/headscale/pull/2180)
- Policy validation error messages now include field context (e.g., `src=`, `dst=`) and are more descriptive [#2180](https://github.com/juanfont/headscale/pull/2180)
- Remove old migrations for the debian package [#3185](https://github.com/juanfont/headscale/pull/3185)
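The new `node` settings described above might be configured like this (the exact nesting and example values are assumptions beyond the option names given in the notes):

```yaml
node:
  # Default key expiry for nodes registered via auth key; tagged
  # nodes are exempt. Replaces the removed oidc.expiry.
  expiry: 180d
  ephemeral:
    # Replaces the deprecated ephemeral_node_inactivity_timeout.
    inactivity_timeout: 30m
```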
## 0.28.1 (202x-xx-xx)
### Changes
- **User deletion**: Fix `DestroyUser` deleting all pre-auth keys in the database instead of only the target user's keys [#3155](https://github.com/juanfont/headscale/pull/3155)
## 0.28.0 (2026-02-04)
**Minimum supported Tailscale client version: v1.74.0**
@@ -162,9 +265,7 @@ sequentially through each stable release, selecting the latest patch version ava
- Fix autogroup:self preventing visibility of nodes matched by other ACL rules [#2882](https://github.com/juanfont/headscale/pull/2882)
- Fix nodes being rejected after pre-authentication key expiration [#2917](https://github.com/juanfont/headscale/pull/2917)
- Fix list-routes command respecting identifier filter with JSON output [#2927](https://github.com/juanfont/headscale/pull/2927)
-- **API Key CLI**: Add `--id` flag to expire/delete commands as alternative to `--prefix` [#3016](https://github.com/juanfont/headscale/pull/3016)
-  - `headscale apikeys expire --id <ID>` or `--prefix <PREFIX>`
-  - `headscale apikeys delete --id <ID>` or `--prefix <PREFIX>`
+- Add `--id` flag to expire/delete commands as alternative to `--prefix` for API Keys [#3016](https://github.com/juanfont/headscale/pull/3016)
## 0.27.1 (2025-11-11)


@@ -1,6 +1,6 @@
# For testing purposes only
-FROM golang:alpine AS build-env
+FROM golang:1.26.2-alpine AS build-env
WORKDIR /go/src


@@ -2,7 +2,7 @@
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
-FROM docker.io/golang:1.25-trixie AS builder
+FROM docker.io/golang:1.26.1-trixie AS builder
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale


@@ -4,7 +4,7 @@
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.
-FROM golang:1.25-alpine AS build-env
+FROM golang:1.26.2-alpine AS build-env
WORKDIR /go/src


@@ -21,7 +21,7 @@ endef
# Source file collections using shell find for better performance
GO_SOURCES := $(shell find . -name '*.go' -not -path './gen/*' -not -path './vendor/*')
PROTO_SOURCES := $(shell find . -name '*.proto' -not -path './gen/*' -not -path './vendor/*')
-DOC_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
+PRETTIER_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
# Default target
.PHONY: all
@@ -33,6 +33,7 @@ check-deps:
$(call check_tool,go)
$(call check_tool,golangci-lint)
$(call check_tool,gofumpt)
$(call check_tool,mdformat)
$(call check_tool,prettier)
$(call check_tool,clang-format)
$(call check_tool,buf)
@@ -52,7 +53,7 @@ test: check-deps $(GO_SOURCES) go.mod go.sum
# Formatting targets
.PHONY: fmt
-fmt: fmt-go fmt-prettier fmt-proto
+fmt: fmt-go fmt-mdformat fmt-prettier fmt-proto
.PHONY: fmt-go
fmt-go: check-deps $(GO_SOURCES)
@@ -60,9 +61,14 @@ fmt-go: check-deps $(GO_SOURCES)
gofumpt -l -w .
golangci-lint run --fix
.PHONY: fmt-mdformat
fmt-mdformat: check-deps
@echo "Formatting documentation..."
mdformat docs/
.PHONY: fmt-prettier
-fmt-prettier: check-deps $(DOC_SOURCES)
-@echo "Formatting documentation and config files..."
+fmt-prettier: check-deps $(PRETTIER_SOURCES)
+@echo "Formatting markup and config files..."
prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
.PHONY: fmt-proto
@@ -116,7 +122,8 @@ help:
@echo ""
@echo "Specific targets:"
@echo " fmt-go - Format Go code only"
-@echo " fmt-prettier - Format documentation only"
+@echo " fmt-mdformat - Format documentation only"
+@echo " fmt-prettier - Format markup and config files only"
@echo " fmt-proto - Format Protocol Buffer files only"
@echo " lint-go - Lint Go code only"
@echo " lint-proto - Lint Protocol Buffer files only"


@@ -65,8 +65,16 @@ Please have a look at the [`documentation`](https://headscale.net/stable/).
For NixOS users, a module is available in [`nix/`](./nix/).
## Builds from `main`
Development builds from the `main` branch are available as container images and
binaries. See the [development builds](https://headscale.net/stable/setup/install/main/)
documentation for details.
## Talks
- Fosdem 2026 (video): [Headscale & Tailscale: The complementary open source clone](https://fosdem.org/2026/schedule/event/KYQ3LL-headscale-the-complementary-open-source-clone/)
- presented by Kristoffer Dalby
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
  - presented by Juan Font Alonso and Kristoffer Dalby
@@ -105,6 +113,8 @@ run `make lint` and `make fmt` before committing any code.
The **Proto** code is linted with [`buf`](https://docs.buf.build/lint/overview) and
formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).
The **docs** are formatted with [`mdformat`](https://mdformat.readthedocs.io).
The **rest** (Markdown, YAML, etc) is formatted with [`prettier`](https://prettier.io).
Check out the `.golangci.yaml` and `Makefile` to see the specific configuration.


@@ -1,20 +1,18 @@
package cli

import (
+	"context"
	"fmt"
	"strconv"
-	"time"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
	"github.com/juanfont/headscale/hscontrol/util"
-	"github.com/prometheus/common/model"
	"github.com/pterm/pterm"
	"github.com/spf13/cobra"
-	"google.golang.org/protobuf/types/known/timestamppb"
)

const (
-	// 90 days.
+	// DefaultAPIKeyExpiry is 90 days.
	DefaultAPIKeyExpiry = "90d"
)
@@ -46,55 +44,35 @@ var listAPIKeys = &cobra.Command{
	Use:     "list",
	Short:   "List the Api keys for headscale",
	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ListApiKeysRequest{}
-
-		response, err := client.ListApiKeys(ctx, request)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting the list of keys: %s", err),
-				output,
-			)
-		}
-
-		if output != "" {
-			SuccessOutput(response.GetApiKeys(), "", output)
-		}
-
-		tableData := pterm.TableData{
-			{"ID", "Prefix", "Expiration", "Created"},
-		}
-		for _, key := range response.GetApiKeys() {
-			expiration := "-"
-
-			if key.GetExpiration() != nil {
-				expiration = ColourTime(key.GetExpiration().AsTime())
-			}
-
-			tableData = append(tableData, []string{
-				strconv.FormatUint(key.GetId(), util.Base10),
-				key.GetPrefix(),
-				expiration,
-				key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
-			})
-		}
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		response, err := client.ListApiKeys(ctx, &v1.ListApiKeysRequest{})
+		if err != nil {
+			return fmt.Errorf("listing api keys: %w", err)
+		}
+
+		return printListOutput(cmd, response.GetApiKeys(), func() error {
+			tableData := pterm.TableData{
+				{"ID", "Prefix", "Expiration", "Created"},
+			}
+			for _, key := range response.GetApiKeys() {
+				expiration := "-"
+
+				if key.GetExpiration() != nil {
+					expiration = ColourTime(key.GetExpiration().AsTime())
+				}
+
+				tableData = append(tableData, []string{
+					strconv.FormatUint(key.GetId(), util.Base10),
+					key.GetPrefix(),
+					expiration,
+					key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
+				})
+			}
+
+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
}
var createAPIKeyCmd = &cobra.Command{ var createAPIKeyCmd = &cobra.Command{
@@ -105,137 +83,79 @@ Creates a new Api key, the Api key is only visible on creation
 and cannot be retrieved again.
 If you loose a key, create a new one and revoke (expire) the old one.`,
 	Aliases: []string{"c", "new"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		request := &v1.CreateApiKeyRequest{}
-
-		durationStr, _ := cmd.Flags().GetString("expiration")
-
-		duration, err := model.ParseDuration(durationStr)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		expiration, err := expirationFromFlag(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Could not parse duration: %s\n", err),
-				output,
-			)
+			return err
 		}

-		expiration := time.Now().UTC().Add(time.Duration(duration))
-
-		request.Expiration = timestamppb.New(expiration)
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.CreateApiKey(ctx, request)
+		response, err := client.CreateApiKey(ctx, &v1.CreateApiKeyRequest{
+			Expiration: expiration,
+		})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot create Api Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("creating api key: %w", err)
 		}

-		SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
-	},
+		return printOutput(cmd, response.GetApiKey(), response.GetApiKey())
+	}),
 }

+// apiKeyIDOrPrefix reads --id and --prefix from cmd and validates that
+// exactly one is provided.
+func apiKeyIDOrPrefix(cmd *cobra.Command) (uint64, string, error) {
+	id, _ := cmd.Flags().GetUint64("id")
+	prefix, _ := cmd.Flags().GetString("prefix")
+
+	switch {
+	case id == 0 && prefix == "":
+		return 0, "", fmt.Errorf("either --id or --prefix must be provided: %w", errMissingParameter)
+	case id != 0 && prefix != "":
+		return 0, "", fmt.Errorf("only one of --id or --prefix can be provided: %w", errMissingParameter)
+	}
+
+	return id, prefix, nil
+}
+
 var expireAPIKeyCmd = &cobra.Command{
 	Use:     "expire",
 	Short:   "Expire an ApiKey",
 	Aliases: []string{"revoke", "exp", "e"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		id, _ := cmd.Flags().GetUint64("id")
-		prefix, _ := cmd.Flags().GetString("prefix")
-
-		switch {
-		case id == 0 && prefix == "":
-			ErrorOutput(
-				errMissingParameter,
-				"Either --id or --prefix must be provided",
-				output,
-			)
-		case id != 0 && prefix != "":
-			ErrorOutput(
-				errMissingParameter,
-				"Only one of --id or --prefix can be provided",
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ExpireApiKeyRequest{}
-
-		if id != 0 {
-			request.Id = id
-		} else {
-			request.Prefix = prefix
-		}
-
-		response, err := client.ExpireApiKey(ctx, request)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, prefix, err := apiKeyIDOrPrefix(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot expire Api Key: %s\n", err),
-				output,
-			)
+			return err
 		}

-		SuccessOutput(response, "Key expired", output)
-	},
+		response, err := client.ExpireApiKey(ctx, &v1.ExpireApiKeyRequest{
+			Id:     id,
+			Prefix: prefix,
+		})
+		if err != nil {
+			return fmt.Errorf("expiring api key: %w", err)
+		}
+
+		return printOutput(cmd, response, "Key expired")
+	}),
 }

 var deleteAPIKeyCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete an ApiKey",
 	Aliases: []string{"remove", "del"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		id, _ := cmd.Flags().GetUint64("id")
-		prefix, _ := cmd.Flags().GetString("prefix")
-
-		switch {
-		case id == 0 && prefix == "":
-			ErrorOutput(
-				errMissingParameter,
-				"Either --id or --prefix must be provided",
-				output,
-			)
-		case id != 0 && prefix != "":
-			ErrorOutput(
-				errMissingParameter,
-				"Only one of --id or --prefix can be provided",
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.DeleteApiKeyRequest{}
-
-		if id != 0 {
-			request.Id = id
-		} else {
-			request.Prefix = prefix
-		}
-
-		response, err := client.DeleteApiKey(ctx, request)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, prefix, err := apiKeyIDOrPrefix(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot delete Api Key: %s\n", err),
-				output,
-			)
+			return err
 		}

-		SuccessOutput(response, "Key deleted", output)
-	},
+		response, err := client.DeleteApiKey(ctx, &v1.DeleteApiKeyRequest{
+			Id:     id,
+			Prefix: prefix,
+		})
+		if err != nil {
+			return fmt.Errorf("deleting api key: %w", err)
+		}
+
+		return printOutput(cmd, response, "Key deleted")
+	}),
 }

cmd/headscale/cli/auth.go (new file)

@@ -0,0 +1,93 @@
package cli
import (
"context"
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(authCmd)
authRegisterCmd.Flags().StringP("user", "u", "", "User")
authRegisterCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authRegisterCmd, "user", "auth-id")
authCmd.AddCommand(authRegisterCmd)
authApproveCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authApproveCmd, "auth-id")
authCmd.AddCommand(authApproveCmd)
authRejectCmd.Flags().String("auth-id", "", "Auth ID")
mustMarkRequired(authRejectCmd, "auth-id")
authCmd.AddCommand(authRejectCmd)
}
var authCmd = &cobra.Command{
Use: "auth",
Short: "Manage node authentication and approval",
}
var authRegisterCmd = &cobra.Command{
Use: "register",
Short: "Register a node to your network",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
user, _ := cmd.Flags().GetString("user")
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthRegisterRequest{
AuthId: authID,
User: user,
}
response, err := client.AuthRegister(ctx, request)
if err != nil {
return fmt.Errorf("registering node: %w", err)
}
return printOutput(
cmd,
response.GetNode(),
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
}),
}
var authApproveCmd = &cobra.Command{
Use: "approve",
Short: "Approve a pending authentication request",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthApproveRequest{
AuthId: authID,
}
response, err := client.AuthApprove(ctx, request)
if err != nil {
return fmt.Errorf("approving auth request: %w", err)
}
return printOutput(cmd, response, "Auth request approved")
}),
}
var authRejectCmd = &cobra.Command{
Use: "reject",
Short: "Reject a pending authentication request",
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
authID, _ := cmd.Flags().GetString("auth-id")
request := &v1.AuthRejectRequest{
AuthId: authID,
}
response, err := client.AuthReject(ctx, request)
if err != nil {
return fmt.Errorf("rejecting auth request: %w", err)
}
return printOutput(cmd, response, "Auth request rejected")
}),
}


@@ -1,7 +1,8 @@
 package cli

 import (
-	"github.com/rs/zerolog/log"
+	"fmt"
+
 	"github.com/spf13/cobra"
 )
@@ -13,10 +14,12 @@ var configTestCmd = &cobra.Command{
 	Use:   "configtest",
 	Short: "Test the configuration.",
 	Long:  "Run a test of the configuration and exit.",
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		_, err := newHeadscaleServerWithConfig()
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msg("Error initializing")
+			return fmt.Errorf("configuration error: %w", err)
 		}
+
+		return nil
 	},
 }


@@ -1,44 +1,22 @@
 package cli

 import (
+	"context"
 	"fmt"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol/types"
-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-	"google.golang.org/grpc/status"
 )

-// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
-type Error string
-
-func (e Error) Error() string { return string(e) }
-
 func init() {
 	rootCmd.AddCommand(debugCmd)

 	createNodeCmd.Flags().StringP("name", "", "", "Name")
-	err := createNodeCmd.MarkFlagRequired("name")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	createNodeCmd.Flags().StringP("user", "u", "", "User")
-	createNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
-	createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	createNodeNamespaceFlag.Hidden = true
-	err = createNodeCmd.MarkFlagRequired("user")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	createNodeCmd.Flags().StringP("key", "k", "", "Key")
-	err = createNodeCmd.MarkFlagRequired("key")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	mustMarkRequired(createNodeCmd, "name", "user", "key")
 	createNodeCmd.Flags().
 		StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to advertise")
@@ -53,54 +31,18 @@ var debugCmd = &cobra.Command{
 var createNodeCmd = &cobra.Command{
 	Use:   "create-node",
-	Short: "Create a node that can be registered with `nodes register <>` command",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		name, err := cmd.Flags().GetString("name")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting node from flag: %s", err),
-				output,
-			)
-		}
-
-		registrationID, err := cmd.Flags().GetString("key")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting key from flag: %s", err),
-				output,
-			)
-		}
-
-		_, err = types.RegistrationIDFromString(registrationID)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to parse machine key from flag: %s", err),
-				output,
-			)
-		}
-
-		routes, err := cmd.Flags().GetStringSlice("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting routes from flag: %s", err),
-				output,
-			)
-		}
+	Short: "Create a node that can be registered with `auth register <>` command",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetString("user")
+		name, _ := cmd.Flags().GetString("name")
+		registrationID, _ := cmd.Flags().GetString("key")
+
+		_, err := types.AuthIDFromString(registrationID)
+		if err != nil {
+			return fmt.Errorf("parsing machine key: %w", err)
+		}
+
+		routes, _ := cmd.Flags().GetStringSlice("route")

 		request := &v1.DebugCreateNodeRequest{
 			Key:  registrationID,
@@ -111,13 +53,9 @@ var createNodeCmd = &cobra.Command{
 		response, err := client.DebugCreateNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				"Cannot create node: "+status.Convert(err).Message(),
-				output,
-			)
+			return fmt.Errorf("creating node: %w", err)
 		}

-		SuccessOutput(response.GetNode(), "Node created", output)
-	},
+		return printOutput(cmd, response.GetNode(), "Node created")
+	}),
 }


@@ -15,14 +15,12 @@ var dumpConfigCmd = &cobra.Command{
 	Use:    "dumpConfig",
 	Short:  "dump current config to /etc/headscale/config.dump.yaml, integration test only",
 	Hidden: true,
-	Args: func(cmd *cobra.Command, args []string) error {
-		return nil
-	},
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
 		if err != nil {
-			//nolint
-			fmt.Println("Failed to dump config")
+			return fmt.Errorf("dumping config: %w", err)
 		}
+
+		return nil
 	},
 }


@@ -21,22 +21,17 @@ var generateCmd = &cobra.Command{
 var generatePrivateKeyCmd = &cobra.Command{
 	Use:   "private-key",
 	Short: "Generate a private key for the headscale server",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: func(cmd *cobra.Command, args []string) error {
 		machineKey := key.NewMachine()

 		machineKeyStr, err := machineKey.MarshalText()
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine key from flag: %s", err),
-				output,
-			)
+			return fmt.Errorf("marshalling machine key: %w", err)
 		}

-		SuccessOutput(map[string]string{
-			"private_key": string(machineKeyStr),
-		},
-			string(machineKeyStr), output)
+		return printOutput(cmd, map[string]string{
+			"private_key": string(machineKeyStr),
+		},
+			string(machineKeyStr))
 	},
 }


@@ -1,6 +1,9 @@
 package cli

 import (
+	"context"
+	"fmt"
+
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/spf13/cobra"
 )
@@ -13,17 +16,12 @@ var healthCmd = &cobra.Command{
 	Use:   "health",
 	Short: "Check the health of the Headscale server",
 	Long:  "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		response, err := client.Health(ctx, &v1.HealthRequest{})
 		if err != nil {
-			ErrorOutput(err, "Error checking health", output)
+			return fmt.Errorf("checking health: %w", err)
 		}

-		SuccessOutput(response, "", output)
-	},
+		return printOutput(cmd, response, "")
+	}),
 }


@@ -1,8 +1,8 @@
 package cli

 import (
+	"context"
 	"encoding/json"
-	"errors"
 	"fmt"
 	"net"
 	"net/http"
@@ -10,15 +10,22 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/oauth2-proxy/mockoidc" "github.com/oauth2-proxy/mockoidc"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
type Error string
func (e Error) Error() string { return string(e) }
const ( const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined") errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined") errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined") errMockOidcPortNotDefined = Error("MOCKOIDC_PORT not defined")
errMockOidcUsersNotDefined = Error("MOCKOIDC_USERS not defined")
refreshTTL = 60 * time.Minute refreshTTL = 60 * time.Minute
) )
@@ -32,12 +39,13 @@ var mockOidcCmd = &cobra.Command{
 	Use:   "mockoidc",
 	Short: "Runs a mock OIDC server for testing",
 	Long:  "This internal command runs a OpenID Connect for testing purposes",
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		err := mockOIDC()
 		if err != nil {
-			log.Error().Err(err).Msgf("Error running mock OIDC server")
-			os.Exit(1)
+			return fmt.Errorf("running mock OIDC server: %w", err)
 		}
+
+		return nil
 	},
 }
@@ -46,41 +54,47 @@ func mockOIDC() error {
 	if clientID == "" {
 		return errMockOidcClientIDNotDefined
 	}

 	clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
 	if clientSecret == "" {
 		return errMockOidcClientSecretNotDefined
 	}

 	addrStr := os.Getenv("MOCKOIDC_ADDR")
 	if addrStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	portStr := os.Getenv("MOCKOIDC_PORT")
 	if portStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
 	if accessTTLOverride != "" {
 		newTTL, err := time.ParseDuration(accessTTLOverride)
 		if err != nil {
 			return err
 		}

 		accessTTL = newTTL
 	}

 	userStr := os.Getenv("MOCKOIDC_USERS")
 	if userStr == "" {
-		return errors.New("MOCKOIDC_USERS not defined")
+		return errMockOidcUsersNotDefined
 	}

 	var users []mockoidc.MockUser

 	err := json.Unmarshal([]byte(userStr), &users)
 	if err != nil {
 		return fmt.Errorf("unmarshalling users: %w", err)
 	}

-	log.Info().Interface("users", users).Msg("loading users from JSON")
+	log.Info().Interface(zf.Users, users).Msg("loading users from JSON")

-	log.Info().Msgf("Access token TTL: %s", accessTTL)
+	log.Info().Msgf("access token TTL: %s", accessTTL)

 	port, err := strconv.Atoi(portStr)
 	if err != nil {
@@ -92,7 +106,7 @@ func mockOIDC() error {
 		return err
 	}

-	listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
+	listener, err := new(net.ListenConfig).Listen(context.Background(), "tcp", fmt.Sprintf("%s:%d", addrStr, port))
 	if err != nil {
 		return err
 	}
@@ -101,8 +115,10 @@ func mockOIDC() error {
 	if err != nil {
 		return err
 	}

-	log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
-	log.Info().Msgf("Issuer: %s", mock.Issuer())
+	log.Info().Msgf("mock OIDC server listening on %s", listener.Addr().String())
+	log.Info().Msgf("issuer: %s", mock.Issuer())

 	c := make(chan struct{})
 	<-c
@@ -133,12 +149,13 @@ func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser
 		ErrorQueue:   &mockoidc.ErrorQueue{},
 	}

-	mock.AddMiddleware(func(h http.Handler) http.Handler {
+	_ = mock.AddMiddleware(func(h http.Handler) http.Handler {
 		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			log.Info().Msgf("Request: %+v", r)
+			log.Info().Msgf("request: %+v", r)
 			h.ServeHTTP(w, r)
 			if r.Response != nil {
-				log.Info().Msgf("Response: %+v", r.Response)
+				log.Info().Msgf("response: %+v", r.Response)
 			}
 		})
 	})


@@ -1,8 +1,8 @@
 package cli

 import (
+	"context"
 	"fmt"
-	"log"
 	"net/netip"
 	"strconv"
 	"strings"
@@ -13,7 +13,6 @@ import (
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/samber/lo" "github.com/samber/lo"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/timestamppb" "google.golang.org/protobuf/types/known/timestamppb"
"tailscale.com/types/key" "tailscale.com/types/key"
) )
@@ -21,63 +20,37 @@ import (
 func init() {
 	rootCmd.AddCommand(nodeCmd)
 	listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
-	listNodesCmd.Flags().StringP("namespace", "n", "", "User")
-	listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
-	listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	listNodesNamespaceFlag.Hidden = true
 	nodeCmd.AddCommand(listNodesCmd)

 	listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
 	nodeCmd.AddCommand(listNodeRoutesCmd)

 	registerNodeCmd.Flags().StringP("user", "u", "", "User")
-	registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
-	registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	registerNodeNamespaceFlag.Hidden = true
-	err := registerNodeCmd.MarkFlagRequired("user")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
 	registerNodeCmd.Flags().StringP("key", "k", "", "Key")
-	err = registerNodeCmd.MarkFlagRequired("key")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
+	mustMarkRequired(registerNodeCmd, "user", "key")
 	nodeCmd.AddCommand(registerNodeCmd)

 	expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
 	expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
-	err = expireNodeCmd.MarkFlagRequired("identifier")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
+	expireNodeCmd.Flags().BoolP("disable", "d", false, "Disable key expiry (node will never expire)")
+	mustMarkRequired(expireNodeCmd, "identifier")
 	nodeCmd.AddCommand(expireNodeCmd)

 	renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
-	err = renameNodeCmd.MarkFlagRequired("identifier")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
+	mustMarkRequired(renameNodeCmd, "identifier")
 	nodeCmd.AddCommand(renameNodeCmd)

 	deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
-	err = deleteNodeCmd.MarkFlagRequired("identifier")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
+	mustMarkRequired(deleteNodeCmd, "identifier")
 	nodeCmd.AddCommand(deleteNodeCmd)

 	tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
-	tagCmd.MarkFlagRequired("identifier")
+	mustMarkRequired(tagCmd, "identifier")
 	tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
 	nodeCmd.AddCommand(tagCmd)

 	approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
-	approveRoutesCmd.MarkFlagRequired("identifier")
+	mustMarkRequired(approveRoutesCmd, "identifier")
 	approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
 	nodeCmd.AddCommand(approveRoutesCmd)
@@ -87,31 +60,16 @@ func init() {
 var nodeCmd = &cobra.Command{
 	Use:     "nodes",
 	Short:   "Manage the nodes of Headscale",
-	Aliases: []string{"node", "machine", "machines"},
+	Aliases: []string{"node"},
 }

 var registerNodeCmd = &cobra.Command{
 	Use:   "register",
 	Short: "Registers a node to your network",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		registrationID, err := cmd.Flags().GetString("key")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting node key from flag: %s", err),
-				output,
-			)
-		}
+	Deprecated: "use 'headscale auth register --auth-id <id> --user <user>' instead",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetString("user")
+		registrationID, _ := cmd.Flags().GetString("key")

 		request := &v1.RegisterNodeRequest{
 			Key:  registrationID,
@@ -120,98 +78,49 @@ var registerNodeCmd = &cobra.Command{
 		response, err := client.RegisterNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot register node: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
+			return fmt.Errorf("registering node: %w", err)
 		}

-		SuccessOutput(
-			response.GetNode(),
-			fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()), output)
-	},
+		return printOutput(
+			cmd,
+			response.GetNode(),
+			fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
+	}),
 }

 var listNodesCmd = &cobra.Command{
 	Use:     "list",
 	Short:   "List nodes",
 	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ListNodesRequest{
-			User: user,
-		}
-
-		response, err := client.ListNodes(ctx, request)
-		if err != nil {
-			ErrorOutput(
-				err,
-				"Cannot get nodes: "+status.Convert(err).Message(),
-				output,
-			)
-		}
-
-		if output != "" {
-			SuccessOutput(response.GetNodes(), "", output)
-		}
-
-		tableData, err := nodesToPtables(user, response.GetNodes())
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
-		}
-
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetString("user")
+
+		response, err := client.ListNodes(ctx, &v1.ListNodesRequest{User: user})
+		if err != nil {
+			return fmt.Errorf("listing nodes: %w", err)
+		}
+
+		return printListOutput(cmd, response.GetNodes(), func() error {
+			tableData, err := nodesToPtables(user, response.GetNodes())
+			if err != nil {
+				return fmt.Errorf("converting to table: %w", err)
+			}
+
+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
 }

 var listNodeRoutesCmd = &cobra.Command{
 	Use:     "list-routes",
 	Short:   "List routes available on nodes",
 	Aliases: []string{"lsr", "routes"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ListNodesRequest{}
-
-		response, err := client.ListNodes(ctx, request)
-		if err != nil {
-			ErrorOutput(
-				err,
-				"Cannot get nodes: "+status.Convert(err).Message(),
-				output,
-			)
-		}
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+
+		response, err := client.ListNodes(ctx, &v1.ListNodesRequest{})
+		if err != nil {
+			return fmt.Errorf("listing nodes: %w", err)
+		}

 		nodes := response.GetNodes()
@@ -219,6 +128,7 @@ var listNodeRoutesCmd = &cobra.Command{
 		for _, node := range response.GetNodes() {
 			if node.GetId() == identifier {
 				nodes = []*v1.Node{node}
+
 				break
 			}
 		}
@@ -228,72 +138,51 @@ var listNodeRoutesCmd = &cobra.Command{
return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0) return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0)
}) })
if output != "" { return printListOutput(cmd, nodes, func() error {
SuccessOutput(nodes, "", output) return pterm.DefaultTable.WithHasHeader().WithData(nodeRoutesToPtables(nodes)).Render()
return })
} }),
tableData, err := nodeRoutesToPtables(nodes)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
}
},
} }
 var expireNodeCmd = &cobra.Command{
 	Use:   "expire",
 	Short: "Expire (log out) a node in your network",
-	Long:  "Expiring a node will keep the node in the database and force it to reauthenticate.",
+	Long: `Expiring a node will keep the node in the database and force it to reauthenticate.
+Use --disable to disable key expiry (node will never expire).`,
 	Aliases: []string{"logout", "exp", "e"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-		expiry, err := cmd.Flags().GetString("expiry")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting expiry to string: %s", err),
-				output,
-			)
-			return
-		}
-		expiryTime := time.Now()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		disableExpiry, _ := cmd.Flags().GetBool("disable")
+
+		// Handle disable expiry - node will never expire.
+		if disableExpiry {
+			request := &v1.ExpireNodeRequest{
+				NodeId:        identifier,
+				DisableExpiry: true,
+			}
+			response, err := client.ExpireNode(ctx, request)
+			if err != nil {
+				return fmt.Errorf("disabling node expiry: %w", err)
+			}
+			return printOutput(cmd, response.GetNode(), "Node expiry disabled")
+		}
+
+		expiry, _ := cmd.Flags().GetString("expiry")
+		now := time.Now()
+		expiryTime := now
 		if expiry != "" {
+			var err error
 			expiryTime, err = time.Parse(time.RFC3339, expiry)
 			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Error converting expiry to string: %s", err),
-					output,
-				)
-				return
+				return fmt.Errorf("parsing expiry time: %w", err)
 			}
 		}
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
 		request := &v1.ExpireNodeRequest{
 			NodeId: identifier,
 			Expiry: timestamppb.New(expiryTime),
@@ -301,43 +190,28 @@ var expireNodeCmd = &cobra.Command{
 		response, err := client.ExpireNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot expire node: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
+			return fmt.Errorf("expiring node: %w", err)
 		}
-		SuccessOutput(response.GetNode(), "Node expired", output)
-	},
+		if now.Equal(expiryTime) || now.After(expiryTime) {
+			return printOutput(cmd, response.GetNode(), "Node expired")
+		}
+
+		return printOutput(cmd, response.GetNode(), "Node expiration updated")
+	}),
 }
 var renameNodeCmd = &cobra.Command{
 	Use:   "rename NEW_NAME",
 	Short: "Renames a node in your network",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
 		newName := ""
 		if len(args) > 0 {
 			newName = args[0]
 		}
 		request := &v1.RenameNodeRequest{
 			NodeId:  identifier,
 			NewName: newName,
@@ -345,39 +219,19 @@ var renameNodeCmd = &cobra.Command{
 		response, err := client.RenameNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot rename node: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
+			return fmt.Errorf("renaming node: %w", err)
 		}
-		SuccessOutput(response.GetNode(), "Node renamed", output)
-	},
+		return printOutput(cmd, response.GetNode(), "Node renamed")
+	}),
 }
 var deleteNodeCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete a node",
 	Aliases: []string{"del"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
 		getRequest := &v1.GetNodeRequest{
 			NodeId: identifier,
@@ -385,49 +239,31 @@ var deleteNodeCmd = &cobra.Command{
 		getResponse, err := client.GetNode(ctx, getRequest)
 		if err != nil {
-			ErrorOutput(
-				err,
-				"Error getting node node: "+status.Convert(err).Message(),
-				output,
-			)
+			return fmt.Errorf("getting node: %w", err)
 		}
 		deleteRequest := &v1.DeleteNodeRequest{
 			NodeId: identifier,
 		}
-		confirm := false
-		force, _ := cmd.Flags().GetBool("force")
-		if !force {
-			confirm = util.YesNo(fmt.Sprintf(
-				"Do you want to remove the node %s?",
-				getResponse.GetNode().GetName(),
-			))
-		}
-		if confirm || force {
-			response, err := client.DeleteNode(ctx, deleteRequest)
-			if output != "" {
-				SuccessOutput(response, "", output)
-				return
-			}
-			if err != nil {
-				ErrorOutput(
-					err,
-					"Error deleting node: "+status.Convert(err).Message(),
-					output,
-				)
-			}
-			SuccessOutput(
-				map[string]string{"Result": "Node deleted"},
-				"Node deleted",
-				output,
-			)
-		} else {
-			SuccessOutput(map[string]string{"Result": "Node not deleted"}, "Node not deleted", output)
-		}
-	},
+		if !confirmAction(cmd, fmt.Sprintf(
+			"Do you want to remove the node %s?",
+			getResponse.GetNode().GetName(),
+		)) {
+			return printOutput(cmd, map[string]string{"Result": "Node not deleted"}, "Node not deleted")
+		}
+
+		_, err = client.DeleteNode(ctx, deleteRequest)
+		if err != nil {
+			return fmt.Errorf("deleting node: %w", err)
+		}
+
+		return printOutput(
+			cmd,
+			map[string]string{"Result": "Node deleted"},
+			"Node deleted",
+		)
+	}),
 }
 var backfillNodeIPsCmd = &cobra.Command{
@@ -445,32 +281,24 @@ all nodes that are missing.
 If you remove IPv4 or IPv6 prefixes from the config,
 it can be run to remove the IPs that should no longer
 be assigned to nodes.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		confirm := false
-		force, _ := cmd.Flags().GetBool("force")
-		if !force {
-			confirm = util.YesNo("Are you sure that you want to assign/remove IPs to/from nodes?")
-		}
-		if confirm || force {
-			ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-			defer cancel()
-			defer conn.Close()
-			changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm || force})
-			if err != nil {
-				ErrorOutput(
-					err,
-					"Error backfilling IPs: "+status.Convert(err).Message(),
-					output,
-				)
-			}
-			SuccessOutput(changes, "Node IPs backfilled successfully", output)
-		}
+	RunE: func(cmd *cobra.Command, args []string) error {
+		if !confirmAction(cmd, "Are you sure that you want to assign/remove IPs to/from nodes?") {
+			return nil
+		}
+
+		ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
+		if err != nil {
+			return fmt.Errorf("connecting to headscale: %w", err)
+		}
+		defer cancel()
+		defer conn.Close()
+
+		changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: true})
+		if err != nil {
+			return fmt.Errorf("backfilling IPs: %w", err)
+		}
+
+		return printOutput(cmd, changes, "Node IPs backfilled successfully")
 	},
 }
@@ -501,23 +329,30 @@ func nodesToPtables(
 			ephemeral = true
 		}
-		var lastSeen time.Time
-		var lastSeenTime string
+		var (
+			lastSeen     time.Time
+			lastSeenTime string
+		)
 		if node.GetLastSeen() != nil {
 			lastSeen = node.GetLastSeen().AsTime()
-			lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
+			lastSeenTime = lastSeen.Format(HeadscaleDateTimeFormat)
 		}
-		var expiry time.Time
-		var expiryTime string
+		var (
+			expiry     time.Time
+			expiryTime string
+		)
 		if node.GetExpiry() != nil {
 			expiry = node.GetExpiry().AsTime()
-			expiryTime = expiry.Format("2006-01-02 15:04:05")
+			expiryTime = expiry.Format(HeadscaleDateTimeFormat)
 		} else {
 			expiryTime = "N/A"
 		}
 		var machineKey key.MachinePublic
 		err := machineKey.UnmarshalText(
 			[]byte(node.GetMachineKey()),
 		)
@@ -526,6 +361,7 @@ func nodesToPtables(
 		}
 		var nodeKey key.NodePublic
 		err = nodeKey.UnmarshalText(
 			[]byte(node.GetNodeKey()),
 		)
@@ -541,42 +377,39 @@ func nodesToPtables(
 		}
 		var expired string
-		if expiry.IsZero() || expiry.After(time.Now()) {
-			expired = pterm.LightGreen("no")
-		} else {
+		if node.GetExpiry() != nil && node.GetExpiry().AsTime().Before(time.Now()) {
 			expired = pterm.LightRed("yes")
+		} else {
+			expired = pterm.LightGreen("no")
 		}
+		// TODO(kradalby): as part of CLI rework, we should add the posibility to show "unusable" tags as mentioned in
+		// https://github.com/juanfont/headscale/issues/2981
 		var tagsBuilder strings.Builder
 		for _, tag := range node.GetTags() {
 			tagsBuilder.WriteString("\n" + tag)
 		}
-		tags := tagsBuilder.String()
-		tags = strings.TrimLeft(tags, "\n")
+		tags := strings.TrimLeft(tagsBuilder.String(), "\n")
 		var user string
-		if currentUser == "" || (currentUser == node.GetUser().GetName()) {
-			user = pterm.LightMagenta(node.GetUser().GetName())
-		} else {
-			// Shared into this user
-			user = pterm.LightYellow(node.GetUser().GetName())
+		if node.GetUser() != nil {
+			user = node.GetUser().GetName()
 		}
-		var IPV4Address string
-		var IPV6Address string
+		var ipBuilder strings.Builder
 		for _, addr := range node.GetIpAddresses() {
-			if netip.MustParseAddr(addr).Is4() {
-				IPV4Address = addr
-			} else {
-				IPV6Address = addr
+			ip, err := netip.ParseAddr(addr)
+			if err == nil {
+				if ipBuilder.Len() > 0 {
+					ipBuilder.WriteString("\n")
+				}
+				ipBuilder.WriteString(ip.String())
 			}
 		}
+		ipAddresses := ipBuilder.String()
 		nodeData := []string{
 			strconv.FormatUint(node.GetId(), util.Base10),
 			node.GetName(),
@@ -585,7 +418,7 @@ func nodesToPtables(
 			nodeKey.ShortString(),
 			user,
 			tags,
-			strings.Join([]string{IPV4Address, IPV6Address}, ", "),
+			ipAddresses,
 			strconv.FormatBool(ephemeral),
 			lastSeenTime,
 			expiryTime,
@@ -603,7 +436,7 @@ func nodesToPtables(
 func nodeRoutesToPtables(
 	nodes []*v1.Node,
-) (pterm.TableData, error) {
+) pterm.TableData {
 	tableHeader := []string{
 		"ID",
 		"Hostname",
@@ -627,108 +460,50 @@ func nodeRoutesToPtables(
 		)
 	}
-	return tableData, nil
+	return tableData
 }
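The commands in this diff route table data through a `printListOutput` helper whose definition is not part of this compare view. The sketch below is a guess at its shape, using a plain `format` string where the real helper presumably reads the `--output` flag from the `*cobra.Command`; the name and signature here are assumptions, not the actual headscale implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// printListOutput sketches the assumed helper: when the user asked for a
// machine-readable format (e.g. --output json), marshal the list directly;
// otherwise defer to the caller-supplied table renderer.
func printListOutput(format string, list any, renderTable func() error) error {
	if format != "" {
		enc := json.NewEncoder(os.Stdout)
		enc.SetIndent("", "  ")
		return enc.Encode(list)
	}
	return renderTable()
}

func main() {
	nodes := []string{"node-a", "node-b"}
	// Table path: the closure runs only when no output format is requested.
	_ = printListOutput("", nodes, func() error {
		for _, n := range nodes {
			fmt.Println(n)
		}
		return nil
	})
}
```

This split lets helpers such as `nodeRoutesToPtables` return plain data while each command decides how to render it.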
 var tagCmd = &cobra.Command{
 	Use:     "tag",
 	Short:   "Manage the tags of a node",
 	Aliases: []string{"tags", "t"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-		// retrieve flags from CLI
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-		tagsToSet, err := cmd.Flags().GetStringSlice("tags")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
-				output,
-			)
-		}
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		tagsToSet, _ := cmd.Flags().GetStringSlice("tags")
 		// Sending tags to node
 		request := &v1.SetTagsRequest{
 			NodeId: identifier,
 			Tags:   tagsToSet,
 		}
 		resp, err := client.SetTags(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error while sending tags to headscale: %s", err),
-				output,
-			)
+			return fmt.Errorf("setting tags: %w", err)
 		}
-		if resp != nil {
-			SuccessOutput(
-				resp.GetNode(),
-				"Node updated",
-				output,
-			)
-		}
-	},
+		return printOutput(cmd, resp.GetNode(), "Node updated")
+	}),
 }
 var approveRoutesCmd = &cobra.Command{
 	Use:   "approve-routes",
 	Short: "Manage the approved routes of a node",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-		// retrieve flags from CLI
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-		}
-		routes, err := cmd.Flags().GetStringSlice("routes")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error retrieving list of routes to add to node, %v", err),
-				output,
-			)
-		}
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		routes, _ := cmd.Flags().GetStringSlice("routes")
 		// Sending routes to node
 		request := &v1.SetApprovedRoutesRequest{
 			NodeId: identifier,
 			Routes: routes,
 		}
 		resp, err := client.SetApprovedRoutes(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error while sending routes to headscale: %s", err),
-				output,
-			)
+			return fmt.Errorf("setting approved routes: %w", err)
 		}
-		if resp != nil {
-			SuccessOutput(
-				resp.GetNode(),
-				"Node updated",
-				output,
-			)
-		}
-	},
+		return printOutput(cmd, resp.GetNode(), "Node updated")
+	}),
 }


@@ -1,24 +1,41 @@
 package cli
 import (
+	"errors"
 	"fmt"
-	"io"
 	"os"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol/db"
 	"github.com/juanfont/headscale/hscontrol/policy"
 	"github.com/juanfont/headscale/hscontrol/types"
-	"github.com/juanfont/headscale/hscontrol/util"
-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 	"tailscale.com/types/views"
 )
 const (
-	bypassFlag = "bypass-grpc-and-access-database-directly"
+	bypassFlag = "bypass-grpc-and-access-database-directly" //nolint:gosec // not a credential
 )
+var errAborted = errors.New("command aborted by user")
+
+// bypassDatabase loads the server config and opens the database directly,
+// bypassing the gRPC server. The caller is responsible for closing the
+// returned database handle.
+func bypassDatabase() (*db.HSDatabase, error) {
+	cfg, err := types.LoadServerConfig()
+	if err != nil {
+		return nil, fmt.Errorf("loading config: %w", err)
+	}
+
+	d, err := db.NewHeadscaleDatabase(cfg)
+	if err != nil {
+		return nil, fmt.Errorf("opening database: %w", err)
+	}
+
+	return d, nil
+}
 func init() {
 	rootCmd.AddCommand(policyCmd)
@@ -26,16 +43,12 @@ func init() {
 	policyCmd.AddCommand(getPolicy)
 	setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
-	if err := setPolicy.MarkFlagRequired("file"); err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
+	mustMarkRequired(setPolicy, "file")
 	policyCmd.AddCommand(setPolicy)
 	checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
-	if err := checkPolicy.MarkFlagRequired("file"); err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	mustMarkRequired(checkPolicy, "file")
 	policyCmd.AddCommand(checkPolicy)
 }
@@ -48,59 +61,46 @@ var getPolicy = &cobra.Command{
 	Use:     "get",
 	Short:   "Print the current ACL Policy",
 	Aliases: []string{"show", "view", "fetch"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		var policy string
+	RunE: func(cmd *cobra.Command, args []string) error {
+		var policyData string
 		if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
-			confirm := false
-			force, _ := cmd.Flags().GetBool("force")
-			if !force {
-				confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
-			}
-			if !confirm && !force {
-				ErrorOutput(nil, "Aborting command", output)
-				return
-			}
-			cfg, err := types.LoadServerConfig()
+			if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
+				return errAborted
+			}
+
+			d, err := bypassDatabase()
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
-			}
-			d, err := db.NewHeadscaleDatabase(
-				cfg,
-				nil,
-			)
-			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
+				return err
 			}
+			defer d.Close()
 			pol, err := d.GetPolicy()
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed loading Policy from database: %s", err), output)
+				return fmt.Errorf("loading policy from database: %w", err)
 			}
-			policy = pol.Data
+			policyData = pol.Data
 		} else {
-			ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+			ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
+			if err != nil {
+				return fmt.Errorf("connecting to headscale: %w", err)
+			}
 			defer cancel()
 			defer conn.Close()
-			request := &v1.GetPolicyRequest{}
-			response, err := client.GetPolicy(ctx, request)
+			response, err := client.GetPolicy(ctx, &v1.GetPolicyRequest{})
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
+				return fmt.Errorf("loading ACL policy: %w", err)
 			}
-			policy = response.GetPolicy()
+			policyData = response.GetPolicy()
 		}
-		// TODO(pallabpain): Maybe print this better?
-		// This does not pass output as we dont support yaml, json or
-		// json-line output for this command. It is HuJSON already.
-		SuccessOutput("", policy, "")
+		// This does not pass output format as we don't support yaml, json or
+		// json-line output for this command. It is HuJSON already.
+		fmt.Println(policyData)
+
+		return nil
 	},
 }
@@ -111,100 +111,79 @@ var setPolicy = &cobra.Command{
 Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
 This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
 	Aliases: []string{"put", "update"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: func(cmd *cobra.Command, args []string) error {
 		policyPath, _ := cmd.Flags().GetString("file")
-		f, err := os.Open(policyPath)
+		policyBytes, err := os.ReadFile(policyPath)
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
-		}
-		defer f.Close()
-		policyBytes, err := io.ReadAll(f)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
+			return fmt.Errorf("reading policy file: %w", err)
 		}
 		if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
-			confirm := false
-			force, _ := cmd.Flags().GetBool("force")
-			if !force {
-				confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
-			}
-			if !confirm && !force {
-				ErrorOutput(nil, "Aborting command", output)
-				return
-			}
-			cfg, err := types.LoadServerConfig()
+			if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
+				return errAborted
+			}
+
+			d, err := bypassDatabase()
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
-			}
-			d, err := db.NewHeadscaleDatabase(
-				cfg,
-				nil,
-			)
-			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
+				return err
 			}
+			defer d.Close()
 			users, err := d.ListUsers()
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed to load users for policy validation: %s", err), output)
+				return fmt.Errorf("loading users for policy validation: %w", err)
 			}
 			_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
-				return
+				return fmt.Errorf("parsing policy file: %w", err)
 			}
 			_, err = d.SetPolicy(string(policyBytes))
 			if err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
+				return fmt.Errorf("setting ACL policy: %w", err)
 			}
 		} else {
 			request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
-			ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+			ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
+			if err != nil {
+				return fmt.Errorf("connecting to headscale: %w", err)
+			}
 			defer cancel()
 			defer conn.Close()
-			if _, err := client.SetPolicy(ctx, request); err != nil {
-				ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
+			_, err = client.SetPolicy(ctx, request)
+			if err != nil {
+				return fmt.Errorf("setting ACL policy: %w", err)
 			}
 		}
-		SuccessOutput(nil, "Policy updated.", "")
+		fmt.Println("Policy updated.")
+
+		return nil
 	},
 }
 var checkPolicy = &cobra.Command{
 	Use:   "check",
 	Short: "Check the Policy file for errors",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: func(cmd *cobra.Command, args []string) error {
 		policyPath, _ := cmd.Flags().GetString("file")
-		f, err := os.Open(policyPath)
+		policyBytes, err := os.ReadFile(policyPath)
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
-		}
-		defer f.Close()
-		policyBytes, err := io.ReadAll(f)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
+			return fmt.Errorf("reading policy file: %w", err)
 		}
 		_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
+			return fmt.Errorf("parsing policy file: %w", err)
 		}
-		SuccessOutput(nil, "Policy is valid", "")
+		fmt.Println("Policy is valid")
+
+		return nil
 	},
 }


@@ -1,17 +1,15 @@
 package cli
 import (
+	"context"
 	"fmt"
 	"strconv"
 	"strings"
-	"time"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/prometheus/common/model"
+	"github.com/juanfont/headscale/hscontrol/util"
 	"github.com/pterm/pterm"
-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-	"google.golang.org/protobuf/types/known/timestamppb"
 )
 const (
@@ -47,207 +45,134 @@ var listPreAuthKeys = &cobra.Command{
 	Use:     "list",
 	Short:   "List all preauthkeys",
 	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting the list of keys: %s", err),
-				output,
-			)
-			return
+			return fmt.Errorf("listing preauthkeys: %w", err)
 		}
-		if output != "" {
-			SuccessOutput(response.GetPreAuthKeys(), "", output)
-		}
-		tableData := pterm.TableData{
-			{
-				"ID",
-				"Key/Prefix",
-				"Reusable",
-				"Ephemeral",
-				"Used",
-				"Expiration",
-				"Created",
-				"Owner",
-			},
-		}
-		for _, key := range response.GetPreAuthKeys() {
-			expiration := "-"
-			if key.GetExpiration() != nil {
-				expiration = ColourTime(key.GetExpiration().AsTime())
-			}
-			var owner string
-			if len(key.GetAclTags()) > 0 {
-				owner = strings.Join(key.GetAclTags(), "\n")
-			} else if key.GetUser() != nil {
-				owner = key.GetUser().GetName()
-			} else {
-				owner = "-"
-			}
-			tableData = append(tableData, []string{
-				strconv.FormatUint(key.GetId(), 10),
-				key.GetKey(),
-				strconv.FormatBool(key.GetReusable()),
-				strconv.FormatBool(key.GetEphemeral()),
-				strconv.FormatBool(key.GetUsed()),
-				expiration,
-				key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
-				owner,
-			})
-		}
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+		return printListOutput(cmd, response.GetPreAuthKeys(), func() error {
+			tableData := pterm.TableData{
+				{
+					"ID",
+					"Key/Prefix",
+					"Reusable",
+					"Ephemeral",
+					"Used",
+					"Expiration",
+					"Created",
+					"Owner",
+				},
+			}
+			for _, key := range response.GetPreAuthKeys() {
+				expiration := "-"
+				if key.GetExpiration() != nil {
+					expiration = ColourTime(key.GetExpiration().AsTime())
+				}
+
+				var owner string
+				if len(key.GetAclTags()) > 0 {
+					owner = strings.Join(key.GetAclTags(), "\n")
+				} else if key.GetUser() != nil {
+					owner = key.GetUser().GetName()
+				} else {
+					owner = "-"
+				}
+
+				tableData = append(tableData, []string{
+					strconv.FormatUint(key.GetId(), util.Base10),
+					key.GetKey(),
+					strconv.FormatBool(key.GetReusable()),
+					strconv.FormatBool(key.GetEphemeral()),
+					strconv.FormatBool(key.GetUsed()),
+					expiration,
+					key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
+					owner,
+				})
+			}
+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
 }
 var createPreAuthKeyCmd = &cobra.Command{
 	Use:     "create",
 	Short:   "Creates a new preauthkey",
 	Aliases: []string{"c", "new"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		user, _ := cmd.Flags().GetUint64("user")
 		reusable, _ := cmd.Flags().GetBool("reusable")
 		ephemeral, _ := cmd.Flags().GetBool("ephemeral")
 		tags, _ := cmd.Flags().GetStringSlice("tags")
-		request := &v1.CreatePreAuthKeyRequest{
-			User:      user,
-			Reusable:  reusable,
-			Ephemeral: ephemeral,
-			AclTags:   tags,
-		}
-		durationStr, _ := cmd.Flags().GetString("expiration")
-		duration, err := model.ParseDuration(durationStr)
+		expiration, err := expirationFromFlag(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Could not parse duration: %s\n", err),
-				output,
-			)
+			return err
 		}
-		expiration := time.Now().UTC().Add(time.Duration(duration))
-		log.Trace().
-			Dur("expiration", time.Duration(duration)).
-			Msg("expiration has been set")
-		request.Expiration = timestamppb.New(expiration)
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+		request := &v1.CreatePreAuthKeyRequest{
+			User:       user,
+			Reusable:   reusable,
+			Ephemeral:  ephemeral,
+			AclTags:    tags,
+			Expiration: expiration,
+		}
 		response, err := client.CreatePreAuthKey(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("creating preauthkey: %w", err)
 		}
-		SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
-	},
+		return printOutput(cmd, response.GetPreAuthKey(), response.GetPreAuthKey().GetKey())
+	}),
 }
 var expirePreAuthKeyCmd = &cobra.Command{
 	Use:     "expire",
 	Short:   "Expire a preauthkey",
 	Aliases: []string{"revoke", "exp", "e"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		id, _ := cmd.Flags().GetUint64("id")
 		if id == 0 {
-			ErrorOutput(
-				errMissingParameter,
-				"Error: missing --id parameter",
-				output,
-			)
-			return
+			return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
 		}
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
 		request := &v1.ExpirePreAuthKeyRequest{
 			Id: id,
 		}
 		response, err := client.ExpirePreAuthKey(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("expiring preauthkey: %w", err)
 		}
-		SuccessOutput(response, "Key expired", output)
-	},
+		return printOutput(cmd, response, "Key expired")
+	}),
 }
var deletePreAuthKeyCmd = &cobra.Command{ var deletePreAuthKeyCmd = &cobra.Command{
Use: "delete", Use: "delete",
Short: "Delete a preauthkey", Short: "Delete a preauthkey",
Aliases: []string{"del", "rm", "d"}, Aliases: []string{"del", "rm", "d"},
Run: func(cmd *cobra.Command, args []string) { RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output")
id, _ := cmd.Flags().GetUint64("id") id, _ := cmd.Flags().GetUint64("id")
if id == 0 { if id == 0 {
ErrorOutput( return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
errMissingParameter,
"Error: missing --id parameter",
output,
)
return
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeletePreAuthKeyRequest{ request := &v1.DeletePreAuthKeyRequest{
Id: id, Id: id,
} }
response, err := client.DeletePreAuthKey(ctx, request) response, err := client.DeletePreAuthKey(ctx, request)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("deleting preauthkey: %w", err)
err,
fmt.Sprintf("Cannot delete Pre Auth Key: %s\n", err),
output,
)
} }
SuccessOutput(response, "Key deleted", output) return printOutput(cmd, response, "Key deleted")
}, }),
} }

View File

@@ -7,7 +7,7 @@ import (
) )
func ColourTime(date time.Time) string { func ColourTime(date time.Time) string {
dateStr := date.Format("2006-01-02 15:04:05") dateStr := date.Format(HeadscaleDateTimeFormat)
if date.After(time.Now()) { if date.After(time.Now()) {
dateStr = pterm.LightGreen(dateStr) dateStr = pterm.LightGreen(dateStr)

View File

@@ -1,7 +1,6 @@
package cli package cli
import ( import (
"fmt"
"os" "os"
"runtime" "runtime"
"slices" "slices"
@@ -15,10 +14,6 @@ import (
"github.com/tcnksm/go-latest" "github.com/tcnksm/go-latest"
) )
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = "" var cfgFile string = ""
func init() { func init() {
@@ -39,25 +34,34 @@ func init() {
StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'") StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
rootCmd.PersistentFlags(). rootCmd.PersistentFlags().
Bool("force", false, "Disable prompts and forces the execution") Bool("force", false, "Disable prompts and forces the execution")
// Re-enable usage output only for flag-parsing errors; runtime errors
// from RunE should never dump usage text.
rootCmd.SetFlagErrorFunc(func(cmd *cobra.Command, err error) error {
cmd.SilenceUsage = false
return err
})
} }
func initConfig() { func initConfig() {
if cfgFile == "" { if cfgFile == "" {
cfgFile = os.Getenv("HEADSCALE_CONFIG") cfgFile = os.Getenv("HEADSCALE_CONFIG")
} }
if cfgFile != "" { if cfgFile != "" {
err := types.LoadConfig(cfgFile, true) err := types.LoadConfig(cfgFile, true)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile) log.Fatal().Caller().Err(err).Msgf("error loading config file %s", cfgFile)
} }
} else { } else {
err := types.LoadConfig("", false) err := types.LoadConfig("", false)
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Error loading config") log.Fatal().Caller().Err(err).Msgf("error loading config")
} }
} }
machineOutput := HasMachineOutputFlag() machineOutput := hasMachineOutputFlag()
// If the user has requested a "node" readable format, // If the user has requested a "node" readable format,
// then disable login so the output remains valid. // then disable login so the output remains valid.
@@ -80,6 +84,7 @@ func initConfig() {
Repository: "headscale", Repository: "headscale",
TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }), TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
} }
res, err := latest.Check(githubTag, versionInfo.Version) res, err := latest.Check(githubTag, versionInfo.Version)
if err == nil && res.Outdated { if err == nil && res.Outdated {
//nolint //nolint
@@ -101,6 +106,7 @@ func isPreReleaseVersion(version string) bool {
return true return true
} }
} }
return false return false
} }
@@ -137,11 +143,15 @@ var rootCmd = &cobra.Command{
headscale is an open source implementation of the Tailscale control server headscale is an open source implementation of the Tailscale control server
https://github.com/juanfont/headscale`, https://github.com/juanfont/headscale`,
SilenceErrors: true,
SilenceUsage: true,
} }
func Execute() { func Execute() {
if err := rootCmd.Execute(); err != nil { cmd, err := rootCmd.ExecuteC()
fmt.Fprintln(os.Stderr, err) if err != nil {
outputFormat, _ := cmd.Flags().GetString("output")
printError(err, outputFormat)
os.Exit(1) os.Exit(1)
} }
} }

View File

@@ -5,7 +5,6 @@ import (
"fmt" "fmt"
"net/http" "net/http"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/tailscale/squibble" "github.com/tailscale/squibble"
) )
@@ -17,24 +16,22 @@ func init() {
var serveCmd = &cobra.Command{ var serveCmd = &cobra.Command{
Use: "serve", Use: "serve",
Short: "Launches the headscale server", Short: "Launches the headscale server",
Args: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, args []string) error {
return nil
},
Run: func(cmd *cobra.Command, args []string) {
app, err := newHeadscaleServerWithConfig() app, err := newHeadscaleServerWithConfig()
if err != nil { if err != nil {
var squibbleErr squibble.ValidationError if squibbleErr, ok := errors.AsType[squibble.ValidationError](err); ok {
if errors.As(err, &squibbleErr) {
fmt.Printf("SQLite schema failed to validate:\n") fmt.Printf("SQLite schema failed to validate:\n")
fmt.Println(squibbleErr.Diff) fmt.Println(squibbleErr.Diff)
} }
log.Fatal().Caller().Err(err).Msg("Error initializing") return fmt.Errorf("initializing: %w", err)
} }
err = app.Serve() err = app.Serve()
if err != nil && !errors.Is(err, http.ErrServerClosed) { if err != nil && !errors.Is(err, http.ErrServerClosed) {
log.Fatal().Caller().Err(err).Msg("Headscale ran into an error and had to shut down.") return fmt.Errorf("headscale ran into an error and had to shut down: %w", err)
} }
return nil
}, },
} }

View File

@@ -1,6 +1,7 @@
package cli package cli
import ( import (
"context"
"errors" "errors"
"fmt" "fmt"
"net/url" "net/url"
@@ -8,10 +9,16 @@ import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util" "github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/pterm/pterm" "github.com/pterm/pterm"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"google.golang.org/grpc/status" )
// CLI user errors.
var (
errFlagRequired = errors.New("--name or --identifier flag is required")
errMultipleUsersMatch = errors.New("multiple users match query, specify an ID")
) )
func usernameAndIDFlag(cmd *cobra.Command) { func usernameAndIDFlag(cmd *cobra.Command) {
@@ -20,20 +27,21 @@ func usernameAndIDFlag(cmd *cobra.Command) {
} }
// usernameAndIDFromFlag returns the username and ID from the flags of the command. // usernameAndIDFromFlag returns the username and ID from the flags of the command.
// If both are empty, it will exit the program with an error. func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string, error) {
func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string) {
username, _ := cmd.Flags().GetString("name") username, _ := cmd.Flags().GetString("name")
identifier, _ := cmd.Flags().GetInt64("identifier") identifier, _ := cmd.Flags().GetInt64("identifier")
if username == "" && identifier < 0 { if username == "" && identifier < 0 {
err := errors.New("--name or --identifier flag is required") return 0, "", errFlagRequired
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
"",
)
} }
return uint64(identifier), username // Normalise unset/negative identifiers to 0 so the uint64
// conversion does not produce a bogus large value.
if identifier < 0 {
identifier = 0
}
return uint64(identifier), username, nil //nolint:gosec // identifier is clamped to >= 0 above
} }
func init() { func init() {
@@ -50,15 +58,13 @@ func init() {
userCmd.AddCommand(renameUserCmd) userCmd.AddCommand(renameUserCmd)
usernameAndIDFlag(renameUserCmd) usernameAndIDFlag(renameUserCmd)
renameUserCmd.Flags().StringP("new-name", "r", "", "New username") renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
renameNodeCmd.MarkFlagRequired("new-name") mustMarkRequired(renameUserCmd, "new-name")
} }
var errMissingParameter = errors.New("missing parameters")
var userCmd = &cobra.Command{ var userCmd = &cobra.Command{
Use: "users", Use: "users",
Short: "Manage the users of Headscale", Short: "Manage the users of Headscale",
Aliases: []string{"user", "namespace", "namespaces", "ns"}, Aliases: []string{"user"},
} }
var createUserCmd = &cobra.Command{ var createUserCmd = &cobra.Command{
@@ -72,16 +78,10 @@ var createUserCmd = &cobra.Command{
return nil return nil
}, },
Run: func(cmd *cobra.Command, args []string) { RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output")
userName := args[0] userName := args[0]
ctx, client, conn, cancel := newHeadscaleCLIWithConfig() log.Trace().Interface(zf.Client, client).Msg("obtained gRPC client")
defer cancel()
defer conn.Close()
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateUserRequest{Name: userName} request := &v1.CreateUserRequest{Name: userName}
@@ -94,108 +94,73 @@ var createUserCmd = &cobra.Command{
} }
if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" { if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" {
if _, err := url.Parse(pictureURL); err != nil { if _, err := url.Parse(pictureURL); err != nil { //nolint:noinlineerr
ErrorOutput( return fmt.Errorf("invalid picture URL: %w", err)
err,
fmt.Sprintf(
"Invalid Picture URL: %s",
err,
),
output,
)
} }
request.PictureUrl = pictureURL request.PictureUrl = pictureURL
} }
log.Trace().Interface("request", request).Msg("Sending CreateUser request") log.Trace().Interface(zf.Request, request).Msg("sending CreateUser request")
response, err := client.CreateUser(ctx, request) response, err := client.CreateUser(ctx, request)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("creating user: %w", err)
err,
"Cannot create user: "+status.Convert(err).Message(),
output,
)
} }
SuccessOutput(response.GetUser(), "User created", output) return printOutput(cmd, response.GetUser(), "User created")
}, }),
} }
var destroyUserCmd = &cobra.Command{ var destroyUserCmd = &cobra.Command{
Use: "destroy --identifier ID or --name NAME", Use: "destroy --identifier ID or --name NAME",
Short: "Destroys a user", Short: "Destroys a user",
Aliases: []string{"delete"}, Aliases: []string{"delete"},
Run: func(cmd *cobra.Command, args []string) { RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output") id, username, err := usernameAndIDFromFlag(cmd)
if err != nil {
return err
}
id, username := usernameAndIDFromFlag(cmd)
request := &v1.ListUsersRequest{ request := &v1.ListUsersRequest{
Name: username, Name: username,
Id: id, Id: id,
} }
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
users, err := client.ListUsers(ctx, request) users, err := client.ListUsers(ctx, request)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("listing users: %w", err)
err,
"Error: "+status.Convert(err).Message(),
output,
)
} }
if len(users.GetUsers()) != 1 { if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID") return errMultipleUsersMatch
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
} }
user := users.GetUsers()[0] user := users.GetUsers()[0]
confirm := false if !confirmAction(cmd, fmt.Sprintf(
force, _ := cmd.Flags().GetBool("force") "Do you want to remove the user %q (%d) and any associated preauthkeys?",
if !force { user.GetName(), user.GetId(),
confirm = util.YesNo(fmt.Sprintf( )) {
"Do you want to remove the user %q (%d) and any associated preauthkeys?", return printOutput(cmd, map[string]string{"Result": "User not destroyed"}, "User not destroyed")
user.GetName(), user.GetId(),
))
} }
if confirm || force { deleteRequest := &v1.DeleteUserRequest{Id: user.GetId()}
request := &v1.DeleteUserRequest{Id: user.GetId()}
response, err := client.DeleteUser(ctx, request) response, err := client.DeleteUser(ctx, deleteRequest)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("destroying user: %w", err)
err,
"Cannot destroy user: "+status.Convert(err).Message(),
output,
)
}
SuccessOutput(response, "User destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
} }
},
return printOutput(cmd, response, "User destroyed")
}),
} }
var listUsersCmd = &cobra.Command{ var listUsersCmd = &cobra.Command{
Use: "list", Use: "list",
Short: "List all the users", Short: "List all the users",
Aliases: []string{"ls", "show"}, Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) { RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.ListUsersRequest{} request := &v1.ListUsersRequest{}
id, _ := cmd.Flags().GetInt64("identifier") id, _ := cmd.Flags().GetInt64("identifier")
@@ -214,53 +179,39 @@ var listUsersCmd = &cobra.Command{
response, err := client.ListUsers(ctx, request) response, err := client.ListUsers(ctx, request)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("listing users: %w", err)
err,
"Cannot get users: "+status.Convert(err).Message(),
output,
)
} }
if output != "" { return printListOutput(cmd, response.GetUsers(), func() error {
SuccessOutput(response.GetUsers(), "", output) tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
} for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
strconv.FormatUint(user.GetId(), util.Base10),
user.GetDisplayName(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
},
)
}
tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}} return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
for _, user := range response.GetUsers() { })
tableData = append( }),
tableData,
[]string{
strconv.FormatUint(user.GetId(), 10),
user.GetDisplayName(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
}
},
} }
var renameUserCmd = &cobra.Command{ var renameUserCmd = &cobra.Command{
Use: "rename", Use: "rename",
Short: "Renames a user", Short: "Renames a user",
Aliases: []string{"mv"}, Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) { RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output") id, username, err := usernameAndIDFromFlag(cmd)
if err != nil {
return err
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
id, username := usernameAndIDFromFlag(cmd)
listReq := &v1.ListUsersRequest{ listReq := &v1.ListUsersRequest{
Name: username, Name: username,
Id: id, Id: id,
@@ -268,20 +219,11 @@ var renameUserCmd = &cobra.Command{
users, err := client.ListUsers(ctx, listReq) users, err := client.ListUsers(ctx, listReq)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("listing users: %w", err)
err,
"Error: "+status.Convert(err).Message(),
output,
)
} }
if len(users.GetUsers()) != 1 { if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID") return errMultipleUsersMatch
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
} }
newName, _ := cmd.Flags().GetString("new-name") newName, _ := cmd.Flags().GetString("new-name")
@@ -293,13 +235,9 @@ var renameUserCmd = &cobra.Command{
response, err := client.RenameUser(ctx, renameReq) response, err := client.RenameUser(ctx, renameReq)
if err != nil { if err != nil {
ErrorOutput( return fmt.Errorf("renaming user: %w", err)
err,
"Cannot rename user: "+status.Convert(err).Message(),
output,
)
} }
SuccessOutput(response.GetUser(), "User renamed", output) return printOutput(cmd, response.GetUser(), "User renamed")
}, }),
} }

View File

@@ -4,25 +4,52 @@ import (
"context" "context"
"crypto/tls" "crypto/tls"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"os" "os"
"time"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1" v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol" "github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/types" "github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util" "github.com/juanfont/headscale/hscontrol/util"
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
"github.com/prometheus/common/model"
"github.com/rs/zerolog/log" "github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc" "google.golang.org/grpc"
"google.golang.org/grpc/credentials" "google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure" "google.golang.org/grpc/credentials/insecure"
"google.golang.org/protobuf/types/known/timestamppb"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
) )
const ( const (
HeadscaleDateTimeFormat = "2006-01-02 15:04:05" HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
SocketWritePermissions = 0o666 SocketWritePermissions = 0o666
outputFormatJSON = "json"
outputFormatJSONLine = "json-line"
outputFormatYAML = "yaml"
) )
var (
errAPIKeyNotSet = errors.New("HEADSCALE_CLI_API_KEY environment variable needs to be set")
errMissingParameter = errors.New("missing parameters")
)
// mustMarkRequired marks the named flags as required on cmd, panicking
// if any name does not match a registered flag. This is only called
// from init() where a failure indicates a programming error.
func mustMarkRequired(cmd *cobra.Command, names ...string) {
for _, n := range names {
err := cmd.MarkFlagRequired(n)
if err != nil {
panic(fmt.Sprintf("marking flag %q required on %q: %v", n, cmd.Name(), err))
}
}
}
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) { func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
cfg, err := types.LoadServerConfig() cfg, err := types.LoadServerConfig()
if err != nil { if err != nil {
@@ -40,14 +67,28 @@ func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
return app, nil return app, nil
} }
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) { // grpcRunE wraps a cobra RunE func, injecting a ready gRPC client and
// context. Connection lifecycle is managed by the wrapper — callers
// never see the underlying conn or cancel func.
func grpcRunE(
fn func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error,
) func(*cobra.Command, []string) error {
return func(cmd *cobra.Command, args []string) error {
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
if err != nil {
return fmt.Errorf("connecting to headscale: %w", err)
}
defer cancel()
defer conn.Close()
return fn(ctx, client, cmd, args)
}
}
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc, error) {
cfg, err := types.LoadCLIConfig() cfg, err := types.LoadCLIConfig()
if err != nil { if err != nil {
log.Fatal(). return nil, nil, nil, nil, fmt.Errorf("loading configuration: %w", err)
Err(err).
Caller().
Msgf("Failed to load configuration")
os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
} }
log.Debug(). log.Debug().
@@ -57,7 +98,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout) ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
grpcOptions := []grpc.DialOption{ grpcOptions := []grpc.DialOption{
grpc.WithBlock(), grpc.WithBlock(), //nolint:staticcheck // SA1019: deprecated but supported in 1.x
} }
address := cfg.CLI.Address address := cfg.CLI.Address
@@ -71,17 +112,23 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
address = cfg.UnixSocket address = cfg.UnixSocket
// Try to give the user better feedback if we cannot write to the headscale // Try to give the user better feedback if we cannot write to the headscale
// socket. // socket. Note: os.OpenFile on a Unix domain socket returns ENXIO on
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint // Linux which is expected — only permission errors are actionable here.
// The actual gRPC connection uses net.Dial which handles sockets properly.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
if err != nil { if err != nil {
if os.IsPermission(err) { if os.IsPermission(err) {
log.Fatal(). cancel()
Err(err).
Str("socket", cfg.UnixSocket). return nil, nil, nil, nil, fmt.Errorf(
Msgf("Unable to read/write to headscale socket, do you have the correct permissions?") "unable to read/write to headscale socket %q, do you have the correct permissions? %w",
cfg.UnixSocket,
err,
)
} }
} else {
socket.Close()
} }
socket.Close()
grpcOptions = append( grpcOptions = append(
grpcOptions, grpcOptions,
@@ -92,8 +139,11 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
// If we are not connecting to a local server, require an API key for authentication // If we are not connecting to a local server, require an API key for authentication
apiKey := cfg.CLI.APIKey apiKey := cfg.CLI.APIKey
if apiKey == "" { if apiKey == "" {
log.Fatal().Caller().Msgf("HEADSCALE_CLI_API_KEY environment variable needs to be set.") cancel()
return nil, nil, nil, nil, errAPIKeyNotSet
} }
grpcOptions = append(grpcOptions, grpcOptions = append(grpcOptions,
grpc.WithPerRPCCredentials(tokenAuth{ grpc.WithPerRPCCredentials(tokenAuth{
token: apiKey, token: apiKey,
@@ -118,71 +168,136 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
} }
} }
log.Trace().Caller().Str("address", address).Msg("Connecting via gRPC") log.Trace().Caller().Str(zf.Address, address).Msg("connecting via gRPC")
conn, err := grpc.DialContext(ctx, address, grpcOptions...)
conn, err := grpc.DialContext(ctx, address, grpcOptions...) //nolint:staticcheck // SA1019: deprecated but supported in 1.x
if err != nil { if err != nil {
log.Fatal().Caller().Err(err).Msgf("Could not connect: %v", err) cancel()
os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
return nil, nil, nil, nil, fmt.Errorf("connecting to %s: %w", address, err)
} }
client := v1.NewHeadscaleServiceClient(conn) client := v1.NewHeadscaleServiceClient(conn)
return ctx, client, conn, cancel return ctx, client, conn, cancel, nil
} }
func output(result any, override string, outputFormat string) string { // formatOutput serialises result into the requested format. For the
var jsonBytes []byte // default (empty) format the human-readable override string is returned.
var err error func formatOutput(result any, override string, outputFormat string) (string, error) {
switch outputFormat { switch outputFormat {
case "json": case outputFormatJSON:
jsonBytes, err = json.MarshalIndent(result, "", "\t") b, err := json.MarshalIndent(result, "", "\t")
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") return "", fmt.Errorf("marshalling JSON output: %w", err)
} }
case "json-line":
jsonBytes, err = json.Marshal(result) return string(b), nil
case outputFormatJSONLine:
b, err := json.Marshal(result)
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") return "", fmt.Errorf("marshalling JSON-line output: %w", err)
} }
case "yaml":
jsonBytes, err = yaml.Marshal(result) return string(b), nil
case outputFormatYAML:
b, err := yaml.Marshal(result)
if err != nil { if err != nil {
log.Fatal().Err(err).Msg("failed to unmarshal output") return "", fmt.Errorf("marshalling YAML output: %w", err)
} }
return string(b), nil
default: default:
// nolint return override, nil
return override }
}
// printOutput formats result and writes it to stdout. It reads the --output
// flag from cmd to decide the serialisation format.
func printOutput(cmd *cobra.Command, result any, override string) error {
format, _ := cmd.Flags().GetString("output")
out, err := formatOutput(result, override, format)
if err != nil {
return err
} }
return string(jsonBytes) fmt.Println(out)
return nil
} }
// SuccessOutput prints the result to stdout and exits with status code 0. // expirationFromFlag parses the --expiration flag as a Prometheus-style
func SuccessOutput(result any, override string, outputFormat string) { // duration (e.g. "90d", "1h") and returns an absolute timestamp.
fmt.Println(output(result, override, outputFormat)) func expirationFromFlag(cmd *cobra.Command) (*timestamppb.Timestamp, error) {
os.Exit(0) durationStr, _ := cmd.Flags().GetString("expiration")
duration, err := model.ParseDuration(durationStr)
if err != nil {
return nil, fmt.Errorf("parsing duration: %w", err)
}
return timestamppb.New(time.Now().UTC().Add(time.Duration(duration))), nil
} }
// ErrorOutput prints an error message to stderr and exits with status code 1. // confirmAction returns true when the user confirms a prompt, or when
func ErrorOutput(errResult error, override string, outputFormat string) { // --force is set. Callers decide what to do when it returns false.
func confirmAction(cmd *cobra.Command, prompt string) bool {
force, _ := cmd.Flags().GetBool("force")
if force {
return true
}
return util.YesNo(prompt)
}
// printListOutput checks the --output flag: when a machine-readable format is
// requested it serialises data as JSON/YAML; otherwise it calls renderTable
// to produce the human-readable pterm table.
func printListOutput(
cmd *cobra.Command,
data any,
renderTable func() error,
) error {
format, _ := cmd.Flags().GetString("output")
if format != "" {
return printOutput(cmd, data, "")
}
return renderTable()
}
// printError writes err to stderr, formatting it as JSON/YAML when the
// --output flag requests machine-readable output. Used exclusively by
// Execute() so that every error surfaces in the format the caller asked for.
func printError(err error, outputFormat string) {
type errOutput struct { type errOutput struct {
Error string `json:"error"` Error string `json:"error"`
} }
var errorMessage string e := errOutput{Error: err.Error()}
if errResult != nil {
errorMessage = errResult.Error() var formatted []byte
} else {
errorMessage = override switch outputFormat {
case outputFormatJSON:
formatted, _ = json.MarshalIndent(e, "", "\t") //nolint:errchkjson // errOutput contains only a string field
case outputFormatJSONLine:
formatted, _ = json.Marshal(e) //nolint:errchkjson // errOutput contains only a string field
case outputFormatYAML:
formatted, _ = yaml.Marshal(e)
default:
fmt.Fprintf(os.Stderr, "Error: %s\n", err)
return
} }
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errorMessage}, override, outputFormat)) fmt.Fprintf(os.Stderr, "%s\n", formatted)
os.Exit(1)
} }
func HasMachineOutputFlag() bool { func hasMachineOutputFlag() bool {
for _, arg := range os.Args { for _, arg := range os.Args {
if arg == "json" || arg == "json-line" || arg == "yaml" { if arg == outputFormatJSON || arg == outputFormatJSONLine || arg == outputFormatYAML {
return true return true
} }
} }

View File

@@ -14,11 +14,9 @@ var versionCmd = &cobra.Command{
Use: "version", Use: "version",
Short: "Print the version.", Short: "Print the version.",
Long: "The version of headscale.", Long: "The version of headscale.",
Run: func(cmd *cobra.Command, args []string) { RunE: func(cmd *cobra.Command, args []string) error {
output, _ := cmd.Flags().GetString("output")
info := types.GetVersionInfo() info := types.GetVersionInfo()
SuccessOutput(info, info.String(), output) return printOutput(cmd, info, info.String())
}, },
} }

View File

@@ -12,6 +12,7 @@ import (
func main() { func main() {
var colors bool var colors bool
switch l := termcolor.SupportLevel(os.Stderr); l { switch l := termcolor.SupportLevel(os.Stderr); l {
case termcolor.Level16M: case termcolor.Level16M:
colors = true colors = true

View File

@@ -14,9 +14,7 @@ import (
) )
func TestConfigFileLoading(t *testing.T) { func TestConfigFileLoading(t *testing.T) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir := t.TempDir()
require.NoError(t, err)
defer os.RemoveAll(tmpDir)
path, err := os.Getwd() path, err := os.Getwd()
require.NoError(t, err) require.NoError(t, err)
@@ -48,9 +46,7 @@ func TestConfigFileLoading(t *testing.T) {
} }
func TestConfigLoading(t *testing.T) { func TestConfigLoading(t *testing.T) {
tmpDir, err := os.MkdirTemp("", "headscale") tmpDir := t.TempDir()
require.NoError(t, err)
defer os.RemoveAll(tmpDir)
path, err := os.Getwd() path, err := os.Getwd()
require.NoError(t, err) require.NoError(t, err)

View File

@@ -1,6 +1,262 @@
# hi # hi — Headscale Integration test runner

hi (headscale integration runner) is an entirely "vibe coded" wrapper around our
[integration test suite](../integration). It essentially runs the docker
commands for you with some added benefits of extracting resources like logs and
databases.

`hi` wraps Docker container orchestration around the tests in
[`../../integration`](../../integration) and extracts debugging artefacts
(logs, database snapshots, MapResponse protocol captures) for post-mortem
analysis.
**Read this file in full before running any `hi` command.** The test
runner has sharp edges — wrong flags produce stale containers, lost
artefacts, or hung CI.
For test-authoring patterns (scenario setup, `EventuallyWithT`,
`IntegrationSkip`, helper variants), read
[`../../integration/README.md`](../../integration/README.md).
## Quick Start
```bash
# Verify system requirements (Docker, Go, disk space, images)
go run ./cmd/hi doctor
# Run a single test (the default flags are tuned for development)
go run ./cmd/hi run "TestPingAllByIP"
# Run a database-heavy test against PostgreSQL
go run ./cmd/hi run "TestExpireNode" --postgres
# Pattern matching
go run ./cmd/hi run "TestSubnet*"
```
Run `doctor` before the first `run` in any new environment. Tests
generate ~100 MB of logs per run in `control_logs/`; `doctor` verifies
there is enough space and that the required Docker images are available.
## Commands
| Command | Purpose |
| ------------------ | ---------------------------------------------------- |
| `run [pattern]` | Execute the test(s) matching `pattern` |
| `doctor` | Verify system requirements |
| `clean networks` | Prune unused Docker networks |
| `clean images` | Clean old test images |
| `clean containers` | Kill **all** test containers (dangerous — see below) |
| `clean cache` | Clean Go module cache volume |
| `clean all` | Run all cleanup operations |
## Flags
Defaults are tuned for single-test development runs. Review before
changing.

| Flag | Default | Purpose |
| ------------------- | -------------- | --------------------------------------------------------------------------- |
| `--timeout` | `120m` | Total test timeout. Use the built-in flag — never wrap with bash `timeout`. |
| `--postgres` | `false` | Use PostgreSQL instead of SQLite |
| `--failfast` | `true` | Stop on first test failure |
| `--go-version` | auto | Detected from `go.mod` (currently 1.26.1) |
| `--clean-before` | `true` | Clean stale (stopped/exited) containers before starting |
| `--clean-after` | `true` | Clean this run's containers after completion |
| `--keep-on-failure` | `false` | Preserve containers for manual inspection on failure |
| `--logs-dir` | `control_logs` | Where to save run artefacts |
| `--verbose` | `false` | Verbose output |
| `--stats` | `false` | Collect container resource-usage stats |
| `--hs-memory-limit` | `0` | Fail if any headscale container exceeds N MB (0 = disabled) |
| `--ts-memory-limit` | `0` | Fail if any tailscale container exceeds N MB |
### Timeout guidance
The default `120m` is generous for a single test. If you must tune it,
these are realistic floors by category:

| Test type | Minimum | Examples |
| ------------------------- | ----------- | ------------------------------------- |
| Basic functionality / CLI | 900s (15m) | `TestPingAllByIP`, `TestCLI*` |
| Route / ACL | 1200s (20m) | `TestSubnet*`, `TestACL*` |
| HA / failover | 1800s (30m) | `TestHASubnetRouter*` |
| Long-running | 2100s (35m) | `TestNodeOnlineStatus` (~12 min body) |
| Full suite | 45m | `go test ./integration -timeout 45m` |
**Never** use the shell `timeout` command around `hi`. It kills the
process mid-cleanup and leaves stale containers:
```bash
timeout 300 go run ./cmd/hi run "TestName" # WRONG — orphaned containers
go run ./cmd/hi run "TestName" --timeout=900s # correct
```
## Concurrent Execution
Multiple `hi run` invocations can run simultaneously on the same Docker
daemon. Each invocation gets a unique **Run ID** (format
`YYYYMMDD-HHMMSS-6charhash`, e.g. `20260409-104215-mdjtzx`).
- **Container names** include the short run ID: `ts-mdjtzx-1-74-fgdyls`
- **Docker labels**: `hi.run-id={runID}` on every container
- **Port allocation**: dynamic — kernel assigns free ports, no conflicts
- **Cleanup isolation**: each run cleans only its own containers
- **Log directories**: `control_logs/{runID}/`
```bash
# Start three tests in parallel — each gets its own run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
```
### Safety rules for concurrent runs
- ✅ Your run cleans only containers labelled with its own `hi.run-id`
- ✅ `--clean-before` removes only stopped/exited containers
- ❌ **Never** run `docker rm -f $(docker ps -q --filter name=hs-)` —
  this destroys other agents' live test sessions
- ❌ **Never** run `docker system prune -f` while any tests are running
- ❌ **Never** run `hi clean containers` / `hi clean all` while other
  tests are running — both kill all test containers on the daemon
To identify your own containers:
```bash
docker ps --filter "label=hi.run-id=20260409-104215-mdjtzx"
```
The run ID appears at the top of the `hi run` output — copy it from
there rather than trying to reconstruct it.
## Artefacts
Every run saves debugging artefacts under `control_logs/{runID}/`:
```
control_logs/20260409-104215-mdjtzx/
├── hs-<test>-<hash>.stderr.log # headscale server errors
├── hs-<test>-<hash>.stdout.log # headscale server output
├── hs-<test>-<hash>.db # database snapshot (SQLite)
├── hs-<test>-<hash>_metrics.txt # Prometheus metrics dump
├── hs-<test>-<hash>-mapresponses/ # MapResponse protocol captures
├── ts-<client>-<hash>.stderr.log # tailscale client errors
├── ts-<client>-<hash>.stdout.log # tailscale client output
└── ts-<client>-<hash>_status.json # client network-status dump
```
Artefacts persist after cleanup. Old runs accumulate fast — delete
unwanted directories to reclaim disk.
## Debugging workflow
When a test fails, read the artefacts **in this order**:
1. **`hs-*.stderr.log`** — headscale server errors, panics, policy
evaluation failures. Most issues originate server-side.
```bash
grep -E "ERROR|panic|FATAL" control_logs/*/hs-*.stderr.log
```
2. **`ts-*.stderr.log`** — authentication failures, connectivity issues,
DNS resolution problems on the client side.
3. **MapResponse JSON** in `hs-*-mapresponses/` — protocol-level
debugging for network map generation, peer visibility, route
distribution, policy evaluation results.
```bash
ls control_logs/*/hs-*-mapresponses/
jq '.Peers[] | {Name, Tags, PrimaryRoutes}' \
control_logs/*/hs-*-mapresponses/001.json
```
4. **`*_status.json`** — client peer-connectivity state.
5. **`hs-*.db`** — SQLite snapshot for post-mortem consistency checks.
```bash
sqlite3 control_logs/<runID>/hs-*.db
sqlite> .tables
sqlite> .schema nodes
sqlite> SELECT id, hostname, user_id, tags FROM nodes WHERE hostname LIKE '%problematic%';
```
6. **`*_metrics.txt`** — Prometheus dumps for latency, NodeStore
operation timing, database query performance, memory usage.
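If jq is unavailable, the peer summary from step 3 can be reproduced in a few lines of Go. The struct below mirrors only the fields named in the jq query (`Peers`, `Name`, `Tags`, `PrimaryRoutes`) — treat it as a sketch, not the canonical MapResponse type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// peerSummary mirrors only the fields the jq query selects; the real
// MapResponse carries far more.
type peerSummary struct {
	Name          string
	Tags          []string
	PrimaryRoutes []string
}

type mapResponse struct {
	Peers []peerSummary
}

// summarize renders one line per peer from a captured MapResponse body.
func summarize(data []byte) ([]string, error) {
	var mr mapResponse
	if err := json.Unmarshal(data, &mr); err != nil {
		return nil, err
	}
	out := make([]string, 0, len(mr.Peers))
	for _, p := range mr.Peers {
		out = append(out, fmt.Sprintf("%s tags=%v routes=%v", p.Name, p.Tags, p.PrimaryRoutes))
	}

	return out, nil
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: summarize <mapresponse.json>")
		return
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	lines, err := summarize(data)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, l := range lines {
		fmt.Println(l)
	}
}
```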
## Heuristic: infrastructure vs code
**Before blaming Docker, disk, or network: read `hs-*.stderr.log` in
full.** In practice, well over 99% of failures are code bugs (policy
evaluation, NodeStore sync, route approval) rather than infrastructure.
Actual infrastructure failures have signature error messages:

| Signature | Cause | Fix |
| --------------------------------------------------------------- | ------------------------- | ------------------------------------------------------------- |
| `failed to resolve "hs-...": no DNS fallback candidates remain` | Docker DNS | Reset Docker networking |
| `container creation timeout`, no progress >2 min | Resource exhaustion | `docker system prune -f` (when no other tests running), retry |
| OOM kills, slow Docker daemon | Too many concurrent tests | Reduce concurrency, wait for completion |
| `no space left on device` | Disk full | Delete old `control_logs/` |
If you don't see a signature error, **assume it's a code regression** —
do not retry hoping the flake goes away.
## Common failure patterns (code bugs)
### Route advertisement timing
Test asserts route state before the client has finished propagating its
Hostinfo update. Symptom: `nodes[0].GetAvailableRoutes()` empty when
the test expects a route.
- **Wrong fix**: `time.Sleep(5 * time.Second)` — fragile and slow.
- **Right fix**: wrap the assertion in `EventuallyWithT`. See
[`../../integration/README.md`](../../integration/README.md).
### NodeStore sync issues
Route changes not reflected in the NodeStore snapshot. Symptom: route
advertisements in logs but no tracking updates in subsequent reads.
The sync point is `State.UpdateNodeFromMapRequest()` in
`hscontrol/state/state.go`. If you added a new kind of client state
update, make sure it lands here.
### HA failover: routes disappearing on disconnect
`TestHASubnetRouterFailover` fails because approved routes vanish when
a subnet router goes offline. **This is a bug, not expected behaviour.**
Route approval must not be coupled to client connectivity — routes
stay approved; only the primary-route selection is affected by
connectivity.
### Policy evaluation race
Symptom: tests that change policy and immediately assert peer visibility
fail intermittently. Policy changes trigger async recomputation.
- See recent fixes in `git log -- hscontrol/state/` for examples (e.g.
the `PolicyChange` trigger on every Connect/Disconnect).
### SQLite vs PostgreSQL timing differences
Some race conditions only surface on one backend. If a test is flaky,
try the other backend with `--postgres`:
```bash
go run ./cmd/hi run "TestName" --postgres --verbose
```
PostgreSQL generally has more consistent timing; SQLite can expose
races during rapid writes.
## Keeping containers for inspection
If you need to inspect a failed test's state manually:
```bash
go run ./cmd/hi run "TestName" --keep-on-failure
# containers survive — inspect them
docker exec -it ts-<runID>-<...> /bin/sh
docker logs hs-<runID>-<...>
# clean up manually when done
go run ./cmd/hi clean all # only when no other tests are running
```


@@ -22,11 +22,11 @@ import (
 func cleanupBeforeTest(ctx context.Context) error {
 	err := cleanupStaleTestContainers(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to clean stale test containers: %w", err)
+		return fmt.Errorf("cleaning stale test containers: %w", err)
 	}

-	if err := pruneDockerNetworks(ctx); err != nil {
-		return fmt.Errorf("failed to prune networks: %w", err)
+	if err := pruneDockerNetworks(ctx); err != nil { //nolint:noinlineerr
+		return fmt.Errorf("pruning networks: %w", err)
 	}

 	return nil
@@ -39,14 +39,14 @@ func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runI
 		Force: true,
 	})
 	if err != nil {
-		return fmt.Errorf("failed to remove test container: %w", err)
+		return fmt.Errorf("removing test container: %w", err)
 	}

 	// Clean up integration test containers for this run only
 	if runID != "" {
 		err := killTestContainersByRunID(ctx, runID)
 		if err != nil {
-			return fmt.Errorf("failed to clean up containers for run %s: %w", runID, err)
+			return fmt.Errorf("cleaning up containers for run %s: %w", runID, err)
 		}
 	}
@@ -55,9 +55,9 @@ func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runI
 // killTestContainers terminates and removes all test containers.
 func killTestContainers(ctx context.Context) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()
@@ -65,12 +65,14 @@ func killTestContainers(ctx context.Context) error {
 		All: true,
 	})
 	if err != nil {
-		return fmt.Errorf("failed to list containers: %w", err)
+		return fmt.Errorf("listing containers: %w", err)
 	}

 	removed := 0

 	for _, cont := range containers {
 		shouldRemove := false
 		for _, name := range cont.Names {
 			if strings.Contains(name, "headscale-test-suite") ||
 				strings.Contains(name, "hs-") ||
@@ -107,9 +109,9 @@ func killTestContainers(ctx context.Context) error {
 // This function filters containers by the hi.run-id label to only affect containers
 // belonging to the specified test run, leaving other concurrent test runs untouched.
 func killTestContainersByRunID(ctx context.Context, runID string) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()
@@ -121,7 +123,7 @@ func killTestContainersByRunID(ctx context.Context, runID string) error {
 		),
 	})
 	if err != nil {
-		return fmt.Errorf("failed to list containers for run %s: %w", runID, err)
+		return fmt.Errorf("listing containers for run %s: %w", runID, err)
 	}

 	removed := 0
@@ -149,9 +151,9 @@ func killTestContainersByRunID(ctx context.Context, runID string) error {
 // This is useful for cleaning up leftover containers from previous crashed or interrupted test runs
 // without interfering with currently running concurrent tests.
 func cleanupStaleTestContainers(ctx context.Context) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()
@@ -164,7 +166,7 @@ func cleanupStaleTestContainers(ctx context.Context) error {
 		),
 	})
 	if err != nil {
-		return fmt.Errorf("failed to list stopped containers: %w", err)
+		return fmt.Errorf("listing stopped containers: %w", err)
 	}

 	removed := 0
@@ -223,15 +225,15 @@ func removeContainerWithRetry(ctx context.Context, cli *client.Client, container
 // pruneDockerNetworks removes unused Docker networks.
 func pruneDockerNetworks(ctx context.Context) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()

 	report, err := cli.NetworksPrune(ctx, filters.Args{})
 	if err != nil {
-		return fmt.Errorf("failed to prune networks: %w", err)
+		return fmt.Errorf("pruning networks: %w", err)
 	}

 	if len(report.NetworksDeleted) > 0 {
@@ -245,9 +247,9 @@ func pruneDockerNetworks(ctx context.Context) error {
 // cleanOldImages removes test-related and old dangling Docker images.
 func cleanOldImages(ctx context.Context) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()
@@ -255,12 +257,14 @@ func cleanOldImages(ctx context.Context) error {
 		All: true,
 	})
 	if err != nil {
-		return fmt.Errorf("failed to list images: %w", err)
+		return fmt.Errorf("listing images: %w", err)
 	}

 	removed := 0

 	for _, img := range images {
 		shouldRemove := false
 		for _, tag := range img.RepoTags {
 			if strings.Contains(tag, "hs-") ||
 				strings.Contains(tag, "headscale-integration") ||
@@ -295,18 +299,19 @@ func cleanOldImages(ctx context.Context) error {
 // cleanCacheVolume removes the Docker volume used for Go module cache.
 func cleanCacheVolume(ctx context.Context) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()

 	volumeName := "hs-integration-go-cache"
 	err = cli.VolumeRemove(ctx, volumeName, true)
 	if err != nil {
-		if errdefs.IsNotFound(err) {
+		if errdefs.IsNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
 			fmt.Printf("Go module cache volume not found: %s\n", volumeName)
-		} else if errdefs.IsConflict(err) {
+		} else if errdefs.IsConflict(err) { //nolint:staticcheck // SA1019: deprecated but functional
 			fmt.Printf("Go module cache volume is in use and cannot be removed: %s\n", volumeName)
 		} else {
 			fmt.Printf("Failed to remove Go module cache volume %s: %v\n", volumeName, err)
@@ -330,7 +335,7 @@ func cleanCacheVolume(ctx context.Context) error {
 func cleanupSuccessfulTestArtifacts(logsDir string, verbose bool) error {
 	entries, err := os.ReadDir(logsDir)
 	if err != nil {
-		return fmt.Errorf("failed to read logs directory: %w", err)
+		return fmt.Errorf("reading logs directory: %w", err)
 	}

 	var (


@@ -22,17 +22,22 @@ import (
"github.com/juanfont/headscale/integration/dockertestutil" "github.com/juanfont/headscale/integration/dockertestutil"
) )
const defaultDirPerm = 0o755
var ( var (
ErrTestFailed = errors.New("test failed") ErrTestFailed = errors.New("test failed")
ErrUnexpectedContainerWait = errors.New("unexpected end of container wait") ErrUnexpectedContainerWait = errors.New("unexpected end of container wait")
ErrNoDockerContext = errors.New("no docker context found") ErrNoDockerContext = errors.New("no docker context found")
ErrMemoryLimitViolations = errors.New("container(s) exceeded memory limits")
) )
// runTestContainer executes integration tests in a Docker container. // runTestContainer executes integration tests in a Docker container.
//
//nolint:gocyclo // complex test orchestration function
func runTestContainer(ctx context.Context, config *RunConfig) error { func runTestContainer(ctx context.Context, config *RunConfig) error {
cli, err := createDockerClient() cli, err := createDockerClient(ctx)
if err != nil { if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err) return fmt.Errorf("creating Docker client: %w", err)
} }
defer cli.Close() defer cli.Close()
@@ -48,19 +53,21 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
absLogsDir, err := filepath.Abs(logsDir) absLogsDir, err := filepath.Abs(logsDir)
if err != nil { if err != nil {
return fmt.Errorf("failed to get absolute path for logs directory: %w", err) return fmt.Errorf("getting absolute path for logs directory: %w", err)
} }
const dirPerm = 0o755 const dirPerm = 0o755
if err := os.MkdirAll(absLogsDir, dirPerm); err != nil { if err := os.MkdirAll(absLogsDir, dirPerm); err != nil { //nolint:noinlineerr
return fmt.Errorf("failed to create logs directory: %w", err) return fmt.Errorf("creating logs directory: %w", err)
} }
if config.CleanBefore { if config.CleanBefore {
if config.Verbose { if config.Verbose {
log.Printf("Running pre-test cleanup...") log.Printf("Running pre-test cleanup...")
} }
if err := cleanupBeforeTest(ctx); err != nil && config.Verbose {
err := cleanupBeforeTest(ctx)
if err != nil && config.Verbose {
log.Printf("Warning: pre-test cleanup failed: %v", err) log.Printf("Warning: pre-test cleanup failed: %v", err)
} }
} }
@@ -71,21 +78,21 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
} }
imageName := "golang:" + config.GoVersion imageName := "golang:" + config.GoVersion
if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil { if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil { //nolint:noinlineerr
return fmt.Errorf("failed to ensure image availability: %w", err) return fmt.Errorf("ensuring image availability: %w", err)
} }
resp, err := createGoTestContainer(ctx, cli, config, containerName, absLogsDir, goTestCmd) resp, err := createGoTestContainer(ctx, cli, config, containerName, absLogsDir, goTestCmd)
if err != nil { if err != nil {
return fmt.Errorf("failed to create container: %w", err) return fmt.Errorf("creating container: %w", err)
} }
if config.Verbose { if config.Verbose {
log.Printf("Created container: %s", resp.ID) log.Printf("Created container: %s", resp.ID)
} }
if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil { if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil { //nolint:noinlineerr
return fmt.Errorf("failed to start container: %w", err) return fmt.Errorf("starting container: %w", err)
} }
log.Printf("Starting test: %s", config.TestPattern) log.Printf("Starting test: %s", config.TestPattern)
@@ -95,13 +102,16 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
// Start stats collection for container resource monitoring (if enabled) // Start stats collection for container resource monitoring (if enabled)
var statsCollector *StatsCollector var statsCollector *StatsCollector
if config.Stats { if config.Stats {
var err error var err error
statsCollector, err = NewStatsCollector()
statsCollector, err = NewStatsCollector(ctx)
if err != nil { if err != nil {
if config.Verbose { if config.Verbose {
log.Printf("Warning: failed to create stats collector: %v", err) log.Printf("Warning: failed to create stats collector: %v", err)
} }
statsCollector = nil statsCollector = nil
} }
@@ -110,7 +120,8 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
// Start stats collection immediately - no need for complex retry logic // Start stats collection immediately - no need for complex retry logic
// The new implementation monitors Docker events and will catch containers as they start // The new implementation monitors Docker events and will catch containers as they start
if err := statsCollector.StartCollection(ctx, runID, config.Verbose); err != nil { err := statsCollector.StartCollection(ctx, runID, config.Verbose)
if err != nil {
if config.Verbose { if config.Verbose {
log.Printf("Warning: failed to start stats collection: %v", err) log.Printf("Warning: failed to start stats collection: %v", err)
} }
@@ -122,12 +133,13 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
exitCode, err := streamAndWait(ctx, cli, resp.ID) exitCode, err := streamAndWait(ctx, cli, resp.ID)
// Ensure all containers have finished and logs are flushed before extracting artifacts // Ensure all containers have finished and logs are flushed before extracting artifacts
if waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose); waitErr != nil && config.Verbose { waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose)
if waitErr != nil && config.Verbose {
log.Printf("Warning: failed to wait for container finalization: %v", waitErr) log.Printf("Warning: failed to wait for container finalization: %v", waitErr)
} }
// Extract artifacts from test containers before cleanup // Extract artifacts from test containers before cleanup
if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose { if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose { //nolint:noinlineerr
log.Printf("Warning: failed to extract artifacts from containers: %v", err) log.Printf("Warning: failed to extract artifacts from containers: %v", err)
} }
@@ -140,12 +152,13 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
if len(violations) > 0 { if len(violations) > 0 {
log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:") log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
log.Printf("=================================") log.Printf("=================================")
for _, violation := range violations { for _, violation := range violations {
log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB", log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB) violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
} }
return fmt.Errorf("test failed: %d container(s) exceeded memory limits", len(violations)) return fmt.Errorf("test failed: %d %w", len(violations), ErrMemoryLimitViolations)
} }
} }
@@ -176,7 +189,7 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
} }
if err != nil { if err != nil {
return fmt.Errorf("test execution failed: %w", err) return fmt.Errorf("executing test: %w", err)
} }
if exitCode != 0 { if exitCode != 0 {
@@ -210,7 +223,7 @@ func buildGoTestCommand(config *RunConfig) []string {
func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunConfig, containerName, logsDir string, goTestCmd []string) (container.CreateResponse, error) { func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunConfig, containerName, logsDir string, goTestCmd []string) (container.CreateResponse, error) {
pwd, err := os.Getwd() pwd, err := os.Getwd()
if err != nil { if err != nil {
return container.CreateResponse{}, fmt.Errorf("failed to get working directory: %w", err) return container.CreateResponse{}, fmt.Errorf("getting working directory: %w", err)
} }
projectRoot := findProjectRoot(pwd) projectRoot := findProjectRoot(pwd)
@@ -312,7 +325,7 @@ func streamAndWait(ctx context.Context, cli *client.Client, containerID string)
Follow: true, Follow: true,
}) })
if err != nil { if err != nil {
return -1, fmt.Errorf("failed to get container logs: %w", err) return -1, fmt.Errorf("getting container logs: %w", err)
} }
defer out.Close() defer out.Close()
@@ -324,7 +337,7 @@ func streamAndWait(ctx context.Context, cli *client.Client, containerID string)
select { select {
case err := <-errCh: case err := <-errCh:
if err != nil { if err != nil {
return -1, fmt.Errorf("error waiting for container: %w", err) return -1, fmt.Errorf("waiting for container: %w", err)
} }
case status := <-statusCh: case status := <-statusCh:
return int(status.StatusCode), nil return int(status.StatusCode), nil
@@ -338,7 +351,7 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
// First, get all related test containers // First, get all related test containers
containers, err := cli.ContainerList(ctx, container.ListOptions{All: true}) containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
if err != nil { if err != nil {
return fmt.Errorf("failed to list containers: %w", err) return fmt.Errorf("listing containers: %w", err)
} }
testContainers := getCurrentTestContainers(containers, testContainerID, verbose) testContainers := getCurrentTestContainers(containers, testContainerID, verbose)
@@ -347,6 +360,7 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
maxWaitTime := 10 * time.Second maxWaitTime := 10 * time.Second
checkInterval := 500 * time.Millisecond checkInterval := 500 * time.Millisecond
timeout := time.After(maxWaitTime) timeout := time.After(maxWaitTime)
ticker := time.NewTicker(checkInterval) ticker := time.NewTicker(checkInterval)
defer ticker.Stop() defer ticker.Stop()
@@ -356,6 +370,7 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
if verbose { if verbose {
log.Printf("Timeout waiting for container finalization, proceeding with artifact extraction") log.Printf("Timeout waiting for container finalization, proceeding with artifact extraction")
} }
return nil return nil
case <-ticker.C: case <-ticker.C:
allFinalized := true allFinalized := true
@@ -366,12 +381,14 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
if verbose { if verbose {
log.Printf("Warning: failed to inspect container %s: %v", testCont.name, err) log.Printf("Warning: failed to inspect container %s: %v", testCont.name, err)
} }
continue continue
} }
// Check if container is in a final state // Check if container is in a final state
if !isContainerFinalized(inspect.State) { if !isContainerFinalized(inspect.State) {
allFinalized = false allFinalized = false
if verbose { if verbose {
log.Printf("Container %s still finalizing (state: %s)", testCont.name, inspect.State.Status) log.Printf("Container %s still finalizing (state: %s)", testCont.name, inspect.State.Status)
} }
@@ -384,6 +401,7 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
if verbose { if verbose {
log.Printf("All test containers finalized, ready for artifact extraction") log.Printf("All test containers finalized, ready for artifact extraction")
} }
return nil return nil
} }
} }
@@ -400,13 +418,15 @@ func isContainerFinalized(state *container.State) bool {
func findProjectRoot(startPath string) string { func findProjectRoot(startPath string) string {
current := startPath current := startPath
for { for {
if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil { if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil { //nolint:noinlineerr
return current return current
} }
parent := filepath.Dir(current) parent := filepath.Dir(current)
if parent == current { if parent == current {
return startPath return startPath
} }
current = parent current = parent
} }
} }
@@ -416,6 +436,7 @@ func boolToInt(b bool) int {
if b { if b {
return 1 return 1
} }
return 0 return 0
} }
@@ -428,13 +449,14 @@ type DockerContext struct {
} }
// createDockerClient creates a Docker client with context detection. // createDockerClient creates a Docker client with context detection.
func createDockerClient() (*client.Client, error) { func createDockerClient(ctx context.Context) (*client.Client, error) {
contextInfo, err := getCurrentDockerContext() contextInfo, err := getCurrentDockerContext(ctx)
if err != nil { if err != nil {
return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation()) return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
} }
var clientOpts []client.Opt var clientOpts []client.Opt
clientOpts = append(clientOpts, client.WithAPIVersionNegotiation()) clientOpts = append(clientOpts, client.WithAPIVersionNegotiation())
if contextInfo != nil { if contextInfo != nil {
@@ -444,6 +466,7 @@ func createDockerClient() (*client.Client, error) {
if runConfig.Verbose { if runConfig.Verbose {
log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host) log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host)
} }
clientOpts = append(clientOpts, client.WithHost(host)) clientOpts = append(clientOpts, client.WithHost(host))
} }
} }
@@ -458,16 +481,17 @@ func createDockerClient() (*client.Client, error) {
 }

 // getCurrentDockerContext retrieves the current Docker context information.
-func getCurrentDockerContext() (*DockerContext, error) {
-	cmd := exec.Command("docker", "context", "inspect")
+func getCurrentDockerContext(ctx context.Context) (*DockerContext, error) {
+	cmd := exec.CommandContext(ctx, "docker", "context", "inspect")
 	output, err := cmd.Output()
 	if err != nil {
-		return nil, fmt.Errorf("failed to get docker context: %w", err)
+		return nil, fmt.Errorf("getting docker context: %w", err)
 	}

 	var contexts []DockerContext
-	if err := json.Unmarshal(output, &contexts); err != nil {
-		return nil, fmt.Errorf("failed to parse docker context: %w", err)
+	if err := json.Unmarshal(output, &contexts); err != nil { //nolint:noinlineerr
+		return nil, fmt.Errorf("parsing docker context: %w", err)
 	}

 	if len(contexts) > 0 {
@@ -486,12 +510,13 @@ func getDockerSocketPath() string {
 // checkImageAvailableLocally checks if the specified Docker image is available locally.
 func checkImageAvailableLocally(ctx context.Context, cli *client.Client, imageName string) (bool, error) {
-	_, _, err := cli.ImageInspectWithRaw(ctx, imageName)
+	_, _, err := cli.ImageInspectWithRaw(ctx, imageName) //nolint:staticcheck // SA1019: deprecated but functional
 	if err != nil {
-		if client.IsErrNotFound(err) {
+		if client.IsErrNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
 			return false, nil
 		}

-		return false, fmt.Errorf("failed to inspect image %s: %w", imageName, err)
+		return false, fmt.Errorf("inspecting image %s: %w", imageName, err)
 	}

 	return true, nil
@@ -502,13 +527,14 @@ func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName str
 	// First check if image is available locally
 	available, err := checkImageAvailableLocally(ctx, cli, imageName)
 	if err != nil {
-		return fmt.Errorf("failed to check local image availability: %w", err)
+		return fmt.Errorf("checking local image availability: %w", err)
 	}

 	if available {
 		if verbose {
 			log.Printf("Image %s is available locally", imageName)
 		}

 		return nil
 	}
@@ -519,20 +545,21 @@ func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName str
 	reader, err := cli.ImagePull(ctx, imageName, image.PullOptions{})
 	if err != nil {
-		return fmt.Errorf("failed to pull image %s: %w", imageName, err)
+		return fmt.Errorf("pulling image %s: %w", imageName, err)
 	}
 	defer reader.Close()

 	if verbose {
 		_, err = io.Copy(os.Stdout, reader)
 		if err != nil {
-			return fmt.Errorf("failed to read pull output: %w", err)
+			return fmt.Errorf("reading pull output: %w", err)
 		}
 	} else {
 		_, err = io.Copy(io.Discard, reader)
 		if err != nil {
-			return fmt.Errorf("failed to read pull output: %w", err)
+			return fmt.Errorf("reading pull output: %w", err)
 		}

 		log.Printf("Image %s pulled successfully", imageName)
 	}
@@ -547,9 +574,11 @@ func listControlFiles(logsDir string) {
 		return
 	}

-	var logFiles []string
-	var dataFiles []string
-	var dataDirs []string
+	var (
+		logFiles  []string
+		dataFiles []string
+		dataDirs  []string
+	)

 	for _, entry := range entries {
 		name := entry.Name()
@@ -578,6 +607,7 @@ func listControlFiles(logsDir string) {
 	if len(logFiles) > 0 {
 		log.Printf("Headscale logs:")
 		for _, file := range logFiles {
 			log.Printf(" %s", file)
 		}
@@ -585,9 +615,11 @@ func listControlFiles(logsDir string) {
 	if len(dataFiles) > 0 || len(dataDirs) > 0 {
 		log.Printf("Headscale data:")
 		for _, file := range dataFiles {
 			log.Printf(" %s", file)
 		}
 		for _, dir := range dataDirs {
 			log.Printf(" %s/", dir)
 		}
@@ -596,25 +628,27 @@ func listControlFiles(logsDir string) {
 // extractArtifactsFromContainers collects container logs and files from the specific test run.
 func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDir string, verbose bool) error {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return fmt.Errorf("failed to create Docker client: %w", err)
+		return fmt.Errorf("creating Docker client: %w", err)
 	}
 	defer cli.Close()

 	// List all containers
 	containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
 	if err != nil {
-		return fmt.Errorf("failed to list containers: %w", err)
+		return fmt.Errorf("listing containers: %w", err)
 	}

 	// Get containers from the specific test run
 	currentTestContainers := getCurrentTestContainers(containers, testContainerID, verbose)

 	extractedCount := 0

 	for _, cont := range currentTestContainers {
 		// Extract container logs and tar files
-		if err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose); err != nil {
+		err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose)
+		if err != nil {
 			if verbose {
 				log.Printf("Warning: failed to extract artifacts from container %s (%s): %v", cont.name, cont.ID[:12], err)
 			}
@@ -622,6 +656,7 @@ func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDi
 			if verbose {
 				log.Printf("Extracted artifacts from container %s (%s)", cont.name, cont.ID[:12])
 			}

 			extractedCount++
 		}
@@ -645,11 +680,13 @@ func getCurrentTestContainers(containers []container.Summary, testContainerID st
 	// Find the test container to get its run ID label
 	var runID string

 	for _, cont := range containers {
 		if cont.ID == testContainerID {
 			if cont.Labels != nil {
 				runID = cont.Labels["hi.run-id"]
 			}

 			break
 		}
 	}
@@ -690,18 +727,21 @@ func getCurrentTestContainers(containers []container.Summary, testContainerID st
 // extractContainerArtifacts saves logs and tar files from a container.
 func extractContainerArtifacts(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
 	// Ensure the logs directory exists
-	if err := os.MkdirAll(logsDir, 0o755); err != nil {
-		return fmt.Errorf("failed to create logs directory: %w", err)
+	err := os.MkdirAll(logsDir, defaultDirPerm)
+	if err != nil {
+		return fmt.Errorf("creating logs directory: %w", err)
 	}

 	// Extract container logs
-	if err := extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose); err != nil {
-		return fmt.Errorf("failed to extract logs: %w", err)
+	err = extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose)
+	if err != nil {
+		return fmt.Errorf("extracting logs: %w", err)
 	}

 	// Extract tar files for headscale containers only
 	if strings.HasPrefix(containerName, "hs-") {
-		if err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose); err != nil {
+		err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose)
+		if err != nil {
 			if verbose {
 				log.Printf("Warning: failed to extract files from %s: %v", containerName, err)
 			}
@@ -723,7 +763,7 @@ func extractContainerLogs(ctx context.Context, cli *client.Client, containerID,
 		Tail: "all",
 	})
 	if err != nil {
-		return fmt.Errorf("failed to get container logs: %w", err)
+		return fmt.Errorf("getting container logs: %w", err)
 	}
 	defer logReader.Close()
@@ -737,17 +777,17 @@ func extractContainerLogs(ctx context.Context, cli *client.Client, containerID,
 	// Demultiplex the Docker logs stream to separate stdout and stderr
 	_, err = stdcopy.StdCopy(&stdoutBuf, &stderrBuf, logReader)
 	if err != nil {
-		return fmt.Errorf("failed to demultiplex container logs: %w", err)
+		return fmt.Errorf("demultiplexing container logs: %w", err)
 	}

 	// Write stdout logs
-	if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil {
-		return fmt.Errorf("failed to write stdout log: %w", err)
+	if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
+		return fmt.Errorf("writing stdout log: %w", err)
 	}

 	// Write stderr logs
-	if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil {
-		return fmt.Errorf("failed to write stderr log: %w", err)
+	if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
+		return fmt.Errorf("writing stderr log: %w", err)
 	}

 	if verbose {

View File

@@ -38,13 +38,13 @@ func runDoctorCheck(ctx context.Context) error {
 	}

 	// Check 3: Go installation
-	results = append(results, checkGoInstallation())
+	results = append(results, checkGoInstallation(ctx))

 	// Check 4: Git repository
-	results = append(results, checkGitRepository())
+	results = append(results, checkGitRepository(ctx))

 	// Check 5: Required files
-	results = append(results, checkRequiredFiles())
+	results = append(results, checkRequiredFiles(ctx))

 	// Display results
 	displayDoctorResults(results)
@@ -86,7 +86,7 @@ func checkDockerBinary() DoctorResult {
 // checkDockerDaemon verifies Docker daemon is running and accessible.
 func checkDockerDaemon(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Daemon",
@@ -124,8 +124,8 @@ func checkDockerDaemon(ctx context.Context) DoctorResult {
 }

 // checkDockerContext verifies Docker context configuration.
-func checkDockerContext(_ context.Context) DoctorResult {
-	contextInfo, err := getCurrentDockerContext()
+func checkDockerContext(ctx context.Context) DoctorResult {
+	contextInfo, err := getCurrentDockerContext(ctx)
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Context",
@@ -155,7 +155,7 @@ func checkDockerContext(_ context.Context) DoctorResult {
 // checkDockerSocket verifies Docker socket accessibility.
 func checkDockerSocket(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Socket",
@@ -192,7 +192,7 @@ func checkDockerSocket(ctx context.Context) DoctorResult {
 // checkGolangImage verifies the golang Docker image is available locally or can be pulled.
 func checkGolangImage(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient()
+	cli, err := createDockerClient(ctx)
 	if err != nil {
 		return DoctorResult{
 			Name: "Golang Image",
@@ -251,7 +251,7 @@ func checkGolangImage(ctx context.Context) DoctorResult {
 }

 // checkGoInstallation verifies Go is installed and working.
-func checkGoInstallation() DoctorResult {
+func checkGoInstallation(ctx context.Context) DoctorResult {
 	_, err := exec.LookPath("go")
 	if err != nil {
 		return DoctorResult{
@@ -265,7 +265,8 @@ func checkGoInstallation() DoctorResult {
 		}
 	}

-	cmd := exec.Command("go", "version")
+	cmd := exec.CommandContext(ctx, "go", "version")
 	output, err := cmd.Output()
 	if err != nil {
 		return DoctorResult{
@@ -285,8 +286,9 @@ func checkGoInstallation() DoctorResult {
 }

 // checkGitRepository verifies we're in a git repository.
-func checkGitRepository() DoctorResult {
-	cmd := exec.Command("git", "rev-parse", "--git-dir")
+func checkGitRepository(ctx context.Context) DoctorResult {
+	cmd := exec.CommandContext(ctx, "git", "rev-parse", "--git-dir")

 	err := cmd.Run()
 	if err != nil {
 		return DoctorResult{
@@ -308,7 +310,7 @@ func checkGitRepository() DoctorResult {
 }

 // checkRequiredFiles verifies required files exist.
-func checkRequiredFiles() DoctorResult {
+func checkRequiredFiles(ctx context.Context) DoctorResult {
 	requiredFiles := []string{
 		"go.mod",
 		"integration/",
@@ -316,9 +318,12 @@ func checkRequiredFiles() DoctorResult {
 	}

 	var missingFiles []string
 	for _, file := range requiredFiles {
-		cmd := exec.Command("test", "-e", file)
-		if err := cmd.Run(); err != nil {
+		cmd := exec.CommandContext(ctx, "test", "-e", file)
+
+		err := cmd.Run()
+		if err != nil {
 			missingFiles = append(missingFiles, file)
 		}
 	}
@@ -350,6 +355,7 @@ func displayDoctorResults(results []DoctorResult) {
 	for _, result := range results {
 		var icon string

 		switch result.Status {
 		case "PASS":
 			icon = "✅"

View File

@@ -79,13 +79,18 @@ func main() {
 }

 func cleanAll(ctx context.Context) error {
-	if err := killTestContainers(ctx); err != nil {
+	err := killTestContainers(ctx)
+	if err != nil {
 		return err
 	}

-	if err := pruneDockerNetworks(ctx); err != nil {
+	err = pruneDockerNetworks(ctx)
+	if err != nil {
 		return err
 	}

-	if err := cleanOldImages(ctx); err != nil {
+	err = cleanOldImages(ctx)
+	if err != nil {
 		return err
 	}

View File

@@ -48,7 +48,9 @@ func runIntegrationTest(env *command.Env) error {
 	if runConfig.Verbose {
 		log.Printf("Running pre-flight system checks...")
 	}

-	if err := runDoctorCheck(env.Context()); err != nil {
+	err := runDoctorCheck(env.Context())
+	if err != nil {
 		return fmt.Errorf("pre-flight checks failed: %w", err)
 	}
@@ -66,15 +68,15 @@ func runIntegrationTest(env *command.Env) error {
 func detectGoVersion() string {
 	goModPath := filepath.Join("..", "..", "go.mod")

-	if _, err := os.Stat("go.mod"); err == nil {
+	if _, err := os.Stat("go.mod"); err == nil { //nolint:noinlineerr
 		goModPath = "go.mod"
-	} else if _, err := os.Stat("../../go.mod"); err == nil {
+	} else if _, err := os.Stat("../../go.mod"); err == nil { //nolint:noinlineerr
 		goModPath = "../../go.mod"
 	}

 	content, err := os.ReadFile(goModPath)
 	if err != nil {
-		return "1.25"
+		return "1.26.1"
 	}

 	lines := splitLines(string(content))
@@ -89,13 +91,15 @@ func detectGoVersion() string {
 		}
 	}

-	return "1.25"
+	return "1.26.1"
 }

 // splitLines splits a string into lines without using strings.Split.
 func splitLines(s string) []string {
-	var lines []string
-	var current string
+	var (
+		lines   []string
+		current string
+	)

 	for _, char := range s {
 		if char == '\n' {
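The hunk above cuts off inside the loop. For reference, a complete helper along these lines would iterate runes, flushing the accumulated segment at each newline; this is a standalone sketch under that assumption, not necessarily the repository's exact body:

```go
package main

import "fmt"

// splitLines splits s on '\n' without using strings.Split.
// Sketch of the helper truncated in the diff above.
func splitLines(s string) []string {
	var (
		lines   []string
		current string
	)

	for _, char := range s {
		if char == '\n' {
			lines = append(lines, current)
			current = ""
		} else {
			current += string(char)
		}
	}

	// Keep a trailing segment that is not newline-terminated.
	if current != "" {
		lines = append(lines, current)
	}

	return lines
}

func main() {
	fmt.Printf("%q\n", splitLines("go 1.26.1\nrequire x\n"))
}
```

Used by detectGoVersion to scan go.mod for a `go 1.x` directive line.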

View File

@@ -18,6 +18,9 @@ import (
 	"github.com/docker/docker/client"
 )

+// ErrStatsCollectionAlreadyStarted is returned when trying to start stats collection that is already running.
+var ErrStatsCollectionAlreadyStarted = errors.New("stats collection already started")
+
 // ContainerStats represents statistics for a single container.
 type ContainerStats struct {
 	ContainerID string
@@ -44,10 +47,10 @@ type StatsCollector struct {
 }

 // NewStatsCollector creates a new stats collector instance.
-func NewStatsCollector() (*StatsCollector, error) {
-	cli, err := createDockerClient()
+func NewStatsCollector(ctx context.Context) (*StatsCollector, error) {
+	cli, err := createDockerClient(ctx)
 	if err != nil {
-		return nil, fmt.Errorf("failed to create Docker client: %w", err)
+		return nil, fmt.Errorf("creating Docker client: %w", err)
 	}

 	return &StatsCollector{
@@ -63,17 +66,19 @@ func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, ver
 	defer sc.mutex.Unlock()

 	if sc.collectionStarted {
-		return errors.New("stats collection already started")
+		return ErrStatsCollectionAlreadyStarted
 	}

 	sc.collectionStarted = true

 	// Start monitoring existing containers
 	sc.wg.Add(1)

 	go sc.monitorExistingContainers(ctx, runID, verbose)

 	// Start Docker events monitoring for new containers
 	sc.wg.Add(1)

 	go sc.monitorDockerEvents(ctx, runID, verbose)

 	if verbose {
@@ -87,10 +92,12 @@ func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, ver
 func (sc *StatsCollector) StopCollection() {
 	// Check if already stopped without holding lock
 	sc.mutex.RLock()

 	if !sc.collectionStarted {
 		sc.mutex.RUnlock()
 		return
 	}
 	sc.mutex.RUnlock()

 	// Signal stop to all goroutines
@@ -114,6 +121,7 @@ func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID s
 		if verbose {
 			log.Printf("Failed to list existing containers: %v", err)
 		}

 		return
 	}
@@ -147,13 +155,13 @@ func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string,
 		case event := <-events:
 			if event.Type == "container" && event.Action == "start" {
 				// Get container details
-				containerInfo, err := sc.client.ContainerInspect(ctx, event.ID)
+				containerInfo, err := sc.client.ContainerInspect(ctx, event.ID) //nolint:staticcheck // SA1019: use Actor.ID
 				if err != nil {
 					continue
 				}

 				// Convert to types.Container format for consistency
-				cont := types.Container{
+				cont := types.Container{ //nolint:staticcheck // SA1019: use container.Summary
 					ID:     containerInfo.ID,
 					Names:  []string{containerInfo.Name},
 					Labels: containerInfo.Config.Labels,
@@ -167,13 +175,14 @@ func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string,
 			if verbose {
 				log.Printf("Error in Docker events stream: %v", err)
 			}

 			return
 		}
 	}
 }

 // shouldMonitorContainer determines if a container should be monitored.
-func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool {
+func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool { //nolint:staticcheck // SA1019: use container.Summary
 	// Check if it has the correct run ID label
 	if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
 		return false
@@ -213,6 +222,7 @@ func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerI
 	}

 	sc.wg.Add(1)

 	go sc.collectStatsForContainer(ctx, containerID, verbose)
 }
@@ -226,12 +236,14 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
 		if verbose {
 			log.Printf("Failed to get stats stream for container %s: %v", containerID[:12], err)
 		}

 		return
 	}
 	defer statsResponse.Body.Close()

 	decoder := json.NewDecoder(statsResponse.Body)

-	var prevStats *container.Stats
+	var prevStats *container.Stats //nolint:staticcheck // SA1019: use StatsResponse

 	for {
 		select {
@@ -240,12 +252,15 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
 		case <-ctx.Done():
 			return
 		default:
-			var stats container.Stats
-			if err := decoder.Decode(&stats); err != nil {
+			var stats container.Stats //nolint:staticcheck // SA1019: use StatsResponse
+
+			err := decoder.Decode(&stats)
+			if err != nil {
 				// EOF is expected when container stops or stream ends
 				if err.Error() != "EOF" && verbose {
 					log.Printf("Failed to decode stats for container %s: %v", containerID[:12], err)
 				}

 				return
 			}
@@ -261,8 +276,10 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
 			// Store the sample (skip first sample since CPU calculation needs previous stats)
 			if prevStats != nil {
 				// Get container stats reference without holding the main mutex
-				var containerStats *ContainerStats
-				var exists bool
+				var (
+					containerStats *ContainerStats
+					exists         bool
+				)

 				sc.mutex.RLock()
 				containerStats, exists = sc.containers[containerID]
@@ -286,7 +303,7 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
 }

 // calculateCPUPercent calculates CPU usage percentage from Docker stats.
-func calculateCPUPercent(prevStats, stats *container.Stats) float64 {
+func calculateCPUPercent(prevStats, stats *container.Stats) float64 { //nolint:staticcheck // SA1019: use StatsResponse
 	// CPU calculation based on Docker's implementation
 	cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
 	systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)
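The hunk shows only the two deltas; Docker's documented formula then divides the container delta by the system delta and scales by the number of online CPUs times 100. A standalone sketch of that arithmetic with plain float inputs (the real function reads these fields from `container.Stats`; the numbers below are hypothetical):

```go
package main

import "fmt"

// cpuPercent applies Docker's documented CPU-usage formula:
// (cpuDelta / systemDelta) * onlineCPUs * 100.
func cpuPercent(cpuDelta, systemDelta float64, onlineCPUs int) float64 {
	// Guard against a zero or negative window, which would otherwise
	// divide by zero or produce a nonsense negative percentage.
	if systemDelta <= 0 || cpuDelta < 0 {
		return 0
	}

	return (cpuDelta / systemDelta) * float64(onlineCPUs) * 100.0
}

func main() {
	// Hypothetical sample: the container consumed 50ms of CPU time
	// while the host accumulated 1000ms across 4 online CPUs.
	fmt.Printf("%.1f%%\n", cpuPercent(50e6, 1000e6, 4))
}
```

This is why the collector skips the first sample: without prevStats there is no delta to divide.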
@@ -331,10 +348,12 @@ type StatsSummary struct {
 func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
 	// Take snapshot of container references without holding main lock long
 	sc.mutex.RLock()

 	containerRefs := make([]*ContainerStats, 0, len(sc.containers))
 	for _, containerStats := range sc.containers {
 		containerRefs = append(containerRefs, containerStats)
 	}
 	sc.mutex.RUnlock()

 	summaries := make([]ContainerStatsSummary, 0, len(containerRefs))
@@ -384,23 +403,25 @@ func calculateStatsSummary(values []float64) StatsSummary {
 		return StatsSummary{}
 	}

-	min := values[0]
-	max := values[0]
+	minVal := values[0]
+	maxVal := values[0]
 	sum := 0.0

 	for _, value := range values {
-		if value < min {
-			min = value
+		if value < minVal {
+			minVal = value
 		}
-		if value > max {
-			max = value
+
+		if value > maxVal {
+			maxVal = value
 		}

 		sum += value
 	}

 	return StatsSummary{
-		Min:     min,
-		Max:     max,
+		Min:     minVal,
+		Max:     maxVal,
 		Average: sum / float64(len(values)),
 	}
 }
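The `min`/`max` to `minVal`/`maxVal` rename avoids shadowing the `min` and `max` builtins that Go 1.21 predeclared. A self-contained version of the renamed function with a small usage check (the StatsSummary type here mirrors the shape in the diff, not the full upstream struct):

```go
package main

import "fmt"

// StatsSummary mirrors the fields exercised in the diff above.
type StatsSummary struct {
	Min, Max, Average float64
}

// calculateStatsSummary folds a slice into min/max/average, using
// minVal/maxVal so the Go 1.21 builtins min and max stay usable.
func calculateStatsSummary(values []float64) StatsSummary {
	if len(values) == 0 {
		return StatsSummary{}
	}

	minVal := values[0]
	maxVal := values[0]
	sum := 0.0

	for _, value := range values {
		if value < minVal {
			minVal = value
		}

		if value > maxVal {
			maxVal = value
		}

		sum += value
	}

	return StatsSummary{
		Min:     minVal,
		Max:     maxVal,
		Average: sum / float64(len(values)),
	}
}

func main() {
	s := calculateStatsSummary([]float64{3, 1, 2})
	fmt.Printf("min=%.0f max=%.0f avg=%.0f\n", s.Min, s.Max, s.Average)
}
```

An empty input returns the zero StatsSummary rather than panicking on `values[0]`.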
@@ -434,6 +455,7 @@ func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []Memo
 	}

 	summaries := sc.GetSummary()

 	var violations []MemoryViolation

 	for _, summary := range summaries {

View File

@@ -2,6 +2,7 @@ package main
 import (
 	"encoding/json"
+	"errors"
 	"fmt"
 	"os"
@@ -15,7 +16,10 @@ type MapConfig struct {
 	Directory string `flag:"directory,Directory to read map responses from"`
 }

-var mapConfig MapConfig
+var (
+	mapConfig MapConfig
+
+	errDirectoryRequired = errors.New("directory is required")
+)

 func main() {
 	root := command.C{
@@ -40,7 +44,7 @@ func main() {
 // runIntegrationTest executes the integration test workflow.
 func runOnline(env *command.Env) error {
 	if mapConfig.Directory == "" {
-		return fmt.Errorf("directory is required")
+		return errDirectoryRequired
 	}

 	resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
@@ -57,5 +61,6 @@ func runOnline(env *command.Env) error {
 	os.Stderr.Write(out)
 	os.Stderr.Write([]byte("\n"))

 	return nil
 }

View File

@@ -50,12 +50,21 @@ noise:
 # List of IP prefixes to allocate tailaddresses from.
 # Each prefix consists of either an IPv4 or IPv6 address,
 # and the associated prefix length, delimited by a slash.
-# It must be within IP ranges supported by the Tailscale
-# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
-# See below:
-# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
+#
+# WARNING: These prefixes MUST be subsets of the standard Tailscale ranges:
+# - IPv4: 100.64.0.0/10 (CGNAT range)
+# - IPv6: fd7a:115c:a1e0::/48 (Tailscale ULA range)
+#
+# Using a SUBSET of these ranges is supported and useful if you want to
+# limit IP allocation to a smaller block (e.g., 100.64.0.0/24).
+#
+# Using ranges OUTSIDE of CGNAT/ULA is NOT supported and will cause
+# undefined behaviour. The Tailscale client has hard-coded assumptions
+# about these ranges and will break in subtle, hard-to-debug ways.
+#
+# See:
 # IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
-# Any other range is NOT supported, and it will cause unexpected issues.
+# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
 prefixes:
   v4: 100.64.0.0/10
   v6: fd7a:115c:a1e0::/48
@@ -136,8 +145,25 @@ derp:
 # Disables the automatic check for headscale updates on startup
 disable_check_updates: false

-# Time before an inactive ephemeral node is deleted?
-ephemeral_node_inactivity_timeout: 30m
+# Node lifecycle configuration.
+node:
+  # Default key expiry for non-tagged nodes, regardless of registration method
+  # (auth key, CLI, web auth). Tagged nodes are exempt and never expire.
+  #
+  # This is the base default. OIDC can override this via oidc.expiry.
+  # If a client explicitly requests a specific expiry, the client value is used.
+  #
+  # Setting the value to "0" means no default expiry (nodes never expire unless
+  # explicitly expired via `headscale nodes expire`).
+  #
+  # Tailscale SaaS uses 180d; set to a positive duration to match that behaviour.
+  #
+  # Default: 0 (no default expiry)
+  expiry: 0
+
+  ephemeral:
+    # Time before an inactive ephemeral node is deleted.
+    inactivity_timeout: 30m

 database:
   # Database type. Available options: sqlite, postgres
@@ -346,15 +372,11 @@ unix_socket_permission: "0770"
 #   # `LoadCredential` straightforward:
 #   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
 #
-#   # The amount of time a node is authenticated with OpenID until it expires
-#   # and needs to reauthenticate.
-#   # Setting the value to "0" will mean no expiry.
-#   expiry: 180d
-#
 #   # Use the expiry from the token received from OpenID when the user logged
 #   # in. This will typically lead to frequent need to reauthenticate and should
 #   # only be enabled if you know what you are doing.
-#   # Note: enabling this will cause `oidc.expiry` to be ignored.
+#   # Note: enabling this will cause `node.expiry` to be ignored for
+#   # OIDC-authenticated nodes.
 #   use_expiry_from_token: false
 #
 #   # The OIDC scopes to use, defaults to "openid", "profile" and "email".
@@ -428,6 +450,11 @@ taildrop:
# Only modify these if you have identified a specific performance issue.
#
# tuning:
# # Maximum number of pending registration entries in the auth cache.
# # Oldest entries are evicted when the cap is reached.
# #
# # register_cache_max_entries: 1024
#
# # NodeStore write batching configuration.
# # The NodeStore batches write operations before rebuilding peer relationships,
# # which is computationally expensive. Batching reduces rebuild frequency.
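The eviction behaviour described for `register_cache_max_entries` can be sketched as a size-capped map that drops its oldest entry once full. This is an illustrative sketch only, not Headscale's actual implementation:

```python
from collections import OrderedDict


class CappedCache:
    """Size-capped cache that evicts the oldest entries once the cap is hit."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._items: OrderedDict[str, object] = OrderedDict()

    def put(self, key: str, value: object) -> None:
        self._items[key] = value
        self._items.move_to_end(key)          # newest entry goes to the end
        while len(self._items) > self.max_entries:
            self._items.popitem(last=False)   # drop the oldest entry


cache = CappedCache(2)
for key in ("a", "b", "c"):
    cache.put(key, None)
print(list(cache._items))  # → ['b', 'c']
```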


@@ -1,3 +1,3 @@
{%
include-markdown "../../CONTRIBUTING.md"
%}


@@ -24,9 +24,12 @@ We are more than happy to exchange emails, or to have dedicated calls before a P
## When/Why is Feature X going to be implemented?
-We don't know. We might be working on it. If you're interested in contributing, please post a feature request about it.
+We use [GitHub Milestones to plan for upcoming Headscale releases](https://github.com/juanfont/headscale/milestones).
+Have a look at [our current plan](https://github.com/juanfont/headscale/milestones) to get an idea when a specific
+feature is about to be implemented. The release plan is subject to change at any time.
-Please be aware that there are a number of reasons why we might not accept specific contributions:
+If you're interested in contributing, please post a feature request about it. Please be aware that there are a number of
+reasons why we might not accept specific contributions:
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
@@ -47,8 +50,8 @@ we have a "docker-issues" channel where you can ask for Docker-specific help to
## What is the recommended update path? Can I skip multiple versions while updating?
Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
-installation. Its best to update from one stable version to the next (e.g. 0.24.0 &rarr; 0.25.1 &rarr; 0.26.1) in case
-you are multiple releases behind. You should always pick the latest available patch release.
+installation. It's required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without
+skipping minor versions in between. You should always pick the latest available patch release.
Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version specific
upgrade instructions and breaking changes.
@@ -70,12 +73,12 @@ of Headscale:
1. An environment with 1000 servers
- they rarely "move" (change their endpoints)
- new nodes are added rarely
-2. An environment with 80 laptops/phones (end user devices)
+1. An environment with 80 laptops/phones (end user devices)
- nodes move often, e.g. switching from home to office
Headscale calculates a map of all nodes that need to talk to each other,
creating this "world map" requires a lot of CPU time. When an event that
@@ -139,8 +142,8 @@ connect back to the administrator's node. Why do all nodes see the administrator
`tailscale status`? `tailscale status`?
This is essentially how Tailscale works. If traffic is allowed to flow in one direction, then both nodes see each other
-in their output of `tailscale status`. Traffic is still filtered according to the ACL, with the exception of `tailscale
-ping` which is always allowed in either direction.
+in their output of `tailscale status`. Traffic is still filtered according to the ACL, with the exception of
+`tailscale ping` which is always allowed in either direction.
See also <https://tailscale.com/kb/1087/device-visibility>.
@@ -157,8 +160,41 @@ indicates which part of the policy is invalid. Follow these steps to fix your po
!!! warning "Full server configuration required"
The above commands to get/set the policy require a complete server configuration file including database settings. A
-minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use `headscale
--c /path/to/config.yaml` to specify the path to an alternative configuration file.
+minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use
+`headscale -c /path/to/config.yaml` to specify the path to an alternative configuration file.
## How can I migrate back to the recommended IP prefixes?
Tailscale only supports the IP prefixes `100.64.0.0/10` and `fd7a:115c:a1e0::/48` or smaller subnets thereof. The
following steps can be used to migrate from unsupported IP prefixes back to the supported and recommended ones.
!!! warning "Backup and test in a demo environment required"
The commands below update the IP addresses of all nodes in your tailnet, which might have a severe impact on your
specific environment. At a minimum:
- [Create a backup of your database](../setup/upgrade.md#backup)
- Test the commands below in a representative demo environment. This allows you to catch subsequent connectivity
  errors early and see how the tailnet behaves in your specific environment.
- Stop Headscale
- Restore the default prefixes in the [configuration file](../ref/configuration.md):
```yaml
prefixes:
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
```
- Update the `nodes.ipv4` and `nodes.ipv6` columns in the database and assign each node a unique IPv4 and IPv6 address.
The following SQL statement assigns IP addresses based on the node ID:
```sql
UPDATE nodes
SET ipv4=concat('100.64.', id/256, '.', id%256),
ipv6=concat('fd7a:115c:a1e0::', format('%x', id));
```
- Update the [policy](../ref/acls.md) to reflect the IP address changes (if any)
- Start Headscale
Nodes should reconnect within a few seconds and pick up their newly assigned IP addresses.
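The ID-to-address mapping performed by the SQL statement above can be sanity-checked with a short script. This is a sketch of the same arithmetic; note that `id/256` must stay below 64 for the IPv4 result to remain inside `100.64.0.0/10`:

```python
import ipaddress


def ipv4_for(node_id: int) -> str:
    # Mirrors the SQL: concat('100.64.', id/256, '.', id%256)
    return f"100.64.{node_id // 256}.{node_id % 256}"


def ipv6_for(node_id: int) -> str:
    # Mirrors the SQL: concat('fd7a:115c:a1e0::', format('%x', id))
    return f"fd7a:115c:a1e0::{node_id:x}"


for node_id in (1, 256, 300):
    v4, v6 = ipv4_for(node_id), ipv6_for(node_id)
    # Verify both addresses fall inside the recommended prefixes.
    assert ipaddress.ip_address(v4) in ipaddress.ip_network("100.64.0.0/10")
    assert ipaddress.ip_address(v6) in ipaddress.ip_network("fd7a:115c:a1e0::/48")
    print(node_id, v4, v6)
```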
## How can I avoid sending logs to Tailscale Inc?


@@ -5,16 +5,16 @@ to provide self-hosters and hobbyists with an open-source server they can use fo
provides an overview of Headscale's features and compatibility with the Tailscale control server:
- [x] Full "base" support of Tailscale's features
-- [x] Node registration
--     [x] Interactive
--     [x] Pre authenticated key
+- [x] [Node registration](../ref/registration.md)
+    - [x] [Web authentication](../ref/registration.md#web-authentication)
+    - [x] [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
- [x] [DNS](../ref/dns.md)
- [x] [MagicDNS](https://tailscale.com/kb/1081/magicdns)
- [x] [Global and restricted nameservers (split DNS)](https://tailscale.com/kb/1054/dns#nameservers)
- [x] [search domains](https://tailscale.com/kb/1054/dns#search-domains)
- [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records)
- [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop)
-- [x] [Tags](https://tailscale.com/kb/1068/tags)
+- [x] [Tags](../ref/tags.md)
- [x] [Routes](../ref/routes.md)
- [x] [Subnet routers](../ref/routes.md#subnet-router)
- [x] [Exit nodes](../ref/routes.md#exit-node)
@@ -29,7 +29,7 @@ provides on overview of Headscale's feature and compatibility with the Tailscale
routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)
- [x] [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh)
-* [x] [Node registration using Single-Sign-On (OpenID Connect)](../ref/oidc.md) ([GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC))
+- [x] [Node registration using Single-Sign-On (OpenID Connect)](../ref/oidc.md) ([GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC))
- [x] Basic registration
- [x] Update user profile from identity provider
- [ ] OIDC groups cannot be used in ACLs


@@ -222,7 +222,7 @@ Allows access to the internet through [exit nodes](routes.md#exit-node). Can onl
### `autogroup:member`
-Includes all untagged devices.
+Includes all [personal (untagged) devices](registration.md/#identity-model).
```json
{
@@ -234,7 +234,7 @@ Includes all untagged devices.
### `autogroup:tagged`
-Includes all devices that have at least one tag.
+Includes all devices that [have at least one tag](registration.md/#identity-model).
```json
{
@@ -245,7 +245,6 @@ Includes all devices that have at least one tag.
```
### `autogroup:self`
-**(EXPERIMENTAL)**
!!! warning "The current implementation of `autogroup:self` is inefficient"
@@ -258,9 +257,11 @@ Includes devices where the same user is authenticated on both the source and des
"dst": ["autogroup:self:*"]
}
```
*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*
If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.
```json
{
// The following rules allow internal users to communicate with their


@@ -1,4 +1,5 @@
# API
Headscale provides an [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which may be used to integrate a [web
interface](integration/web-ui.md), [remote control Headscale](#setup-remote-control) or provide a base for custom
integration and tooling.
@@ -30,8 +31,7 @@ headscale apikeys expire --prefix <PREFIX>
- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1`
- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger`
- Headscale Version: `/version`, e.g. `https://headscale.example.com/version`
-- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer
-<API_KEY>` header.
+- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer <API_KEY>` header.
Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your
Headscale server at `/swagger` for details.
@@ -54,8 +54,8 @@ Headscale server at `/swagger` for details.
```console
curl -H "Authorization: Bearer <API_KEY>" \
-  -d user=<USER> -d key=<KEY> \
+  --json '{"user": "<USER>", "authId": "<AUTH_ID>"}' \
-  https://headscale.example.com/api/v1/node/register
+  https://headscale.example.com/api/v1/auth/register
```
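The Bearer-auth scheme from the examples above can also be assembled programmatically. A minimal Python sketch that builds (but does not send) an authenticated request against the node-listing endpoint; the base URL and API key placeholders are yours to fill in:

```python
import urllib.request


def list_nodes_request(base_url: str, api_key: str) -> urllib.request.Request:
    # Build an authenticated GET against /api/v1/node.
    # Sending it is left to urllib.request.urlopen(...).
    return urllib.request.Request(
        url=f"{base_url}/api/v1/node",
        headers={"Authorization": f"Bearer {api_key}"},
    )


req = list_nodes_request("https://headscale.example.com", "<API_KEY>")
print(req.full_url)  # → https://headscale.example.com/api/v1/node
```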
## gRPC
@@ -72,17 +72,17 @@ The gRPC interface can be used to control a Headscale instance from a remote mac
### Setup remote control
1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
   sure to use the same version as on the server.
1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale`
1. [Create an API key](#api) on the Headscale server.
1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or
   via environment variables:
=== "Minimal YAML configuration file"
@@ -102,7 +102,7 @@ The gRPC interface can be used to control a Headscale instance from a remote mac
This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
connecting to the local instance.
1. Test the connection by listing all nodes:
```shell
headscale nodes list


@@ -17,8 +17,8 @@
=== "View on GitHub"
-* Development version: <https://github.com/juanfont/headscale/blob/main/config-example.yaml>
-* Version {{ headscale.version }}: <https://github.com/juanfont/headscale/blob/v{{ headscale.version }}/config-example.yaml>
+- Development version: <https://github.com/juanfont/headscale/blob/main/config-example.yaml>
+- Version {{ headscale.version }}: https://github.com/juanfont/headscale/blob/v{{ headscale.version }}/config-example.yaml
=== "Download with `wget`"


@@ -63,10 +63,10 @@ maps fetched via URL or to offer your own, custom DERP servers to nodes.
ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
`derp.yaml` disables the New York DERP region (which has the region ID 1):
```yaml title="derp.yaml"
regions:
  1: null
```
Use the following configuration to serve the default DERP map (excluding New York) to nodes:
@@ -165,11 +165,10 @@ Any Tailscale client may be used to introspect the DERP map and to check for con
Additional DERP related metrics and information is available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).
+[^1]:
+    This assumes that the default region code of the [configuration file](./configuration.md) is used.
## Limitations
- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the `/generate_204`
  endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations, the main purpose is to assist in node connectivity.
-[^1]: This assumes that the default region code of the [configuration file](./configuration.md) is used.


@@ -25,7 +25,7 @@ hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:30
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).
1. Configure extra DNS records using one of the available configuration options:
=== "Static entries, via `dns.extra_records`"
@@ -66,12 +66,12 @@ hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:30
!!! tip "Good to know"
-* The `dns.extra_records_path` option in the [configuration file](./configuration.md) needs to reference the
+- The `dns.extra_records_path` option in the [configuration file](./configuration.md) needs to reference the
  JSON file containing extra DNS records.
-* Be sure to "sort keys" and produce a stable output in case you generate the JSON file with a script.
+- Be sure to "sort keys" and produce a stable output in case you generate the JSON file with a script.
  Headscale uses a checksum to detect changes to the file and a stable output avoids unnecessary processing.
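The "sort keys" advice above can be followed with any JSON serializer that supports deterministic output. A minimal Python sketch; the record name and value are placeholders, and the exact record schema should be taken from the extra-records documentation:

```python
import json

# Hypothetical extra DNS records; check the documented schema for real use.
records = [
    {"name": "grafana.myvpn.example.com", "type": "A", "value": "100.64.0.3"},
]

# sort_keys plus fixed separators yields byte-stable output, so Headscale's
# checksum only changes when the records themselves change.
stable = json.dumps(records, sort_keys=True, separators=(",", ":"))
print(stable)
```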
1. Verify that DNS records are properly set using the DNS querying tool of your choice:
=== "Query with dig"
@@ -87,7 +87,7 @@ hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:30
100.64.0.3
```
1. Optional: Set up the reverse proxy
The motivating example here was to be able to access internal monitoring services on the same host without
specifying a port, depicted as an NGINX configuration snippet:


@@ -20,5 +20,7 @@ Headscale doesn't provide a built-in web interface but users may pick one from t
- [headscale-console](https://github.com/rickli-cloud/headscale-console) - WebAssembly-based client supporting SSH, VNC
  and RDP with optional self-service capabilities
- [headscale-piying](https://github.com/wszgrcy/headscale-piying) - Headscale web UI with support for visual ACL configuration
- [HeadControl](https://github.com/ahmadzip/HeadControl) - Minimal Headscale admin dashboard, built with Go and HTMX
- [Headscale Manager](https://github.com/hkdone/headscalemanager) - Headscale UI for Android
You can ask for support on our [Discord server](https://discord.gg/c84AZQhmpx) in the "web-interfaces" channel.


@@ -40,9 +40,9 @@ A basic configuration connects Headscale to an identity provider and typically r
=== "Identity provider"
-* Create a new confidential client (`Client ID`, `Client secret`)
-* Add Headscale's OIDC callback URL as valid redirect URL: `https://headscale.example.com/oidc/callback`
-* Configure additional parameters to improve user experience such as: name, description, logo, …
+- Create a new confidential client (`Client ID`, `Client secret`)
+- Add Headscale's OIDC callback URL as valid redirect URL: `https://headscale.example.com/oidc/callback`
+- Configure additional parameters to improve user experience such as: name, description, logo, …
### Enable PKCE (recommended)
@@ -63,8 +63,8 @@ recommended and needs to be configured for Headscale and the identity provider a
=== "Identity provider"
-* Enable PKCE for the headscale client
-* Set the PKCE challenge method to "S256"
+- Enable PKCE for the headscale client
+- Set the PKCE challenge method to "S256"
### Authorize users with filters
@@ -75,11 +75,11 @@ are configured, a user needs to pass all of them.
=== "Allowed domains"
-* Check the email domain of each authenticating user against the list of allowed domains and only authorize users
+- Check the email domain of each authenticating user against the list of allowed domains and only authorize users
  whose email domain matches `example.com`.
-* A verified email address is required [unless email verification is disabled](#control-email-verification).
-* Access allowed: `alice@example.com`
-* Access denied: `bob@example.net`
+- A verified email address is required [unless email verification is disabled](#control-email-verification).
+- Access allowed: `alice@example.com`
+- Access denied: `bob@example.net`
```yaml hl_lines="5-6"
oidc:
@@ -92,11 +92,11 @@ are configured, a user needs to pass all of them.
=== "Allowed users/emails"
-* Check the email address of each authenticating user against the list of allowed email addresses and only authorize
+- Check the email address of each authenticating user against the list of allowed email addresses and only authorize
  users whose email is part of the `allowed_users` list.
-* A verified email address is required [unless email verification is disabled](#control-email-verification).
-* Access allowed: `alice@example.com`, `bob@example.net`
-* Access denied: `mallory@example.net`
+- A verified email address is required [unless email verification is disabled](#control-email-verification).
+- Access allowed: `alice@example.com`, `bob@example.net`
+- Access denied: `mallory@example.net`
```yaml hl_lines="5-7"
oidc:
@@ -110,10 +110,10 @@ are configured, a user needs to pass all of them.
=== "Allowed groups"
-* Use the OIDC `groups` claim of each authenticating user to get their group membership and only authorize users
+- Use the OIDC `groups` claim of each authenticating user to get their group membership and only authorize users
  which are members in at least one of the referenced groups.
-* Access allowed: users in the `headscale_users` group
-* Access denied: users without groups, users with other groups
+- Access allowed: users in the `headscale_users` group
+- Access denied: users without groups, users with other groups
```yaml hl_lines="5-7"
oidc:
@@ -145,16 +145,12 @@ oidc:
### Customize node expiration
The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to
-reauthenticate. The default node expiration is 180 days. This can either be customized or set to the expiration from the
-Access Token.
+reauthenticate. The default node expiration can be configured via the top-level `node.expiry` setting.
=== "Customize node expiration"
-```yaml hl_lines="5"
-oidc:
-  issuer: "https://sso.example.com"
-  client_id: "headscale"
-  client_secret: "generated-secret"
+```yaml hl_lines="2"
+node:
  expiry: 30d # Use 0 to disable node expiration
```
@@ -163,7 +159,6 @@ Access Token.
Please keep in mind that the Access Token is typically a short-lived token that expires within a few minutes. You
will have to configure token expiration in your identity provider to avoid frequent re-authentication.
```yaml hl_lines="5"
oidc:
  issuer: "https://sso.example.com"
@@ -175,6 +170,7 @@ Access Token.
!!! tip "Expire a node and force re-authentication"
A node can be expired immediately via:
```console
headscale node expire -i <NODE_ID>
```
@@ -185,13 +181,16 @@ You may refer to users in the Headscale policy via:
- Email address
- Username
-- Provider identifier (only available in the database or from your identity provider)
+- Provider identifier (this value is currently only available from the [API](api.md), database or directly from your
+  identity provider)
!!! note "A user identifier in the policy must contain a single `@`"
The Headscale policy requires a single `@` to reference a user. If the username or provider identifier doesn't
-already contain a single `@`, it needs to be appended at the end. For example: the username `ssmith` has to be
-written as `ssmith@` to be correctly identified as user within the policy.
+already contain a single `@`, it needs to be appended at the end. For example: the Headscale username `ssmith` has
+to be written as `ssmith@` to be correctly identified as user within the policy.
+Ensure that the Headscale username itself does not end with `@`.
!!! warning "Email address or username might be updated by users"
@@ -200,6 +199,34 @@ You may refer to users in the Headscale policy via:
consequences for Headscale where a policy might no longer work or a user might obtain more access by hijacking an
existing username or email address.
!!! tip "How to use the provider identifier in the policy"
The provider identifier uniquely identifies an OIDC user and a well-behaving identity provider guarantees that this
value never changes for a particular user. It is usually an opaque and long string and its value is currently only
available from the [API](api.md), database or directly from your identity provider.
Use the [API](api.md) with the `/api/v1/user` endpoint to fetch the provider identifier (`providerId`). The value
(be sure to append an `@` in case the provider identifier doesn't already contain an `@` somewhere) can be used
directly to reference a user in the policy. To improve readability of the policy, one may use the `groups` section
as an alias:
```json
{
"groups": {
"group:alice": [
"https://sso.example.com/oauth2/openid/59ac9125-c31b-46c5-814e-06242908cf57@"
]
},
"acls": [
{
"action": "accept",
"src": ["group:alice"],
"dst": ["*:*"]
}
]
}
```
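Fetching and normalising the provider identifier can be scripted. A sketch against a sample `/api/v1/user` response; the response shape shown here is an assumption and may differ between versions:

```python
import json

# A sample response body; fetch the real one from /api/v1/user with your API key.
sample = json.loads(
    '{"users": [{"name": "ssmith", '
    '"providerId": "https://sso.example.com/oauth2/openid/59ac9125@"}]}'
)


def policy_user_refs(response: dict) -> list[str]:
    # Append a trailing "@" unless the provider identifier already contains one.
    refs = []
    for user in response.get("users", []):
        pid = user["providerId"]
        refs.append(pid if "@" in pid else pid + "@")
    return refs


print(policy_user_refs(sample))
```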
## Supported OIDC claims
Headscale uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to
@@ -250,8 +277,9 @@ Authelia is fully supported by Headscale.
### Authentik
- Authentik is fully supported by Headscale.
-- [Headscale does not JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
+- [Headscale does not support JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
  `Encryption Key` in the providers section unset.
- See Authentik's [Integrate with Headscale](https://integrations.goauthentik.io/networking/headscale/) guide.
### Google OAuth
@@ -273,22 +301,30 @@ Console.
#### Steps
1. Go to [Google Console](https://console.cloud.google.com) and login or create an account if you don't have one.
-2. Create a project (if you don't already have one).
+1. Create a project (if you don't already have one).
-3. On the left hand menu, go to `APIs and services` -> `Credentials`
+1. On the left hand menu, go to `APIs and services` -> `Credentials`
-4. Click `Create Credentials` -> `OAuth client ID`
+1. Click `Create Credentials` -> `OAuth client ID`
-5. Under `Application Type`, choose `Web Application`
+1. Under `Application Type`, choose `Web Application`
-6. For `Name`, enter whatever you like
+1. For `Name`, enter whatever you like
-7. Under `Authorised redirect URIs`, add Headscale's OIDC callback URL: `https://headscale.example.com/oidc/callback`
+1. Under `Authorised redirect URIs`, add Headscale's OIDC callback URL: `https://headscale.example.com/oidc/callback`
-8. Click `Save` at the bottom of the form
+1. Click `Save` at the bottom of the form
-9. Take note of the `Client ID` and `Client secret`, you can also download it for reference if you need it.
+1. Take note of the `Client ID` and `Client secret`, you can also download it for reference if you need it.
-10. [Configure Headscale following the "Basic configuration" steps](#basic-configuration). The issuer URL for Google
+1. [Configure Headscale following the "Basic configuration" steps](#basic-configuration). The issuer URL for Google
OAuth is: `https://accounts.google.com`.
### Kanidm
- Kanidm is fully supported by Headscale.
- Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their full SPN, for
  example: `headscale_users@sso.example.com`.
- Kanidm sends the full SPN (`alice@sso.example.com`) as `preferred_username` by default. Headscale stores this value as
  the username, which can be confusing as both the username and email fields then contain values that look like an email
  address. [Kanidm can be configured to send the short username as `preferred_username` attribute
  instead](https://kanidm.github.io/kanidm/stable/integrations/oauth2.html#short-names):
```console
kanidm system oauth2 prefer-short-username <client name>
```
Once configured, the short username in Headscale will be `alice` and can be referred to as `alice@` in the policy.
### Keycloak
Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID (UUID)
instead of the group name.
## Switching OIDC providers
Headscale only supports a single OIDC provider in its configuration, but it stores the provider identifier of each user. Switching providers can therefore cause issues with existing users: all user details (name, email, groups) may be identical with the new provider, but the identifier will differ. Headscale is then unable to create a new user because the name and email are already in use by the existing user.
At this time, you will need to manually update the `provider_identifier` column in the `users` table for each user with the appropriate value for the new provider. The identifier is built from the `iss` and `sub` claims of the OIDC ID token, for example `https://id.example.com/12340987`.
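A sketch of that manual step for the default SQLite database (the table and column names follow the text above; the replacement identifier value is an assumption — use the value issued by your new provider, and back up the database first):

```shell
# Demonstrated on a throwaway database; on a real installation point sqlite3 at
# /var/lib/headscale/db.sqlite (stop Headscale and take a backup first).
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE users (name TEXT, provider_identifier TEXT);"
sqlite3 "$db" "INSERT INTO users VALUES ('alice', 'https://id.example.com/12340987');"
# The manual migration step: set the identifier issued by the new provider.
sqlite3 "$db" "UPDATE users SET provider_identifier = 'https://new-id.example.com/abcd1234' WHERE name = 'alice';"
result=$(sqlite3 "$db" "SELECT provider_identifier FROM users WHERE name = 'alice';")
echo "$result"
rm -f "$db"
```

After the update, restart Headscale and verify that the user logs in to the existing account instead of a new one.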

docs/ref/registration.md (new file, 144 lines)

# Registration methods
Headscale supports multiple ways to register a node. The preferred registration method depends on the identity of a node
and your use case.
## Identity model
Tailscale's identity model distinguishes between personal and tagged nodes:
- A personal node (or user-owned node) is owned by a human and typically refers to end-user devices such as laptops,
workstations or mobile phones. End-user devices are managed by a single user.
- A tagged node (or service-based node or non-human node) provides services to the network. Common examples include web
  and database servers. Those nodes are typically managed by a team of users. Some additional restrictions apply for
  tagged nodes, e.g. a tagged node is not allowed to use [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh) to
  connect to a personal node.
Headscale implements Tailscale's identity model and distinguishes between personal and tagged nodes: a personal node is
owned by a Headscale user and a tagged node is owned by a tag. Tagged devices are grouped under the special user
`tagged-devices`.
## Registration methods
There are two main ways to register new nodes, [web authentication](#web-authentication) and [registration with a pre
authenticated key](#pre-authenticated-key). Both methods can be used to register personal and tagged nodes.
### Web authentication
Web authentication is the default method to register a new node. It is interactive: the client initiates the
registration and the Headscale administrator needs to approve the new node before it is allowed to join the network. A
node can be approved with:
- Headscale CLI (described in this documentation)
- [Headscale API](api.md)
- Or delegated to an identity provider via [OpenID Connect](oidc.md)
Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new
user[^1]:
```console
headscale users create <USER>
```
=== "Personal devices"
Run `tailscale up` to log in your personal device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output
of `headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Your Headscale user needs to be authorized to register tagged devices. This authorization is specified in the
[`tagOwners`](https://tailscale.com/kb/1337/policy-syntax#tag-owners) section of the [ACL](acls.md). A simple
example looks like this:
```json title="The user alice can register nodes tagged with tag:server"
{
"tagOwners": {
"tag:server": ["alice@"]
},
// more rules
}
```
Run `tailscale up` and provide at least one tag to log in a tagged device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:<TAG>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
Headscale checks that `<USER>` is allowed to register a node with the specified tag(s) and then transfers ownership
of the new node to the special user `tagged-devices`. The registration of a tagged node is complete and it should be
listed as "online" in the output of `headscale nodes list`. The "User" column displays `tagged-devices` as the owner
of the node. See the "Tags" column for the list of assigned tags.
### Pre authenticated key
Registration with a pre authenticated key (or auth key) is a non-interactive way to register a new node. The Headscale
administrator creates a preauthkey upfront, which can then be used to register a node non-interactively. It's best
suited for automation.
=== "Personal devices"
A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user[^1]:
```console
headscale users create <USER>
```
Use the `headscale users list` command to look up the `<USER_ID>` of your user and create a new pre authenticated key for it:
```console
headscale preauthkeys create --user <USER_ID>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output
of `headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Create a new pre authenticated key and provide at least one tag:
```console
headscale preauthkeys create --tags tag:<TAG>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively. You don't need to provide the `--advertise-tags` parameter as
the tags are automatically read from the pre authenticated key:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
The registration of a tagged node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `tagged-devices` as the owner of the node. See the "Tags" column for the list of
assigned tags.
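The default key settings can be changed at creation time. A sketch assuming the `--reusable` and `--expiration` flags of `headscale preauthkeys create` (check `headscale preauthkeys create --help` on your version):

```console
headscale preauthkeys create --user <USER_ID> --reusable --expiration 24h
```

A reusable key can register multiple nodes until it expires, which is convenient for provisioning several machines from one key.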
[^1]: [Ensure that the Headscale username does not end with `@`.](oidc.md#reference-a-user-in-the-policy)

docs/ref/tags.md (new file, 54 lines)

# Tags
Headscale supports Tailscale tags. Please read [Tailscale's tag documentation](https://tailscale.com/kb/1068/tags) to
learn how tags work and how to use them.
Tags can be applied during [node registration](registration.md):
- using the `--advertise-tags` flag, see [web authentication for tagged devices](registration.md#__tabbed_1_2)
- using a tagged pre authenticated key, see [how to create and use it](registration.md#__tabbed_2_2)
Administrators can manage tags with:
- Headscale CLI
- [Headscale API](api.md)
## Common operations
### Manage tags for a node
Run `headscale nodes list` to list the tags for a node.
Use the `headscale nodes tag` command to modify the tags for a node. At least one tag is required and multiple tags can
be provided as a comma-separated list. The following command sets the tags `tag:server` and `tag:prod` on the node with ID 1:
```console
headscale nodes tag -i 1 -t tag:server,tag:prod
```
### Convert from personal to tagged node
Use the `headscale nodes tag` command to convert a personal (user-owned) node to a tagged node:
```console
headscale nodes tag -i <NODE_ID> -t <TAG>
```
The node is now owned by the special user `tagged-devices` and has the specified tags assigned to it.
### Convert from tagged to personal node
Tagged nodes can be converted back to personal (user-owned) nodes by re-authenticating:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags= --force-reauth
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your Headscale server and it also prints the Auth ID required to approve the node:
```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```
All previously assigned tags get removed and the node is now owned by the user specified in the above command.


Headscale uses [autocert](https://pkg.go.dev/golang.org/x/crypto/acme/autocert) for automatic certificate renewal.
If you want to validate that certificate renewal completed successfully, this can be done either manually or through external monitoring software. Two examples of doing this manually:
1. Open the URL for your headscale server in your browser of choice, and manually inspect the expiry date of the certificate you receive.
1. Or, check remotely from CLI using `openssl`:
```console
$ openssl s_client -servername [hostname] -connect [hostname]:443 | openssl x509 -noout -dates
```


mike~=2.1
mkdocs-include-markdown-plugin~=7.2
mkdocs-macros-plugin~=1.5
mkdocs-material[imaging]~=9.5
mkdocs-minify-plugin~=0.8
mkdocs-redirects~=1.2


## Configure and run headscale
1. Create a directory on the container host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database:
```shell
mkdir -p ./headscale/{config,lib}
cd ./headscale
```
1. Download the example configuration for your chosen version and save it as: `$(pwd)/config/config.yaml`. Adjust the
configuration to suit your local environment. See [Configuration](../../ref/configuration.md) for details.
1. Start headscale from within the previously created `./headscale` directory:
```shell
docker run \
  ...
```
```yaml
test: ["CMD", "headscale", "health"]
```
1. Verify headscale is running:
Follow the container logs:


# Development builds
!!! warning
Development builds are created automatically from the latest `main` branch
and are **not versioned releases**. They may contain incomplete features,
breaking changes, or bugs. Use them for testing only.
Each push to `main` produces container images and cross-compiled binaries.
Container images are multi-arch (amd64, arm64) and use the same distroless
base image as official releases.
## Container images
Images are available from both Docker Hub and GitHub Container Registry, tagged
with the short commit hash of the build (e.g. `main-abc1234`):
- Docker Hub: `docker.io/headscale/headscale:main-<sha>`
- GitHub Container Registry: `ghcr.io/juanfont/headscale:main-<sha>`
To find the latest available tag, check the
[GitHub Actions workflow](https://github.com/juanfont/headscale/actions/workflows/container-main.yml)
or the [GitHub Container Registry package page](https://github.com/juanfont/headscale/pkgs/container/headscale).
For example, to run a specific development build:
```shell
docker run \
--name headscale \
--detach \
--read-only \
--tmpfs /var/run/headscale \
--volume "$(pwd)/config:/etc/headscale:ro" \
--volume "$(pwd)/lib:/var/lib/headscale" \
--publish 127.0.0.1:8080:8080 \
--publish 127.0.0.1:9090:9090 \
--health-cmd "headscale health" \
docker.io/headscale/headscale:main-<sha> \
serve
```
See [Running headscale in a container](./container.md) for full container setup instructions.
## Binaries
Pre-built binaries from the latest successful build on `main` are available
via [nightly.link](https://nightly.link/juanfont/headscale/workflows/container-main/main):
| OS | Arch | Download |
| ----- | ----- | -------------------------------------------------------------------------------------------------------------------------- |
| Linux | amd64 | [headscale-linux-amd64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-linux-amd64.zip) |
| Linux | arm64 | [headscale-linux-arm64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-linux-arm64.zip) |
| macOS | amd64 | [headscale-darwin-amd64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-darwin-amd64.zip) |
| macOS | arm64 | [headscale-darwin-arm64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-darwin-arm64.zip) |
After downloading and extracting the archive, make the binary executable and follow the
[standalone binary installation](./official.md#using-standalone-binaries-advanced)
instructions for setting up the service.
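The downloads above are zip archives. Assuming each archive contains a single `headscale` binary, preparing it might look like:

```console
unzip headscale-linux-amd64.zip
chmod +x headscale
./headscale version
```

The reported version of a development build identifies the commit it was built from.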


It is recommended to use our DEB packages to install headscale on a Debian based distribution. These packages create a
local user to run headscale, provide a default configuration and ship with a systemd service file. Supported
distributions are Ubuntu 22.04 or newer, Debian 12 or newer.
1. Download the [latest headscale package](https://github.com/juanfont/headscale/releases/latest) for your platform (`.deb` for Ubuntu and Debian).
```shell
HEADSCALE_VERSION="" # See above URL for latest version, e.g. "X.Y.Z" (NOTE: do not add the "v" prefix!)
HEADSCALE_ARCH="" # Your system architecture, e.g. "amd64"
wget --output-document=headscale.deb \
  "https://github.com/juanfont/headscale/releases/download/v${HEADSCALE_VERSION}/headscale_${HEADSCALE_VERSION}_linux_${HEADSCALE_ARCH}.deb"
```
1. Install headscale:
```shell
sudo apt install ./headscale.deb
```
1. [Configure headscale by editing the configuration file](../../ref/configuration.md):
```shell
sudo nano /etc/headscale/config.yaml
```
1. Enable and start the headscale service:
```shell
sudo systemctl enable --now headscale
```
1. Verify that headscale is running as intended:
```shell
sudo systemctl status headscale
```
This section describes the installation of headscale according to the [Requirements and
assumptions](../requirements.md#assumptions). Headscale is run by a dedicated local user and the service itself is
managed by systemd.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
sudo wget --output-document=/usr/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
1. Make `headscale` executable:
```shell
sudo chmod +x /usr/bin/headscale
```
1. Add a dedicated local user to run headscale:
```shell
sudo useradd \
headscale
```
1. Download the example configuration for your chosen version and save it as: `/etc/headscale/config.yaml`. Adjust the
configuration to suit your local environment. See [Configuration](../../ref/configuration.md) for details.
```shell
sudo mkdir -p /etc/headscale
sudo nano /etc/headscale/config.yaml
```
1. Copy [headscale's systemd service file](https://github.com/juanfont/headscale/blob/main/packaging/systemd/headscale.service)
to `/etc/systemd/system/headscale.service` and adjust it to suit your local setup. The following parameters likely need
to be modified: `ExecStart`, `WorkingDirectory`, `ReadWritePaths`.
1. In `/etc/headscale/config.yaml`, override the default `headscale` unix socket with a path that is writable by the
`headscale` user or group:
```yaml title="config.yaml"
unix_socket: /var/run/headscale/headscale.sock
```
1. Reload systemd to load the new configuration file:
```shell
systemctl daemon-reload
```
1. Enable and start the new headscale service:
```shell
systemctl enable --now headscale
```
1. Verify that headscale is running as intended:
```shell
systemctl status headscale
```


The headscale documentation and the provided examples are written with a few assumptions in mind.
Please adjust to your local environment accordingly.
[^1]: The Tailscale client assumes HTTPS on port 443 in certain situations. Serving headscale either via HTTP or via
    HTTPS on a port other than 443 is possible but sticking with HTTPS on port 443 is strongly recommended for
    production setups. See [issue 2164](https://github.com/juanfont/headscale/issues/2164) for more information.


# Upgrade an existing installation
!!! tip "Required update path"
    It is required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without skipping
    minor versions in between. You should always pick the latest available patch release.
Update an existing Headscale installation to a new version:
- Read the announcement on the [GitHub releases](https://github.com/juanfont/headscale/releases) page for the new
  version. It lists the changes of the release along with possible breaking changes and version-specific upgrade
  instructions.
- Stop Headscale
- **[Create a backup of your installation](#backup)**
- Update Headscale to the new version, preferably by following the same installation method.
- Compare and update the [configuration](../ref/configuration.md) file.
- Start Headscale
## Backup
Headscale applies database migrations during upgrades and we highly recommend creating a backup of your database before
upgrading. A full backup of Headscale depends on your individual setup, but below are some typical scenarios.
=== "Standard installation"
An installation that follows our [official releases](install/official.md) setup guide uses the following paths:
- [Configuration file](../ref/configuration.md): `/etc/headscale/config.yaml`
- Data directory: `/var/lib/headscale`
- SQLite as database: `/var/lib/headscale/db.sqlite`
```console
TIMESTAMP=$(date +%Y%m%d%H%M%S)
cp -aR /etc/headscale /etc/headscale.backup-$TIMESTAMP
cp -aR /var/lib/headscale /var/lib/headscale.backup-$TIMESTAMP
```
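Alternatively, if you prefer a consistent snapshot over a plain file copy, SQLite's online `.backup` command can be used (a sketch assuming the database path above and that the `sqlite3` CLI is installed; `$TIMESTAMP` as defined in the commands above):

```console
sqlite3 /var/lib/headscale/db.sqlite ".backup /var/lib/headscale-db-$TIMESTAMP.sqlite"
```

The `.backup` command produces a consistent copy even if the database is being written to, which a plain `cp` does not guarantee.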
=== "Container"
An installation that follows our [container](install/container.md) setup guide uses a single source volume directory
that contains the configuration file, data directory and the SQLite database.
```console
cp -aR /path/to/headscale /path/to/headscale.backup-$(date +%Y%m%d%H%M%S)
```
=== "PostgreSQL"
Please follow PostgreSQL's [Backup and Restore](https://www.postgresql.org/docs/current/backup.html) documentation
to create a backup of your PostgreSQL database.
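For example, a logical dump with `pg_dump` (a sketch: the database name, user and output file are assumptions — adjust to your setup):

```console
sudo -u postgres pg_dump --format=custom --file=headscale-$(date +%Y%m%d%H%M%S).dump headscale
```

A custom-format dump can later be restored selectively with `pg_restore`.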


This documentation has the goal of showing how a user can use the official Android client with headscale.
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
## Connect via web authentication
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- The client connects automatically as soon as the node registration is complete on headscale. Until then, nothing is
  visible in the server logs.
## Connect using a pre authenticated key
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- Open the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an auth key`
- Enter your [preauthkey generated from headscale](../../ref/registration.md#pre-authenticated-key)
- If needed, tap `Log in` on the main screen. You should now be connected to your headscale.


This typically means that the registry keys above were not set appropriately.
To reset and try again, it is important to do the following:
1. Shut down the Tailscale service (or the client running in the tray)
1. Delete the Tailscale application data folder, located at `C:\Users\<USERNAME>\AppData\Local\Tailscale` and try to connect again.
1. Ensure the Windows node is deleted from headscale (to ensure fresh setup)
1. Start Tailscale on the Windows machine and retry the login.
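Step 2 can be done from a PowerShell prompt once Tailscale is stopped (a sketch; the path matches the folder named above):

```console
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Tailscale"
```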


This page helps you get started with headscale and provides a few usage examples.
!!! note "Prerequisites"
- Headscale is installed and running as a system service. Read the [setup section](../setup/requirements.md) for
  installation instructions.
- The configuration file exists and is adjusted to suit your environment, see
  [Configuration](../ref/configuration.md) for details.
- Headscale is reachable from the Internet. Verify this by visiting the health endpoint:
  https://headscale.example.com/health
- The Tailscale client is installed, see [Client and operating system support](../about/clients.md) for more
  information.
## Getting help
communicate with the headscale service you have to make sure the unix socket is accessible by the user that runs
the commands. In general you can achieve this by any of the following methods:
- using `sudo`
- run the commands as user `headscale`
- add your user to the `headscale` group
To verify, you can run the following command using your preferred method:
## Manage headscale users
In headscale, a node (also known as machine or device) is [typically assigned to a headscale
user](../ref/registration.md#identity-model). Such a headscale user[^1] may have many nodes assigned to them and can be
managed with the `headscale users` command. Invoke the built-in help for more information: `headscale users --help`.
### Create a headscale user
## Register a node
One has to [register a node](../ref/registration.md) first to use headscale as a coordination server with Tailscale. The
following examples work for the Tailscale client on Linux/BSD operating systems. Alternatively, follow the instructions
to connect [Android](connect/android.md), [Apple](connect/apple.md) or [Windows](connect/windows.md) devices. See
[registration methods](../ref/registration.md) for an overview of the available methods.
### Normal, interactive login ### [Web authentication](../ref/registration.md#web-authentication)
On a client machine, run the `tailscale up` command and provide the FQDN of your headscale instance as argument: On a client machine, run the `tailscale up` command and provide the FQDN of your headscale instance as argument:
@@ -109,27 +109,26 @@ On a client machine, run the `tailscale up` command and provide the FQDN of your
tailscale up --login-server <YOUR_HEADSCALE_URL> tailscale up --login-server <YOUR_HEADSCALE_URL>
``` ```
Usually, a browser window with further instructions is opened and contains the value for `<YOUR_MACHINE_KEY>`. Approve Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
and register the node on your headscale server: your headscale server and it also prints the Auth ID required to approve the node:
=== "Native" === "Native"
```shell ```shell
headscale nodes register --user <USER> --key <YOUR_MACHINE_KEY> headscale auth register --user <USER> --auth-id <AUTH_ID>
``` ```
=== "Container" === "Container"
```shell ```shell
docker exec -it headscale \ docker exec -it headscale \
headscale nodes register --user <USER> --key <YOUR_MACHINE_KEY> headscale auth register --user <USER> --auth-id <AUTH_ID>
``` ```
### Using a preauthkey ### [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
It is also possible to generate a preauthkey and register a node non-interactively. First, generate a preauthkey on the It is also possible to generate a preauthkey and register a node non-interactively. First, generate a preauthkey on the
headscale instance. By default, the key is valid for one hour and can only be used once (see `headscale preauthkeys headscale instance. By default, the key is valid for one hour and can only be used once (see `headscale preauthkeys --help` for other options):
--help` for other options):
=== "Native" === "Native"
@@ -150,3 +149,5 @@ The command returns the preauthkey on success which is used to connect a node to
```shell ```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY> tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
``` ```
[^1]: [Ensure that the Headscale username does not end with `@`.](../ref/oidc.md#reference-a-user-in-the-policy)
flake.lock generated
@@ -20,11 +20,11 @@
 },
 "nixpkgs": {
 "locked": {
-"lastModified": 1768875095,
-"narHash": "sha256-dYP3DjiL7oIiiq3H65tGIXXIT1Waiadmv93JS0sS+8A=",
+"lastModified": 1775888245,
+"narHash": "sha256-nwASzrRDD1JBEu/o8ekKYEXm/oJW6EMCzCRdrwcLe90=",
 "owner": "NixOS",
 "repo": "nixpkgs",
-"rev": "ed142ab1b3a092c4d149245d0c4126a5d7ea00b0",
+"rev": "13043924aaa7375ce482ebe2494338e058282925",
 "type": "github"
 },
 "original": {
@@ -26,8 +26,8 @@
 overlays.default = _: prev:
 let
 pkgs = nixpkgs.legacyPackages.${prev.stdenv.hostPlatform.system};
-buildGo = pkgs.buildGo125Module;
-vendorHash = "sha256-dWsDgI5K+8mFw4PA5gfFBPCSqBJp5RcZzm0ML1+HsWw=";
+buildGo = pkgs.buildGo126Module;
+vendorHash = "sha256-x0xXxa7sjyDwWLq8fO0Z/pbPefctzctK3TAdBea7FtY=";
 in
 {
 headscale = buildGo {
@@ -62,16 +62,16 @@
 protoc-gen-grpc-gateway = buildGo rec {
 pname = "grpc-gateway";
-version = "2.27.4";
+version = "2.28.0";
 src = pkgs.fetchFromGitHub {
 owner = "grpc-ecosystem";
 repo = "grpc-gateway";
 rev = "v${version}";
-sha256 = "sha256-4bhEQTVV04EyX/qJGNMIAQDcMWcDVr1tFkEjBHpc2CA=";
+sha256 = "sha256-93omvHb+b+S0w4D+FGEEwYYDjgumJFDAruc1P4elfvA=";
 };
-vendorHash = "sha256-ohZW/uPdt08Y2EpIQ2yeyGSjV9O58+QbQQqYrs6O8/g=";
+vendorHash = "sha256-jVP5zfFPfHeAEApKNJzZwuZLA+DjKgkL7m2DFG72UNs=";
 nativeBuildInputs = [ pkgs.installShellFiles ];
@@ -80,13 +80,13 @@
 protobuf-language-server = buildGo rec {
 pname = "protobuf-language-server";
-version = "1cf777d";
+version = "ab4c128";
 src = pkgs.fetchFromGitHub {
 owner = "lasorda";
 repo = "protobuf-language-server";
-rev = "1cf777de4d35a6e493a689e3ca1a6183ce3206b6";
-sha256 = "sha256-9MkBQPxr/TDr/sNz/Sk7eoZwZwzdVbE5u6RugXXk5iY=";
+rev = "ab4c128f00774d51bd6d1f4cfa735f4b7c8619e3";
+sha256 = "sha256-yF6kG+qTRxVO/qp2V9HgTyFBeOm5RQzeqdZFrdidwxM=";
 };
 vendorHash = "sha256-4nTpKBe7ekJsfQf+P6edT/9Vp2SBYbKz1ITawD3bhkI=";
@@ -94,19 +94,46 @@
 subPackages = [ "." ];
 };
-# Upstream does not override buildGoModule properly,
-# importing a specific module, so comment out for now.
-# golangci-lint = prev.golangci-lint.override {
-#   buildGoModule = buildGo;
-# };
-# golangci-lint-langserver = prev.golangci-lint.override {
-#   buildGoModule = buildGo;
-# };
-# The package uses buildGo125Module, not the convention.
-# goreleaser = prev.goreleaser.override {
-#   buildGoModule = buildGo;
-# };
+# Build golangci-lint with Go 1.26 (upstream uses hardcoded Go version)
+golangci-lint = buildGo rec {
+pname = "golangci-lint";
+version = "2.11.4";
+src = pkgs.fetchFromGitHub {
+owner = "golangci";
+repo = "golangci-lint";
+rev = "v${version}";
+hash = "sha256-B19aLvfNRY9TOYw/71f2vpNUuSIz8OI4dL0ijGezsas=";
+};
+vendorHash = "sha256-xuoj4+U4tB5gpABKq4Dbp2cxnljxdYoBbO8A7DqPM5E=";
+subPackages = [ "cmd/golangci-lint" ];
+nativeBuildInputs = [ pkgs.installShellFiles ];
+ldflags = [
+"-s"
+"-w"
+"-X main.version=${version}"
+"-X main.commit=v${version}"
+"-X main.date=1970-01-01T00:00:00Z"
+];
+postInstall = ''
+for shell in bash zsh fish; do
+HOME=$TMPDIR $out/bin/golangci-lint completion $shell > golangci-lint.$shell
+installShellCompletion golangci-lint.$shell
+done
+'';
+meta = {
+description = "Fast linters runner for Go";
+homepage = "https://golangci-lint.run/";
+changelog = "https://github.com/golangci/golangci-lint/blob/v${version}/CHANGELOG.md";
+mainProgram = "golangci-lint";
+};
+};
 gotestsum = prev.gotestsum.override {
 buildGoModule = buildGo;
@@ -120,9 +147,9 @@
 buildGoModule = buildGo;
 };
-# gopls = prev.gopls.override {
-#   buildGoModule = buildGo;
-# };
+gopls = prev.gopls.override {
+buildGoLatestModule = buildGo;
+};
 };
 }
 // flake-utils.lib.eachDefaultSystem
@@ -132,14 +159,14 @@
 overlays = [ self.overlays.default ];
 inherit system;
 };
-buildDeps = with pkgs; [ git go_1_25 gnumake ];
+buildDeps = with pkgs; [ git go_1_26 gnumake ];
 devDeps = with pkgs;
 buildDeps
 ++ [
 golangci-lint
 golangci-lint-langserver
 golines
-nodePackages.prettier
+prettier
 nixpkgs-fmt
 goreleaser
 nfpm
@@ -152,6 +179,10 @@
 yq-go
 ripgrep
 postgresql
+python314Packages.mdformat
+python314Packages.mdformat-footnote
+python314Packages.mdformat-frontmatter
+python314Packages.mdformat-mkdocs
 prek
 # 'dot' is needed for pprof graphs
@@ -0,0 +1,351 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.11
// protoc (unknown)
// source: headscale/v1/auth.proto
package v1
import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
const (
// Verify that this generated code is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
// Verify that runtime/protoimpl is sufficiently up-to-date.
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type AuthRegisterRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User string `protobuf:"bytes,1,opt,name=user,proto3" json:"user,omitempty"`
AuthId string `protobuf:"bytes,2,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRegisterRequest) Reset() {
*x = AuthRegisterRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRegisterRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRegisterRequest) ProtoMessage() {}
func (x *AuthRegisterRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRegisterRequest.ProtoReflect.Descriptor instead.
func (*AuthRegisterRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{0}
}
func (x *AuthRegisterRequest) GetUser() string {
if x != nil {
return x.User
}
return ""
}
func (x *AuthRegisterRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthRegisterResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRegisterResponse) Reset() {
*x = AuthRegisterResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRegisterResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRegisterResponse) ProtoMessage() {}
func (x *AuthRegisterResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRegisterResponse.ProtoReflect.Descriptor instead.
func (*AuthRegisterResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{1}
}
func (x *AuthRegisterResponse) GetNode() *Node {
if x != nil {
return x.Node
}
return nil
}
type AuthApproveRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
AuthId string `protobuf:"bytes,1,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthApproveRequest) Reset() {
*x = AuthApproveRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[2]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthApproveRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthApproveRequest) ProtoMessage() {}
func (x *AuthApproveRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[2]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthApproveRequest.ProtoReflect.Descriptor instead.
func (*AuthApproveRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{2}
}
func (x *AuthApproveRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthApproveResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthApproveResponse) Reset() {
*x = AuthApproveResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[3]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthApproveResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthApproveResponse) ProtoMessage() {}
func (x *AuthApproveResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[3]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthApproveResponse.ProtoReflect.Descriptor instead.
func (*AuthApproveResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{3}
}
type AuthRejectRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
AuthId string `protobuf:"bytes,1,opt,name=auth_id,json=authId,proto3" json:"auth_id,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRejectRequest) Reset() {
*x = AuthRejectRequest{}
mi := &file_headscale_v1_auth_proto_msgTypes[4]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRejectRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRejectRequest) ProtoMessage() {}
func (x *AuthRejectRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[4]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRejectRequest.ProtoReflect.Descriptor instead.
func (*AuthRejectRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{4}
}
func (x *AuthRejectRequest) GetAuthId() string {
if x != nil {
return x.AuthId
}
return ""
}
type AuthRejectResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *AuthRejectResponse) Reset() {
*x = AuthRejectResponse{}
mi := &file_headscale_v1_auth_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *AuthRejectResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*AuthRejectResponse) ProtoMessage() {}
func (x *AuthRejectResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_auth_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use AuthRejectResponse.ProtoReflect.Descriptor instead.
func (*AuthRejectResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_auth_proto_rawDescGZIP(), []int{5}
}
var File_headscale_v1_auth_proto protoreflect.FileDescriptor
const file_headscale_v1_auth_proto_rawDesc = "" +
"\n" +
"\x17headscale/v1/auth.proto\x12\fheadscale.v1\x1a\x17headscale/v1/node.proto\"B\n" +
"\x13AuthRegisterRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\tR\x04user\x12\x17\n" +
"\aauth_id\x18\x02 \x01(\tR\x06authId\">\n" +
"\x14AuthRegisterResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"-\n" +
"\x12AuthApproveRequest\x12\x17\n" +
"\aauth_id\x18\x01 \x01(\tR\x06authId\"\x15\n" +
"\x13AuthApproveResponse\",\n" +
"\x11AuthRejectRequest\x12\x17\n" +
"\aauth_id\x18\x01 \x01(\tR\x06authId\"\x14\n" +
"\x12AuthRejectResponseB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_auth_proto_rawDescOnce sync.Once
file_headscale_v1_auth_proto_rawDescData []byte
)
func file_headscale_v1_auth_proto_rawDescGZIP() []byte {
file_headscale_v1_auth_proto_rawDescOnce.Do(func() {
file_headscale_v1_auth_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_auth_proto_rawDesc), len(file_headscale_v1_auth_proto_rawDesc)))
})
return file_headscale_v1_auth_proto_rawDescData
}
var file_headscale_v1_auth_proto_msgTypes = make([]protoimpl.MessageInfo, 6)
var file_headscale_v1_auth_proto_goTypes = []any{
(*AuthRegisterRequest)(nil), // 0: headscale.v1.AuthRegisterRequest
(*AuthRegisterResponse)(nil), // 1: headscale.v1.AuthRegisterResponse
(*AuthApproveRequest)(nil), // 2: headscale.v1.AuthApproveRequest
(*AuthApproveResponse)(nil), // 3: headscale.v1.AuthApproveResponse
(*AuthRejectRequest)(nil), // 4: headscale.v1.AuthRejectRequest
(*AuthRejectResponse)(nil), // 5: headscale.v1.AuthRejectResponse
(*Node)(nil), // 6: headscale.v1.Node
}
var file_headscale_v1_auth_proto_depIdxs = []int32{
6, // 0: headscale.v1.AuthRegisterResponse.node:type_name -> headscale.v1.Node
1, // [1:1] is the sub-list for method output_type
1, // [1:1] is the sub-list for method input_type
1, // [1:1] is the sub-list for extension type_name
1, // [1:1] is the sub-list for extension extendee
0, // [0:1] is the sub-list for field type_name
}
func init() { file_headscale_v1_auth_proto_init() }
func file_headscale_v1_auth_proto_init() {
if File_headscale_v1_auth_proto != nil {
return
}
file_headscale_v1_node_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_auth_proto_rawDesc), len(file_headscale_v1_auth_proto_rawDesc)),
NumEnums: 0,
NumMessages: 6,
NumExtensions: 0,
NumServices: 0,
},
GoTypes: file_headscale_v1_auth_proto_goTypes,
DependencyIndexes: file_headscale_v1_auth_proto_depIdxs,
MessageInfos: file_headscale_v1_auth_proto_msgTypes,
}.Build()
File_headscale_v1_auth_proto = out.File
file_headscale_v1_auth_proto_goTypes = nil
file_headscale_v1_auth_proto_depIdxs = nil
}
@@ -106,10 +106,10 @@ var File_headscale_v1_headscale_proto protoreflect.FileDescriptor
 const file_headscale_v1_headscale_proto_rawDesc = "" +
 "\n" +
-"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" +
+"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x17headscale/v1/auth.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" +
 "\rHealthRequest\"E\n" +
 "\x0eHealthResponse\x123\n" +
-"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\x8c\x17\n" +
+"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\xeb\x19\n" +
 "\x10HeadscaleService\x12h\n" +
 "\n" +
 "CreateUser\x12\x1f.headscale.v1.CreateUserRequest\x1a .headscale.v1.CreateUserResponse\"\x17\x82\xd3\xe4\x93\x02\x11:\x01*\"\f/api/v1/user\x12\x80\x01\n" +
@@ -134,7 +134,11 @@ const file_headscale_v1_headscale_proto_rawDesc = "" +
 "\n" +
 "RenameNode\x12\x1f.headscale.v1.RenameNodeRequest\x1a .headscale.v1.RenameNodeResponse\"0\x82\xd3\xe4\x93\x02*\"(/api/v1/node/{node_id}/rename/{new_name}\x12b\n" +
 "\tListNodes\x12\x1e.headscale.v1.ListNodesRequest\x1a\x1f.headscale.v1.ListNodesResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/node\x12\x80\x01\n" +
-"\x0fBackfillNodeIPs\x12$.headscale.v1.BackfillNodeIPsRequest\x1a%.headscale.v1.BackfillNodeIPsResponse\" \x82\xd3\xe4\x93\x02\x1a\"\x18/api/v1/node/backfillips\x12p\n" +
+"\x0fBackfillNodeIPs\x12$.headscale.v1.BackfillNodeIPsRequest\x1a%.headscale.v1.BackfillNodeIPsResponse\" \x82\xd3\xe4\x93\x02\x1a\"\x18/api/v1/node/backfillips\x12w\n" +
+"\fAuthRegister\x12!.headscale.v1.AuthRegisterRequest\x1a\".headscale.v1.AuthRegisterResponse\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/api/v1/auth/register\x12s\n" +
+"\vAuthApprove\x12 .headscale.v1.AuthApproveRequest\x1a!.headscale.v1.AuthApproveResponse\"\x1f\x82\xd3\xe4\x93\x02\x19:\x01*\"\x14/api/v1/auth/approve\x12o\n" +
+"\n" +
+"AuthReject\x12\x1f.headscale.v1.AuthRejectRequest\x1a .headscale.v1.AuthRejectResponse\"\x1e\x82\xd3\xe4\x93\x02\x18:\x01*\"\x13/api/v1/auth/reject\x12p\n" +
 "\fCreateApiKey\x12!.headscale.v1.CreateApiKeyRequest\x1a\".headscale.v1.CreateApiKeyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\"\x0e/api/v1/apikey\x12w\n" +
 "\fExpireApiKey\x12!.headscale.v1.ExpireApiKeyRequest\x1a\".headscale.v1.ExpireApiKeyResponse\" \x82\xd3\xe4\x93\x02\x1a:\x01*\"\x15/api/v1/apikey/expire\x12j\n" +
 "\vListApiKeys\x12 .headscale.v1.ListApiKeysRequest\x1a!.headscale.v1.ListApiKeysResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/apikey\x12v\n" +
@@ -177,36 +181,42 @@ var file_headscale_v1_headscale_proto_goTypes = []any{
 (*RenameNodeRequest)(nil), // 17: headscale.v1.RenameNodeRequest
 (*ListNodesRequest)(nil), // 18: headscale.v1.ListNodesRequest
 (*BackfillNodeIPsRequest)(nil), // 19: headscale.v1.BackfillNodeIPsRequest
-(*CreateApiKeyRequest)(nil), // 20: headscale.v1.CreateApiKeyRequest
-(*ExpireApiKeyRequest)(nil), // 21: headscale.v1.ExpireApiKeyRequest
-(*ListApiKeysRequest)(nil), // 22: headscale.v1.ListApiKeysRequest
-(*DeleteApiKeyRequest)(nil), // 23: headscale.v1.DeleteApiKeyRequest
-(*GetPolicyRequest)(nil), // 24: headscale.v1.GetPolicyRequest
-(*SetPolicyRequest)(nil), // 25: headscale.v1.SetPolicyRequest
-(*CreateUserResponse)(nil), // 26: headscale.v1.CreateUserResponse
-(*RenameUserResponse)(nil), // 27: headscale.v1.RenameUserResponse
-(*DeleteUserResponse)(nil), // 28: headscale.v1.DeleteUserResponse
-(*ListUsersResponse)(nil), // 29: headscale.v1.ListUsersResponse
-(*CreatePreAuthKeyResponse)(nil), // 30: headscale.v1.CreatePreAuthKeyResponse
-(*ExpirePreAuthKeyResponse)(nil), // 31: headscale.v1.ExpirePreAuthKeyResponse
-(*DeletePreAuthKeyResponse)(nil), // 32: headscale.v1.DeletePreAuthKeyResponse
-(*ListPreAuthKeysResponse)(nil), // 33: headscale.v1.ListPreAuthKeysResponse
-(*DebugCreateNodeResponse)(nil), // 34: headscale.v1.DebugCreateNodeResponse
-(*GetNodeResponse)(nil), // 35: headscale.v1.GetNodeResponse
-(*SetTagsResponse)(nil), // 36: headscale.v1.SetTagsResponse
-(*SetApprovedRoutesResponse)(nil), // 37: headscale.v1.SetApprovedRoutesResponse
-(*RegisterNodeResponse)(nil), // 38: headscale.v1.RegisterNodeResponse
-(*DeleteNodeResponse)(nil), // 39: headscale.v1.DeleteNodeResponse
-(*ExpireNodeResponse)(nil), // 40: headscale.v1.ExpireNodeResponse
-(*RenameNodeResponse)(nil), // 41: headscale.v1.RenameNodeResponse
-(*ListNodesResponse)(nil), // 42: headscale.v1.ListNodesResponse
-(*BackfillNodeIPsResponse)(nil), // 43: headscale.v1.BackfillNodeIPsResponse
-(*CreateApiKeyResponse)(nil), // 44: headscale.v1.CreateApiKeyResponse
-(*ExpireApiKeyResponse)(nil), // 45: headscale.v1.ExpireApiKeyResponse
-(*ListApiKeysResponse)(nil), // 46: headscale.v1.ListApiKeysResponse
-(*DeleteApiKeyResponse)(nil), // 47: headscale.v1.DeleteApiKeyResponse
-(*GetPolicyResponse)(nil), // 48: headscale.v1.GetPolicyResponse
-(*SetPolicyResponse)(nil), // 49: headscale.v1.SetPolicyResponse
+(*AuthRegisterRequest)(nil), // 20: headscale.v1.AuthRegisterRequest
+(*AuthApproveRequest)(nil), // 21: headscale.v1.AuthApproveRequest
+(*AuthRejectRequest)(nil), // 22: headscale.v1.AuthRejectRequest
+(*CreateApiKeyRequest)(nil), // 23: headscale.v1.CreateApiKeyRequest
+(*ExpireApiKeyRequest)(nil), // 24: headscale.v1.ExpireApiKeyRequest
+(*ListApiKeysRequest)(nil), // 25: headscale.v1.ListApiKeysRequest
+(*DeleteApiKeyRequest)(nil), // 26: headscale.v1.DeleteApiKeyRequest
+(*GetPolicyRequest)(nil), // 27: headscale.v1.GetPolicyRequest
+(*SetPolicyRequest)(nil), // 28: headscale.v1.SetPolicyRequest
+(*CreateUserResponse)(nil), // 29: headscale.v1.CreateUserResponse
+(*RenameUserResponse)(nil), // 30: headscale.v1.RenameUserResponse
+(*DeleteUserResponse)(nil), // 31: headscale.v1.DeleteUserResponse
+(*ListUsersResponse)(nil), // 32: headscale.v1.ListUsersResponse
+(*CreatePreAuthKeyResponse)(nil), // 33: headscale.v1.CreatePreAuthKeyResponse
+(*ExpirePreAuthKeyResponse)(nil), // 34: headscale.v1.ExpirePreAuthKeyResponse
+(*DeletePreAuthKeyResponse)(nil), // 35: headscale.v1.DeletePreAuthKeyResponse
+(*ListPreAuthKeysResponse)(nil), // 36: headscale.v1.ListPreAuthKeysResponse
+(*DebugCreateNodeResponse)(nil), // 37: headscale.v1.DebugCreateNodeResponse
+(*GetNodeResponse)(nil), // 38: headscale.v1.GetNodeResponse
+(*SetTagsResponse)(nil), // 39: headscale.v1.SetTagsResponse
+(*SetApprovedRoutesResponse)(nil), // 40: headscale.v1.SetApprovedRoutesResponse
+(*RegisterNodeResponse)(nil), // 41: headscale.v1.RegisterNodeResponse
+(*DeleteNodeResponse)(nil), // 42: headscale.v1.DeleteNodeResponse
+(*ExpireNodeResponse)(nil), // 43: headscale.v1.ExpireNodeResponse
+(*RenameNodeResponse)(nil), // 44: headscale.v1.RenameNodeResponse
+(*ListNodesResponse)(nil), // 45: headscale.v1.ListNodesResponse
+(*BackfillNodeIPsResponse)(nil), // 46: headscale.v1.BackfillNodeIPsResponse
+(*AuthRegisterResponse)(nil), // 47: headscale.v1.AuthRegisterResponse
+(*AuthApproveResponse)(nil), // 48: headscale.v1.AuthApproveResponse
+(*AuthRejectResponse)(nil), // 49: headscale.v1.AuthRejectResponse
+(*CreateApiKeyResponse)(nil), // 50: headscale.v1.CreateApiKeyResponse
+(*ExpireApiKeyResponse)(nil), // 51: headscale.v1.ExpireApiKeyResponse
+(*ListApiKeysResponse)(nil), // 52: headscale.v1.ListApiKeysResponse
+(*DeleteApiKeyResponse)(nil), // 53: headscale.v1.DeleteApiKeyResponse
+(*GetPolicyResponse)(nil), // 54: headscale.v1.GetPolicyResponse
+(*SetPolicyResponse)(nil), // 55: headscale.v1.SetPolicyResponse
 }
 var file_headscale_v1_headscale_proto_depIdxs = []int32{
 2, // 0: headscale.v1.HeadscaleService.CreateUser:input_type -> headscale.v1.CreateUserRequest
@@ -227,40 +237,46 @@ var file_headscale_v1_headscale_proto_depIdxs = []int32{
 17, // 15: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
 18, // 16: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
 19, // 17: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
-20, // 18: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
-21, // 19: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
-22, // 20: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
-23, // 21: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
-24, // 22: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
-25, // 23: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
-0, // 24: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
-26, // 25: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
-27, // 26: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
-28, // 27: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
-29, // 28: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
-30, // 29: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
-31, // 30: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
-32, // 31: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse
-33, // 32: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
-34, // 33: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
-35, // 34: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
-36, // 35: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
-37, // 36: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
-38, // 37: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
-39, // 38: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
-40, // 39: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
-41, // 40: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
-42, // 41: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
-43, // 42: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
-44, // 43: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
-45, // 44: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
-46, // 45: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
-47, // 46: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
-48, // 47: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
-49, // 48: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
+20, // 18: headscale.v1.HeadscaleService.AuthRegister:input_type -> headscale.v1.AuthRegisterRequest
+21, // 19: headscale.v1.HeadscaleService.AuthApprove:input_type -> headscale.v1.AuthApproveRequest
+22, // 20: headscale.v1.HeadscaleService.AuthReject:input_type -> headscale.v1.AuthRejectRequest
+23, // 21: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
+24, // 22: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
+25, // 23: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
+26, // 24: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
+27, // 25: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
+28, // 26: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
+0, // 27: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
+29, // 28: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
+30, // 29: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
+31, // 30: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
+32, // 31: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
+33, // 32: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
+34, // 33: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
+35, // 34: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse
+36, // 35: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
+37, // 36: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
+38, // 37: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
+39, // 38: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
+40, // 39: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
+41, // 40: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
+42, // 41: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
+43, // 42: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
+44, // 43: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
+45, // 44: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
+46, // 45: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
+47, // 46: headscale.v1.HeadscaleService.AuthRegister:output_type -> headscale.v1.AuthRegisterResponse
+48, // 47: headscale.v1.HeadscaleService.AuthApprove:output_type -> headscale.v1.AuthApproveResponse
+49, // 48: headscale.v1.HeadscaleService.AuthReject:output_type -> headscale.v1.AuthRejectResponse
1, // 49: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse 50, // 49: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
25, // [25:50] is the sub-list for method output_type 51, // 50: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
0, // [0:25] is the sub-list for method input_type 52, // 51: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
53, // 52: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
54, // 53: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
55, // 54: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
1, // 55: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse
28, // [28:56] is the sub-list for method output_type
0, // [0:28] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name 0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee 0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name 0, // [0:0] is the sub-list for field type_name
@@ -275,6 +291,7 @@ func file_headscale_v1_headscale_proto_init() {
file_headscale_v1_preauthkey_proto_init()
file_headscale_v1_node_proto_init()
file_headscale_v1_apikey_proto_init()
file_headscale_v1_auth_proto_init()
file_headscale_v1_policy_proto_init()
type x struct{}
out := protoimpl.TypeBuilder{

View File

@@ -709,6 +709,87 @@ func local_request_HeadscaleService_BackfillNodeIPs_0(ctx context.Context, marsh
return msg, metadata, err
}
func request_HeadscaleService_AuthRegister_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRegisterRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthRegister(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthRegister_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRegisterRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthRegister(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_AuthApprove_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthApproveRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthApprove(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthApprove_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthApproveRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthApprove(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_AuthReject_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRejectRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if req.Body != nil {
_, _ = io.Copy(io.Discard, req.Body)
}
msg, err := client.AuthReject(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_AuthReject_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq AuthRejectRequest
metadata runtime.ServerMetadata
)
if err := marshaler.NewDecoder(req.Body).Decode(&protoReq); err != nil && !errors.Is(err, io.EOF) {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.AuthReject(ctx, &protoReq)
return msg, metadata, err
}
func request_HeadscaleService_CreateApiKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq CreateApiKeyRequest
@@ -1272,6 +1353,66 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_BackfillNodeIPs_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthRegister_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthRegister", runtime.WithHTTPPathPattern("/api/v1/auth/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthRegister_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthRegister_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthApprove_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthApprove", runtime.WithHTTPPathPattern("/api/v1/auth/approve"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthApprove_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthApprove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthReject_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthReject", runtime.WithHTTPPathPattern("/api/v1/auth/reject"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_AuthReject_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthReject_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1758,6 +1899,57 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_BackfillNodeIPs_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthRegister_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthRegister", runtime.WithHTTPPathPattern("/api/v1/auth/register"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthRegister_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthRegister_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthApprove_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthApprove", runtime.WithHTTPPathPattern("/api/v1/auth/approve"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthApprove_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthApprove_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_AuthReject_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/AuthReject", runtime.WithHTTPPathPattern("/api/v1/auth/reject"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_AuthReject_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_AuthReject_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodPost, pattern_HeadscaleService_CreateApiKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1899,6 +2091,9 @@ var (
pattern_HeadscaleService_RenameNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3, 2, 4, 1, 0, 4, 1, 5, 5}, []string{"api", "v1", "node", "node_id", "rename", "new_name"}, ""))
pattern_HeadscaleService_ListNodes_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "node"}, ""))
pattern_HeadscaleService_BackfillNodeIPs_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "node", "backfillips"}, ""))
pattern_HeadscaleService_AuthRegister_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "register"}, ""))
pattern_HeadscaleService_AuthApprove_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "approve"}, ""))
pattern_HeadscaleService_AuthReject_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "auth", "reject"}, ""))
pattern_HeadscaleService_CreateApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "apikey"}, ""))
pattern_HeadscaleService_ExpireApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "apikey", "expire"}, ""))
pattern_HeadscaleService_ListApiKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "apikey"}, ""))
@@ -1927,6 +2122,9 @@ var (
forward_HeadscaleService_RenameNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListNodes_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_BackfillNodeIPs_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthRegister_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthApprove_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_AuthReject_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_CreateApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpireApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListApiKeys_0 = runtime.ForwardResponseMessage

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.6.1
// - protoc (unknown)
// source: headscale/v1/headscale.proto
@@ -37,6 +37,9 @@ const (
HeadscaleService_RenameNode_FullMethodName = "/headscale.v1.HeadscaleService/RenameNode"
HeadscaleService_ListNodes_FullMethodName = "/headscale.v1.HeadscaleService/ListNodes"
HeadscaleService_BackfillNodeIPs_FullMethodName = "/headscale.v1.HeadscaleService/BackfillNodeIPs"
HeadscaleService_AuthRegister_FullMethodName = "/headscale.v1.HeadscaleService/AuthRegister"
HeadscaleService_AuthApprove_FullMethodName = "/headscale.v1.HeadscaleService/AuthApprove"
HeadscaleService_AuthReject_FullMethodName = "/headscale.v1.HeadscaleService/AuthReject"
HeadscaleService_CreateApiKey_FullMethodName = "/headscale.v1.HeadscaleService/CreateApiKey"
HeadscaleService_ExpireApiKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpireApiKey"
HeadscaleService_ListApiKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListApiKeys"
@@ -71,6 +74,10 @@ type HeadscaleServiceClient interface {
RenameNode(ctx context.Context, in *RenameNodeRequest, opts ...grpc.CallOption) (*RenameNodeResponse, error)
ListNodes(ctx context.Context, in *ListNodesRequest, opts ...grpc.CallOption) (*ListNodesResponse, error)
BackfillNodeIPs(ctx context.Context, in *BackfillNodeIPsRequest, opts ...grpc.CallOption) (*BackfillNodeIPsResponse, error)
// --- Auth start ---
AuthRegister(ctx context.Context, in *AuthRegisterRequest, opts ...grpc.CallOption) (*AuthRegisterResponse, error)
AuthApprove(ctx context.Context, in *AuthApproveRequest, opts ...grpc.CallOption) (*AuthApproveResponse, error)
AuthReject(ctx context.Context, in *AuthRejectRequest, opts ...grpc.CallOption) (*AuthRejectResponse, error)
// --- ApiKeys start ---
CreateApiKey(ctx context.Context, in *CreateApiKeyRequest, opts ...grpc.CallOption) (*CreateApiKeyResponse, error)
ExpireApiKey(ctx context.Context, in *ExpireApiKeyRequest, opts ...grpc.CallOption) (*ExpireApiKeyResponse, error)
@@ -271,6 +278,36 @@ func (c *headscaleServiceClient) BackfillNodeIPs(ctx context.Context, in *Backfi
return out, nil
}
func (c *headscaleServiceClient) AuthRegister(ctx context.Context, in *AuthRegisterRequest, opts ...grpc.CallOption) (*AuthRegisterResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthRegisterResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthRegister_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) AuthApprove(ctx context.Context, in *AuthApproveRequest, opts ...grpc.CallOption) (*AuthApproveResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthApproveResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthApprove_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) AuthReject(ctx context.Context, in *AuthRejectRequest, opts ...grpc.CallOption) (*AuthRejectResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(AuthRejectResponse)
err := c.cc.Invoke(ctx, HeadscaleService_AuthReject_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) CreateApiKey(ctx context.Context, in *CreateApiKeyRequest, opts ...grpc.CallOption) (*CreateApiKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(CreateApiKeyResponse)
@@ -366,6 +403,10 @@ type HeadscaleServiceServer interface {
RenameNode(context.Context, *RenameNodeRequest) (*RenameNodeResponse, error)
ListNodes(context.Context, *ListNodesRequest) (*ListNodesResponse, error)
BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error)
// --- Auth start ---
AuthRegister(context.Context, *AuthRegisterRequest) (*AuthRegisterResponse, error)
AuthApprove(context.Context, *AuthApproveRequest) (*AuthApproveResponse, error)
AuthReject(context.Context, *AuthRejectRequest) (*AuthRejectResponse, error)
// --- ApiKeys start ---
CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error)
ExpireApiKey(context.Context, *ExpireApiKeyRequest) (*ExpireApiKeyResponse, error)
@@ -440,6 +481,15 @@ func (UnimplementedHeadscaleServiceServer) ListNodes(context.Context, *ListNodes
func (UnimplementedHeadscaleServiceServer) BackfillNodeIPs(context.Context, *BackfillNodeIPsRequest) (*BackfillNodeIPsResponse, error) {
return nil, status.Error(codes.Unimplemented, "method BackfillNodeIPs not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthRegister(context.Context, *AuthRegisterRequest) (*AuthRegisterResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthRegister not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthApprove(context.Context, *AuthApproveRequest) (*AuthApproveResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthApprove not implemented")
}
func (UnimplementedHeadscaleServiceServer) AuthReject(context.Context, *AuthRejectRequest) (*AuthRejectResponse, error) {
return nil, status.Error(codes.Unimplemented, "method AuthReject not implemented")
}
func (UnimplementedHeadscaleServiceServer) CreateApiKey(context.Context, *CreateApiKeyRequest) (*CreateApiKeyResponse, error) {
return nil, status.Error(codes.Unimplemented, "method CreateApiKey not implemented")
}
@@ -806,6 +856,60 @@ func _HeadscaleService_BackfillNodeIPs_Handler(srv interface{}, ctx context.Cont
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthRegister_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthRegisterRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthRegister(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthRegister_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthRegister(ctx, req.(*AuthRegisterRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthApprove_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthApproveRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthApprove(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthApprove_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthApprove(ctx, req.(*AuthApproveRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_AuthReject_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(AuthRejectRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).AuthReject(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_AuthReject_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).AuthReject(ctx, req.(*AuthRejectRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_CreateApiKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(CreateApiKeyRequest)
if err := dec(in); err != nil {
@@ -1011,6 +1115,18 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{
MethodName: "BackfillNodeIPs",
Handler: _HeadscaleService_BackfillNodeIPs_Handler,
},
{
MethodName: "AuthRegister",
Handler: _HeadscaleService_AuthRegister_Handler,
},
{
MethodName: "AuthApprove",
Handler: _HeadscaleService_AuthApprove_Handler,
},
{
MethodName: "AuthReject",
Handler: _HeadscaleService_AuthReject_Handler,
},
{
MethodName: "CreateApiKey",
Handler: _HeadscaleService_CreateApiKey_Handler,

View File

@@ -715,9 +715,11 @@ func (*DeleteNodeResponse) Descriptor() ([]byte, []int) {
}
type ExpireNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
Expiry *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=expiry,proto3" json:"expiry,omitempty"`
// When true, sets expiry to null (node will never expire).
DisableExpiry bool `protobuf:"varint,3,opt,name=disable_expiry,json=disableExpiry,proto3" json:"disable_expiry,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -766,6 +768,13 @@ func (x *ExpireNodeRequest) GetExpiry() *timestamppb.Timestamp {
	return nil
}

+func (x *ExpireNodeRequest) GetDisableExpiry() bool {
+	if x != nil {
+		return x.DisableExpiry
+	}
+	return false
+}

type ExpireNodeResponse struct {
	state protoimpl.MessageState `protogen:"open.v1"`
	Node  *Node                  `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
@@ -1245,10 +1254,11 @@ const file_headscale_v1_node_proto_rawDesc = "" +
	"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\",\n" +
	"\x11DeleteNodeRequest\x12\x17\n" +
	"\anode_id\x18\x01 \x01(\x04R\x06nodeId\"\x14\n" +
-	"\x12DeleteNodeResponse\"`\n" +
+	"\x12DeleteNodeResponse\"\x87\x01\n" +
	"\x11ExpireNodeRequest\x12\x17\n" +
	"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x122\n" +
-	"\x06expiry\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\x06expiry\"<\n" +
+	"\x06expiry\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\x06expiry\x12%\n" +
+	"\x0edisable_expiry\x18\x03 \x01(\bR\rdisableExpiry\"<\n" +
	"\x12ExpireNodeResponse\x12&\n" +
	"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"G\n" +
	"\x11RenameNodeRequest\x12\x17\n" +


@@ -0,0 +1,44 @@
{
"swagger": "2.0",
"info": {
"title": "headscale/v1/auth.proto",
"version": "version not set"
},
"consumes": [
"application/json"
],
"produces": [
"application/json"
],
"paths": {},
"definitions": {
"protobufAny": {
"type": "object",
"properties": {
"@type": {
"type": "string"
}
},
"additionalProperties": {}
},
"rpcStatus": {
"type": "object",
"properties": {
"code": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
},
"details": {
"type": "array",
"items": {
"type": "object",
"$ref": "#/definitions/protobufAny"
}
}
}
}
}
}


@@ -138,6 +138,103 @@
]
}
},
"/api/v1/auth/approve": {
"post": {
"operationId": "HeadscaleService_AuthApprove",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthApproveResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthApproveRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/auth/register": {
"post": {
"summary": "--- Auth start ---",
"operationId": "HeadscaleService_AuthRegister",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthRegisterResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthRegisterRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/auth/reject": {
"post": {
"operationId": "HeadscaleService_AuthReject",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1AuthRejectResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "body",
"in": "body",
"required": true,
"schema": {
"$ref": "#/definitions/v1AuthRejectRequest"
}
}
],
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/debug/node": {
"post": {
"summary": "--- Node start ---",
@@ -420,6 +517,13 @@
"required": false,
"type": "string",
"format": "date-time"
},
{
"name": "disableExpiry",
"description": "When true, sets expiry to null (node will never expire).",
"in": "query",
"required": false,
"type": "boolean"
}
],
"tags": [
@@ -888,6 +992,47 @@
}
}
},
"v1AuthApproveRequest": {
"type": "object",
"properties": {
"authId": {
"type": "string"
}
}
},
"v1AuthApproveResponse": {
"type": "object"
},
"v1AuthRegisterRequest": {
"type": "object",
"properties": {
"user": {
"type": "string"
},
"authId": {
"type": "string"
}
}
},
"v1AuthRegisterResponse": {
"type": "object",
"properties": {
"node": {
"$ref": "#/definitions/v1Node"
}
}
},
"v1AuthRejectRequest": {
"type": "object",
"properties": {
"authId": {
"type": "string"
}
}
},
"v1AuthRejectResponse": {
"type": "object"
},
"v1BackfillNodeIPsResponse": {
"type": "object",
"properties": {

go.mod

@@ -1,25 +1,28 @@
module github.com/juanfont/headscale

-go 1.25.5
+go 1.26.1

require (
	github.com/arl/statsviz v0.8.0
	github.com/cenkalti/backoff/v5 v5.0.3
	github.com/chasefleming/elem-go v0.31.0
	github.com/coder/websocket v1.8.14
-	github.com/coreos/go-oidc/v3 v3.16.0
+	github.com/coreos/go-oidc/v3 v3.18.0
-	github.com/creachadair/command v0.2.0
+	github.com/creachadair/command v0.2.2
	github.com/creachadair/flax v0.0.5
	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
	github.com/docker/docker v28.5.2+incompatible
	github.com/fsnotify/fsnotify v1.9.0
	github.com/glebarez/sqlite v1.11.0
+	github.com/go-chi/chi/v5 v5.2.5
+	github.com/go-chi/metrics v0.1.1
	github.com/go-gormigrate/gormigrate/v2 v2.1.5
-	github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced
+	github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433
	github.com/gofrs/uuid/v5 v5.4.0
	github.com/google/go-cmp v0.7.0
	github.com/gorilla/mux v1.8.1
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0
+	github.com/hashicorp/golang-lru/v2 v2.0.7
	github.com/jagottsicher/termcolor v1.0.2
	github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25
	github.com/ory/dockertest/v3 v3.12.0
@@ -27,32 +30,31 @@ require (
	github.com/pkg/profile v1.7.0
	github.com/prometheus/client_golang v1.23.2
	github.com/prometheus/common v0.67.5
-	github.com/pterm/pterm v0.12.82
+	github.com/pterm/pterm v0.12.83
-	github.com/puzpuzpuz/xsync/v4 v4.3.0
+	github.com/puzpuzpuz/xsync/v4 v4.4.0
-	github.com/rs/zerolog v1.34.0
+	github.com/rs/zerolog v1.35.0
-	github.com/samber/lo v1.52.0
+	github.com/samber/lo v1.53.0
-	github.com/sasha-s/go-deadlock v0.3.6
+	github.com/sasha-s/go-deadlock v0.3.9
	github.com/spf13/cobra v1.10.2
	github.com/spf13/viper v1.21.0
	github.com/stretchr/testify v1.11.1
-	github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a
+	github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd
-	github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f
+	github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e
-	github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09
+	github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c
	github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e
	go4.org/netipx v0.0.0-20231129151722-fdeea329fbba
-	golang.org/x/crypto v0.46.0
+	golang.org/x/crypto v0.49.0
-	golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546
+	golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90
-	golang.org/x/net v0.48.0
+	golang.org/x/net v0.52.0
-	golang.org/x/oauth2 v0.34.0
+	golang.org/x/oauth2 v0.36.0
-	golang.org/x/sync v0.19.0
+	golang.org/x/sync v0.20.0
-	google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b
+	google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d
-	google.golang.org/grpc v1.78.0
+	google.golang.org/grpc v1.80.0
	google.golang.org/protobuf v1.36.11
	gopkg.in/yaml.v3 v3.0.1
	gorm.io/driver/postgres v1.6.0
	gorm.io/gorm v1.31.1
-	tailscale.com v1.94.0
+	tailscale.com v1.96.5
-	zgo.at/zcache/v2 v2.4.1
	zombiezen.com/go/postgrestest v1.0.1
)
@@ -74,118 +76,130 @@ require (
	// together, e.g:
	// go get modernc.org/libc@v1.55.3 modernc.org/sqlite@v1.33.1
require (
-	modernc.org/libc v1.67.6 // indirect
+	modernc.org/libc v1.70.0 // indirect
	modernc.org/mathutil v1.7.1 // indirect
	modernc.org/memory v1.11.0 // indirect
-	modernc.org/sqlite v1.44.3
+	modernc.org/sqlite v1.48.2
)

+// NOTE: gvisor must be updated in lockstep with
+// tailscale.com. The version used here should match
+// the version required by the tailscale.com dependency.
+// To find the correct version, check tailscale.com's
+// go.mod file for the gvisor.dev/gvisor version:
+// https://github.com/tailscale/tailscale/blob/main/go.mod
+require gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 // indirect
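The lockstep note above amounts to comparing the `gvisor.dev/gvisor` pin in two go.mod files. A rough sketch of extracting that pin, run here against a local sample file; in practice the grep target would be tailscale.com's upstream go.mod:

```shell
# Write a sample go.mod so the sketch is self-contained.
cat > /tmp/sample_go.mod <<'EOF'
module example.com/demo

require gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 // indirect
EOF

# Extract "gvisor.dev/gvisor <version>"; compare this output between
# the two go.mod files to verify they are in lockstep.
grep -o 'gvisor\.dev/gvisor v[^ ]*' /tmp/sample_go.mod
```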
require (
	atomicgo.dev/cursor v0.2.0 // indirect
	atomicgo.dev/keyboard v0.2.9 // indirect
	atomicgo.dev/schedule v0.1.0 // indirect
	dario.cat/mergo v1.0.2 // indirect
-	filippo.io/edwards25519 v1.1.0 // indirect
+	filippo.io/edwards25519 v1.2.0 // indirect
	github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
	github.com/Microsoft/go-winio v0.6.2 // indirect
	github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 // indirect
	github.com/akutz/memconn v0.1.0 // indirect
-	github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa // indirect
+	github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e // indirect
-	github.com/aws/aws-sdk-go-v2 v1.41.0 // indirect
+	github.com/aws/aws-sdk-go-v2 v1.41.1 // indirect
-	github.com/aws/aws-sdk-go-v2/config v1.29.5 // indirect
+	github.com/aws/aws-sdk-go-v2/config v1.32.7 // indirect
-	github.com/aws/aws-sdk-go-v2/credentials v1.17.58 // indirect
+	github.com/aws/aws-sdk-go-v2/credentials v1.19.7 // indirect
-	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 // indirect
+	github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 // indirect
-	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 // indirect
+	github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 // indirect
	github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 // indirect
-	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 // indirect
+	github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 // indirect
-	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/signin v1.0.5 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sso v1.30.9 // indirect
+	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13 // indirect
+	github.com/aws/aws-sdk-go-v2/service/sts v1.41.6 // indirect
	github.com/aws/smithy-go v1.24.0 // indirect
-	github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02 // indirect
+	github.com/axiomhq/hyperloglog v0.2.6 // indirect
	github.com/beorn7/perks v1.0.1 // indirect
	github.com/cenkalti/backoff/v4 v4.3.0 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
-	github.com/clipperhouse/uax29/v2 v2.2.0 // indirect
+	github.com/clipperhouse/uax29/v2 v2.7.0 // indirect
	github.com/containerd/console v1.0.5 // indirect
	github.com/containerd/continuity v0.4.5 // indirect
	github.com/containerd/errdefs v1.0.0 // indirect
	github.com/containerd/errdefs/pkg v0.3.0 // indirect
-	github.com/creachadair/mds v0.25.10 // indirect
+	github.com/creachadair/mds v0.26.2 // indirect
-	github.com/creachadair/msync v0.7.1 // indirect
+	github.com/creachadair/msync v0.8.2 // indirect
-	github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0 // indirect
+	github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d // indirect
-	github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc // indirect
+	github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33 // indirect
	github.com/distribution/reference v0.6.0 // indirect
-	github.com/docker/cli v28.5.1+incompatible // indirect
+	github.com/docker/cli v29.2.1+incompatible // indirect
	github.com/docker/go-connections v0.6.0 // indirect
	github.com/docker/go-units v0.5.0 // indirect
	github.com/dustin/go-humanize v1.0.1 // indirect
	github.com/felixge/fgprof v0.9.5 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
	github.com/fxamacker/cbor/v2 v2.9.0 // indirect
-	github.com/gaissmai/bart v0.18.0 // indirect
+	github.com/gaissmai/bart v0.26.1 // indirect
	github.com/glebarez/go-sqlite v1.22.0 // indirect
	github.com/go-jose/go-jose/v3 v3.0.4 // indirect
-	github.com/go-jose/go-jose/v4 v4.1.3 // indirect
+	github.com/go-jose/go-jose/v4 v4.1.4 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
-	github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
+	github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
-	github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 // indirect
+	github.com/godbus/dbus/v5 v5.2.2 // indirect
-	github.com/golang-jwt/jwt/v5 v5.3.0 // indirect
+	github.com/golang-jwt/jwt/v5 v5.3.1 // indirect
	github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
	github.com/golang/protobuf v1.5.4 // indirect
	github.com/google/btree v1.1.3 // indirect
	github.com/google/go-github v17.0.0+incompatible // indirect
-	github.com/google/go-querystring v1.1.0 // indirect
+	github.com/google/go-querystring v1.2.0 // indirect
-	github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect
+	github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef // indirect
	github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
	github.com/google/uuid v1.6.0 // indirect
	github.com/gookit/color v1.6.0 // indirect
	github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
-	github.com/hashicorp/go-version v1.7.0 // indirect
+	github.com/hashicorp/go-version v1.8.0 // indirect
	github.com/hdevalence/ed25519consensus v0.2.0 // indirect
	github.com/huin/goupnp v1.3.0 // indirect
	github.com/inconshreveable/mousetrap v1.1.0 // indirect
	github.com/jackc/pgpassfile v1.0.0 // indirect
	github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
-	github.com/jackc/pgx/v5 v5.7.6 // indirect
+	github.com/jackc/pgx/v5 v5.8.0 // indirect
	github.com/jackc/puddle/v2 v2.2.2 // indirect
	github.com/jinzhu/inflection v1.0.0 // indirect
	github.com/jinzhu/now v1.1.5 // indirect
-	github.com/jsimonetti/rtnetlink v1.4.1 // indirect
+	github.com/jsimonetti/rtnetlink v1.4.2 // indirect
+	github.com/kamstrup/intmap v0.5.2 // indirect
-	github.com/klauspost/compress v1.18.2 // indirect
+	github.com/klauspost/compress v1.18.3 // indirect
-	github.com/lib/pq v1.10.9 // indirect
+	github.com/lib/pq v1.11.1 // indirect
	github.com/lithammer/fuzzysearch v1.1.8 // indirect
	github.com/mattn/go-colorable v0.1.14 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
-	github.com/mattn/go-runewidth v0.0.19 // indirect
+	github.com/mattn/go-runewidth v0.0.20 // indirect
-	github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 // indirect
+	github.com/mdlayher/netlink v1.8.0 // indirect
-	github.com/mdlayher/socket v0.5.0 // indirect
+	github.com/mdlayher/socket v0.5.1 // indirect
	github.com/mitchellh/go-ps v1.0.0 // indirect
	github.com/moby/docker-image-spec v1.3.1 // indirect
+	github.com/moby/moby/api v1.53.0 // indirect
+	github.com/moby/moby/client v0.2.2 // indirect
	github.com/moby/sys/atomicwriter v0.1.0 // indirect
	github.com/moby/sys/user v0.4.0 // indirect
	github.com/moby/term v0.5.2 // indirect
-	github.com/morikuni/aec v1.0.0 // indirect
+	github.com/morikuni/aec v1.1.0 // indirect
	github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
	github.com/ncruces/go-strftime v1.0.0 // indirect
	github.com/opencontainers/go-digest v1.0.0 // indirect
	github.com/opencontainers/image-spec v1.1.1 // indirect
	github.com/opencontainers/runc v1.3.2 // indirect
	github.com/pelletier/go-toml/v2 v2.2.4 // indirect
-	github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 // indirect
+	github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 // indirect
-	github.com/pires/go-proxyproto v0.8.1 // indirect
+	github.com/pires/go-proxyproto v0.9.2 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
-	github.com/prometheus-community/pro-bing v0.4.0 // indirect
+	github.com/prometheus-community/pro-bing v0.7.0 // indirect
	github.com/prometheus/client_model v0.6.2 // indirect
-	github.com/prometheus/procfs v0.16.1 // indirect
+	github.com/prometheus/procfs v0.19.2 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
-	github.com/safchain/ethtool v0.3.0 // indirect
+	github.com/safchain/ethtool v0.7.0 // indirect
	github.com/sagikazarmark/locafero v0.12.0 // indirect
-	github.com/sirupsen/logrus v1.9.3 // indirect
+	github.com/sirupsen/logrus v1.9.4 // indirect
	github.com/spf13/afero v1.15.0 // indirect
	github.com/spf13/cast v1.10.0 // indirect
	github.com/spf13/pflag v1.0.10 // indirect
@@ -193,8 +207,8 @@ require (
	github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e // indirect
	github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect
	github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc // indirect
-	github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a // indirect
+	github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d // indirect
-	github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 // indirect
+	github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368 // indirect
	github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da // indirect
	github.com/x448/float16 v0.8.4 // indirect
	github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
@@ -202,27 +216,27 @@ require (
	github.com/xeipuuv/gojsonschema v1.2.0 // indirect
	github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
	go.opentelemetry.io/auto/sdk v1.2.1 // indirect
-	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 // indirect
+	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0 // indirect
-	go.opentelemetry.io/otel v1.39.0 // indirect
+	go.opentelemetry.io/otel v1.40.0 // indirect
-	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 // indirect
+	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 // indirect
-	go.opentelemetry.io/otel/metric v1.39.0 // indirect
+	go.opentelemetry.io/otel/metric v1.40.0 // indirect
-	go.opentelemetry.io/otel/trace v1.39.0 // indirect
+	go.opentelemetry.io/otel/trace v1.40.0 // indirect
	go.yaml.in/yaml/v2 v2.4.3 // indirect
	go.yaml.in/yaml/v3 v3.0.4 // indirect
	go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect
-	golang.org/x/mod v0.30.0 // indirect
+	golang.org/x/mod v0.35.0 // indirect
-	golang.org/x/sys v0.40.0 // indirect
+	golang.org/x/sys v0.43.0 // indirect
-	golang.org/x/term v0.38.0 // indirect
+	golang.org/x/term v0.42.0 // indirect
-	golang.org/x/text v0.32.0 // indirect
+	golang.org/x/text v0.36.0 // indirect
-	golang.org/x/time v0.12.0 // indirect
+	golang.org/x/time v0.15.0 // indirect
-	golang.org/x/tools v0.39.0 // indirect
+	golang.org/x/tools v0.43.0 // indirect
	golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
	golang.zx2c4.com/wireguard/windows v0.5.3 // indirect
-	google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
-	gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 // indirect
)

tool (
+	golang.org/x/tools/cmd/stress
	golang.org/x/tools/cmd/stringer
	tailscale.com/cmd/viewer
)

go.sum

@@ -10,8 +10,8 @@ atomicgo.dev/schedule v0.1.0 h1:nTthAbhZS5YZmgYbb2+DH8uQIZcTlIrd4eYr3UQxEjs=
atomicgo.dev/schedule v0.1.0/go.mod h1:xeUa3oAkiuHYh8bKiQBRojqAMq3PXXbJujjb0hw8pEU=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
-filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
+filippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=
-filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
@@ -33,53 +33,55 @@ github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEV
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/akutz/memconn v0.1.0 h1:NawI0TORU4hcOMsMr11g7vwlCdkYeLKXBcxWu2W/P8A=
github.com/akutz/memconn v0.1.0/go.mod h1:Jo8rI7m0NieZyLI5e2CDlRdRqRRB4S7Xp77ukDjH+Fw=
-github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7VVbI0o4wBRNQIgn917usHWOd6VAffYI=
+github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e h1:4dAU9FXIyQktpoUAgOJK3OTFc/xug0PCXYCqU0FgDKI=
-github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
+github.com/alexbrainman/sspi v0.0.0-20250919150558-7d374ff0d59e/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/arl/statsviz v0.8.0 h1:O6GjjVxEDxcByAucOSl29HaGYLXsuwA3ujJw8H9E7/U=
github.com/arl/statsviz v0.8.0/go.mod h1:XlrbiT7xYT03xaW9JMMfD8KFUhBOESJwfyNJu83PbB0=
github.com/atomicgo/cursor v0.0.1/go.mod h1:cBON2QmmrysudxNBFthvMtN32r3jxVRIvzkUiF/RuIk=
-github.com/aws/aws-sdk-go-v2 v1.41.0 h1:tNvqh1s+v0vFYdA1xq0aOJH+Y5cRyZ5upu6roPgPKd4=
+github.com/aws/aws-sdk-go-v2 v1.41.1 h1:ABlyEARCDLN034NhxlRUSZr4l71mh+T5KAeGh6cerhU=
-github.com/aws/aws-sdk-go-v2 v1.41.0/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=
+github.com/aws/aws-sdk-go-v2 v1.41.1/go.mod h1:MayyLB8y+buD9hZqkCW3kX1AKq07Y5pXxtgB+rRFhz0=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.8 h1:zAxi9p3wsZMIaVCdoiQp2uZ9k1LsZvmAnoTBeZPXom0=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4 h1:489krEF9xIGkOaaX3CE/Be2uWjiXrkCH6gUX+bZA/BU=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.8/go.mod h1:3XkePX5dSaxveLAYY7nsbsZZrKxCyEuE5pM4ziFxyGg=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.4/go.mod h1:IOAPF6oT9KCsceNTvvYMNHy0+kMF8akOjeDvPENWxp4=
-github.com/aws/aws-sdk-go-v2/config v1.29.5 h1:4lS2IB+wwkj5J43Tq/AwvnscBerBJtQQ6YS7puzCI1k=
+github.com/aws/aws-sdk-go-v2/config v1.32.7 h1:vxUyWGUwmkQ2g19n7JY/9YL8MfAIl7bTesIUykECXmY=
-github.com/aws/aws-sdk-go-v2/config v1.29.5/go.mod h1:SNzldMlDVbN6nWxM7XsUiNXPSa1LWlqiXtvh/1PrJGg=
+github.com/aws/aws-sdk-go-v2/config v1.32.7/go.mod h1:2/Qm5vKUU/r7Y+zUk/Ptt2MDAEKAfUtKc1+3U1Mo3oY=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.58 h1:/d7FUpAPU8Lf2KUdjniQvfNdlMID0Sd9pS23FJ3SS9Y=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.7 h1:tHK47VqqtJxOymRrNtUXN5SP/zUTvZKeLx4tH6PGQc8=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.58/go.mod h1:aVYW33Ow10CyMQGFgC0ptMRIqJWvJ4nxZb0sUiuQT/A=
+github.com/aws/aws-sdk-go-v2/credentials v1.19.7/go.mod h1:qOZk8sPDrxhf+4Wf4oT2urYJrYt3RejHSzgAquYeppw=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27 h1:7lOW8NUwE9UZekS1DYoiPdVAqZ6A+LheHWb+mHbNOq8=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17 h1:I0GyV8wiYrP8XpA70g1HBcQO1JlQxCMTW9npl5UbDHY=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.27/go.mod h1:w1BASFIPOPUae7AgaH4SbjNbfdkxuggLyGfNFTn8ITY=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.18.17/go.mod h1:tyw7BOl5bBe/oqvoIeECFJjMdzXoa/dfVz3QQ5lgHGA=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16 h1:rgGwPzb82iBYSvHMHXc8h9mRoOUBZIGFgKb9qniaZZc=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17 h1:xOLELNKGp2vsiteLsvLPwxC+mYmO6OZ8PYgiuPJzF8U=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.16/go.mod h1:L/UxsGeKpGoIj6DxfhOWHWQ/kGKcd4I1VncE4++IyKA=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.17/go.mod h1:5M5CI3D12dNOtH3/mk6minaRwI2/37ifCURZISxA/IQ=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16 h1:1jtGzuV7c82xnqOVfx2F0xmJcOw5374L7N6juGW6x6U=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17 h1:WWLqlh79iO48yLkj1v3ISRNiv+3KdQoZ6JWyfcsyQik=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.16/go.mod h1:M2E5OQf+XLe+SZGmmpaI2yy+J326aFf6/+54PoxSANc=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.17/go.mod h1:EhG22vHRrvF8oXSTYStZhJc1aUgKtnJe+aOiFEV90cM=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2 h1:Pg9URiobXy85kgFev3og2CuOZ8JZUBENF+dcgWBaYNk=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4 h1:WKuaxf++XKWlHWu9ECbMlha8WOEGm0OUEZqm4K/Gcfk=
-github.com/aws/aws-sdk-go-v2/internal/ini v1.8.2/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
+github.com/aws/aws-sdk-go-v2/internal/ini v1.8.4/go.mod h1:ZWy7j6v1vWGmPReu0iSGvRiise4YI5SkR3OHKTZ6Wuc=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.31 h1:8IwBjuLdqIO1dGB+dZ9zJEl8wzY3bVYxcs0Xyu/Lsc0=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16 h1:CjMzUs78RDDv4ROu3JnJn/Ig1r6ZD7/T2DXLLRpejic=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.31/go.mod h1:8tMBcuVjL4kP/ECEIWTCWtwV2kj6+ouEKl4cqR4iWLw=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.4.16/go.mod h1:uVW4OLBqbJXSHJYA9svT9BluSvvwbzLQ2Crf6UPzR3c=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4 h1:0ryTNEdJbzUCEWkVXEXoqlXV72J5keC1GvILMOuD00E=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.13.4/go.mod h1:HQ4qwNZh32C3CBeO6iJLQlgtMzqeG17ziAA/3KDJFow=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.5 h1:siiQ+jummya9OLPDEyHVb2dLW4aOMe22FGDd0sAfuSw=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7 h1:DIBqIrJ7hv+e4CmIk2z3pyKT+3B6qVMgRsawHiR3qso=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.5/go.mod h1:iHVx2J9pWzITdP5MJY6qWfG34TfD9EA+Qi3eV6qQCXw=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.9.7/go.mod h1:vLm00xmBke75UmpNvOcZQ/Q30ZFjbczeLFqGx5urmGo=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16 h1:oHjJHeUy0ImIV0bsrX0X91GkV5nJAyv1l1CC9lnO0TI=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17 h1:RuNSMoozM8oXlgLG/n6WLaFGoea7/CddrCfIiSA+xdY=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.16/go.mod h1:iRSNGgOYmiYwSCXxXaKb9HfOEj40+oTKn8pTxMlYkRM=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.13.17/go.mod h1:F2xxQ9TZz5gDWsclCtPQscGpP0VUOc8RqgFM3vDENmU=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.12 h1:tkVNm99nkJnFo1H9IIQb5QkCiPcvCDn3Pos+IeTbGRA= github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16 h1:NSbvS17MlI2lurYgXnCOLvCFX38sBW4eiVER7+kkgsU=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.12/go.mod h1:dIVlquSPUMqEJtx2/W17SM2SuESRaVEhEV9alcMqxjw= github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.16/go.mod h1:SwT8Tmqd4sA6G1qaGdzWCJN99bUmPGHfRwwq3G5Qb+A=
github.com/aws/aws-sdk-go-v2/service/s3 v1.75.3 h1:JBod0SnNqcWQ0+uAyzeRFG1zCHotW8DukumYYyNy0zo= github.com/aws/aws-sdk-go-v2/service/s3 v1.93.2 h1:U3ygWUhCpiSPYSHOrRhb3gOl9T5Y3kB8k5Vjs//57bE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.75.3/go.mod h1:FHSHmyEUkzRbaFFqqm6bkLAOQHgqhsLmfCahvCBMiyA= github.com/aws/aws-sdk-go-v2/service/s3 v1.93.2/go.mod h1:79S2BdqCJpScXZA2y+cpZuocWsjGjJINyXnOsf5DTz8=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.5 h1:VrhDvQib/i0lxvr3zqlUwLwJP4fpmpyD9wYG1vfSu+Y=
github.com/aws/aws-sdk-go-v2/service/signin v1.0.5/go.mod h1:k029+U8SY30/3/ras4G/Fnv/b88N4mAfliNn08Dem4M=
github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0 h1:IOdss+igJDFdic9w3WKwxGCmHqUxydvIhJOm9LJ32Dk= github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0 h1:IOdss+igJDFdic9w3WKwxGCmHqUxydvIhJOm9LJ32Dk=
github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0/go.mod h1:Q7XIWsMo0JcMpI/6TGD6XXcXcV1DbTj6e9BKNntIMIM= github.com/aws/aws-sdk-go-v2/service/ssm v1.45.0/go.mod h1:Q7XIWsMo0JcMpI/6TGD6XXcXcV1DbTj6e9BKNntIMIM=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.14 h1:c5WJ3iHz7rLIgArznb3JCSQT3uUMiz9DLZhIX+1G8ok= github.com/aws/aws-sdk-go-v2/service/sso v1.30.9 h1:v6EiMvhEYBoHABfbGB4alOYmCIrcgyPPiBE1wZAEbqk=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.14/go.mod h1:+JJQTxB6N4niArC14YNtxcQtwEqzS3o9Z32n7q33Rfs= github.com/aws/aws-sdk-go-v2/service/sso v1.30.9/go.mod h1:yifAsgBxgJWn3ggx70A3urX2AN49Y5sJTD1UQFlfqBw=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13 h1:f1L/JtUkVODD+k1+IiSJUUv8A++2qVr+Xvb3xWXETMU= github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13 h1:gd84Omyu9JLriJVCbGApcLzVR3XtmC4ZDPcAI6Ftvds=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.13/go.mod h1:tvqlFoja8/s0o+UruA1Nrezo/df0PzdunMDDurUfg6U= github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.13/go.mod h1:sTGThjphYE4Ohw8vJiRStAcu3rbjtXRsdNB0TvZ5wwo=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5 h1:SciGFVNZ4mHdm7gpD1dgZYnCuVdX1s+lFTg4+4DOy70= github.com/aws/aws-sdk-go-v2/service/sts v1.41.6 h1:5fFjR/ToSOzB2OQ/XqWpZBmNvmP/pJ1jOWYlFDJTjRQ=
github.com/aws/aws-sdk-go-v2/service/sts v1.41.5/go.mod h1:iW40X4QBmUxdP+fZNOpfmkdMZqsovezbAeO+Ubiv2pk= github.com/aws/aws-sdk-go-v2/service/sts v1.41.6/go.mod h1:qgFDZQSD/Kys7nJnVqYlWKnh0SSdMjAi0uSwON4wgYQ=
github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk= github.com/aws/smithy-go v1.24.0 h1:LpilSUItNPFr1eY85RYgTIg5eIEPtvFbskaFcmmIUnk=
github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0= github.com/aws/smithy-go v1.24.0/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02 h1:bXAPYSbdYbS5VTy92NIUbeDI1qyggi+JYh5op9IFlcQ= github.com/axiomhq/hyperloglog v0.2.6 h1:sRhvvF3RIXWQgAXaTphLp4yJiX4S0IN3MWTaAgZoRJw=
github.com/axiomhq/hyperloglog v0.0.0-20240319100328-84253e514e02/go.mod h1:k08r+Yj1PRAmuayFiRK6MYuR5Ve4IuZtTfxErMIh0+c= github.com/axiomhq/hyperloglog v0.2.6/go.mod h1:YjX/dQqCR/7QYX0g8mu8UZAjpIenz1FKM71UEsjFoTo=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8= github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
@@ -101,8 +103,8 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
 github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
 github.com/cilium/ebpf v0.17.3 h1:FnP4r16PWYSE4ux6zN+//jMcW4nMVRvuTLVTvCjyyjg=
 github.com/cilium/ebpf v0.17.3/go.mod h1:G5EDHij8yiLzaqn0WjyfJHvRa+3aDlReIaLVRMvOyJk=
-github.com/clipperhouse/uax29/v2 v2.2.0 h1:ChwIKnQN3kcZteTXMgb1wztSgaU+ZemkgWdohwgs8tY=
+github.com/clipperhouse/uax29/v2 v2.7.0 h1:+gs4oBZ2gPfVrKPthwbMzWZDaAFPGYK72F0NJv2v7Vk=
-github.com/clipperhouse/uax29/v2 v2.2.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
+github.com/clipperhouse/uax29/v2 v2.7.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
 github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
 github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
 github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
@@ -118,38 +120,37 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
 github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
 github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6AYUslN6c6iuZWTKsKxUFDlpnmilO6R2n0=
 github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
-github.com/coreos/go-oidc/v3 v3.16.0 h1:qRQUCFstKpXwmEjDQTIbyY/5jF00+asXzSkmkoa/mow=
+github.com/coreos/go-oidc/v3 v3.18.0 h1:V9orjXynvu5wiC9SemFTWnG4F45v403aIcjWo0d41+A=
-github.com/coreos/go-oidc/v3 v3.16.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
+github.com/coreos/go-oidc/v3 v3.18.0/go.mod h1:DYCf24+ncYi+XkIH97GY1+dqoRlbaSI26KVTCI9SrY4=
-github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
-github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbqI9cA=
+github.com/creachadair/command v0.2.2 h1:4RGsUhqFf1imFC+vMWOOCiQdncThCdcdMJp0JNCjxxc=
-github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o=
+github.com/creachadair/command v0.2.2/go.mod h1:Z6Zp6CSJcnaWWR4wHgdqzODnFdxFJAaa/DrcVkeUu3E=
 github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE=
 github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8=
-github.com/creachadair/mds v0.25.10 h1:9k9JB35D1xhOCFl0liBhagBBp8fWWkKZrA7UXsfoHtA=
+github.com/creachadair/mds v0.26.2 h1:rCtvEV/bCRY0hGfwvvMg0p3yzKgBE8l/9OV4fjF9QQ8=
-github.com/creachadair/mds v0.25.10/go.mod h1:4hatI3hRM+qhzuAmqPRFvaBM8mONkS7nsLxkcuTYUIs=
+github.com/creachadair/mds v0.26.2/go.mod h1:dMBTCSy3iS3dwh4Rb1zxeZz2d7K8+N24GCTsayWtQRI=
-github.com/creachadair/msync v0.7.1 h1:SeZmuEBXQPe5GqV/C94ER7QIZPwtvFbeQiykzt/7uho=
+github.com/creachadair/msync v0.8.2 h1:ujvc/SVJPn+bFwmjUHucXNTTn3opVe2YbQ46mBCnP08=
-github.com/creachadair/msync v0.7.1/go.mod h1:8CcFlLsSujfHE5wWm19uUBLHIPDAUr6LXDwneVMO008=
+github.com/creachadair/msync v0.8.2/go.mod h1:LzxqD9kfIl/O3DczkwOgJplLPqwrTbIhINlf9bHIsEY=
 github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
 github.com/creachadair/taskgroup v0.13.2/go.mod h1:i3V1Zx7H8RjwljUEeUWYT30Lmb9poewSb2XI1yTwD0g=
-github.com/creack/pty v1.1.23 h1:4M6+isWdcStXEf15G/RbrMPOQj1dZ7HPZCGwE4kOeP0=
+github.com/creack/pty v1.1.24 h1:bJrF4RRfyJnbTJqzRLHzcGaZK1NeM5kTC9jGgovnR1s=
-github.com/creack/pty v1.1.23/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
+github.com/creack/pty v1.1.24/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0 h1:vrC07UZcgPzu/OjWsmQKMGg3LoPSz9jh/pQXIrHjUj4=
+github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d h1:QRKpU+9ZBDs62LyBfwhZkJdB5DJX2Sm3p4kUh7l1aA0=
-github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0/go.mod h1:Nx87SkVqTKd8UtT+xu7sM/l+LgXs6c0aHrlKusR+2EQ=
+github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d/go.mod h1:SUxUaAK/0UG5lYyZR1L1nC4AaYYvSSYTWQSH3FPcxKU=
-github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc h1:8WFBn63wegobsYAX0YjD+8suexZDga5CctH4CCTx2+8=
+github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33 h1:ucRHb6/lvW/+mTEIGbvhcYU3S8+uSNkuMjx/qZFfhtM=
-github.com/dgryski/go-metro v0.0.0-20180109044635-280f6062b5bc/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw=
+github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33/go.mod h1:c9O8+fpSOX1DM8cPNSkX/qsBWdkD4yd2dpciOWQjpBw=
 github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e h1:vUmf0yezR0y7jJ5pceLHthLaYf4bA5T14B6q39S4q2Q=
 github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e/go.mod h1:YTIHhz/QFSYnu/EhlF2SpU2Uk+32abacUYA5ZPljz1A=
 github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
 github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
 github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c=
 github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0=
-github.com/docker/cli v28.5.1+incompatible h1:ESutzBALAD6qyCLqbQSEf1a/U8Ybms5agw59yGVc+yY=
+github.com/docker/cli v29.2.1+incompatible h1:n3Jt0QVCN65eiVBoUTZQM9mcQICCJt3akW4pKAbKdJg=
-github.com/docker/cli v28.5.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/cli v29.2.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
 github.com/docker/docker v28.5.2+incompatible h1:DBX0Y0zAjZbSrm1uzOkdr1onVghKaftjlSWt4AFexzM=
 github.com/docker/docker v28.5.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
 github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
@@ -169,22 +170,26 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
 github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
 github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
 github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
-github.com/gaissmai/bart v0.18.0 h1:jQLBT/RduJu0pv/tLwXE+xKPgtWJejbxuXAR+wLJafo=
+github.com/gaissmai/bart v0.26.1 h1:+w4rnLGNlA2GDVn382Tfe3jOsK5vOr5n4KmigJ9lbTo=
-github.com/gaissmai/bart v0.18.0/go.mod h1:JJzMAhNF5Rjo4SF4jWBrANuJfqY+FvsFhW7t1UZJ+XY=
+github.com/gaissmai/bart v0.26.1/go.mod h1:GREWQfTLRWz/c5FTOsIw+KkscuFkIV5t8Rp7Nd1Td5c=
 github.com/github/fakeca v0.1.0 h1:Km/MVOFvclqxPM9dZBC4+QE564nU4gz4iZ0D9pMw28I=
 github.com/github/fakeca v0.1.0/go.mod h1:+bormgoGMMuamOscx7N91aOuUST7wdaJ2rNjeohylyo=
 github.com/glebarez/go-sqlite v1.22.0 h1:uAcMJhaA6r3LHMTFgP0SifzgXg46yJkgxqyuyec+ruQ=
 github.com/glebarez/go-sqlite v1.22.0/go.mod h1:PlBIdHe0+aUEFn+r2/uthrWq4FxbzugL0L8Li6yQJbc=
 github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
 github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
+github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
+github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
+github.com/go-chi/metrics v0.1.1 h1:CXhbnkAVVjb0k73EBRQ6Z2YdWFnbXZgNtg1Mboguibk=
+github.com/go-chi/metrics v0.1.1/go.mod h1:mcGTM1pPalP7WCtb+akNYFO/lwNwBBLCuedepqjoPn4=
 github.com/go-gormigrate/gormigrate/v2 v2.1.5 h1:1OyorA5LtdQw12cyJDEHuTrEV3GiXiIhS4/QTTa/SM8=
 github.com/go-gormigrate/gormigrate/v2 v2.1.5/go.mod h1:mj9ekk/7CPF3VjopaFvWKN2v7fN3D9d3eEOAXRhi/+M=
 github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
 github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
-github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
+github.com/go-jose/go-jose/v4 v4.1.4 h1:moDMcTHmvE6Groj34emNPLs/qtYXRVcd6S7NHbHz3kA=
-github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
+github.com/go-jose/go-jose/v4 v4.1.4/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
-github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced h1:Q311OHjMh/u5E2TITc++WlTP5We0xNseRMkHDyvhW7I=
+github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433 h1:vymEbVwYFP/L05h5TKQxvkXoKxNvTpjxYKdF1Nlwuao=
-github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced/go.mod h1:TiCD2a1pcmjd7YnhGH0f/zKNcCD06B029pHhzV23c2M=
+github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433/go.mod h1:tphK2c80bpPhMOI4v6bIc2xWywPfbqi1Z06+RcrMkDg=
 github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
 github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
 github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -194,42 +199,41 @@ github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
 github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
 github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
 github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
-github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
+github.com/go-viper/mapstructure/v2 v2.5.0 h1:vM5IJoUAy3d7zRSVtIwQgBj7BiWtMPfmPEgAXnvj1Ro=
-github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
+github.com/go-viper/mapstructure/v2 v2.5.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
 github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737 h1:cf60tHxREO3g1nroKr2osU3JWZsJzkfi7rEg+oAB0Lo=
 github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737/go.mod h1:MIS0jDzbU/vuM9MC4YnBITCv+RYuTRq8dJzmCrFsK9g=
 github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
 github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
 github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY=
-github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
-github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh3PFscJRLDnkL547IeP7kpPe3uUhEg=
-github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU=
+github.com/godbus/dbus/v5 v5.2.2 h1:TUR3TgtSVDmjiXOgAAyaZbYmIeP3DPkld3jgKGV8mXQ=
+github.com/godbus/dbus/v5 v5.2.2/go.mod h1:3AAv2+hPq5rdnr5txxxRwiGjPXamgoIHgz9FPBfOp3c=
 github.com/gofrs/uuid/v5 v5.4.0 h1:EfbpCTjqMuGyq5ZJwxqzn3Cbr2d0rUZU7v5ycAk/e/0=
 github.com/gofrs/uuid/v5 v5.4.0/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8=
-github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
+github.com/golang-jwt/jwt/v5 v5.3.1 h1:kYf81DTWFe7t+1VvL7eS+jKFVWaUnK9cB1qbwn63YCY=
-github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
+github.com/golang-jwt/jwt/v5 v5.3.1/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
 github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
 github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
 github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
 github.com/google/btree v1.1.3 h1:CVpQJjYgC4VbzxeGVHfvZrv1ctoYCAI8vbl07Fcxlyg=
 github.com/google/btree v1.1.3/go.mod h1:qOPhT0dTNdNzV6Z/lhRX0YXUafgPLFUh+gZMl761Gm4=
-github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
 github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
 github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
 github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
 github.com/google/go-github v17.0.0+incompatible h1:N0LgJ1j65A7kfXrZnUDaYCs/Sf4rEjNlfyDHW9dolSY=
 github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
-github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD/fhyJ8=
+github.com/google/go-querystring v1.2.0 h1:yhqkPbu2/OH+V9BfpCVPZkNmUXhb2gBxJArfhIxNtP0=
-github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
+github.com/google/go-querystring v1.2.0/go.mod h1:8IFJqpSRITyJ8QhQ13bmbeMBDfmeEJZD5A0egEOmkqU=
 github.com/google/go-tpm v0.9.4 h1:awZRf9FwOeTunQmHoDYSHJps3ie6f1UlhS1fOdPEt1I=
 github.com/google/go-tpm v0.9.4/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
 github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 h1:wG8RYIyctLhdFk6Vl1yPGtSRtwGpVkWyZww1OCil2MI=
 github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806/go.mod h1:Beg6V6zZ3oEn0JuiUQ4wqwuyqqzasOltcoXPtgLbFp4=
 github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
 github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
-github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d h1:KJIErDwbSHjnp/SGzE5ed8Aol7JsKiI5X7yWKAtzhM0=
+github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef h1:xpF9fUHpoIrrjX24DURVKiwHcFpw19ndIs+FwTSMbno=
-github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
+github.com/google/pprof v0.0.0-20260202012954-cb029daf43ef/go.mod h1:MxpfABSjhmINe3F1It9d+8exIHFvUqtLIRCdOGNXqiI=
 github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
 github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
 github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
@@ -244,11 +248,10 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
 github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
 github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
 github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4 h1:kEISI/Gx67NzH3nJxAmY/dGac80kKZgZt134u7Y/k1s=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.4/go.mod h1:6Nz966r3vQYCqIzWsuEl9d7cf7mRhtDmm++sOxlnfxI=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
-github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
+github.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=
-github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
+github.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
-github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=
 github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
 github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
 github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU=
@@ -267,8 +270,8 @@ github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsI
 github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
 github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
-github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
+github.com/jackc/pgx/v5 v5.8.0 h1:TYPDoleBBme0xGSAX3/+NujXXtpZn9HBONkQC7IEZSo=
-github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
+github.com/jackc/pgx/v5 v5.8.0/go.mod h1:QVeDInX2m9VyzvNeiCJVjCkNFqzsNb43204HshNSZKw=
 github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
 github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
 github.com/jagottsicher/termcolor v1.0.2 h1:fo0c51pQSuLBN1+yVX2ZE+hE+P7ULb/TY8eRowJnrsM=
@@ -282,10 +285,12 @@ github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/
 github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
 github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
 github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
-github.com/jsimonetti/rtnetlink v1.4.1 h1:JfD4jthWBqZMEffc5RjgmlzpYttAVw1sdnmiNaPO3hE=
+github.com/jsimonetti/rtnetlink v1.4.2 h1:Df9w9TZ3npHTyDn0Ev9e1uzmN2odmXd0QX+J5GTEn90=
-github.com/jsimonetti/rtnetlink v1.4.1/go.mod h1:xJjT7t59UIZ62GLZbv6PLLo8VFrostJMPBAheR6OM8w=
+github.com/jsimonetti/rtnetlink v1.4.2/go.mod h1:92s6LJdE+1iOrw+F2/RO7LYI2Qd8pPpFNNUYW06gcoM=
+github.com/kamstrup/intmap v0.5.2 h1:qnwBm1mh4XAnW9W9Ue9tZtTff8pS6+s6iKF6JRIV2Dk=
+github.com/kamstrup/intmap v0.5.2/go.mod h1:gWUVWHKzWj8xpJVFf5GC0O26bWmv3GqdnIX/LMT6Aq4=
-github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
-github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
+github.com/klauspost/compress v1.18.3 h1:9PJRvfbmTabkOX8moIpXPbMMbYN60bWImDDU7L+/6zw=
+github.com/klauspost/compress v1.18.3/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
 github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
 github.com/klauspost/cpuid/v2 v2.0.10/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
 github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
@@ -306,35 +311,36 @@ github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0
 github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
 github.com/ledongthuc/pdf v0.0.0-20220302134840-0c2507a12d80/go.mod h1:imJHygn/1yfhB7XSJJKlFZKl/J+dCPAknuiaGOshXAs=
 github.com/lib/pq v1.8.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
-github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
-github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
+github.com/lib/pq v1.11.1 h1:wuChtj2hfsGmmx3nf1m7xC2XpK6OtelS2shMY+bGMtI=
+github.com/lib/pq v1.11.1/go.mod h1:/p+8NSbOcwzAEI7wiMXFlgydTwcgTr3OSKMsD2BitpA=
 github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=
 github.com/lithammer/fuzzysearch v1.1.8/go.mod h1:IdqeyBClc3FFqSzYq/MXESsS4S0FsZ5ajtkr5xPLts4=
 github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
-github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
 github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
 github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
-github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
-github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
 github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
-github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
-github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
+github.com/mattn/go-runewidth v0.0.20 h1:WcT52H91ZUAwy8+HUkdM3THM6gXqXuLJi9O3rjcQQaQ=
+github.com/mattn/go-runewidth v0.0.20/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
 github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw=
 github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o=
-github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 h1:A1Cq6Ysb0GM0tpKMbdCXCIfBclan4oHk1Jb+Hrejirg=
-github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42/go.mod h1:BB4YCPDOzfy7FniQ/lxuYQ3dgmM2cZumHbK8RpTjN2o=
+github.com/mdlayher/netlink v1.8.0 h1:e7XNIYJKD7hUct3Px04RuIGJbBxy1/c4nX7D5YyvvlM=
+github.com/mdlayher/netlink v1.8.0/go.mod h1:UhgKXUlDQhzb09DrCl2GuRNEglHmhYoWAHid9HK3594=
 github.com/mdlayher/sdnotify v1.0.0 h1:Ma9XeLVN/l0qpyx1tNeMSeTjCPH6NtuD6/N9XdTlQ3c=
 github.com/mdlayher/sdnotify v1.0.0/go.mod h1:HQUmpM4XgYkhDLtd+Uad8ZFK1T9D5+pNxnXQjCeJlGE=
-github.com/mdlayher/socket v0.5.0 h1:ilICZmJcQz70vrWVes1MFera4jGiWNocSkykwwoy3XI=
-github.com/mdlayher/socket v0.5.0/go.mod h1:WkcBFfvyG8QENs5+hfQPl1X6Jpd2yeLIYgrGFmJiJxI=
+github.com/mdlayher/socket v0.5.1 h1:VZaqt6RkGkt2OE9l3GcC6nZkqD3xKeQLyfleW/uBcos=
+github.com/mdlayher/socket v0.5.1/go.mod h1:TjPLHI1UgwEv5J1B5q0zTZq12A/6H7nKmtTanQE37IQ=
 github.com/miekg/dns v1.1.58 h1:ca2Hdkz+cDg/7eNF6V56jjzuZ4aCAE+DbVkILdQWG/4=
 github.com/miekg/dns v1.1.58/go.mod h1:Ypv+3b/KadlvW9vJfXOTf300O4UqaHFzFCuHz+rPkBY=
 github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
 github.com/mitchellh/go-ps v1.0.0/go.mod h1:J4lOc8z8yJs6vUwklHw2XEIiT4z4C40KtWVN3nvg8Pg=
 github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
 github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
+github.com/moby/moby/api v1.53.0 h1:PihqG1ncw4W+8mZs69jlwGXdaYBeb5brF6BL7mPIS/w=
+github.com/moby/moby/api v1.53.0/go.mod h1:8mb+ReTlisw4pS6BRzCMts5M49W5M7bKt1cJy/YbAqc=
+github.com/moby/moby/client v0.2.2 h1:Pt4hRMCAIlyjL3cr8M5TrXCwKzguebPAc2do2ur7dEM=
+github.com/moby/moby/client v0.2.2/go.mod h1:2EkIPVNCqR05CMIzL1mfA07t0HvVUUOl85pasRz/GmQ=
 github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
 github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
 github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
@@ -343,8 +349,8 @@ github.com/moby/sys/user v0.4.0 h1:jhcMKit7SA80hivmFJcbB1vqmw//wU61Zdui2eQXuMs=
 github.com/moby/sys/user v0.4.0/go.mod h1:bG+tYYYJgaMtRKgEmuueC0hJEAZWwtIbZTB+85uoHjs=
 github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
 github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
-github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
-github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
+github.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=
+github.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
 github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
 github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
@@ -365,14 +371,14 @@ github.com/ory/dockertest/v3 v3.12.0/go.mod h1:aKNDTva3cp8dwOWwb9cWuX84aH5akkxXR
 github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
 github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
 github.com/petermattis/goid v0.0.0-20250813065127-a731cc31b4fe/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
-github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 h1:QTvNkZ5ylY0PGgA+Lih+GdboMLY/G9SEGLMEGVjTVA4=
-github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
+github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 h1:KPpdlQLZcHfTMQRi6bFQ7ogNO0ltFT4PmtwTLW4W+14=
+github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
 github.com/philip-bui/grpc-zerolog v1.0.1 h1:EMacvLRUd2O1K0eWod27ZP5CY1iTNkhBDLSN+Q4JEvA=
 github.com/philip-bui/grpc-zerolog v1.0.1/go.mod h1:qXbiq/2X4ZUMMshsqlWyTHOcw7ns+GZmlqZZN05ZHcQ=
-github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
-github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
+github.com/pierrec/lz4/v4 v4.1.25 h1:kocOqRffaIbU5djlIBr7Wh+cx82C0vtFb0fOurZHqD0=
+github.com/pierrec/lz4/v4 v4.1.25/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
-github.com/pires/go-proxyproto v0.8.1 h1:9KEixbdJfhrbtjpz/ZwCdWDD2Xem0NZ38qMYaASJgp0=
-github.com/pires/go-proxyproto v0.8.1/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=
+github.com/pires/go-proxyproto v0.9.2 h1:H1UdHn695zUVVmB0lQ354lOWHOy6TZSpzBl3tgN0s1U=
+github.com/pires/go-proxyproto v0.9.2/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pkg/profile v1.7.0 h1:hnbDkaNWPCLMO9wGLdBFTIZvzDrDfBM2072E1S9gJkA=
@@ -382,16 +388,16 @@ github.com/pkg/sftp v1.13.6/go.mod h1:tz1ryNURKu77RL+GuCzmoJYxQczL3wLNNpPWagdg4Q
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/prometheus-community/pro-bing v0.4.0 h1:YMbv+i08gQz97OZZBwLyvmmQEEzyfyrrjEaAchdy3R4=
-github.com/prometheus-community/pro-bing v0.4.0/go.mod h1:b7wRYZtCcPmt4Sz319BykUU241rWLe1VFXyiyWK/dH4=
+github.com/prometheus-community/pro-bing v0.7.0 h1:KFYFbxC2f2Fp6c+TyxbCOEarf7rbnzr9Gw8eIb0RfZA=
+github.com/prometheus-community/pro-bing v0.7.0/go.mod h1:Moob9dvlY50Bfq6i88xIwfyw7xLFHH69LUgx9n5zqCE=
 github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
 github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
 github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
 github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
 github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
 github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
-github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
-github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
+github.com/prometheus/procfs v0.19.2 h1:zUMhqEW66Ex7OXIiDkll3tl9a1ZdilUOd/F6ZXw4Vws=
+github.com/prometheus/procfs v0.19.2/go.mod h1:M0aotyiemPhBCM0z5w87kL22CxfcH05ZpYlu+b4J7mw=
 github.com/pterm/pterm v0.12.27/go.mod h1:PhQ89w4i95rhgE+xedAoqous6K9X+r6aSOI2eFF7DZI=
 github.com/pterm/pterm v0.12.29/go.mod h1:WI3qxgvoQFFGKGjGnJR849gU0TsEOvKn5Q8LlY1U7lg=
 github.com/pterm/pterm v0.12.30/go.mod h1:MOqLIyMOgmTDz9yorcYbcw+HsgoZo3BQfg2wtl3HEFE=
@@ -399,32 +405,31 @@ github.com/pterm/pterm v0.12.31/go.mod h1:32ZAWZVXD7ZfG0s8qqHXePte42kdz8ECtRyEej
 github.com/pterm/pterm v0.12.33/go.mod h1:x+h2uL+n7CP/rel9+bImHD5lF3nM9vJj80k9ybiiTTE=
 github.com/pterm/pterm v0.12.36/go.mod h1:NjiL09hFhT/vWjQHSj1athJpx6H8cjpHXNAK5bUw8T8=
 github.com/pterm/pterm v0.12.40/go.mod h1:ffwPLwlbXxP+rxT0GsgDTzS3y3rmpAO1NMjUkGTYf8s=
-github.com/pterm/pterm v0.12.82 h1:+D9wYhCaeaK0FIQoZtqbNQuNpe2lB2tajKKsTd5paVQ=
-github.com/pterm/pterm v0.12.82/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw=
+github.com/pterm/pterm v0.12.83 h1:ie+YmGmA727VuhxBlyGr74Ks+7McV6kT99IB8EU80aA=
+github.com/pterm/pterm v0.12.83/go.mod h1:xlgc6bFWyJIMtmLJvGim+L7jhSReilOlOnodeIYe4Tk=
-github.com/puzpuzpuz/xsync/v4 v4.3.0 h1:w/bWkEJdYuRNYhHn5eXnIT8LzDM1O629X1I9MJSkD7Q=
-github.com/puzpuzpuz/xsync/v4 v4.3.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
+github.com/puzpuzpuz/xsync/v4 v4.4.0 h1:vlSN6/CkEY0pY8KaB0yqo/pCLZvp9nhdbBdjipT4gWo=
+github.com/puzpuzpuz/xsync/v4 v4.4.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
 github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
 github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
-github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
-github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
-github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
+github.com/rs/zerolog v1.35.0 h1:VD0ykx7HMiMJytqINBsKcbLS+BJ4WYjz+05us+LRTdI=
+github.com/rs/zerolog v1.35.0/go.mod h1:EjML9kdfa/RMA7h/6z6pYmq1ykOuA8/mjWaEvGI+jcw=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/safchain/ethtool v0.3.0 h1:gimQJpsI6sc1yIqP/y8GYgiXn/NjgvpM0RNoWLVVmP0=
-github.com/safchain/ethtool v0.3.0/go.mod h1:SA9BwrgyAqNo7M+uaL6IYbxpm5wk3L7Mm6ocLW+CJUs=
+github.com/safchain/ethtool v0.7.0 h1:rlJzfDetsVvT61uz8x1YIcFn12akMfuPulHtZjtb7Is=
+github.com/safchain/ethtool v0.7.0/go.mod h1:MenQKEjXdfkjD3mp2QdCk8B/hwvkrlOTm/FD4gTpFxQ=
 github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
 github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
-github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
-github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
+github.com/samber/lo v1.53.0 h1:t975lj2py4kJPQ6haz1QMgtId2gtmfktACxIXArw3HM=
+github.com/samber/lo v1.53.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
-github.com/sasha-s/go-deadlock v0.3.6 h1:TR7sfOnZ7x00tWPfD397Peodt57KzMDo+9Ae9rMiUmw=
-github.com/sasha-s/go-deadlock v0.3.6/go.mod h1:CUqNyyvMxTyjFqDT7MRg9mb4Dv/btmGTqSR+rky/UXo=
+github.com/sasha-s/go-deadlock v0.3.9 h1:fiaT9rB7g5sr5ddNZvlwheclN9IP86eFW9WgqlEQV+w=
+github.com/sasha-s/go-deadlock v0.3.9/go.mod h1:KuZj51ZFmx42q/mPaYbRk0P1xcwe697zsJKE03vD4/Y=
 github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
 github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
 github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
-github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
-github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/sirupsen/logrus v1.9.4 h1:TsZE7l11zFCLZnZ+teH4Umoq5BhEIfIzfRDZ1Uzql2w=
+github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC42RjbV2g=
 github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
 github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
 github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
@@ -456,20 +461,20 @@ github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8
 github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg=
 github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869 h1:SRL6irQkKGQKKLzvQP/ke/2ZuB7Py5+XuqtOgSj+iMM=
 github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869/go.mod h1:ikbF+YT089eInTp9f2vmvy4+ZVnW5hzX1q2WknxSprQ=
-github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a h1:a6TNDN9CgG+cYjaeN8l2mc4kSz2iMiCDQxPEyltUV/I=
-github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo=
+github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd h1:Rf9uhF1+VJ7ZHqxrG8pJ6YacmHvVCmByDmGbAWCc/gA=
+github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo=
 github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 h1:uFsXVBE9Qr4ZoF094vE6iYTLDl0qCiKzYXlL6UeWObU=
 github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7/go.mod h1:NzVQi3Mleb+qzq8VmcWpSkcSYxXIg0DkI6XDzpVkhJ0=
 github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+yfntqhI3oAu9i27nEojcQ4NuBQOo5ZFA=
 github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc=
-github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a h1:TApskGPim53XY5WRt5hX4DnO8V6CmVoimSklryIoGMM=
-github.com/tailscale/setec v0.0.0-20251203133219-2ab774e4129a/go.mod h1:+6WyG6kub5/5uPsMdYQuSti8i6F5WuKpFWLQnZt/Mms=
+github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d h1:N+TtzIaGYREbLbKZB0WU0vVnMSfaqUkSf3qMEi03hwE=
+github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d/go.mod h1:6NU8H/GLPVX2TnXAY1duyy9ylLaHwFpr0X93UPiYmNI=
-github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f h1:CL6gu95Y1o2ko4XiWPvWkJka0QmQWcUyPywWVWDPQbQ=
-github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
+github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e h1:4yfp5/YDr+TzbUME/PalYJVXAsp7zA2Gv2xQMZ9Qors=
+github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
-github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09 h1:Fc9lE2cDYJbBLpCqnVmoLdf7McPqoHZiDxDPPpkJM04=
-github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09/go.mod h1:QMNhC4XGFiXKngHVLXE+ERDmQoH0s5fD7AUxupykocQ=
+github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c h1:7lJQ/zycbk1E9e0nUiMuwIDYprFTLpWXUwiPdi+tRlI=
+github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c/go.mod h1:bpNmZdvZKmBstrZunT+NXL6hmrFw5AsuT7MGiYS8sRc=
-github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 h1:UBPHPtv8+nEAy2PD8RyAhOYvau1ek0HDJqLS/Pysi14=
-github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
+github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368 h1:0tpDdAj9sSfSZg4gMwNTdqMP592sBrq2Sm0w6ipnh7k=
+github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
 github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
 github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6/go.mod h1:ZXRML051h7o4OcI0d3AaILDIad/Xw0IkXaHM17dic1Y=
 github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da h1:jVRUZPRs9sqyKlYHHzHjAqKN+6e/Vog6NpHYeNPJqOw=
@@ -480,8 +485,8 @@ github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=
 github.com/tc-hib/winres v0.2.1/go.mod h1:C/JaNhH3KBvhNKVbvdlDWkbMDO9H4fKKDaN7/07SSuk=
 github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e h1:IWllFTiDjjLIf2oeKxpIUmtiDV5sn71VgeQgg6vcE7k=
 github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e/go.mod h1:d7u6HkTYKSv5m6MCKkOQlHwaShTMl3HjqSGW3XtVhXM=
-github.com/tink-crypto/tink-go/v2 v2.1.0 h1:QXFBguwMwTIaU17EgZpEJWsUSc60b1BAGTzBIoMdmok=
-github.com/tink-crypto/tink-go/v2 v2.1.0/go.mod h1:y1TnYFt1i2eZVfx4OGc+C+EMp4CoKWAw2VSEuoicHHI=
+github.com/tink-crypto/tink-go/v2 v2.6.0 h1:+KHNBHhWH33Vn+igZWcsgdEPUxKwBMEe0QC60t388v4=
+github.com/tink-crypto/tink-go/v2 v2.6.0/go.mod h1:2WbBA6pfNsAfBwDCggboaHeB2X29wkU8XHtGwh2YIk8=
 github.com/u-root/u-root v0.14.0 h1:Ka4T10EEML7dQ5XDvO9c3MBN8z4nuSnGjcd1jmU2ivg=
 github.com/u-root/u-root v0.14.0/go.mod h1:hAyZorapJe4qzbLWlAkmSVCJGbfoU9Pu4jpJ1WMluqE=
 github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701 h1:pyC9PaHYZFgEKFdlp3G8RaCKgVpHZnecvArXvPXcFkM=
@@ -503,24 +508,24 @@ github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJu
 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
 go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
 go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0 h1:ssfIgGNANqpVFCndZvcuyKbl0g+UAVcbBcqGkG28H0Y=
-go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.64.0/go.mod h1:GQ/474YrbE4Jx8gZ4q5I4hrhUzM6UPzyrqJYV2AqPoQ=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0 h1:7iP2uCb7sGddAr30RRS6xjKy7AZ2JtTOPA3oolgVSw8=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.65.0/go.mod h1:c7hN3ddxs/z6q9xwvfLPk+UHlWRQyaeR1LdgfL/66l0=
-go.opentelemetry.io/otel v1.39.0 h1:8yPrr/S0ND9QEfTfdP9V+SiwT4E0G7Y5MO7p85nis48=
-go.opentelemetry.io/otel v1.39.0/go.mod h1:kLlFTywNWrFyEdH0oj2xK0bFYZtHRYUdv1NklR/tgc8=
+go.opentelemetry.io/otel v1.40.0 h1:oA5YeOcpRTXq6NN7frwmwFR0Cn3RhTVZvXsP4duvCms=
+go.opentelemetry.io/otel v1.40.0/go.mod h1:IMb+uXZUKkMXdPddhwAHm6UfOwJyh4ct1ybIlV14J0g=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0 h1:QKdN8ly8zEMrByybbQgv8cWBcdAarwmIPZ6FThrWXJs=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.40.0/go.mod h1:bTdK1nhqF76qiPoCCdyFIV+N/sRHYXYCTQc+3VCi3MI=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 h1:nRVXXvf78e00EwY6Wp0YII8ww2JVWshZ20HfTlE11AM=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0/go.mod h1:r49hO7CgrxY9Voaj3Xe8pANWtr0Oq916d0XAmOoCZAQ=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0 h1:wVZXIWjQSeSmMoxF74LzAnpVQOAFDo3pPji9Y4SOFKc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.40.0/go.mod h1:khvBS2IggMFNwZK/6lEeHg/W57h/IX6J4URh57fuI40=
-go.opentelemetry.io/otel/metric v1.39.0 h1:d1UzonvEZriVfpNKEVmHXbdf909uGTOQjA0HF0Ls5Q0=
-go.opentelemetry.io/otel/metric v1.39.0/go.mod h1:jrZSWL33sD7bBxg1xjrqyDjnuzTUB0x1nBERXd7Ftcs=
+go.opentelemetry.io/otel/metric v1.40.0 h1:rcZe317KPftE2rstWIBitCdVp89A2HqjkxR3c11+p9g=
+go.opentelemetry.io/otel/metric v1.40.0/go.mod h1:ib/crwQH7N3r5kfiBZQbwrTge743UDc7DTFVZrrXnqc=
-go.opentelemetry.io/otel/sdk v1.39.0 h1:nMLYcjVsvdui1B/4FRkwjzoRVsMK8uL/cj0OyhKzt18=
-go.opentelemetry.io/otel/sdk v1.39.0/go.mod h1:vDojkC4/jsTJsE+kh+LXYQlbL8CgrEcwmt1ENZszdJE=
+go.opentelemetry.io/otel/sdk v1.40.0 h1:KHW/jUzgo6wsPh9At46+h4upjtccTmuZCFAc9OJ71f8=
+go.opentelemetry.io/otel/sdk v1.40.0/go.mod h1:Ph7EFdYvxq72Y8Li9q8KebuYUr2KoeyHx0DRMKrYBUE=
-go.opentelemetry.io/otel/sdk/metric v1.39.0 h1:cXMVVFVgsIf2YL6QkRF4Urbr/aMInf+2WKg+sEJTtB8=
-go.opentelemetry.io/otel/sdk/metric v1.39.0/go.mod h1:xq9HEVH7qeX69/JnwEfp6fVq5wosJsY1mt4lLfYdVew=
+go.opentelemetry.io/otel/sdk/metric v1.40.0 h1:mtmdVqgQkeRxHgRv4qhyJduP3fYJRMX4AtAlbuWdCYw=
+go.opentelemetry.io/otel/sdk/metric v1.40.0/go.mod h1:4Z2bGMf0KSK3uRjlczMOeMhKU2rhUqdWNoKcYrtcBPg=
-go.opentelemetry.io/otel/trace v1.39.0 h1:2d2vfpEDmCJ5zVYz7ijaJdOF59xLomrvj7bjt6/qCJI=
-go.opentelemetry.io/otel/trace v1.39.0/go.mod h1:88w4/PnZSazkGzz/w84VHpQafiU4EtqqlVdxWy+rNOA=
+go.opentelemetry.io/otel/trace v1.40.0 h1:WA4etStDttCSYuhwvEa8OP8I5EWu24lkOzp+ZYblVjw=
+go.opentelemetry.io/otel/trace v1.40.0/go.mod h1:zeAhriXecNGP/s2SEG3+Y8X9ujcJOTqQ5RgdEJcawiA=
-go.opentelemetry.io/proto/otlp v1.6.0 h1:jQjP+AQyTf+Fe7OKj/MfkDrmK4MNVtw2NpXsf9fefDI=
-go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc=
+go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
+go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
 go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
 go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
 go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
@@ -534,33 +539,33 @@ go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/W
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
-golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
+golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
-golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
+golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
-golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546 h1:mgKeJMpvi0yx/sU5GsxQ7p6s2wtOnGAHZWCHUM4KGzY=
+golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 h1:jiDhWWeC7jfWqR9c/uplMOqJ0sbNlNWv0UkzE0vX1MA=
-golang.org/x/exp v0.0.0-20251023183803-a4bb9ffd2546/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
+golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90/go.mod h1:xE1HEv6b+1SCZ5/uscMRjUBKtIxworgEcEi+/n9NQDQ=
 golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8=
 golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
 golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w=
 golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g=
 golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
 golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
-golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
+golang.org/x/mod v0.35.0 h1:Ww1D637e6Pg+Zb2KrWfHQUnH2dQRLBQyAtpr/haaJeM=
-golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
+golang.org/x/mod v0.35.0/go.mod h1:+GwiRhIInF8wPm+4AoT6L0FA1QWAad3OMdTRx4tFYlU=
 golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
 golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
-golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
+golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
-golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
+golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
-golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
+golang.org/x/oauth2 v0.36.0 h1:peZ/1z27fi9hUOFCAZaHyrpWG5lwe0RJEEEeH0ThlIs=
-golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
+golang.org/x/oauth2 v0.36.0/go.mod h1:YDBUJMTkDnJS+A4BP4eZBjCqtokkg1hODuPjwiGPO7Q=
 golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
+golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
-golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -572,18 +577,14 @@ golang.org/x/sys v0.0.0-20211013075003-97ac67df715c/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220817070843-5a390386f1f2/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
-golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
+golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
-golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -591,38 +592,37 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
 golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
 golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
 golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
-golang.org/x/term v0.38.0 h1:PQ5pkm/rLO6HnxFR7N2lJHOZX6Kez5Y1gDSJla6jo7Q=
+golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
-golang.org/x/term v0.38.0/go.mod h1:bSEAKrOT1W+VSu9TSCMtoGEOUcKxOKgl3LE5QEF/xVg=
+golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
 golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
 golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
-golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
+golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
-golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
+golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
-golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
+golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=
-golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
+golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
 golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
 golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
-golang.org/x/tools v0.39.0 h1:ik4ho21kwuQln40uelmciQPp9SipgNDdrafrYA4TmQQ=
+golang.org/x/tools v0.43.0 h1:12BdW9CeB3Z+J/I/wj34VMl8X+fEXBxVR90JeMX5E7s=
-golang.org/x/tools v0.39.0/go.mod h1:JnefbkDPyD8UU2kI5fuf8ZX4/yUeh9W877ZeBONxUqQ=
+golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
 golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
 golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
 golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
 golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
-gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
+gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
-gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
+gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
-google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b h1:uA40e2M6fYRBf0+8uN5mLlqUtV192iiksiICIBkYJ1E=
+google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d h1:/aDRtSZJjyLQzm75d+a1wOJaqyKBMvIAfeQmoa3ORiI=
-google.golang.org/genproto/googleapis/api v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:Xa7le7qx2vmqB/SzWUBa7KdMjpdpAHlh5QCSnjessQk=
+google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d/go.mod h1:etfGUgejTiadZAUaEP14NP97xi1RGeawqkjDARA/UOs=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b h1:Mv8VFug0MP9e5vUxfBcE3vUkV6CImK3cMNMIDFjmzxU=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20251222181119-0a764e51fe1b/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
-google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
+google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
-google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
+google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
 google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
 google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -639,28 +639,28 @@ gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4=
 gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo=
 gorm.io/gorm v1.31.1 h1:7CA8FTFz/gRfgqgpeKIBcervUn3xSyPUmr6B2WXJ7kg=
 gorm.io/gorm v1.31.1/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs=
-gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU=
+gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
-gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU=
+gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
-gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 h1:2gap+Kh/3F47cO6hAu3idFvsJ0ue6TRcEi2IUkv/F8k=
+gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 h1:Zy8IV/+FMLxy6j6p87vk/vQGKcdnbprwjTxc8UiUtsA=
-gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633/go.mod h1:5DMfjtclAbTIjbXqO1qCe2K5GKKxWz2JHvCChuTcJEM=
+gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8/go.mod h1:QkHjoMIBaYtpVufgwv3keYAbln78mBoCuShZrPrer1Q=
-honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0 h1:5SXjd4ET5dYijLaf0O3aOenC0Z4ZafIWSpjUzsQaNho=
+honnef.co/go/tools v0.7.0 h1:w6WUp1VbkqPEgLz4rkBzH/CSU6HkoqNLp6GstyTx3lU=
-honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0/go.mod h1:EPDDhEZqVHhWuPI5zPAsjU0U7v9xNIWjoOVyZ5ZcniQ=
+honnef.co/go/tools v0.7.0/go.mod h1:pm29oPxeP3P82ISxZDgIYeOaf9ta6Pi0EWvCFoLG2vc=
 howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=
 howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
 modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
 modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
-modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
+modernc.org/ccgo/v4 v4.32.0 h1:hjG66bI/kqIPX1b2yT6fr/jt+QedtP2fqojG2VrFuVw=
-modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
+modernc.org/ccgo/v4 v4.32.0/go.mod h1:6F08EBCx5uQc38kMGl+0Nm0oWczoo1c7cgpzEry7Uc0=
-modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
+modernc.org/fileutil v1.4.0 h1:j6ZzNTftVS054gi281TyLjHPp6CPHr2KCxEXjEbD6SM=
-modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
+modernc.org/fileutil v1.4.0/go.mod h1:EqdKFDxiByqxLk8ozOxObDSfcVOv/54xDs/DUHdvCUU=
 modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
 modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
-modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
+modernc.org/gc/v3 v3.1.2 h1:ZtDCnhonXSZexk/AYsegNRV1lJGgaNZJuKjJSWKyEqo=
-modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
+modernc.org/gc/v3 v3.1.2/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
 modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
 modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
-modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
+modernc.org/libc v1.70.0 h1:U58NawXqXbgpZ/dcdS9kMshu08aiA6b7gusEusqzNkw=
-modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
+modernc.org/libc v1.70.0/go.mod h1:OVmxFGP1CI/Z4L3E0Q3Mf1PDE0BucwMkcXjjLntvHJo=
 modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
 modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
 modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -669,17 +669,17 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
 modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
 modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
 modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
-modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
+modernc.org/sqlite v1.48.2 h1:5CnW4uP8joZtA0LedVqLbZV5GD7F/0x91AXeSyjoh5c=
-modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
+modernc.org/sqlite v1.48.2/go.mod h1:hWjRO6Tj/5Ik8ieqxQybiEOUXy0NJFNp2tpvVpKlvig=
 modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
 modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
 modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
 modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
+pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
+pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
 software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k=
 software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI=
-tailscale.com v1.94.0 h1:5oW3SF35aU9ekHDhP2J4CHewnA2NxE7SRilDB2pVjaA=
+tailscale.com v1.96.5 h1:gNkfA/KSZAl6jCH9cj8urq00HRWItDDTtGsyATI89jA=
-tailscale.com v1.94.0/go.mod h1:gLnVrEOP32GWvroaAHHGhjSGMPJ1i4DvqNwEg+Yuov4=
+tailscale.com v1.96.5/go.mod h1:/3lnZBYb2UEwnN0MNu2SDXUtT06AGd5k0s+OWx3WmcY=
-zgo.at/zcache/v2 v2.4.1 h1:Dfjoi8yI0Uq7NCc4lo2kaQJJmp9Mijo21gef+oJstbY=
-zgo.at/zcache/v2 v2.4.1/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
 zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4=
 zombiezen.com/go/postgrestest v1.0.1/go.mod h1:marlZezr+k2oSJrvXHnZUs1olHqpE9czlz8ZYkVxliQ=


@@ -16,11 +16,14 @@ import (
     "strings"
     "sync"
     "syscall"
+    "testing"
     "time"

     "github.com/cenkalti/backoff/v5"
     "github.com/davecgh/go-spew/spew"
-    "github.com/gorilla/mux"
+    "github.com/go-chi/chi/v5"
+    "github.com/go-chi/chi/v5/middleware"
+    "github.com/go-chi/metrics"
     grpcRuntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
     "github.com/juanfont/headscale"
     v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
@@ -99,7 +102,7 @@ type Headscale struct {
     // Things that generate changes
     extraRecordMan *dns.ExtraRecordsMan
     authProvider   AuthProvider
-    mapBatcher mapper.Batcher
+    mapBatcher *mapper.Batcher
     clientStreamsOpen sync.WaitGroup
 }
@@ -115,13 +118,14 @@ var (
 func NewHeadscale(cfg *types.Config) (*Headscale, error) {
     var err error
     if profilingEnabled {
         runtime.SetBlockProfileRate(1)
     }
     noisePrivateKey, err := readOrCreatePrivateKey(cfg.NoisePrivateKeyPath)
     if err != nil {
-        return nil, fmt.Errorf("failed to read or create Noise protocol private key: %w", err)
+        return nil, fmt.Errorf("reading or creating Noise protocol private key: %w", err)
     }
     s, err := state.NewState(cfg)
@@ -140,27 +144,30 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
     ephemeralGC := db.NewEphemeralGarbageCollector(func(ni types.NodeID) {
         node, ok := app.state.GetNodeByID(ni)
         if !ok {
-            log.Error().Uint64("node.id", ni.Uint64()).Msg("Ephemeral node deletion failed")
+            log.Error().Uint64("node.id", ni.Uint64()).Msg("ephemeral node deletion failed")
-            log.Debug().Caller().Uint64("node.id", ni.Uint64()).Msg("Ephemeral node deletion failed because node not found in NodeStore")
+            log.Debug().Caller().Uint64("node.id", ni.Uint64()).Msg("ephemeral node deletion failed because node not found in NodeStore")
             return
         }
         policyChanged, err := app.state.DeleteNode(node)
         if err != nil {
-            log.Error().Err(err).Uint64("node.id", ni.Uint64()).Str("node.name", node.Hostname()).Msg("Ephemeral node deletion failed")
+            log.Error().Err(err).EmbedObject(node).Msg("ephemeral node deletion failed")
             return
         }
         app.Change(policyChanged)
-        log.Debug().Caller().Uint64("node.id", ni.Uint64()).Str("node.name", node.Hostname()).Msg("Ephemeral node deleted because garbage collection timeout reached")
+        log.Debug().Caller().EmbedObject(node).Msg("ephemeral node deleted because garbage collection timeout reached")
     })
     app.ephemeralGC = ephemeralGC
     var authProvider AuthProvider
     authProvider = NewAuthProviderWeb(cfg.ServerURL)
     if cfg.OIDC.Issuer != "" {
         ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
         defer cancel()
         oidcProvider, err := NewAuthProviderOIDC(
             ctx,
             &app,
@@ -177,17 +184,18 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
             authProvider = oidcProvider
         }
     }
     app.authProvider = authProvider
     if app.cfg.TailcfgDNSConfig != nil && app.cfg.TailcfgDNSConfig.Proxied { // if MagicDNS
         // TODO(kradalby): revisit why this takes a list.
         var magicDNSDomains []dnsname.FQDN
         if cfg.PrefixV4 != nil {
             magicDNSDomains = append(
                 magicDNSDomains,
                 util.GenerateIPv4DNSRootDomain(*cfg.PrefixV4)...)
         }
         if cfg.PrefixV6 != nil {
             magicDNSDomains = append(
                 magicDNSDomains,
@@ -198,6 +206,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
         if app.cfg.TailcfgDNSConfig.Routes == nil {
             app.cfg.TailcfgDNSConfig.Routes = make(map[string][]*dnstype.Resolver)
         }
         for _, d := range magicDNSDomains {
             app.cfg.TailcfgDNSConfig.Routes[d.WithoutTrailingDot()] = nil
         }
@@ -206,7 +215,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
     if cfg.DERP.ServerEnabled {
         derpServerKey, err := readOrCreatePrivateKey(cfg.DERP.ServerPrivateKeyPath)
         if err != nil {
-            return nil, fmt.Errorf("failed to read or create DERP server private key: %w", err)
+            return nil, fmt.Errorf("reading or creating DERP server private key: %w", err)
         }
         if derpServerKey.Equal(*noisePrivateKey) {
@@ -232,6 +241,7 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
         if err != nil {
             return nil, err
         }
         app.DERPServer = embeddedDERPServer
     }
@@ -251,9 +261,11 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
     lastExpiryCheck := time.Unix(0, 0)
     derpTickerChan := make(<-chan time.Time)
     if h.cfg.DERP.AutoUpdate && h.cfg.DERP.UpdateFrequency != 0 {
         derpTicker := time.NewTicker(h.cfg.DERP.UpdateFrequency)
         defer derpTicker.Stop()
         derpTickerChan = derpTicker.C
     }
@@ -271,8 +283,10 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
             return
         case <-expireTicker.C:
-            var expiredNodeChanges []change.Change
-            var changed bool
+            var (
+                expiredNodeChanges []change.Change
+                changed            bool
+            )
             lastExpiryCheck, expiredNodeChanges, changed = h.state.ExpireExpiredNodes(lastExpiryCheck)
@@ -286,12 +300,14 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
         }
         case <-derpTickerChan:
-            log.Info().Msg("Fetching DERPMap updates")
+            log.Info().Msg("fetching DERPMap updates")
-            derpMap, err := backoff.Retry(ctx, func() (*tailcfg.DERPMap, error) {
+            derpMap, err := backoff.Retry(ctx, func() (*tailcfg.DERPMap, error) { //nolint:contextcheck
                 derpMap, err := derp.GetDERPMap(h.cfg.DERP)
                 if err != nil {
                     return nil, err
                 }
                 if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
                     region, _ := h.DERPServer.GenerateRegion()
                     derpMap.Regions[region.RegionID] = &region
@@ -303,6 +319,7 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
                 log.Error().Err(err).Msg("failed to build new DERPMap, retrying later")
                 continue
             }
             h.state.SetDERPMap(derpMap)
             h.Change(change.DERPMap())
@@ -311,6 +328,7 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
             if !ok {
                 continue
             }
             h.cfg.TailcfgDNSConfig.ExtraRecords = records
             h.Change(change.ExtraRecords())
@@ -339,7 +357,7 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
     if !ok {
         return ctx, status.Errorf(
             codes.InvalidArgument,
-            "Retrieving metadata is failed",
+            "retrieving metadata",
         )
     }
@@ -347,7 +365,7 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
if !ok { if !ok {
return ctx, status.Errorf( return ctx, status.Errorf(
codes.Unauthenticated, codes.Unauthenticated,
"Authorization token is not supplied", "authorization token not supplied",
) )
} }
@@ -362,7 +380,7 @@ func (h *Headscale) grpcAuthenticationInterceptor(ctx context.Context,
valid, err := h.state.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix)) valid, err := h.state.ValidateAPIKey(strings.TrimPrefix(token, AuthPrefix))
if err != nil { if err != nil {
return ctx, status.Error(codes.Internal, "failed to validate token") return ctx, status.Error(codes.Internal, "validating token")
} }
if !valid { if !valid {
@@ -390,7 +408,8 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
writeUnauthorized := func(statusCode int) { writeUnauthorized := func(statusCode int) {
writer.WriteHeader(statusCode) writer.WriteHeader(statusCode)
if _, err := writer.Write([]byte("Unauthorized")); err != nil {
if _, err := writer.Write([]byte("Unauthorized")); err != nil { //nolint:noinlineerr
log.Error().Err(err).Msg("writing HTTP response failed") log.Error().Err(err).Msg("writing HTTP response failed")
} }
} }
@@ -401,6 +420,7 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
Str("client_address", req.RemoteAddr). Str("client_address", req.RemoteAddr).
Msg(`missing "Bearer " prefix in "Authorization" header`) Msg(`missing "Bearer " prefix in "Authorization" header`)
writeUnauthorized(http.StatusUnauthorized) writeUnauthorized(http.StatusUnauthorized)
return return
} }
@@ -412,6 +432,7 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
Str("client_address", req.RemoteAddr). Str("client_address", req.RemoteAddr).
Msg("failed to validate token") Msg("failed to validate token")
writeUnauthorized(http.StatusUnauthorized) writeUnauthorized(http.StatusUnauthorized)
return return
} }
@@ -420,6 +441,7 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
Str("client_address", req.RemoteAddr). Str("client_address", req.RemoteAddr).
Msg("invalid token") Msg("invalid token")
writeUnauthorized(http.StatusUnauthorized) writeUnauthorized(http.StatusUnauthorized)
return return
} }
@@ -431,61 +453,74 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
// and will remove it if it is not. // and will remove it if it is not.
func (h *Headscale) ensureUnixSocketIsAbsent() error { func (h *Headscale) ensureUnixSocketIsAbsent() error {
// File does not exist, all fine // File does not exist, all fine
if _, err := os.Stat(h.cfg.UnixSocket); errors.Is(err, os.ErrNotExist) { if _, err := os.Stat(h.cfg.UnixSocket); errors.Is(err, os.ErrNotExist) { //nolint:noinlineerr
return nil return nil
} }
return os.Remove(h.cfg.UnixSocket) return os.Remove(h.cfg.UnixSocket)
} }
func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router { func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *chi.Mux {
router := mux.NewRouter() r := chi.NewRouter()
router.Use(prometheusMiddleware) r.Use(metrics.Collector(metrics.CollectorOpts{
Host: false,
Proto: true,
Skip: func(r *http.Request) bool {
return r.Method != http.MethodOptions
},
}))
r.Use(middleware.RequestID)
r.Use(middleware.RealIP)
r.Use(middleware.RequestLogger(&zerologRequestLogger{}))
r.Use(middleware.Recoverer)
router.HandleFunc(ts2021UpgradePath, h.NoiseUpgradeHandler). r.Post(ts2021UpgradePath, h.NoiseUpgradeHandler)
Methods(http.MethodPost, http.MethodGet)
router.HandleFunc("/robots.txt", h.RobotsHandler).Methods(http.MethodGet) r.Get("/robots.txt", h.RobotsHandler)
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet) r.Get("/health", h.HealthHandler)
router.HandleFunc("/version", h.VersionHandler).Methods(http.MethodGet) r.Get("/version", h.VersionHandler)
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet) r.Get("/key", h.KeyHandler)
router.HandleFunc("/register/{registration_id}", h.authProvider.RegisterHandler). r.Get("/register/{auth_id}", h.authProvider.RegisterHandler)
Methods(http.MethodGet) r.Get("/auth/{auth_id}", h.authProvider.AuthHandler)
if provider, ok := h.authProvider.(*AuthProviderOIDC); ok { if provider, ok := h.authProvider.(*AuthProviderOIDC); ok {
router.HandleFunc("/oidc/callback", provider.OIDCCallbackHandler).Methods(http.MethodGet) r.Get("/oidc/callback", provider.OIDCCallbackHandler)
r.Post("/register/confirm/{auth_id}", provider.RegisterConfirmHandler)
} }
router.HandleFunc("/apple", h.AppleConfigMessage).Methods(http.MethodGet)
router.HandleFunc("/apple/{platform}", h.ApplePlatformConfig). r.Get("/apple", h.AppleConfigMessage)
Methods(http.MethodGet) r.Get("/apple/{platform}", h.ApplePlatformConfig)
router.HandleFunc("/windows", h.WindowsConfigMessage).Methods(http.MethodGet) r.Get("/windows", h.WindowsConfigMessage)
// TODO(kristoffer): move swagger into a package // TODO(kristoffer): move swagger into a package
router.HandleFunc("/swagger", headscale.SwaggerUI).Methods(http.MethodGet) r.Get("/swagger", headscale.SwaggerUI)
router.HandleFunc("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1). r.Get("/swagger/v1/openapiv2.json", headscale.SwaggerAPIv1)
Methods(http.MethodGet)
router.HandleFunc("/verify", h.VerifyHandler).Methods(http.MethodPost) r.Post("/verify", h.VerifyHandler)
if h.cfg.DERP.ServerEnabled { if h.cfg.DERP.ServerEnabled {
router.HandleFunc("/derp", h.DERPServer.DERPHandler) r.HandleFunc("/derp", h.DERPServer.DERPHandler)
router.HandleFunc("/derp/probe", derpServer.DERPProbeHandler) r.HandleFunc("/derp/probe", derpServer.DERPProbeHandler)
router.HandleFunc("/derp/latency-check", derpServer.DERPProbeHandler) r.HandleFunc("/derp/latency-check", derpServer.DERPProbeHandler)
router.HandleFunc("/bootstrap-dns", derpServer.DERPBootstrapDNSHandler(h.state.DERPMap())) r.HandleFunc("/bootstrap-dns", derpServer.DERPBootstrapDNSHandler(h.state.DERPMap()))
} }
apiRouter := router.PathPrefix("/api").Subrouter() r.Route("/api", func(r chi.Router) {
apiRouter.Use(h.httpAuthenticationMiddleware) r.Use(h.httpAuthenticationMiddleware)
apiRouter.PathPrefix("/v1/").HandlerFunc(grpcMux.ServeHTTP) r.HandleFunc("/v1/*", grpcMux.ServeHTTP)
router.HandleFunc("/favicon.ico", FaviconHandler) })
router.PathPrefix("/").HandlerFunc(BlankHandler) r.Get("/favicon.ico", FaviconHandler)
r.Get("/", BlankHandler)
return router return r
} }
// Serve launches the HTTP and gRPC servers that serve Headscale and the API. // Serve launches the HTTP and gRPC servers that serve Headscale and the API.
//
//nolint:gocyclo // complex server startup function
func (h *Headscale) Serve() error { func (h *Headscale) Serve() error {
var err error var err error
capver.CanOldCodeBeCleanedUp() capver.CanOldCodeBeCleanedUp()
if profilingEnabled { if profilingEnabled {
@@ -506,12 +541,13 @@ func (h *Headscale) Serve() error {
} }
versionInfo := types.GetVersionInfo() versionInfo := types.GetVersionInfo()
log.Info().Str("version", versionInfo.Version).Str("commit", versionInfo.Commit).Msg("Starting Headscale") log.Info().Str("version", versionInfo.Version).Str("commit", versionInfo.Commit).Msg("starting headscale")
log.Info(). log.Info().
Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)). Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)).
Msg("Clients with a lower minimum version will be rejected") Msg("Clients with a lower minimum version will be rejected")
h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state) h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state)
h.mapBatcher.Start() h.mapBatcher.Start()
defer h.mapBatcher.Close() defer h.mapBatcher.Close()
@@ -526,7 +562,7 @@ func (h *Headscale) Serve() error {
derpMap, err := derp.GetDERPMap(h.cfg.DERP) derpMap, err := derp.GetDERPMap(h.cfg.DERP)
if err != nil { if err != nil {
return fmt.Errorf("failed to get DERPMap: %w", err) return fmt.Errorf("getting DERPMap: %w", err)
} }
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion { if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
@@ -545,9 +581,10 @@ func (h *Headscale) Serve() error {
// around between restarts, they will reconnect and the GC will // around between restarts, they will reconnect and the GC will
// be cancelled. // be cancelled.
go h.ephemeralGC.Start() go h.ephemeralGC.Start()
ephmNodes := h.state.ListEphemeralNodes() ephmNodes := h.state.ListEphemeralNodes()
for _, node := range ephmNodes.All() { for _, node := range ephmNodes.All() {
h.ephemeralGC.Schedule(node.ID(), h.cfg.EphemeralNodeInactivityTimeout) h.ephemeralGC.Schedule(node.ID(), h.cfg.Node.Ephemeral.InactivityTimeout)
} }
if h.cfg.DNSConfig.ExtraRecordsPath != "" { if h.cfg.DNSConfig.ExtraRecordsPath != "" {
@@ -555,7 +592,9 @@ func (h *Headscale) Serve() error {
if err != nil { if err != nil {
return fmt.Errorf("setting up extrarecord manager: %w", err) return fmt.Errorf("setting up extrarecord manager: %w", err)
} }
h.cfg.TailcfgDNSConfig.ExtraRecords = h.extraRecordMan.Records() h.cfg.TailcfgDNSConfig.ExtraRecords = h.extraRecordMan.Records()
go h.extraRecordMan.Run() go h.extraRecordMan.Run()
defer h.extraRecordMan.Close() defer h.extraRecordMan.Close()
} }
@@ -564,6 +603,7 @@ func (h *Headscale) Serve() error {
// records updates // records updates
scheduleCtx, scheduleCancel := context.WithCancel(context.Background()) scheduleCtx, scheduleCancel := context.WithCancel(context.Background())
defer scheduleCancel() defer scheduleCancel()
go h.scheduledTasks(scheduleCtx) go h.scheduledTasks(scheduleCtx)
if zl.GlobalLevel() == zl.TraceLevel { if zl.GlobalLevel() == zl.TraceLevel {
@@ -576,6 +616,7 @@ func (h *Headscale) Serve() error {
errorGroup := new(errgroup.Group) errorGroup := new(errgroup.Group)
ctx := context.Background() ctx := context.Background()
ctx, cancel := context.WithCancel(ctx) ctx, cancel := context.WithCancel(ctx)
defer cancel() defer cancel()
@@ -586,29 +627,30 @@ func (h *Headscale) Serve() error {
err = h.ensureUnixSocketIsAbsent() err = h.ensureUnixSocketIsAbsent()
if err != nil { if err != nil {
return fmt.Errorf("unable to remove old socket file: %w", err) return fmt.Errorf("removing old socket file: %w", err)
} }
socketDir := filepath.Dir(h.cfg.UnixSocket) socketDir := filepath.Dir(h.cfg.UnixSocket)
err = util.EnsureDir(socketDir) err = util.EnsureDir(socketDir)
if err != nil { if err != nil {
return fmt.Errorf("setting up unix socket: %w", err) return fmt.Errorf("setting up unix socket: %w", err)
} }
socketListener, err := net.Listen("unix", h.cfg.UnixSocket) socketListener, err := new(net.ListenConfig).Listen(context.Background(), "unix", h.cfg.UnixSocket)
if err != nil { if err != nil {
return fmt.Errorf("failed to set up gRPC socket: %w", err) return fmt.Errorf("setting up gRPC socket: %w", err)
} }
// Change socket permissions // Change socket permissions
if err := os.Chmod(h.cfg.UnixSocket, h.cfg.UnixSocketPermission); err != nil { if err := os.Chmod(h.cfg.UnixSocket, h.cfg.UnixSocketPermission); err != nil { //nolint:noinlineerr
return fmt.Errorf("failed change permission of gRPC socket: %w", err) return fmt.Errorf("changing gRPC socket permission: %w", err)
} }
grpcGatewayMux := grpcRuntime.NewServeMux() grpcGatewayMux := grpcRuntime.NewServeMux()
// Make the grpc-gateway connect to grpc over socket // Make the grpc-gateway connect to grpc over socket
grpcGatewayConn, err := grpc.Dial( grpcGatewayConn, err := grpc.Dial( //nolint:staticcheck // SA1019: deprecated but supported in 1.x
h.cfg.UnixSocket, h.cfg.UnixSocket,
[]grpc.DialOption{ []grpc.DialOption{
grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithTransportCredentials(insecure.NewCredentials()),
@@ -659,10 +701,13 @@ func (h *Headscale) Serve() error {
// https://github.com/soheilhy/cmux/issues/68 // https://github.com/soheilhy/cmux/issues/68
// https://github.com/soheilhy/cmux/issues/91 // https://github.com/soheilhy/cmux/issues/91
var grpcServer *grpc.Server var (
var grpcListener net.Listener grpcServer *grpc.Server
grpcListener net.Listener
)
if tlsConfig != nil || h.cfg.GRPCAllowInsecure { if tlsConfig != nil || h.cfg.GRPCAllowInsecure {
log.Info().Msgf("Enabling remote gRPC at %s", h.cfg.GRPCAddr) log.Info().Msgf("enabling remote gRPC at %s", h.cfg.GRPCAddr)
grpcOptions := []grpc.ServerOption{ grpcOptions := []grpc.ServerOption{
grpc.ChainUnaryInterceptor( grpc.ChainUnaryInterceptor(
@@ -683,11 +728,10 @@ func (h *Headscale) Serve() error {
grpcServer = grpc.NewServer(grpcOptions...) grpcServer = grpc.NewServer(grpcOptions...)
v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h)) v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h))
reflection.Register(grpcServer)
grpcListener, err = net.Listen("tcp", h.cfg.GRPCAddr) grpcListener, err = new(net.ListenConfig).Listen(context.Background(), "tcp", h.cfg.GRPCAddr)
if err != nil { if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err) return fmt.Errorf("binding to TCP address: %w", err)
} }
errorGroup.Go(func() error { return grpcServer.Serve(grpcListener) }) errorGroup.Go(func() error { return grpcServer.Serve(grpcListener) })
@@ -715,14 +759,16 @@ func (h *Headscale) Serve() error {
} }
var httpListener net.Listener var httpListener net.Listener
if tlsConfig != nil { if tlsConfig != nil {
httpServer.TLSConfig = tlsConfig httpServer.TLSConfig = tlsConfig
httpListener, err = tls.Listen("tcp", h.cfg.Addr, tlsConfig) httpListener, err = tls.Listen("tcp", h.cfg.Addr, tlsConfig)
} else { } else {
httpListener, err = net.Listen("tcp", h.cfg.Addr) httpListener, err = new(net.ListenConfig).Listen(context.Background(), "tcp", h.cfg.Addr)
} }
if err != nil { if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err) return fmt.Errorf("binding to TCP address: %w", err)
} }
errorGroup.Go(func() error { return httpServer.Serve(httpListener) }) errorGroup.Go(func() error { return httpServer.Serve(httpListener) })
@@ -738,7 +784,7 @@ func (h *Headscale) Serve() error {
if h.cfg.MetricsAddr != "" { if h.cfg.MetricsAddr != "" {
debugHTTPListener, err = (&net.ListenConfig{}).Listen(ctx, "tcp", h.cfg.MetricsAddr) debugHTTPListener, err = (&net.ListenConfig{}).Listen(ctx, "tcp", h.cfg.MetricsAddr)
if err != nil { if err != nil {
return fmt.Errorf("failed to bind to TCP address: %w", err) return fmt.Errorf("binding to TCP address: %w", err)
} }
debugHTTPServer = h.debugHTTPServer() debugHTTPServer = h.debugHTTPServer()
@@ -751,19 +797,24 @@ func (h *Headscale) Serve() error {
log.Info().Msg("metrics server disabled (metrics_listen_addr is empty)") log.Info().Msg("metrics server disabled (metrics_listen_addr is empty)")
} }
var tailsqlContext context.Context var tailsqlContext context.Context
if tailsqlEnabled { if tailsqlEnabled {
if h.cfg.Database.Type != types.DatabaseSqlite { if h.cfg.Database.Type != types.DatabaseSqlite {
//nolint:gocritic // exitAfterDefer: Fatal exits during initialization before servers start
log.Fatal(). log.Fatal().
Str("type", h.cfg.Database.Type). Str("type", h.cfg.Database.Type).
Msgf("tailsql only supports %q", types.DatabaseSqlite) Msgf("tailsql only supports %q", types.DatabaseSqlite)
} }
if tailsqlTSKey == "" { if tailsqlTSKey == "" {
//nolint:gocritic // exitAfterDefer: Fatal exits during initialization before servers start
log.Fatal().Msg("tailsql requires TS_AUTHKEY to be set") log.Fatal().Msg("tailsql requires TS_AUTHKEY to be set")
} }
tailsqlContext = context.Background() tailsqlContext = context.Background()
go runTailSQLService(ctx, util.TSLogfWrapper(), tailsqlStateDir, h.cfg.Database.Sqlite.Path)
go runTailSQLService(ctx, util.TSLogfWrapper(), tailsqlStateDir, h.cfg.Database.Sqlite.Path) //nolint:errcheck
} }
// Handle common process-killing signals so we can gracefully shut down: // Handle common process-killing signals so we can gracefully shut down:
@@ -774,6 +825,7 @@ func (h *Headscale) Serve() error {
syscall.SIGTERM, syscall.SIGTERM,
syscall.SIGQUIT, syscall.SIGQUIT,
syscall.SIGHUP) syscall.SIGHUP)
sigFunc := func(c chan os.Signal) { sigFunc := func(c chan os.Signal) {
// Wait for a SIGINT or SIGKILL: // Wait for a SIGINT or SIGKILL:
for { for {
@@ -798,6 +850,7 @@ func (h *Headscale) Serve() error {
default: default:
info := func(msg string) { log.Info().Msg(msg) } info := func(msg string) { log.Info().Msg(msg) }
log.Info(). log.Info().
Str("signal", sig.String()). Str("signal", sig.String()).
Msg("Received signal to stop, shutting down gracefully") Msg("Received signal to stop, shutting down gracefully")
@@ -854,6 +907,7 @@ func (h *Headscale) Serve() error {
if debugHTTPListener != nil { if debugHTTPListener != nil {
debugHTTPListener.Close() debugHTTPListener.Close()
} }
httpListener.Close() httpListener.Close()
grpcGatewayConn.Close() grpcGatewayConn.Close()
@@ -863,6 +917,7 @@ func (h *Headscale) Serve() error {
// Close state connections // Close state connections
info("closing state and database") info("closing state and database")
err = h.state.Close() err = h.state.Close()
if err != nil { if err != nil {
log.Error().Err(err).Msg("failed to close state") log.Error().Err(err).Msg("failed to close state")
@@ -875,6 +930,7 @@ func (h *Headscale) Serve() error {
} }
} }
} }
errorGroup.Go(func() error { errorGroup.Go(func() error {
sigFunc(sigc) sigFunc(sigc)
@@ -886,6 +942,7 @@ func (h *Headscale) Serve() error {
func (h *Headscale) getTLSSettings() (*tls.Config, error) { func (h *Headscale) getTLSSettings() (*tls.Config, error) {
var err error var err error
if h.cfg.TLS.LetsEncrypt.Hostname != "" { if h.cfg.TLS.LetsEncrypt.Hostname != "" {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") { if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn(). log.Warn().
@@ -918,7 +975,6 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
// Configuration via autocert with HTTP-01. This requires listening on // Configuration via autocert with HTTP-01. This requires listening on
// port 80 for the certificate validation in addition to the headscale // port 80 for the certificate validation in addition to the headscale
// service, which can be configured to run on any other port. // service, which can be configured to run on any other port.
server := &http.Server{ server := &http.Server{
Addr: h.cfg.TLS.LetsEncrypt.Listen, Addr: h.cfg.TLS.LetsEncrypt.Listen,
Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)), Handler: certManager.HTTPHandler(http.HandlerFunc(h.redirect)),
@@ -940,13 +996,13 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
} }
} else if h.cfg.TLS.CertPath == "" { } else if h.cfg.TLS.CertPath == "" {
if !strings.HasPrefix(h.cfg.ServerURL, "http://") { if !strings.HasPrefix(h.cfg.ServerURL, "http://") {
log.Warn().Msg("Listening without TLS but ServerURL does not start with http://") log.Warn().Msg("listening without TLS but ServerURL does not start with http://")
} }
return nil, err return nil, err
} else { } else {
if !strings.HasPrefix(h.cfg.ServerURL, "https://") { if !strings.HasPrefix(h.cfg.ServerURL, "https://") {
log.Warn().Msg("Listening with TLS but ServerURL does not start with https://") log.Warn().Msg("listening with TLS but ServerURL does not start with https://")
} }
tlsConfig := &tls.Config{ tlsConfig := &tls.Config{
@@ -963,6 +1019,7 @@ func (h *Headscale) getTLSSettings() (*tls.Config, error) {
func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) { func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
dir := filepath.Dir(path) dir := filepath.Dir(path)
err := util.EnsureDir(dir) err := util.EnsureDir(dir)
if err != nil { if err != nil {
return nil, fmt.Errorf("ensuring private key directory: %w", err) return nil, fmt.Errorf("ensuring private key directory: %w", err)
@@ -970,21 +1027,22 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
privateKey, err := os.ReadFile(path) privateKey, err := os.ReadFile(path)
if errors.Is(err, os.ErrNotExist) { if errors.Is(err, os.ErrNotExist) {
log.Info().Str("path", path).Msg("No private key file at path, creating...") log.Info().Str("path", path).Msg("no private key file at path, creating...")
machineKey := key.NewMachine() machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText() machineKeyStr, err := machineKey.MarshalText()
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"failed to convert private key to string for saving: %w", "converting private key to string for saving: %w",
err, err,
) )
} }
err = os.WriteFile(path, machineKeyStr, privateKeyFileMode) err = os.WriteFile(path, machineKeyStr, privateKeyFileMode)
if err != nil { if err != nil {
return nil, fmt.Errorf( return nil, fmt.Errorf(
"failed to save private key to disk at path %q: %w", "saving private key to disk at path %q: %w",
path, path,
err, err,
) )
@@ -992,14 +1050,14 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
return &machineKey, nil return &machineKey, nil
} else if err != nil { } else if err != nil {
return nil, fmt.Errorf("failed to read private key file: %w", err) return nil, fmt.Errorf("reading private key file: %w", err)
} }
trimmedPrivateKey := strings.TrimSpace(string(privateKey)) trimmedPrivateKey := strings.TrimSpace(string(privateKey))
var machineKey key.MachinePrivate var machineKey key.MachinePrivate
if err = machineKey.UnmarshalText([]byte(trimmedPrivateKey)); err != nil { if err = machineKey.UnmarshalText([]byte(trimmedPrivateKey)); err != nil { //nolint:noinlineerr
return nil, fmt.Errorf("failed to parse private key: %w", err) return nil, fmt.Errorf("parsing private key: %w", err)
} }
return &machineKey, nil return &machineKey, nil
@@ -1012,6 +1070,56 @@ func (h *Headscale) Change(cs ...change.Change) {
h.mapBatcher.AddWork(cs...) h.mapBatcher.AddWork(cs...)
} }
// HTTPHandler returns an http.Handler for the Headscale control server.
// The handler serves the Tailscale control protocol including the /key
// endpoint and /ts2021 Noise upgrade path.
func (h *Headscale) HTTPHandler() http.Handler {
return h.createRouter(grpcRuntime.NewServeMux())
}
// NoisePublicKey returns the server's Noise protocol public key.
func (h *Headscale) NoisePublicKey() key.MachinePublic {
return h.noisePrivateKey.Public()
}
// GetState returns the server's state manager for programmatic access
// to users, nodes, policies, and other server state.
func (h *Headscale) GetState() *state.State {
return h.state
}
// SetServerURLForTest updates the server URL in the configuration.
// This is needed for test servers where the URL is not known until
// the HTTP test server starts.
// It panics when called outside of tests.
func (h *Headscale) SetServerURLForTest(tb testing.TB, url string) {
tb.Helper()
h.cfg.ServerURL = url
}
// StartBatcherForTest initialises and starts the map response batcher.
// It registers a cleanup function on tb to stop the batcher.
// It panics when called outside of tests.
func (h *Headscale) StartBatcherForTest(tb testing.TB) {
tb.Helper()
h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state)
h.mapBatcher.Start()
tb.Cleanup(func() { h.mapBatcher.Close() })
}
// StartEphemeralGCForTest starts the ephemeral node garbage collector.
// It registers a cleanup function on tb to stop the collector.
// It panics when called outside of tests.
func (h *Headscale) StartEphemeralGCForTest(tb testing.TB) {
tb.Helper()
go h.ephemeralGC.Start()
tb.Cleanup(func() { h.ephemeralGC.Close() })
}
// Provide some middleware that can inspect the ACME/autocert https calls // Provide some middleware that can inspect the ACME/autocert https calls
// and log when things are failing. // and log when things are failing.
type acmeLogger struct { type acmeLogger struct {
@@ -1023,7 +1131,7 @@ type acmeLogger struct {
func (l *acmeLogger) RoundTrip(req *http.Request) (*http.Response, error) { func (l *acmeLogger) RoundTrip(req *http.Request) (*http.Response, error) {
resp, err := l.rt.RoundTrip(req) resp, err := l.rt.RoundTrip(req)
if err != nil { if err != nil {
log.Error().Err(err).Str("url", req.URL.String()).Msg("ACME request failed") log.Error().Err(err).Str("url", req.URL.String()).Msg("acme request failed")
return nil, err return nil, err
} }
@@ -1031,8 +1139,57 @@ func (l *acmeLogger) RoundTrip(req *http.Request) (*http.Response, error) {
defer resp.Body.Close() defer resp.Body.Close()
body, _ := io.ReadAll(resp.Body) body, _ := io.ReadAll(resp.Body)
log.Error().Int("status_code", resp.StatusCode).Str("url", req.URL.String()).Bytes("body", body).Msg("ACME request returned error") log.Error().Int("status_code", resp.StatusCode).Str("url", req.URL.String()).Bytes("body", body).Msg("acme request returned error")
} }
return resp, nil return resp, nil
} }
// zerologRequestLogger implements chi's middleware.LogFormatter
// to route HTTP request logs through zerolog.
type zerologRequestLogger struct{}
func (z *zerologRequestLogger) NewLogEntry(
r *http.Request,
) middleware.LogEntry {
return &zerologLogEntry{
method: r.Method,
path: r.URL.Path,
proto: r.Proto,
remote: r.RemoteAddr,
}
}
type zerologLogEntry struct {
method string
path string
proto string
remote string
}
func (e *zerologLogEntry) Write(
status, bytes int,
header http.Header,
elapsed time.Duration,
extra any,
) {
log.Info().
Str("method", e.method).
Str("path", e.path).
Str("proto", e.proto).
Str("remote", e.remote).
Int("status", status).
Int("bytes", bytes).
Dur("elapsed", elapsed).
Msg("http request")
}
func (e *zerologLogEntry) Panic(
v any,
stack []byte,
) {
log.Error().
Interface("panic", v).
Bytes("stack", stack).
Msg("http handler panic")
}

(binary image file changed: 22 KiB → 15 KiB)

@@ -1,7 +1,6 @@
package hscontrol package hscontrol
import ( import (
"cmp"
"context" "context"
"errors" "errors"
"fmt" "fmt"
@@ -16,12 +15,13 @@ import (
"gorm.io/gorm" "gorm.io/gorm"
"tailscale.com/tailcfg" "tailscale.com/tailcfg"
"tailscale.com/types/key" "tailscale.com/types/key"
"tailscale.com/types/ptr"
) )
type AuthProvider interface { type AuthProvider interface {
RegisterHandler(http.ResponseWriter, *http.Request) RegisterHandler(w http.ResponseWriter, r *http.Request)
AuthURL(types.RegistrationID) string AuthHandler(w http.ResponseWriter, r *http.Request)
RegisterURL(authID types.AuthID) string
AuthURL(authID types.AuthID) string
} }
func (h *Headscale) handleRegister( func (h *Headscale) handleRegister(
@@ -42,8 +42,7 @@ func (h *Headscale) handleRegister(
// This is a logout attempt (expiry in the past) // This is a logout attempt (expiry in the past)
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok { if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
log.Debug(). log.Debug().
Uint64("node.id", node.ID().Uint64()). EmbedObject(node).
Str("node.name", node.Hostname()).
Bool("is_ephemeral", node.IsEphemeral()). Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()). Bool("has_authkey", node.AuthKey().Valid()).
Msg("Found existing node for logout, calling handleLogout") Msg("Found existing node for logout, calling handleLogout")
@@ -52,6 +51,7 @@ func (h *Headscale) handleRegister(
if err != nil { if err != nil {
return nil, fmt.Errorf("handling logout: %w", err) return nil, fmt.Errorf("handling logout: %w", err)
} }
if resp != nil { if resp != nil {
return resp, nil return resp, nil
} }
@@ -70,6 +70,20 @@ func (h *Headscale) handleRegister(
// We do not look up nodes by [key.MachinePublic] as it might belong to multiple // We do not look up nodes by [key.MachinePublic] as it might belong to multiple
// nodes, separated by users and this path is handling expiring/logout paths. // nodes, separated by users and this path is handling expiring/logout paths.
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok { if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
// Refuse to act on a node looked up purely by NodeKey unless
// the Noise session's machine key matches the cached node.
// Without this check anyone holding a target's NodeKey could
// open a Noise session with a throwaway machine key and read
// the owner's User/Login back through nodeToRegisterResponse.
// handleLogout enforces the same check on its own path.
if node.MachineKey() != machineKey {
return nil, NewHTTPError(
http.StatusUnauthorized,
"node exists with a different machine key",
nil,
)
}
// When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero. // When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero.
// Return the current node state without modification. // Return the current node state without modification.
// See: https://github.com/juanfont/headscale/issues/2862 // See: https://github.com/juanfont/headscale/issues/2862
@@ -113,8 +127,7 @@ func (h *Headscale) handleRegister(
resp, err := h.handleRegisterWithAuthKey(req, machineKey) resp, err := h.handleRegisterWithAuthKey(req, machineKey)
if err != nil { if err != nil {
// Preserve HTTPError types so they can be handled properly by the HTTP layer // Preserve HTTPError types so they can be handled properly by the HTTP layer
var httpErr HTTPError if httpErr, ok := errors.AsType[HTTPError](err); ok {
if errors.As(err, &httpErr) {
return nil, httpErr return nil, httpErr
} }
@@ -133,7 +146,7 @@ func (h *Headscale) handleRegister(
} }
// handleLogout checks if the [tailcfg.RegisterRequest] is a // handleLogout checks if the [tailcfg.RegisterRequest] is a
// logout attempt from a node; if it is not, a nil response is returned. // logout attempt from a node; if it is not, a nil response is returned.
func (h *Headscale) handleLogout( func (h *Headscale) handleLogout(
node types.NodeView, node types.NodeView,
req tailcfg.RegisterRequest, req tailcfg.RegisterRequest,
@@ -155,11 +168,12 @@ func (h *Headscale) handleLogout(
// force the client to re-authenticate. // force the client to re-authenticate.
// TODO(kradalby): I wonder if this is a path we ever hit? // TODO(kradalby): I wonder if this is a path we ever hit?
if node.IsExpired() { if node.IsExpired() {
log.Trace().Str("node.name", node.Hostname()). log.Trace().
Uint64("node.id", node.ID().Uint64()). EmbedObject(node).
Interface("reg.req", req). Interface("reg.req", req).
Bool("unexpected", true). Bool("unexpected", true).
Msg("Node key expired, forcing re-authentication") Msg("Node key expired, forcing re-authentication")
return &tailcfg.RegisterResponse{ return &tailcfg.RegisterResponse{
NodeKeyExpired: true, NodeKeyExpired: true,
MachineAuthorized: false, MachineAuthorized: false,
@@ -182,8 +196,7 @@ func (h *Headscale) handleLogout(
// Zero expiry is handled in handleRegister() before calling this function. // Zero expiry is handled in handleRegister() before calling this function.
if req.Expiry.Before(time.Now()) { if req.Expiry.Before(time.Now()) {
log.Debug(). log.Debug().
Uint64("node.id", node.ID().Uint64()). EmbedObject(node).
Str("node.name", node.Hostname()).
Bool("is_ephemeral", node.IsEphemeral()). Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()). Bool("has_authkey", node.AuthKey().Valid()).
Time("req.expiry", req.Expiry). Time("req.expiry", req.Expiry).
@@ -191,8 +204,7 @@ func (h *Headscale) handleLogout(
if node.IsEphemeral() { if node.IsEphemeral() {
log.Info(). log.Info().
Uint64("node.id", node.ID().Uint64()). EmbedObject(node).
Str("node.name", node.Hostname()).
Msg("Deleting ephemeral node during logout") Msg("Deleting ephemeral node during logout")
c, err := h.state.DeleteNode(node) c, err := h.state.DeleteNode(node)
@@ -209,14 +221,15 @@ func (h *Headscale) handleLogout(
} }
log.Debug(). log.Debug().
Uint64("node.id", node.ID().Uint64()). EmbedObject(node).
Str("node.name", node.Hostname()).
Msg("Node is not ephemeral, setting expiry instead of deleting") Msg("Node is not ephemeral, setting expiry instead of deleting")
} }
// Update the internal state with the nodes new expiry, meaning it is // Update the internal state with the nodes new expiry, meaning it is
// logged out. // logged out.
updatedNode, c, err := h.state.SetNodeExpiry(node.ID(), req.Expiry) expiry := req.Expiry
updatedNode, c, err := h.state.SetNodeExpiry(node.ID(), &expiry)
if err != nil { if err != nil {
return nil, fmt.Errorf("setting node expiry: %w", err) return nil, fmt.Errorf("setting node expiry: %w", err)
} }
@@ -265,21 +278,24 @@ func (h *Headscale) waitForFollowup(
 		return nil, NewHTTPError(http.StatusUnauthorized, "invalid followup URL", err)
 	}
-	followupReg, err := types.RegistrationIDFromString(strings.ReplaceAll(fu.Path, "/register/", ""))
+	followupReg, err := types.AuthIDFromString(strings.ReplaceAll(fu.Path, "/register/", ""))
 	if err != nil {
 		return nil, NewHTTPError(http.StatusUnauthorized, "invalid registration ID", err)
 	}
-	if reg, ok := h.state.GetRegistrationCacheEntry(followupReg); ok {
+	if reg, ok := h.state.GetAuthCacheEntry(followupReg); ok {
 		select {
 		case <-ctx.Done():
 			return nil, NewHTTPError(http.StatusUnauthorized, "registration timed out", err)
-		case node := <-reg.Registered:
-			if node == nil {
-				// registration is expired in the cache, instruct the client to try a new registration
-				return h.reqToNewRegisterResponse(req, machineKey)
+		case verdict := <-reg.WaitForAuth():
+			if verdict.Accept() {
+				if !verdict.Node.Valid() {
+					// registration is expired in the cache, instruct the client to try a new registration
+					return h.reqToNewRegisterResponse(req, machineKey)
+				}
+				return nodeToRegisterResponse(verdict.Node), nil
 			}
-			return nodeToRegisterResponse(node.View()), nil
 		}
 	}
@@ -294,42 +310,51 @@ func (h *Headscale) reqToNewRegisterResponse(
 	req tailcfg.RegisterRequest,
 	machineKey key.MachinePublic,
 ) (*tailcfg.RegisterResponse, error) {
-	newRegID, err := types.NewRegistrationID()
+	newAuthID, err := types.NewAuthID()
 	if err != nil {
 		return nil, NewHTTPError(http.StatusInternalServerError, "failed to generate registration ID", err)
 	}
-	// Ensure we have a valid hostname
+	authRegReq := types.NewRegisterAuthRequest(
+		registrationDataFromRequest(req, machineKey),
+	)
+
+	log.Info().Msgf("new followup node registration using auth id: %s", newAuthID)
+	h.state.SetAuthCacheEntry(newAuthID, authRegReq)
+
+	return &tailcfg.RegisterResponse{
+		AuthURL: h.authProvider.RegisterURL(newAuthID),
+	}, nil
+}
+
+// registrationDataFromRequest builds the RegistrationData payload stored
+// in the auth cache for a pending registration. The original Hostinfo is
+// retained so that consumers (auth callback, observability) see the
+// fields the client originally announced; the bounded-LRU cap on the
+// cache is what bounds the unauthenticated cache-fill DoS surface.
+func registrationDataFromRequest(
+	req tailcfg.RegisterRequest,
+	machineKey key.MachinePublic,
+) *types.RegistrationData {
 	hostname := util.EnsureHostname(
-		req.Hostinfo,
+		req.Hostinfo.View(),
 		machineKey.String(),
 		req.NodeKey.String(),
 	)
-	// Ensure we have valid hostinfo
-	hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
-	hostinfo.Hostname = hostname
-
-	nodeToRegister := types.NewRegisterNode(
-		types.Node{
-			Hostname:   hostname,
-			MachineKey: machineKey,
-			NodeKey:    req.NodeKey,
-			Hostinfo:   hostinfo,
-			LastSeen:   ptr.To(time.Now()),
-		},
-	)
-	if !req.Expiry.IsZero() {
-		nodeToRegister.Node.Expiry = &req.Expiry
-	}
-	log.Info().Msgf("New followup node registration using key: %s", newRegID)
-	h.state.SetRegistrationCacheEntry(newRegID, nodeToRegister)
-
-	return &tailcfg.RegisterResponse{
-		AuthURL: h.authProvider.AuthURL(newRegID),
-	}, nil
+	regData := &types.RegistrationData{
+		MachineKey: machineKey,
+		NodeKey:    req.NodeKey,
+		Hostname:   hostname,
+		Hostinfo:   req.Hostinfo,
+	}
+	if !req.Expiry.IsZero() {
+		expiry := req.Expiry
+		regData.Expiry = &expiry
+	}
+
+	return regData
 }
 func (h *Headscale) handleRegisterWithAuthKey(
@@ -344,8 +369,8 @@ func (h *Headscale) handleRegisterWithAuthKey(
 		if errors.Is(err, gorm.ErrRecordNotFound) {
 			return nil, NewHTTPError(http.StatusUnauthorized, "invalid pre auth key", nil)
 		}
-		var perr types.PAKError
-		if errors.As(err, &perr) {
+		if perr, ok := errors.AsType[types.PAKError](err); ok {
 			return nil, NewHTTPError(http.StatusUnauthorized, perr.Error(), nil)
 		}
@@ -355,7 +380,7 @@ func (h *Headscale) handleRegisterWithAuthKey(
 	// If node is not valid, it means an ephemeral node was deleted during logout
 	if !node.Valid() {
 		h.Change(changed)
-		return nil, nil
+		return nil, nil //nolint:nilnil // intentional: no node to return when ephemeral deleted
 	}
 
 	// This is a bit of a back and forth, but we have a bit of a chicken and egg
@@ -379,13 +404,6 @@ func (h *Headscale) handleRegisterWithAuthKey(
 	// Send both changes. Empty changes are ignored by Change().
 	h.Change(changed, routesChange)
 
-	// TODO(kradalby): I think this is covered above, but we need to validate that.
-	// // If policy changed due to node registration, send a separate policy change
-	// if policyChanged {
-	// 	policyChange := change.PolicyChange()
-	// 	h.Change(policyChange)
-	// }
-
 	resp := &tailcfg.RegisterResponse{
 		MachineAuthorized: true,
 		NodeKeyExpired:    node.IsExpired(),
@@ -397,8 +415,7 @@ func (h *Headscale) handleRegisterWithAuthKey(
 		Caller().
 		Interface("reg.resp", resp).
 		Interface("reg.req", req).
-		Str("node.name", node.Hostname()).
-		Uint64("node.id", node.ID().Uint64()).
+		EmbedObject(node).
 		Msg("RegisterResponse")
 
 	return resp, nil
@@ -408,57 +425,32 @@ func (h *Headscale) handleRegisterInteractive(
 	req tailcfg.RegisterRequest,
 	machineKey key.MachinePublic,
 ) (*tailcfg.RegisterResponse, error) {
-	registrationId, err := types.NewRegistrationID()
+	authID, err := types.NewAuthID()
 	if err != nil {
 		return nil, fmt.Errorf("generating registration ID: %w", err)
 	}
 
-	// Ensure we have a valid hostname
-	hostname := util.EnsureHostname(
-		req.Hostinfo,
-		machineKey.String(),
-		req.NodeKey.String(),
-	)
-
-	// Ensure we have valid hostinfo
-	hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
-
 	if req.Hostinfo == nil {
 		log.Warn().
 			Str("machine.key", machineKey.ShortString()).
 			Str("node.key", req.NodeKey.ShortString()).
-			Str("generated.hostname", hostname).
 			Msg("Received registration request with nil hostinfo, generated default hostname")
 	} else if req.Hostinfo.Hostname == "" {
 		log.Warn().
 			Str("machine.key", machineKey.ShortString()).
 			Str("node.key", req.NodeKey.ShortString()).
-			Str("generated.hostname", hostname).
 			Msg("Received registration request with empty hostname, generated default")
 	}
-	hostinfo.Hostname = hostname
 
-	nodeToRegister := types.NewRegisterNode(
-		types.Node{
-			Hostname:   hostname,
-			MachineKey: machineKey,
-			NodeKey:    req.NodeKey,
-			Hostinfo:   hostinfo,
-			LastSeen:   ptr.To(time.Now()),
-		},
-	)
-	if !req.Expiry.IsZero() {
-		nodeToRegister.Node.Expiry = &req.Expiry
-	}
-	h.state.SetRegistrationCacheEntry(
-		registrationId,
-		nodeToRegister,
-	)
+	authRegReq := types.NewRegisterAuthRequest(
+		registrationDataFromRequest(req, machineKey),
+	)
+	h.state.SetAuthCacheEntry(authID, authRegReq)
 
-	log.Info().Msgf("Starting node registration using key: %s", registrationId)
+	log.Info().Msgf("starting node registration using auth id: %s", authID)
 
 	return &tailcfg.RegisterResponse{
-		AuthURL: h.authProvider.AuthURL(registrationId),
+		AuthURL: h.authProvider.RegisterURL(authID),
 	}, nil
 }

View File

@@ -4,6 +4,7 @@ import (
 	"testing"
 	"time"
 
+	"github.com/juanfont/headscale/hscontrol/mapper"
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
@@ -11,10 +12,53 @@ import (
 	"tailscale.com/types/key"
 )
// createTestAppWithNodeExpiry creates a test app with a specific node.expiry config.
func createTestAppWithNodeExpiry(t *testing.T, nodeExpiry time.Duration) *Headscale {
t.Helper()
tmpDir := t.TempDir()
cfg := types.Config{
ServerURL: "http://localhost:8080",
NoisePrivateKeyPath: tmpDir + "/noise_private.key",
Node: types.NodeConfig{
Expiry: nodeExpiry,
},
Database: types.DatabaseConfig{
Type: "sqlite3",
Sqlite: types.SqliteConfig{
Path: tmpDir + "/headscale_test.db",
},
},
OIDC: types.OIDCConfig{},
Policy: types.PolicyConfig{
Mode: types.PolicyModeDB,
},
Tuning: types.Tuning{
BatchChangeDelay: 100 * time.Millisecond,
BatcherWorkers: 1,
},
}
app, err := NewHeadscale(&cfg)
require.NoError(t, err)
app.mapBatcher = mapper.NewBatcherAndMapper(&cfg, app.state)
app.mapBatcher.Start()
t.Cleanup(func() {
if app.mapBatcher != nil {
app.mapBatcher.Close()
}
})
return app
}
 // TestTaggedPreAuthKeyCreatesTaggedNode tests that a PreAuthKey with tags creates
 // a tagged node with:
 //   - Tags from the PreAuthKey
-//   - UserID tracking who created the key (informational "created by")
+//   - Nil UserID (tagged nodes are owned by tags, not a user)
 //   - IsTagged() returns true.
 func TestTaggedPreAuthKeyCreatesTaggedNode(t *testing.T) {
 	app := createTestApp(t)
@@ -51,11 +95,10 @@ func TestTaggedPreAuthKeyCreatesTaggedNode(t *testing.T) {
 	node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
 	require.True(t, found)
 
-	// Critical assertions for tags-as-identity model
+	// Tagged nodes are owned by their tags, not a user.
 	assert.True(t, node.IsTagged(), "Node should be tagged")
 	assert.ElementsMatch(t, tags, node.Tags().AsSlice(), "Node should have tags from PreAuthKey")
-	assert.True(t, node.UserID().Valid(), "Node should have UserID tracking creator")
-	assert.Equal(t, user.ID, node.UserID().Get(), "UserID should track PreAuthKey creator")
+	assert.False(t, node.UserID().Valid(), "Tagged node should not have UserID")
 
 	// Verify node is identified correctly
 	assert.True(t, node.IsTagged(), "Tagged node is not user-owned")
@@ -129,9 +172,10 @@ func TestReAuthDoesNotReapplyTags(t *testing.T) {
 	assert.True(t, nodeAfterReauth.IsTagged(), "Node should still be tagged")
 	assert.ElementsMatch(t, initialTags, nodeAfterReauth.Tags().AsSlice(), "Tags should remain unchanged on re-auth")
 
-	// Verify only one node was created (no duplicates)
-	nodes := app.state.ListNodesByUser(types.UserID(user.ID))
-	assert.Equal(t, 1, nodes.Len(), "Should have exactly one node")
+	// Verify only one node was created (no duplicates).
+	// Tagged nodes are not indexed by user, so check the global list.
+	allNodes := app.state.ListNodes()
+	assert.Equal(t, 1, allNodes.Len(), "Should have exactly one node")
 }
 
 // NOTE: TestSetTagsOnUserOwnedNode functionality is covered by gRPC tests in grpcv1_test.go
@@ -294,13 +338,13 @@ func TestMultipleNodesWithSameReusableTaggedPreAuthKey(t *testing.T) {
 	assert.ElementsMatch(t, tags, node1.Tags().AsSlice(), "First node should have PreAuthKey tags")
 	assert.ElementsMatch(t, tags, node2.Tags().AsSlice(), "Second node should have PreAuthKey tags")
 
-	// Both nodes should track the same creator
-	assert.Equal(t, user.ID, node1.UserID().Get(), "First node should track creator")
-	assert.Equal(t, user.ID, node2.UserID().Get(), "Second node should track creator")
+	// Tagged nodes should not have UserID set.
+	assert.False(t, node1.UserID().Valid(), "First node should not have UserID")
+	assert.False(t, node2.UserID().Valid(), "Second node should not have UserID")
 
-	// Verify we have exactly 2 nodes
-	nodes := app.state.ListNodesByUser(types.UserID(user.ID))
-	assert.Equal(t, 2, nodes.Len(), "Should have exactly two nodes")
+	// Verify we have exactly 2 nodes.
+	allNodes := app.state.ListNodes()
+	assert.Equal(t, 2, allNodes.Len(), "Should have exactly two nodes")
 }
 // TestNonReusableTaggedPreAuthKey tests that a non-reusable PreAuthKey with tags
@@ -359,9 +403,9 @@ func TestNonReusableTaggedPreAuthKey(t *testing.T) {
 	_, err = app.handleRegisterWithAuthKey(regReq2, machineKey2.Public())
 	require.Error(t, err, "Should not be able to reuse non-reusable PreAuthKey")
 
-	// Verify only one node was created
-	nodes := app.state.ListNodesByUser(types.UserID(user.ID))
-	assert.Equal(t, 1, nodes.Len(), "Should have exactly one node")
+	// Verify only one node was created.
+	allNodes := app.state.ListNodes()
+	assert.Equal(t, 1, allNodes.Len(), "Should have exactly one node")
 }
 
 // TestExpiredTaggedPreAuthKey tests that an expired PreAuthKey with tags
@@ -625,6 +669,152 @@ func TestTaggedNodeReauthPreservesDisabledExpiry(t *testing.T) {
 		"Tagged node should have expiry PRESERVED as disabled after re-auth")
 }
// TestExpiryDuringPersonalToTaggedConversion tests that when a personal node
// is converted to tagged via reauth with RequestTags, the expiry is cleared to nil.
// BUG #3048: Previously expiry was NOT cleared because expiry handling ran
// BEFORE processReauthTags.
func TestExpiryDuringPersonalToTaggedConversion(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("expiry-test-user")
// Update policy to allow user to own tags
err := app.state.UpdatePolicyManagerUsersForTest()
require.NoError(t, err)
policy := `{
"tagOwners": {
"tag:server": ["expiry-test-user@"]
},
"acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
}`
_, err = app.state.SetPolicy([]byte(policy))
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Create user-owned node WITH expiry set
clientExpiry := time.Now().Add(24 * time.Hour)
registrationID1 := types.MustAuthID()
regEntry1 := types.NewRegisterAuthRequest(&types.RegistrationData{
MachineKey: machineKey.Public(),
NodeKey: nodeKey1.Public(),
Hostname: "personal-to-tagged",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "personal-to-tagged",
RequestTags: []string{}, // No tags - user-owned
},
Expiry: &clientExpiry,
})
app.state.SetAuthCacheEntry(registrationID1, regEntry1)
node, _, err := app.state.HandleNodeFromAuthPath(
registrationID1, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.False(t, node.IsTagged(), "Node should be user-owned initially")
require.True(t, node.Expiry().Valid(), "User-owned node should have expiry set")
// Step 2: Re-auth with tags (Personal → Tagged conversion)
nodeKey2 := key.NewNode()
registrationID2 := types.MustAuthID()
regEntry2 := types.NewRegisterAuthRequest(&types.RegistrationData{
MachineKey: machineKey.Public(),
NodeKey: nodeKey2.Public(),
Hostname: "personal-to-tagged",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "personal-to-tagged",
RequestTags: []string{"tag:server"}, // Adding tags
},
Expiry: &clientExpiry, // Client still sends expiry
})
app.state.SetAuthCacheEntry(registrationID2, regEntry2)
nodeAfter, _, err := app.state.HandleNodeFromAuthPath(
registrationID2, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.True(t, nodeAfter.IsTagged(), "Node should be tagged after conversion")
// CRITICAL ASSERTION: Tagged nodes should NOT have expiry
assert.False(t, nodeAfter.Expiry().Valid(),
"Tagged node should have expiry cleared to nil")
}
// TestExpiryDuringTaggedToPersonalConversion tests that when a tagged node
// is converted to personal via reauth with empty RequestTags, expiry is set
// from the client request.
// BUG #3048: Previously expiry was NOT set because expiry handling ran
// BEFORE processReauthTags (node was still tagged at check time).
func TestExpiryDuringTaggedToPersonalConversion(t *testing.T) {
app := createTestApp(t)
user := app.state.CreateUserForTest("expiry-test-user2")
// Update policy to allow user to own tags
err := app.state.UpdatePolicyManagerUsersForTest()
require.NoError(t, err)
policy := `{
"tagOwners": {
"tag:server": ["expiry-test-user2@"]
},
"acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
}`
_, err = app.state.SetPolicy([]byte(policy))
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey1 := key.NewNode()
// Step 1: Create tagged node (expiry should be nil)
registrationID1 := types.MustAuthID()
regEntry1 := types.NewRegisterAuthRequest(&types.RegistrationData{
MachineKey: machineKey.Public(),
NodeKey: nodeKey1.Public(),
Hostname: "tagged-to-personal",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-to-personal",
RequestTags: []string{"tag:server"}, // Tagged node
},
})
app.state.SetAuthCacheEntry(registrationID1, regEntry1)
node, _, err := app.state.HandleNodeFromAuthPath(
registrationID1, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.True(t, node.IsTagged(), "Node should be tagged initially")
require.False(t, node.Expiry().Valid(), "Tagged node should have nil expiry")
// Step 2: Re-auth with empty tags (Tagged → Personal conversion)
nodeKey2 := key.NewNode()
clientExpiry := time.Now().Add(48 * time.Hour)
registrationID2 := types.MustAuthID()
regEntry2 := types.NewRegisterAuthRequest(&types.RegistrationData{
MachineKey: machineKey.Public(),
NodeKey: nodeKey2.Public(),
Hostname: "tagged-to-personal",
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-to-personal",
RequestTags: []string{}, // Empty tags - convert to user-owned
},
Expiry: &clientExpiry, // Client requests expiry
})
app.state.SetAuthCacheEntry(registrationID2, regEntry2)
nodeAfter, _, err := app.state.HandleNodeFromAuthPath(
registrationID2, types.UserID(user.ID), nil, "webauth",
)
require.NoError(t, err)
require.False(t, nodeAfter.IsTagged(), "Node should be user-owned after conversion")
// CRITICAL ASSERTION: User-owned nodes should have expiry from client
assert.True(t, nodeAfter.Expiry().Valid(),
"User-owned node should have expiry set")
assert.WithinDuration(t, clientExpiry, nodeAfter.Expiry().Get(), 5*time.Second,
"Expiry should match client request")
}
 // TestReAuthWithDifferentMachineKey tests the edge case where a node attempts
 // to re-authenticate with the same NodeKey but a DIFFERENT MachineKey.
 // This scenario should be handled gracefully (currently creates a new node).
@@ -687,3 +877,245 @@ func TestReAuthWithDifferentMachineKey(t *testing.T) {
 	assert.True(t, node2.IsTagged())
 	assert.ElementsMatch(t, tags, node2.Tags().AsSlice())
 }
// TestUntaggedAuthKeyZeroExpiryGetsDefault tests that when node.expiry is configured
// and a client registers with an untagged auth key without requesting a specific expiry,
// the node gets the configured default expiry.
// This is the core fix for https://github.com/juanfont/headscale/issues/1711
func TestUntaggedAuthKeyZeroExpiryGetsDefault(t *testing.T) {
t.Parallel()
nodeExpiry := 180 * 24 * time.Hour // 180 days
app := createTestAppWithNodeExpiry(t, nodeExpiry)
user := app.state.CreateUserForTest("node-owner")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Client sends zero expiry (the default behaviour of tailscale up --authkey).
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "default-expiry-test",
},
Expiry: time.Time{}, // zero — no client-requested expiry
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.False(t, node.IsTagged())
assert.True(t, node.Expiry().Valid(), "node should have expiry set from config default")
assert.False(t, node.IsExpired(), "node should not be expired yet")
expectedExpiry := time.Now().Add(nodeExpiry)
assert.WithinDuration(t, expectedExpiry, node.Expiry().Get(), 10*time.Second,
"node expiry should be ~180 days from now")
}
// TestTaggedAuthKeyIgnoresNodeExpiry tests that tagged nodes still get nil
// expiry even when node.expiry is configured.
func TestTaggedAuthKeyIgnoresNodeExpiry(t *testing.T) {
t.Parallel()
nodeExpiry := 180 * 24 * time.Hour
app := createTestAppWithNodeExpiry(t, nodeExpiry)
user := app.state.CreateUserForTest("tag-creator")
tags := []string{"tag:server"}
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "tagged-no-expiry",
},
Expiry: time.Time{},
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, node.IsTagged())
assert.False(t, node.Expiry().Valid(),
"tagged node should have expiry disabled (nil) even with node.expiry configured")
}
// TestNodeExpiryZeroDisablesDefault tests that setting node.expiry to 0
// preserves the old behaviour where nodes registered without a client-requested
// expiry get no expiry (never expire).
func TestNodeExpiryZeroDisablesDefault(t *testing.T) {
t.Parallel()
// node.expiry = 0 means "no default expiry"
app := createTestAppWithNodeExpiry(t, 0)
user := app.state.CreateUserForTest("node-owner")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "no-default-expiry",
},
Expiry: time.Time{}, // zero
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.False(t, node.IsTagged())
assert.False(t, node.IsExpired(), "node should not be expired")
// With node.expiry=0 and zero client expiry, the node gets a zero expiry
// which IsExpired() treats as "never expires" — backwards compatible.
if node.Expiry().Valid() {
assert.True(t, node.Expiry().Get().IsZero(),
"with node.expiry=0 and zero client expiry, expiry should be zero time")
}
}
// TestClientNonZeroExpiryTakesPrecedence tests that when a client explicitly
// requests an expiry, that value is used instead of the configured default.
func TestClientNonZeroExpiryTakesPrecedence(t *testing.T) {
t.Parallel()
nodeExpiry := 180 * 24 * time.Hour // 180 days
app := createTestAppWithNodeExpiry(t, nodeExpiry)
user := app.state.CreateUserForTest("node-owner")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Client explicitly requests 24h expiry
clientExpiry := time.Now().Add(24 * time.Hour)
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "client-expiry-test",
},
Expiry: clientExpiry,
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, node.Expiry().Valid(), "node should have expiry set")
assert.WithinDuration(t, clientExpiry, node.Expiry().Get(), 5*time.Second,
"client-requested expiry should take precedence over node.expiry default")
}
// TestReregistrationAppliesDefaultExpiry tests that when a node re-registers
// with an untagged auth key and the client sends zero expiry, the configured
// default is applied.
func TestReregistrationAppliesDefaultExpiry(t *testing.T) {
t.Parallel()
nodeExpiry := 90 * 24 * time.Hour // 90 days
app := createTestAppWithNodeExpiry(t, nodeExpiry)
user := app.state.CreateUserForTest("node-owner")
pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
require.NoError(t, err)
machineKey := key.NewMachine()
nodeKey := key.NewNode()
// Initial registration with zero expiry
regReq := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reregister-test",
},
Expiry: time.Time{},
}
resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
require.NoError(t, err)
require.True(t, resp.MachineAuthorized)
node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
require.True(t, found)
assert.True(t, node.Expiry().Valid(), "initial registration should get default expiry")
firstExpiry := node.Expiry().Get()
// Re-register with a new node key but same machine key
nodeKey2 := key.NewNode()
regReq2 := tailcfg.RegisterRequest{
Auth: &tailcfg.RegisterResponseAuth{
AuthKey: pak.Key,
},
NodeKey: nodeKey2.Public(),
Hostinfo: &tailcfg.Hostinfo{
Hostname: "reregister-test",
},
Expiry: time.Time{}, // still zero
}
resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey.Public())
require.NoError(t, err)
require.True(t, resp2.MachineAuthorized)
node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public())
require.True(t, found)
assert.True(t, node2.Expiry().Valid(), "re-registration should also get default expiry")
// The expiry should be refreshed (new 90d from now), not the old one
expectedExpiry := time.Now().Add(nodeExpiry)
assert.WithinDuration(t, expectedExpiry, node2.Expiry().Get(), 10*time.Second,
"re-registration should refresh the default expiry")
assert.True(t, node2.Expiry().Get().After(firstExpiry),
"re-registration expiry should be later than initial registration expiry")
}

File diff suppressed because it is too large

View File

@@ -40,6 +40,8 @@ var tailscaleToCapVer = map[string]tailcfg.CapabilityVersion{
 	"v1.88": 125,
 	"v1.90": 130,
 	"v1.92": 131,
+	"v1.94": 131,
+	"v1.96": 133,
 }
 
 var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
@@ -74,6 +76,7 @@ var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
 	125: "v1.88",
 	130: "v1.90",
 	131: "v1.92",
+	133: "v1.96",
 }
 
 // SupportedMajorMinorVersions is the number of major.minor Tailscale versions supported.
@@ -81,4 +84,4 @@ const SupportedMajorMinorVersions = 10
 // MinSupportedCapabilityVersion represents the minimum capability version
 // supported by this Headscale instance (latest 10 minor versions)
-const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 106
+const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 109
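The bumped constant above is the floor for connecting clients: anything announcing an older capability version is rejected up front. A minimal sketch of that gate (the `supported` helper is hypothetical; only the `109` floor comes from the diff):

```go
package main

import "fmt"

// CapabilityVersion stands in for tailcfg.CapabilityVersion.
type CapabilityVersion int

// minSupported mirrors MinSupportedCapabilityVersion after this change.
const minSupported CapabilityVersion = 109

// supported reports whether a connecting client's capability version
// meets the server's minimum.
func supported(client CapabilityVersion) bool {
	return client >= minSupported
}

func main() {
	fmt.Println(supported(106)) // false: v1.74-era client, now too old
	fmt.Println(supported(133)) // true: v1.96 client
}
```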

View File

@@ -9,11 +9,9 @@ var tailscaleLatestMajorMinorTests = []struct {
 	stripV   bool
 	expected []string
 }{
-	{3, false, []string{"v1.88", "v1.90", "v1.92"}},
-	{2, true, []string{"1.90", "1.92"}},
+	{3, false, []string{"v1.92", "v1.94", "v1.96"}},
+	{2, true, []string{"1.94", "1.96"}},
 	{10, true, []string{
-		"1.74",
-		"1.76",
 		"1.78",
 		"1.80",
 		"1.82",
@@ -22,6 +20,8 @@ var tailscaleLatestMajorMinorTests = []struct {
 		"1.88",
 		"1.90",
 		"1.92",
+		"1.94",
+		"1.96",
 	}},
 	{0, false, nil},
 }
@@ -30,7 +30,7 @@ var capVerMinimumTailscaleVersionTests = []struct {
 	input    tailcfg.CapabilityVersion
 	expected string
 }{
-	{106, "v1.74"},
+	{109, "v1.78"},
 	{32, "v1.24"},
 	{41, "v1.30"},
 	{46, "v1.32"},

View File

@@ -77,8 +77,8 @@ func (hsdb *HSDatabase) CreateAPIKey(
 		Expiration: expiration,
 	}
 
-	if err := hsdb.DB.Save(&key).Error; err != nil {
-		return "", nil, fmt.Errorf("failed to save API key to database: %w", err)
+	if err := hsdb.DB.Save(&key).Error; err != nil { //nolint:noinlineerr
+		return "", nil, fmt.Errorf("saving API key to database: %w", err)
 	}
 
 	return keyStr, &key, nil
@@ -87,7 +89,9 @@ func (hsdb *HSDatabase) CreateAPIKey(
 // ListAPIKeys returns the list of ApiKeys for a user.
 func (hsdb *HSDatabase) ListAPIKeys() ([]types.APIKey, error) {
 	keys := []types.APIKey{}
-	if err := hsdb.DB.Find(&keys).Error; err != nil {
+
+	err := hsdb.DB.Find(&keys).Error
+	if err != nil {
 		return nil, err
 	}
@@ -126,7 +128,8 @@ func (hsdb *HSDatabase) DestroyAPIKey(key types.APIKey) error {
 // ExpireAPIKey marks a ApiKey as expired.
 func (hsdb *HSDatabase) ExpireAPIKey(key *types.APIKey) error {
-	if err := hsdb.DB.Model(&key).Update("Expiration", time.Now()).Error; err != nil {
+	err := hsdb.DB.Model(&key).Update("Expiration", time.Now()).Error
+	if err != nil {
 		return err
 	}


@@ -24,8 +24,6 @@ import (
 	"gorm.io/gorm"
 	"gorm.io/gorm/logger"
 	"gorm.io/gorm/schema"
-	"tailscale.com/net/tsaddr"
-	"zgo.at/zcache/v2"
 )

 //go:embed schema.sql
@@ -46,22 +44,25 @@ const (
 )

 type HSDatabase struct {
 	DB  *gorm.DB
 	cfg *types.Config
-	regCache *zcache.Cache[types.RegistrationID, types.RegisterNode]
 }

 // NewHeadscaleDatabase creates a new database connection and runs migrations.
 // It accepts the full configuration to allow migrations access to policy settings.
-func NewHeadscaleDatabase(
-	cfg *types.Config,
-	regCache *zcache.Cache[types.RegistrationID, types.RegisterNode],
-) (*HSDatabase, error) {
+//
+//nolint:gocyclo // complex database initialization with many migrations
+func NewHeadscaleDatabase(cfg *types.Config) (*HSDatabase, error) {
 	dbConn, err := openDB(cfg.Database)
 	if err != nil {
 		return nil, err
 	}

+	err = checkVersionUpgradePath(dbConn)
+	if err != nil {
+		return nil, fmt.Errorf("version check: %w", err)
+	}
+
 	migrations := gormigrate.New(
 		dbConn,
 		gormigrate.DefaultOptions,
@@ -76,7 +77,7 @@ func NewHeadscaleDatabase(
 			ID: "202501221827",
 			Migrate: func(tx *gorm.DB) error {
 				// Remove any invalid routes associated with a node that does not exist.
-				if tx.Migrator().HasTable(&types.Route{}) && tx.Migrator().HasTable(&types.Node{}) {
+				if tx.Migrator().HasTable(&types.Route{}) && tx.Migrator().HasTable(&types.Node{}) { //nolint:staticcheck // SA1019: Route kept for migrations
 					err := tx.Exec("delete from routes where node_id not in (select id from nodes)").Error
 					if err != nil {
 						return err
@@ -84,14 +85,14 @@ func NewHeadscaleDatabase(
 				}

 				// Remove any invalid routes without a node_id.
-				if tx.Migrator().HasTable(&types.Route{}) {
+				if tx.Migrator().HasTable(&types.Route{}) { //nolint:staticcheck // SA1019: Route kept for migrations
 					err := tx.Exec("delete from routes where node_id is null").Error
 					if err != nil {
 						return err
 					}
 				}

-				err := tx.AutoMigrate(&types.Route{})
+				err := tx.AutoMigrate(&types.Route{}) //nolint:staticcheck // SA1019: Route kept for migrations
 				if err != nil {
 					return fmt.Errorf("automigrating types.Route: %w", err)
 				}
@@ -109,6 +110,7 @@ func NewHeadscaleDatabase(
 				if err != nil {
 					return fmt.Errorf("automigrating types.PreAuthKey: %w", err)
 				}
+
 				err = tx.AutoMigrate(&types.Node{})
 				if err != nil {
 					return fmt.Errorf("automigrating types.Node: %w", err)
@@ -155,7 +157,8 @@ AND auth_key_id NOT IN (
 				nodeRoutes := map[uint64][]netip.Prefix{}

-				var routes []types.Route
+				var routes []types.Route //nolint:staticcheck // SA1019: Route kept for migrations
+
 				err = tx.Find(&routes).Error
 				if err != nil {
 					return fmt.Errorf("fetching routes: %w", err)
@@ -168,10 +171,10 @@ AND auth_key_id NOT IN (
 				}

 				for nodeID, routes := range nodeRoutes {
-					tsaddr.SortPrefixes(routes)
+					slices.SortFunc(routes, netip.Prefix.Compare)
 					routes = slices.Compact(routes)

-					data, err := json.Marshal(routes)
+					data, _ := json.Marshal(routes)

 					err = tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("approved_routes", data).Error
 					if err != nil {
@@ -180,7 +183,7 @@ AND auth_key_id NOT IN (
 				}

 				// Drop the old table.
-				_ = tx.Migrator().DropTable(&types.Route{})
+				_ = tx.Migrator().DropTable(&types.Route{}) //nolint:staticcheck // SA1019: Route kept for migrations

 				return nil
 			},
@@ -245,21 +248,24 @@ AND auth_key_id NOT IN (
 			Migrate: func(tx *gorm.DB) error {
 				// Only run on SQLite
 				if cfg.Database.Type != types.DatabaseSqlite {
-					log.Info().Msg("Skipping schema migration on non-SQLite database")
+					log.Info().Msg("skipping schema migration on non-SQLite database")
 					return nil
 				}

-				log.Info().Msg("Starting schema recreation with table renaming")
+				log.Info().Msg("starting schema recreation with table renaming")

 				// Rename existing tables to _old versions
 				tablesToRename := []string{"users", "pre_auth_keys", "api_keys", "nodes", "policies"}

 				// Check if routes table exists and drop it (should have been migrated already)
 				var routesExists bool
+
 				err := tx.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='routes'").Row().Scan(&routesExists)
 				if err == nil && routesExists {
-					log.Info().Msg("Dropping leftover routes table")
-					if err := tx.Exec("DROP TABLE routes").Error; err != nil {
+					log.Info().Msg("dropping leftover routes table")
+
+					err := tx.Exec("DROP TABLE routes").Error
+					if err != nil {
 						return fmt.Errorf("dropping routes table: %w", err)
 					}
 				}
@@ -281,6 +287,7 @@ AND auth_key_id NOT IN (
 				for _, table := range tablesToRename {
 					// Check if table exists before renaming
 					var exists bool
+
 					err := tx.Raw("SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name=?", table).Row().Scan(&exists)
 					if err != nil {
 						return fmt.Errorf("checking if table %s exists: %w", table, err)
@@ -291,7 +298,8 @@ AND auth_key_id NOT IN (
 					_ = tx.Exec("DROP TABLE IF EXISTS " + table + "_old").Error

 					// Rename current table to _old
-					if err := tx.Exec("ALTER TABLE " + table + " RENAME TO " + table + "_old").Error; err != nil {
+					err := tx.Exec("ALTER TABLE " + table + " RENAME TO " + table + "_old").Error
+					if err != nil {
 						return fmt.Errorf("renaming table %s to %s_old: %w", table, table, err)
 					}
 				}
@@ -365,7 +373,8 @@ AND auth_key_id NOT IN (
 				}

 				for _, createSQL := range tableCreationSQL {
-					if err := tx.Exec(createSQL).Error; err != nil {
+					err := tx.Exec(createSQL).Error
+					if err != nil {
 						return fmt.Errorf("creating new table: %w", err)
 					}
 				}
@@ -394,7 +403,8 @@ AND auth_key_id NOT IN (
 				}

 				for _, copySQL := range dataCopySQL {
-					if err := tx.Exec(copySQL).Error; err != nil {
+					err := tx.Exec(copySQL).Error
+					if err != nil {
 						return fmt.Errorf("copying data: %w", err)
 					}
 				}
@@ -417,19 +427,21 @@ AND auth_key_id NOT IN (
 				}

 				for _, indexSQL := range indexes {
-					if err := tx.Exec(indexSQL).Error; err != nil {
+					err := tx.Exec(indexSQL).Error
+					if err != nil {
 						return fmt.Errorf("creating index: %w", err)
 					}
 				}

 				// Drop old tables only after everything succeeds
 				for _, table := range tablesToRename {
-					if err := tx.Exec("DROP TABLE IF EXISTS " + table + "_old").Error; err != nil {
-						log.Warn().Str("table", table+"_old").Err(err).Msg("Failed to drop old table, but migration succeeded")
+					err := tx.Exec("DROP TABLE IF EXISTS " + table + "_old").Error
+					if err != nil {
+						log.Warn().Str("table", table+"_old").Err(err).Msg("failed to drop old table, but migration succeeded")
 					}
 				}

-				log.Info().Msg("Schema recreation completed successfully")
+				log.Info().Msg("schema recreation completed successfully")

 				return nil
 			},
@@ -595,12 +607,12 @@ AND auth_key_id NOT IN (
 				// 1. Load policy from file or database based on configuration
 				policyData, err := PolicyBytes(tx, cfg)
 				if err != nil {
-					log.Warn().Err(err).Msg("Failed to load policy, skipping RequestTags migration (tags will be validated on node reconnect)")
+					log.Warn().Err(err).Msg("failed to load policy, skipping RequestTags migration (tags will be validated on node reconnect)")
 					return nil
 				}

 				if len(policyData) == 0 {
-					log.Info().Msg("No policy found, skipping RequestTags migration (tags will be validated on node reconnect)")
+					log.Info().Msg("no policy found, skipping RequestTags migration (tags will be validated on node reconnect)")
 					return nil
 				}
@@ -618,7 +630,7 @@ AND auth_key_id NOT IN (
 				// 3. Create PolicyManager (handles HuJSON parsing, groups, nested tags, etc.)
 				polMan, err := policy.NewPolicyManager(policyData, users, nodes.ViewSlice())
 				if err != nil {
-					log.Warn().Err(err).Msg("Failed to parse policy, skipping RequestTags migration (tags will be validated on node reconnect)")
+					log.Warn().Err(err).Msg("failed to parse policy, skipping RequestTags migration (tags will be validated on node reconnect)")
 					return nil
 				}
@@ -652,8 +664,7 @@ AND auth_key_id NOT IN (
 					if len(validatedTags) == 0 {
 						if len(rejectedTags) > 0 {
 							log.Debug().
-								Uint64("node.id", uint64(node.ID)).
-								Str("node.name", node.Hostname).
+								EmbedObject(node).
 								Strs("rejected_tags", rejectedTags).
 								Msg("RequestTags rejected during migration (not authorized)")
 						}
@@ -661,7 +672,7 @@ AND auth_key_id NOT IN (
 						continue
 					}

-					mergedTags := append(existingTags, validatedTags...)
+					mergedTags := append(slices.Clone(existingTags), validatedTags...)
 					slices.Sort(mergedTags)
 					mergedTags = slices.Compact(mergedTags)
@@ -676,8 +687,7 @@ AND auth_key_id NOT IN (
 					}

 					log.Info().
-						Uint64("node.id", uint64(node.ID)).
-						Str("node.name", node.Hostname).
+						EmbedObject(node).
 						Strs("validated_tags", validatedTags).
 						Strs("rejected_tags", rejectedTags).
 						Strs("existing_tags", existingTags).
@@ -689,6 +699,29 @@ AND auth_key_id NOT IN (
 			},
 			Rollback: func(db *gorm.DB) error { return nil },
 		},
+		{
+			// Clear user_id on tagged nodes.
+			// Tagged nodes are owned by their tags, not a user.
+			// Previously user_id was kept as "created by" tracking,
+			// but this prevents deleting users whose nodes have been
+			// tagged, and the ON DELETE CASCADE FK would destroy the
+			// tagged nodes if the user were deleted.
+			// Fixes: https://github.com/juanfont/headscale/issues/3077
+			ID: "202602201200-clear-tagged-node-user-id",
+			Migrate: func(tx *gorm.DB) error {
+				err := tx.Exec(`
+					UPDATE nodes
+					SET user_id = NULL
+					WHERE tags IS NOT NULL AND tags != '[]' AND tags != '';
+				`).Error
+				if err != nil {
+					return fmt.Errorf("clearing user_id on tagged nodes: %w", err)
+				}
+
+				return nil
+			},
+			Rollback: func(db *gorm.DB) error { return nil },
+		},
 	},
 )
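The WHERE clause in the new migration decides which rows count as "tagged". As a minimal sketch of that predicate in Go (hypothetical `isTaggedRow` helper for illustration, assuming tags are stored as a serialized JSON array and modelling SQL NULL as a nil pointer):

```go
package main

import "fmt"

// isTaggedRow mirrors the migration's WHERE clause: a node row is
// tagged when its tags column is non-NULL and is neither the empty
// string nor an empty JSON array. Only such rows get user_id cleared.
func isTaggedRow(tags *string) bool {
	if tags == nil {
		return false // tags IS NULL
	}
	return *tags != "" && *tags != "[]"
}

func main() {
	empty := ""
	emptyArr := "[]"
	tagged := `["tag:server"]`

	fmt.Println(isTaggedRow(nil))       // false
	fmt.Println(isTaggedRow(&empty))    // false
	fmt.Println(isTaggedRow(&emptyArr)) // false
	fmt.Println(isTaggedRow(&tagged))   // true
}
```

Untagged nodes keep their user_id, so ordinary user-owned devices are unaffected by the migration.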
@@ -750,6 +783,20 @@ AND auth_key_id NOT IN (
 		return nil, fmt.Errorf("migration failed: %w", err)
 	}

+	// Store the current version in the database after migrations succeed.
+	// Dev builds skip this to preserve the stored version for the next
+	// real versioned binary.
+	currentVersion := types.GetVersionInfo().Version
+	if !isDev(currentVersion) {
+		err = setDatabaseVersion(dbConn, currentVersion)
+		if err != nil {
+			return nil, fmt.Errorf(
+				"storing database version: %w",
+				err,
+			)
+		}
+	}
+
 	// Validate that the schema ends up in the expected state.
 	// This is currently only done on sqlite as squibble does not
 	// support Postgres and we use our sqlite schema as our source of
@@ -762,6 +809,7 @@ AND auth_key_id NOT IN (
 		// or else it blocks...
 		sqlConn.SetMaxIdleConns(maxIdleConns)
 		sqlConn.SetMaxOpenConns(maxOpenConns)
+
 		defer sqlConn.SetMaxIdleConns(1)
 		defer sqlConn.SetMaxOpenConns(1)
@@ -779,15 +827,14 @@ AND auth_key_id NOT IN (
 			},
 		}

-		if err := squibble.Validate(ctx, sqlConn, dbSchema, &opts); err != nil {
+		if err := squibble.Validate(ctx, sqlConn, dbSchema, &opts); err != nil { //nolint:noinlineerr
 			return nil, fmt.Errorf("validating schema: %w", err)
 		}
 	}

 	db := HSDatabase{
 		DB:  dbConn,
 		cfg: cfg,
-		regCache: regCache,
 	}

 	return &db, err
@@ -805,6 +852,7 @@ func openDB(cfg types.DatabaseConfig) (*gorm.DB, error) {
 	switch cfg.Type {
 	case types.DatabaseSqlite:
 		dir := filepath.Dir(cfg.Sqlite.Path)
+
 		err := util.EnsureDir(dir)
 		if err != nil {
 			return nil, fmt.Errorf("creating directory for sqlite: %w", err)
@@ -858,7 +906,7 @@ func openDB(cfg types.DatabaseConfig) (*gorm.DB, error) {
 			Str("path", dbString).
 			Msg("Opening database")

-		if sslEnabled, err := strconv.ParseBool(cfg.Postgres.Ssl); err == nil {
+		if sslEnabled, err := strconv.ParseBool(cfg.Postgres.Ssl); err == nil { //nolint:noinlineerr
 			if !sslEnabled {
 				dbString += " sslmode=disable"
 			}
@@ -913,7 +961,7 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
 		// Get the current foreign key status
 		var fkOriginallyEnabled int
-		if err := dbConn.Raw("PRAGMA foreign_keys").Scan(&fkOriginallyEnabled).Error; err != nil {
+		if err := dbConn.Raw("PRAGMA foreign_keys").Scan(&fkOriginallyEnabled).Error; err != nil { //nolint:noinlineerr
 			return fmt.Errorf("checking foreign key status: %w", err)
 		}
@@ -937,33 +985,36 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
 		}

 		for _, migrationID := range migrationIDs {
-			log.Trace().Caller().Str("migration_id", migrationID).Msg("Running migration")
+			log.Trace().Caller().Str("migration_id", migrationID).Msg("running migration")

 			needsFKDisabled := migrationsRequiringFKDisabled[migrationID]
 			if needsFKDisabled {
 				// Disable foreign keys for this migration
-				if err := dbConn.Exec("PRAGMA foreign_keys = OFF").Error; err != nil {
+				err := dbConn.Exec("PRAGMA foreign_keys = OFF").Error
+				if err != nil {
 					return fmt.Errorf("disabling foreign keys for migration %s: %w", migrationID, err)
 				}
 			} else {
 				// Ensure foreign keys are enabled for this migration
-				if err := dbConn.Exec("PRAGMA foreign_keys = ON").Error; err != nil {
+				err := dbConn.Exec("PRAGMA foreign_keys = ON").Error
+				if err != nil {
 					return fmt.Errorf("enabling foreign keys for migration %s: %w", migrationID, err)
 				}
 			}

 			// Run up to this specific migration (will only run the next pending migration)
-			if err := migrations.MigrateTo(migrationID); err != nil {
+			err := migrations.MigrateTo(migrationID)
+			if err != nil {
 				return fmt.Errorf("running migration %s: %w", migrationID, err)
 			}
 		}

-		if err := dbConn.Exec("PRAGMA foreign_keys = ON").Error; err != nil {
+		if err := dbConn.Exec("PRAGMA foreign_keys = ON").Error; err != nil { //nolint:noinlineerr
 			return fmt.Errorf("restoring foreign keys: %w", err)
 		}

 		// Run the rest of the migrations
-		if err := migrations.Migrate(); err != nil {
+		if err := migrations.Migrate(); err != nil { //nolint:noinlineerr
 			return err
 		}
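The hunk above toggles SQLite foreign-key enforcement per migration: OFF for migrations in the exception set, ON otherwise, then restored to ON before the remaining migrations run. As a minimal sketch of that sequencing (hypothetical `pragmaPlan` helper; migration IDs here are taken from this diff but the function is for illustration only):

```go
package main

import "fmt"

// pragmaPlan returns the ordered PRAGMA/migrate steps the loop above
// would issue: each migration is preceded by the foreign_keys state it
// needs, and enforcement is restored to ON at the end.
func pragmaPlan(ids []string, needsFKOff map[string]bool) []string {
	var plan []string
	for _, id := range ids {
		if needsFKOff[id] {
			plan = append(plan, "PRAGMA foreign_keys = OFF")
		} else {
			plan = append(plan, "PRAGMA foreign_keys = ON")
		}
		plan = append(plan, "migrate "+id)
	}
	// Restore the default before running any remaining migrations.
	plan = append(plan, "PRAGMA foreign_keys = ON")
	return plan
}

func main() {
	plan := pragmaPlan(
		[]string{"202501221827", "202602201200-clear-tagged-node-user-id"},
		map[string]bool{"202501221827": true},
	)
	for _, step := range plan {
		fmt.Println(step)
	}
}
```

PostgreSQL skips all of this (see the `else` branch further down) because its migrations have no foreign-key ordering issues.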
@@ -981,16 +1032,22 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
 		if err != nil {
 			return err
 		}
-		defer rows.Close()

 		for rows.Next() {
 			var violation constraintViolation
-			if err := rows.Scan(&violation.Table, &violation.RowID, &violation.Parent, &violation.ConstraintIndex); err != nil {
+
+			err := rows.Scan(&violation.Table, &violation.RowID, &violation.Parent, &violation.ConstraintIndex)
+			if err != nil {
 				return err
 			}

 			violatedConstraints = append(violatedConstraints, violation)
 		}

+		_ = rows.Close()
+
+		if err := rows.Err(); err != nil { //nolint:noinlineerr
+			return err
+		}
+
 		if len(violatedConstraints) > 0 {
 			for _, violation := range violatedConstraints {
@@ -1005,7 +1062,8 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
 			}
 		} else {
 			// PostgreSQL can run all migrations in one block - no foreign key issues
-			if err := migrations.Migrate(); err != nil {
+			err := migrations.Migrate()
+			if err != nil {
 				return err
 			}
 		}
@@ -1016,6 +1074,7 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
 func (hsdb *HSDatabase) PingDB(ctx context.Context) error {
 	ctx, cancel := context.WithTimeout(ctx, time.Second)
 	defer cancel()
+
 	sqlDB, err := hsdb.DB.DB()
 	if err != nil {
 		return err
@@ -1031,7 +1090,7 @@ func (hsdb *HSDatabase) Close() error {
 	}

 	if hsdb.cfg.Database.Type == types.DatabaseSqlite && hsdb.cfg.Database.Sqlite.WriteAheadLog {
-		db.Exec("VACUUM")
+		db.Exec("VACUUM") //nolint:errcheck,noctx
 	}

 	return db.Close()
@@ -1040,12 +1099,14 @@ func (hsdb *HSDatabase) Close() error {
 func (hsdb *HSDatabase) Read(fn func(rx *gorm.DB) error) error {
 	rx := hsdb.DB.Begin()
 	defer rx.Rollback()
+
 	return fn(rx)
 }

 func Read[T any](db *gorm.DB, fn func(rx *gorm.DB) (T, error)) (T, error) {
 	rx := db.Begin()
 	defer rx.Rollback()
+
 	ret, err := fn(rx)
 	if err != nil {
 		var no T
@@ -1058,7 +1119,9 @@ func Read[T any](db *gorm.DB, fn func(rx *gorm.DB) (T, error)) (T, error) {
 func (hsdb *HSDatabase) Write(fn func(tx *gorm.DB) error) error {
 	tx := hsdb.DB.Begin()
 	defer tx.Rollback()
-	if err := fn(tx); err != nil {
+
+	err := fn(tx)
+	if err != nil {
 		return err
 	}
@@ -1068,6 +1131,7 @@ func (hsdb *HSDatabase) Write(fn func(tx *gorm.DB) error) error {
 func Write[T any](db *gorm.DB, fn func(tx *gorm.DB) (T, error)) (T, error) {
 	tx := db.Begin()
 	defer tx.Rollback()
+
 	ret, err := fn(tx)
 	if err != nil {
 		var no T


@@ -1,19 +1,18 @@
 package db

 import (
+	"context"
 	"database/sql"
 	"os"
 	"os/exec"
 	"path/filepath"
 	"strings"
 	"testing"
-	"time"

 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"gorm.io/gorm"
-	"zgo.at/zcache/v2"
 )

 // TestSQLiteMigrationAndDataValidation tests specific SQLite migration scenarios
@@ -44,6 +43,7 @@ func TestSQLiteMigrationAndDataValidation(t *testing.T) {
 			// Verify api_keys data preservation
 			var apiKeyCount int
+
 			err = hsdb.DB.Raw("SELECT COUNT(*) FROM api_keys").Scan(&apiKeyCount).Error
 			require.NoError(t, err)
 			assert.Equal(t, 2, apiKeyCount, "should preserve all 2 api_keys from original schema")
@@ -160,10 +160,6 @@ func TestSQLiteMigrationAndDataValidation(t *testing.T) {
 	}
 }

-func emptyCache() *zcache.Cache[types.RegistrationID, types.RegisterNode] {
-	return zcache.New[types.RegistrationID, types.RegisterNode](time.Minute, time.Hour)
-}
-
 func createSQLiteFromSQLFile(sqlFilePath, dbPath string) error {
 	db, err := sql.Open("sqlite", dbPath)
 	if err != nil {
@@ -176,7 +172,7 @@ func createSQLiteFromSQLFile(sqlFilePath, dbPath string) error {
 		return err
 	}

-	_, err = db.Exec(string(schemaContent))
+	_, err = db.ExecContext(context.Background(), string(schemaContent))

 	return err
 }
@@ -186,6 +182,7 @@ func createSQLiteFromSQLFile(sqlFilePath, dbPath string) error {
 func requireConstraintFailed(t *testing.T, err error) {
 	t.Helper()
 	require.Error(t, err)
+
 	if !strings.Contains(err.Error(), "UNIQUE constraint failed:") && !strings.Contains(err.Error(), "violates unique constraint") {
 		require.Failf(t, "expected error to contain a constraint failure, got: %s", err.Error())
 	}
@@ -198,7 +195,7 @@ func TestConstraints(t *testing.T) {
 	}{
 		{
 			name: "no-duplicate-username-if-no-oidc",
-			run: func(t *testing.T, db *gorm.DB) {
+			run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
 				_, err := CreateUser(db, types.User{Name: "user1"})
 				require.NoError(t, err)
 				_, err = CreateUser(db, types.User{Name: "user1"})
@@ -207,7 +204,7 @@ func TestConstraints(t *testing.T) {
 		},
 		{
 			name: "no-oidc-duplicate-username-and-id",
-			run: func(t *testing.T, db *gorm.DB) {
+			run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
 				user := types.User{
 					Model: gorm.Model{ID: 1},
 					Name:  "user1",
@@ -229,7 +226,7 @@ func TestConstraints(t *testing.T) {
 		},
 		{
 			name: "no-oidc-duplicate-id",
-			run: func(t *testing.T, db *gorm.DB) {
+			run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
 				user := types.User{
 					Model: gorm.Model{ID: 1},
 					Name:  "user1",
@@ -251,7 +248,7 @@ func TestConstraints(t *testing.T) {
 		},
 		{
 			name: "allow-duplicate-username-cli-then-oidc",
-			run: func(t *testing.T, db *gorm.DB) {
+			run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
 				_, err := CreateUser(db, types.User{Name: "user1"}) // Create CLI username
 				require.NoError(t, err)
@@ -266,7 +263,7 @@ func TestConstraints(t *testing.T) {
 		},
 		{
 			name: "allow-duplicate-username-oidc-then-cli",
-			run: func(t *testing.T, db *gorm.DB) {
+			run: func(t *testing.T, db *gorm.DB) { //nolint:thelper
 				user := types.User{
 					Name:               "user1",
 					ProviderIdentifier: sql.NullString{String: "http://test.com/user1", Valid: true},
@@ -320,7 +317,7 @@ func TestPostgresMigrationAndDataValidation(t *testing.T) {
 			}

 			// Construct the pg_restore command
-			cmd := exec.Command(pgRestorePath, "--verbose", "--if-exists", "--clean", "--no-owner", "--dbname", u.String(), tt.dbPath)
+			cmd := exec.CommandContext(context.Background(), pgRestorePath, "--verbose", "--if-exists", "--clean", "--no-owner", "--dbname", u.String(), tt.dbPath)

 			// Set the output streams
 			cmd.Stdout = os.Stdout
@@ -376,7 +373,6 @@ func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase {
 				Mode: types.PolicyModeDB,
 			},
 		},
-		emptyCache(),
 	)
 	if err != nil {
 		t.Fatalf("setting up database: %s", err)
@@ -401,6 +397,7 @@ func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase {
 // skip already-applied migrations and only run new ones.
 func TestSQLiteAllTestdataMigrations(t *testing.T) {
 	t.Parallel()
+
 	schemas, err := os.ReadDir("testdata/sqlite")
 	require.NoError(t, err)
@@ -435,7 +432,6 @@ func TestSQLiteAllTestdataMigrations(t *testing.T) {
 					Mode: types.PolicyModeDB,
 				},
 			},
-			emptyCache(),
 		)
 		require.NoError(t, err)
 	})


@@ -27,13 +27,17 @@ func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) {
 	t.Logf("Initial number of goroutines: %d", initialGoroutines)

 	// Basic deletion tracking mechanism
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
-	var deletionWg sync.WaitGroup
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+		deletionWg  sync.WaitGroup
+	)

 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()
+
 		deletionWg.Done()
 	}
@@ -43,14 +47,17 @@ func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) {
 	go gc.Start()

 	// Schedule several nodes for deletion with short expiry
-	const expiry = fifty
-	const numNodes = 100
+	const (
+		expiry   = fifty
+		numNodes = 100
+	)

 	// Set up wait group for expected deletions
 	deletionWg.Add(numNodes)

 	for i := 1; i <= numNodes; i++ {
-		gc.Schedule(types.NodeID(i), expiry)
+		gc.Schedule(types.NodeID(i), expiry) //nolint:gosec // safe conversion in test
 	}

 	// Wait for all scheduled deletions to complete
@@ -63,7 +70,7 @@ func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) {
 	// Schedule and immediately cancel to test that part of the code
 	for i := numNodes + 1; i <= numNodes*2; i++ {
-		nodeID := types.NodeID(i)
+		nodeID := types.NodeID(i) //nolint:gosec // safe conversion in test
 		gc.Schedule(nodeID, time.Hour)
 		gc.Cancel(nodeID)
 	}
@@ -87,14 +94,18 @@ func TestEphemeralGarbageCollectorGoRoutineLeak(t *testing.T) {
 // and then reschedules it with a shorter expiry, and verifies that the node is deleted only once.
 func TestEphemeralGarbageCollectorReschedule(t *testing.T) {
 	// Deletion tracking mechanism
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+	)

 	deletionNotifier := make(chan types.NodeID, 1)

 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()

 		deletionNotifier <- nodeID
@@ -102,11 +113,14 @@ func TestEphemeralGarbageCollectorReschedule(t *testing.T) {
 	// Start GC
 	gc := NewEphemeralGarbageCollector(deleteFunc)
 	go gc.Start()

 	defer gc.Close()

-	const shortExpiry = fifty
-	const longExpiry = 1 * time.Hour
+	const (
+		shortExpiry = fifty
+		longExpiry  = 1 * time.Hour
+	)

 	nodeID := types.NodeID(1)
@@ -136,23 +150,31 @@ func TestEphemeralGarbageCollectorReschedule(t *testing.T) {
 // and verifies that the node is deleted only once.
 func TestEphemeralGarbageCollectorCancelAndReschedule(t *testing.T) {
 	// Deletion tracking mechanism
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+	)

 	deletionNotifier := make(chan types.NodeID, 1)

 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()

 		deletionNotifier <- nodeID
 	}

 	// Start the GC
 	gc := NewEphemeralGarbageCollector(deleteFunc)
 	go gc.Start()

 	defer gc.Close()

 	nodeID := types.NodeID(1)
 	const expiry = fifty

 	// Schedule node for deletion
@@ -196,14 +218,18 @@ func TestEphemeralGarbageCollectorCancelAndReschedule(t *testing.T) {
 // It creates a new EphemeralGarbageCollector, schedules a node for deletion, closes the GC, and verifies that the node is not deleted.
 func TestEphemeralGarbageCollectorCloseBeforeTimerFires(t *testing.T) {
 	// Deletion tracking
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+	)
 	deletionNotifier := make(chan types.NodeID, 1)
 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()
 		deletionNotifier <- nodeID
@@ -246,13 +272,18 @@ func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) {
 	t.Logf("Initial number of goroutines: %d", initialGoroutines)
 	// Deletion tracking
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+	)
 	nodeDeleted := make(chan struct{})
 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()
 		close(nodeDeleted) // Signal that deletion happened
 	}
@@ -263,10 +294,12 @@ func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) {
 	// Use a WaitGroup to ensure the GC has started
 	var startWg sync.WaitGroup
 	startWg.Add(1)
 	go func() {
 		startWg.Done() // Signal that the goroutine has started
 		gc.Start()
 	}()
 	startWg.Wait() // Wait for the GC to start
 	// Close GC right away
@@ -288,7 +321,9 @@ func TestEphemeralGarbageCollectorScheduleAfterClose(t *testing.T) {
 	// Check no node was deleted
 	deleteMutex.Lock()
 	nodesDeleted := len(deletedIDs)
 	deleteMutex.Unlock()
 	assert.Equal(t, 0, nodesDeleted, "No nodes should be deleted when Schedule is called after Close")
@@ -311,12 +346,16 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) {
 	t.Logf("Initial number of goroutines: %d", initialGoroutines)
 	// Deletion tracking mechanism
-	var deletedIDs []types.NodeID
-	var deleteMutex sync.Mutex
+	var (
+		deletedIDs  []types.NodeID
+		deleteMutex sync.Mutex
+	)
 	deleteFunc := func(nodeID types.NodeID) {
 		deleteMutex.Lock()
 		deletedIDs = append(deletedIDs, nodeID)
 		deleteMutex.Unlock()
 	}
@@ -325,8 +364,10 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) {
 	go gc.Start()
 	// Number of concurrent scheduling goroutines
-	const numSchedulers = 10
-	const nodesPerScheduler = 50
+	const (
+		numSchedulers     = 10
+		nodesPerScheduler = 50
+	)
 	const closeAfterNodes = 25 // Close GC after this many nodes per scheduler
@@ -353,8 +394,8 @@ func TestEphemeralGarbageCollectorConcurrentScheduleAndClose(t *testing.T) {
 			case <-stopScheduling:
 				return
 			default:
-				nodeID := types.NodeID(baseNodeID + j + 1)
+				nodeID := types.NodeID(baseNodeID + j + 1) //nolint:gosec // safe conversion in test
 				gc.Schedule(nodeID, 1*time.Hour) // Long expiry to ensure it doesn't trigger during test
 				atomic.AddInt64(&scheduledCount, 1)
 				// Yield to other goroutines to introduce variability
@@ -17,7 +17,11 @@ import (
 	"tailscale.com/net/tsaddr"
 )
-var errGeneratedIPBytesInvalid = errors.New("generated ip bytes are invalid ip")
+var (
+	errGeneratedIPBytesInvalid = errors.New("generated ip bytes are invalid ip")
+	errGeneratedIPNotInPrefix  = errors.New("generated ip not in prefix")
+	errIPAllocatorNil          = errors.New("ip allocator was nil")
+)
 // IPAllocator is a singleton responsible for allocating
 // IP addresses for nodes and making sure the same
@@ -62,8 +66,10 @@ func NewIPAllocator(
 		strategy: strategy,
 	}
-	var v4s []sql.NullString
-	var v6s []sql.NullString
+	var (
+		v4s []sql.NullString
+		v6s []sql.NullString
+	)
 	if db != nil {
 		err := db.Read(func(rx *gorm.DB) error {
@@ -135,15 +141,18 @@ func (i *IPAllocator) Next() (*netip.Addr, *netip.Addr, error) {
 	i.mu.Lock()
 	defer i.mu.Unlock()
-	var err error
-	var ret4 *netip.Addr
-	var ret6 *netip.Addr
+	var (
+		err  error
+		ret4 *netip.Addr
+		ret6 *netip.Addr
+	)
 	if i.prefix4 != nil {
 		ret4, err = i.next(i.prev4, i.prefix4)
 		if err != nil {
 			return nil, nil, fmt.Errorf("allocating IPv4 address: %w", err)
 		}
 		i.prev4 = *ret4
 	}
@@ -152,6 +161,7 @@ func (i *IPAllocator) Next() (*netip.Addr, *netip.Addr, error) {
 		if err != nil {
 			return nil, nil, fmt.Errorf("allocating IPv6 address: %w", err)
 		}
 		i.prev6 = *ret6
 	}
@@ -168,8 +178,10 @@ func (i *IPAllocator) nextLocked(prev netip.Addr, prefix *netip.Prefix) (*netip.
 }
 func (i *IPAllocator) next(prev netip.Addr, prefix *netip.Prefix) (*netip.Addr, error) {
-	var err error
-	var ip netip.Addr
+	var (
+		err error
+		ip  netip.Addr
+	)
 	switch i.strategy {
 	case types.IPAllocationStrategySequential:
@@ -243,7 +255,8 @@ func randomNext(pfx netip.Prefix) (netip.Addr, error) {
 	if !pfx.Contains(ip) {
 		return netip.Addr{}, fmt.Errorf(
-			"generated ip(%s) not in prefix(%s)",
+			"%w: ip(%s) not in prefix(%s)",
+			errGeneratedIPNotInPrefix,
 			ip.String(),
 			pfx.String(),
 		)
@@ -268,11 +281,14 @@ func isTailscaleReservedIP(ip netip.Addr) bool {
 // If a prefix type has been removed (IPv4 or IPv6), it
 // will remove the IPs in that family from the node.
 func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
-	var err error
-	var ret []string
+	var (
+		err error
+		ret []string
+	)
 	err = db.Write(func(tx *gorm.DB) error {
 		if i == nil {
-			return errors.New("backfilling IPs: ip allocator was nil")
+			return fmt.Errorf("backfilling IPs: %w", errIPAllocatorNil)
 		}
 		log.Trace().Caller().Msgf("starting to backfill IPs")
@@ -283,18 +299,19 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
 		}
 		for _, node := range nodes {
-			log.Trace().Caller().Uint64("node.id", node.ID.Uint64()).Str("node.name", node.Hostname).Msg("IP backfill check started because node found in database")
+			log.Trace().Caller().EmbedObject(node).Msg("ip backfill check started because node found in database")
 			changed := false
 			// IPv4 prefix is set, but node ip is missing, alloc
 			if i.prefix4 != nil && node.IPv4 == nil {
 				ret4, err := i.nextLocked(i.prev4, i.prefix4)
 				if err != nil {
-					return fmt.Errorf("failed to allocate ipv4 for node(%d): %w", node.ID, err)
+					return fmt.Errorf("allocating IPv4 for node(%d): %w", node.ID, err)
 				}
 				node.IPv4 = ret4
 				changed = true
 				ret = append(ret, fmt.Sprintf("assigned IPv4 %q to Node(%d) %q", ret4.String(), node.ID, node.Hostname))
 			}
@@ -302,11 +319,12 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
 			if i.prefix6 != nil && node.IPv6 == nil {
 				ret6, err := i.nextLocked(i.prev6, i.prefix6)
 				if err != nil {
-					return fmt.Errorf("failed to allocate ipv6 for node(%d): %w", node.ID, err)
+					return fmt.Errorf("allocating IPv6 for node(%d): %w", node.ID, err)
 				}
 				node.IPv6 = ret6
 				changed = true
 				ret = append(ret, fmt.Sprintf("assigned IPv6 %q to Node(%d) %q", ret6.String(), node.ID, node.Hostname))
 			}

Some files were not shown because too many files have changed in this diff.