Compare commits


1 commit

Author: dependabot[bot]
SHA: ab70b4e37e
Date: 2025-07-29 20:04:59 +00:00

build(deps): bump github.com/docker/docker

Bumps [github.com/docker/docker](https://github.com/docker/docker) from 28.2.2+incompatible to 28.3.3+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v28.2.2...v28.3.3)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-version: 28.3.3+incompatible
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
159 changed files with 6776 additions and 31521 deletions


@@ -1,763 +0,0 @@
---
name: headscale-integration-tester
description: Use this agent when you need to execute, analyze, or troubleshoot Headscale integration tests. This includes running specific test scenarios, investigating test failures, interpreting test artifacts, validating end-to-end functionality, or ensuring integration test quality before releases. Examples: <example>Context: User has made changes to the route management code and wants to validate the changes work correctly. user: 'I've updated the route advertisement logic in poll.go. Can you run the relevant integration tests to make sure everything still works?' assistant: 'I'll use the headscale-integration-tester agent to run the subnet routing integration tests and analyze the results.' <commentary>Since the user wants to validate route-related changes with integration tests, use the headscale-integration-tester agent to execute the appropriate tests and analyze results.</commentary></example> <example>Context: A CI pipeline integration test is failing and the user needs help understanding why. user: 'The TestSubnetRouterMultiNetwork test is failing in CI. The logs show some timing issues but I can't figure out what's wrong.' assistant: 'Let me use the headscale-integration-tester agent to analyze the test failure and examine the artifacts.' <commentary>Since this involves analyzing integration test failures and interpreting test artifacts, use the headscale-integration-tester agent to investigate the issue.</commentary></example>
color: green
---
You are a specialist Quality Assurance Engineer with deep expertise in Headscale's integration testing system. You understand the Docker-based test infrastructure, real Tailscale client interactions, and the complex timing considerations involved in end-to-end network testing.
## Integration Test System Overview
The Headscale integration test system uses Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination. The system is built around the `hi` (Headscale Integration) test runner in `cmd/hi/`.
## Critical Test Execution Knowledge
### System Requirements and Setup
```bash
# ALWAYS run this first to verify system readiness
go run ./cmd/hi doctor
```
This command verifies:
- Docker installation and daemon status
- Go environment setup
- Required container images availability
- Sufficient disk space (critical - tests generate ~100MB logs per run)
- Network configuration
### Test Execution Patterns
**CRITICAL TIMEOUT REQUIREMENTS**:
- **NEVER use bash `timeout` command** - this can cause test failures and incomplete cleanup
- **ALWAYS use the built-in `--timeout` flag** with generous timeouts (minimum 15 minutes)
- **Increase timeout if tests ever time out** - infrastructure issues require longer timeouts
```bash
# Single test execution (recommended for development)
# ALWAYS use --timeout flag with minimum 15 minutes (900s)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork" --timeout=900s
# Database-heavy tests require PostgreSQL backend and longer timeouts
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s
# Pattern matching for related tests - use longer timeout for multiple tests
go run ./cmd/hi run "TestSubnet*" --timeout=1800s
# Long-running individual tests need extended timeouts
go run ./cmd/hi run "TestNodeOnlineStatus" --timeout=2100s # Runs for 12+ minutes
# Full test suite (CI/validation only) - very long timeout required
go test ./integration -timeout 45m
```
**Timeout Guidelines by Test Type**:
- **Basic functionality tests**: `--timeout=900s` (15 minutes minimum)
- **Route/ACL tests**: `--timeout=1200s` (20 minutes)
- **HA/failover tests**: `--timeout=1800s` (30 minutes)
- **Long-running tests**: `--timeout=2100s` (35 minutes)
- **Full test suite**: `-timeout 45m` (45 minutes)
**NEVER do this**:
```bash
# ❌ FORBIDDEN: Never use bash timeout command
timeout 300 go run ./cmd/hi run "TestName"
# ❌ FORBIDDEN: Too short timeout will cause failures
go run ./cmd/hi run "TestName" --timeout=60s
```
### Test Categories and Timing Expectations
- **Fast tests** (<2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes
**CRITICAL**: Only ONE test can run at a time due to Docker port conflicts and resource constraints.
## Test Artifacts and Log Analysis
### Artifact Structure
All test runs save comprehensive artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server error logs
├── hs-testname-abc123.stdout.log # Headscale server output logs
├── hs-testname-abc123.db # Database snapshot for post-mortem
├── hs-testname-abc123_metrics.txt # Prometheus metrics dump
├── hs-testname-abc123-mapresponses/ # Protocol-level debug data
├── ts-client-xyz789.stderr.log # Tailscale client error logs
├── ts-client-xyz789.stdout.log # Tailscale client output logs
└── ts-client-xyz789_status.json # Client network status dump
```
### Log Analysis Priority Order
When tests fail, examine artifacts in this specific order:
1. **Headscale server stderr logs** (`hs-*.stderr.log`): Look for errors, panics, database issues, policy evaluation failures
2. **Tailscale client stderr logs** (`ts-*.stderr.log`): Check for authentication failures, network connectivity issues
3. **MapResponse JSON files**: Protocol-level debugging for network map generation issues
4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information
5. **Database snapshots** (`.db` files): For data consistency and state persistence issues
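A quick first pass over a failed run, following the order above, can be scripted. This is a minimal triage sketch, not an exhaustive list of error signatures: the grep patterns are only illustrative, `jq` is assumed to be installed, and the run directory should be replaced with the one that actually failed.
```bash
# Point RUN at the failing run directory.
RUN=control_logs/20250713-213106-iajsux

# 1. Headscale server errors: panics, database problems, policy failures
grep -E -i "panic|fatal|error|policy" "$RUN"/hs-*.stderr.log | tail -n 50

# 2. Tailscale client errors: authentication and connectivity problems
grep -E -i "auth|failed|unable|timeout" "$RUN"/ts-*.stderr.log | tail -n 50

# 3. Protocol-level data: list captured MapResponses for closer inspection
ls "$RUN"/hs-*-mapresponses/ | head

# 4. Client status dumps: look at the top-level structure before drilling in
jq 'keys' "$RUN"/ts-*_status.json

# 5. Database snapshot: open interactively for state and consistency questions
# sqlite3 "$RUN"/hs-*.db
```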
## Common Failure Patterns and Root Cause Analysis
### CRITICAL MINDSET: Code Issues vs Infrastructure Issues
**⚠️ IMPORTANT**: When tests fail, it is ALMOST ALWAYS a code issue with Headscale, NOT infrastructure problems. Do not immediately blame disk space, Docker issues, or timing unless you have thoroughly investigated the actual error logs first.
### Systematic Debugging Process
1. **Read the actual error message**: Don't assume - read the stderr logs completely
2. **Check Headscale server logs first**: Most issues originate from server-side logic
3. **Verify client connectivity**: Only after ruling out server issues
4. **Check timing patterns**: Use proper `EventuallyWithT` patterns
5. **Infrastructure as last resort**: Only blame infrastructure after code analysis
### Real Failure Patterns
#### 1. Timing Issues (Common but fixable)
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // WILL FAIL
// ✅ Correct: Wait for async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Timeout Guidelines**:
- Route operations: 3-5 seconds
- Node state changes: 5-10 seconds
- Complex scenarios: 10-15 seconds
- Policy recalculation: 5-10 seconds
#### 2. NodeStore Synchronization Issues
Route advertisements must propagate through poll requests (`poll.go:420`). NodeStore updates happen at specific synchronization points after Hostinfo changes.
#### 3. Test Data Management Issues
```go
// ❌ Wrong: Assuming array ordering
require.Len(t, nodes[0].GetAvailableRoutes(), 1)
// ✅ Correct: Identify nodes by properties
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
nodeIDStr := fmt.Sprintf("%d", node.GetId())
if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
// Test the specific node that should have the route
}
}
```
#### 4. Database Backend Differences
SQLite vs PostgreSQL have different timing characteristics:
- Use `--postgres` flag for database-intensive tests
- PostgreSQL generally has more consistent timing
- Some race conditions only appear with specific backends
## Resource Management and Cleanup
### Disk Space Management
Tests consume significant disk space (~100MB per run):
```bash
# Check available space before running tests
df -h
# Clean up test artifacts periodically
rm -rf control_logs/older-timestamp-dirs/
# Clean Docker resources
docker system prune -f
docker volume prune -f
```
### Container Cleanup
- Successful tests clean up automatically
- Failed tests may leave containers running
- Manually clean if needed: `docker ps -a` and `docker rm -f <containers>`
## Advanced Debugging Techniques
### Protocol-Level Debugging
MapResponse JSON files in `control_logs/*/hs-*-mapresponses/` contain:
- Network topology as sent to clients
- Peer relationships and visibility
- Route distribution and primary route selection
- Policy evaluation results
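To drill into one of these dumps from the shell, something like the sketch below works. The `.Peers`, `.Name`, `.AllowedIPs`, and `.PrimaryRoutes` field names are an assumption (they presume the files are serialized `tailcfg.MapResponse` objects); check the real structure with `jq 'keys'` before relying on them.
```bash
# Pick the most recent captured MapResponse from any run.
FILE=$(find control_logs -path '*-mapresponses/*' -type f | tail -n 1)

# Confirm the top-level structure first; the field names below are assumptions.
jq 'keys' "$FILE"

# Peer names, the routes they may use, and their primary routes.
jq '.Peers[]? | {Name, AllowedIPs, PrimaryRoutes}' "$FILE"
```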
### Database State Analysis
Use the database snapshots for post-mortem analysis:
```bash
# SQLite examination
sqlite3 control_logs/TIMESTAMP/hs-*.db
.tables
.schema nodes
SELECT * FROM nodes WHERE name LIKE '%problematic%';
```
### Performance Analysis
Prometheus metrics dumps show:
- Request latencies and error rates
- NodeStore operation timing
- Database query performance
- Memory usage patterns
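A dump like this is a plain Prometheus text-format snapshot, so grep is enough for a first look. The standard Go and process collector names below are generally present in any Go service exposing Prometheus metrics; any `headscale_`-prefixed names are assumptions, so check what the file actually contains.
```bash
# Use the metrics dump from the most recent run.
METRICS=$(ls control_logs/*/hs-*_metrics.txt | tail -n 1)

# Runtime health: goroutine count and resident memory (standard Go collectors).
grep -E '^(go_goroutines|process_resident_memory_bytes)' "$METRICS"

# Application-specific series: count them, then skim the first few names.
grep -c '^headscale_' "$METRICS"
grep -E '^headscale_' "$METRICS" | sort | head -n 20
```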
## Test Development and Quality Guidelines
### Proper Test Patterns
```go
// Always use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Test condition that may take time to become true
}, timeout, interval, "descriptive failure message")
// Handle node identification correctly
var targetNode *v1.Node
for _, node := range nodes {
if node.GetName() == expectedNodeName {
targetNode = node
break
}
}
require.NotNil(t, targetNode, "should find expected node")
```
### Quality Validation Checklist
- ✅ Tests use `EventuallyWithT` for asynchronous operations
- ✅ Tests don't rely on array ordering for node identification
- ✅ Proper cleanup and resource management
- ✅ Tests handle both success and failure scenarios
- ✅ Timing assumptions are realistic for operations being tested
- ✅ Error messages are descriptive and actionable
## Real-World Test Failure Patterns from HA Debugging
### Infrastructure vs Code Issues - Detailed Examples
**INFRASTRUCTURE FAILURES (Rare but Real)**:
1. **DNS Resolution in Auth Tests**: `failed to resolve "hs-pingallbyip-jax97k": no DNS fallback candidates remain`
- **Pattern**: Client containers can't resolve headscale server hostname during logout
- **Detection**: Error messages specifically mention DNS/hostname resolution
- **Solution**: Docker networking reset, not code changes
2. **Container Creation Timeouts**: Test gets stuck during client container setup
- **Pattern**: Tests hang indefinitely at container startup phase
- **Detection**: No progress in logs for >2 minutes during initialization
- **Solution**: `docker system prune -f` and retry
3. **Docker Port Conflicts**: Multiple tests trying to use same ports
- **Pattern**: "bind: address already in use" errors
- **Detection**: Port binding failures in Docker logs
- **Solution**: Only run ONE test at a time
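Before concluding a failure is infrastructure-related, grep the failed run for these specific signatures; the search strings below are taken directly from the patterns above, and finding none of them is a strong hint that the failure is a code issue.
```bash
# Point RUN at the failing run directory.
RUN=control_logs/20250713-213106-iajsux

# 1. DNS resolution failures during logout (auth tests)
grep -E "no DNS fallback candidates remain|failed to resolve" "$RUN"/ts-*.stderr.log

# 2. Docker port conflicts from overlapping test runs
grep -E "bind: address already in use" "$RUN"/hs-*.stderr.log "$RUN"/ts-*.stderr.log

# 3. Leftover containers that could explain conflicts or startup hangs
docker ps --format '{{.Names}}\t{{.Status}}'
```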
**CODE ISSUES (99% of failures)**:
1. **Route Approval Process Failures**: Routes not getting approved when they should be
- **Pattern**: Tests expecting approved routes but finding none
- **Detection**: `SubnetRoutes()` returns empty when `AnnouncedRoutes()` shows routes
- **Root Cause**: Auto-approval logic bugs, policy evaluation issues
2. **NodeStore Synchronization Issues**: State updates not propagating correctly
- **Pattern**: Route changes not reflected in NodeStore or Primary Routes
- **Detection**: Logs show route announcements but no tracking updates
- **Root Cause**: Missing synchronization points in `poll.go:420` area
3. **HA Failover Architecture Issues**: Routes removed when nodes go offline
- **Pattern**: `TestHASubnetRouterFailover` fails because approved routes disappear
- **Detection**: Routes available on online nodes but lost when nodes disconnect
- **Root Cause**: Conflating route approval with node connectivity
### Critical Test Environment Setup
**Pre-Test Cleanup (MANDATORY)**:
```bash
# ALWAYS run this before each test
rm -rf control_logs/202507*
docker system prune -f
df -h # Verify sufficient disk space
```
**Environment Verification**:
```bash
# Verify system readiness
go run ./cmd/hi doctor
# Check for running containers that might conflict
docker ps
```
### Specific Test Categories and Known Issues
#### Route-Related Tests (Primary Focus)
```bash
# Core route functionality - these should work first
# Note: Generous timeouts are required for reliable execution
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s
go run ./cmd/hi run "TestAutoApproveMultiNetwork" --timeout=1800s
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
```
**Common Route Test Patterns**:
- Tests validate route announcement, approval, and distribution workflows
- Route state changes are asynchronous - may need `EventuallyWithT` wrappers
- Route approval must respect ACL policies - test expectations encode security requirements
- HA tests verify route persistence during node connectivity changes
#### Authentication Tests (Infrastructure-Prone)
```bash
# These tests are more prone to infrastructure issues
# Require longer timeouts due to auth flow complexity
go run ./cmd/hi run "TestAuthKeyLogoutAndReloginSameUser" --timeout=1200s
go run ./cmd/hi run "TestAuthWebFlowLogoutAndRelogin" --timeout=1200s
go run ./cmd/hi run "TestOIDCExpireNodesBasedOnTokenExpiry" --timeout=1800s
```
**Common Auth Test Infrastructure Failures**:
- DNS resolution during logout operations
- Container creation timeouts
- HTTP/2 stream errors (often symptoms, not root cause)
### Security-Critical Debugging Rules
**❌ FORBIDDEN CHANGES (Security & Test Integrity)**:
1. **Never change expected test outputs** - Tests define correct behavior contracts
- Changing `require.Len(t, routes, 3)` to `require.Len(t, routes, 2)` because test fails
- Modifying expected status codes, node counts, or route counts
- Removing assertions that are "inconvenient"
- **Why forbidden**: Test expectations encode business requirements and security policies
2. **Never bypass security mechanisms** - Security must never be compromised for convenience
- Using `AnnouncedRoutes()` instead of `SubnetRoutes()` in production code
- Skipping authentication or authorization checks
- **Why forbidden**: Security bypasses create vulnerabilities in production
3. **Never reduce test coverage** - Tests prevent regressions
- Removing test cases or assertions
- Commenting out "problematic" test sections
- **Why forbidden**: Reduced coverage allows bugs to slip through
**✅ ALLOWED CHANGES (Timing & Observability)**:
1. **Fix timing issues with proper async patterns**
```go
// ✅ GOOD: Add EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, expectedCount) // Keep original expectation
}, 10*time.Second, 100*time.Millisecond, "nodes should reach expected count")
```
- **Why allowed**: Fixes race conditions without changing business logic
2. **Add MORE observability and debugging**
- Additional logging statements
- More detailed error messages
- Extra assertions that verify intermediate states
- **Why allowed**: Better observability helps debug without changing behavior
3. **Improve test documentation**
- Add godoc comments explaining test purpose and business logic
- Document timing requirements and async behavior
- **Why encouraged**: Helps future maintainers understand intent
### Advanced Debugging Workflows
#### Route Tracking Debug Flow
```bash
# Run test with detailed logging and proper timeout
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s > test_output.log 2>&1
# Check route approval process
grep -E "(auto-approval|ApproveRoutesWithPolicy|PolicyManager)" test_output.log
# Check route tracking
tail -50 control_logs/*/hs-*.stderr.log | grep -E "(announced|tracking|SetNodeRoutes)"
# Check for security violations
grep -E "(AnnouncedRoutes.*SetNodeRoutes|bypass.*approval)" test_output.log
```
#### HA Failover Debug Flow
```bash
# Test HA failover specifically with adequate timeout
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
# Check route persistence during disconnect
grep -E "(Disconnect|NodeWentOffline|PrimaryRoutes)" control_logs/*/hs-*.stderr.log
# Verify routes don't disappear inappropriately
grep -E "(removing.*routes|SetNodeRoutes.*empty)" control_logs/*/hs-*.stderr.log
```
### Test Result Interpretation Guidelines
#### Success Patterns to Look For
- `"updating node routes for tracking"` in logs
- Routes appearing in `announcedRoutes` logs
- Proper `ApproveRoutesWithPolicy` calls for auto-approval
- Routes persisting through node connectivity changes (HA tests)
#### Failure Patterns to Investigate
- `SubnetRoutes()` returning empty when `AnnouncedRoutes()` has routes
- Routes disappearing when nodes go offline (HA architectural issue)
- Missing `EventuallyWithT` causing timing race conditions
- Security bypass attempts using wrong route methods
### Critical Testing Methodology
**Phase-Based Testing Approach**:
1. **Phase 1**: Core route tests (ACL, auto-approval, basic functionality)
2. **Phase 2**: HA and complex route scenarios
3. **Phase 3**: Auth tests (infrastructure-sensitive, test last)
**Per-Test Process**:
1. Clean environment before each test
2. Monitor logs for route tracking and approval messages
3. Check artifacts in `control_logs/` if test fails
4. Focus on actual error messages, not assumptions
5. Document results and patterns discovered
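Putting the phases and the per-test process together, a sequential driver script (one test at a time, cleaning up between runs) might look like the sketch below. The test names and timeouts reuse examples from earlier in this document, and the script assumes `hi run` exits non-zero when a test fails.
```bash
#!/usr/bin/env bash
set -o pipefail

# Phase 1: core route tests, run strictly one at a time.
for test in TestSubnetRouteACL TestAutoApproveMultiNetwork TestHASubnetRouterFailover; do
    # Mandatory pre-test cleanup and environment check.
    rm -rf control_logs/202507*
    docker system prune -f
    go run ./cmd/hi doctor || exit 1

    # Run the test with a generous timeout and keep the runner output.
    if ! go run ./cmd/hi run "$test" --timeout=1800s | tee "${test}.log"; then
        echo "FAILED: $test - inspect control_logs/ before continuing"
        break
    fi
done
```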
## Test Documentation and Code Quality Standards
### Adding Missing Test Documentation
When you understand a test's purpose through debugging, always add comprehensive godoc:
```go
// TestSubnetRoutes validates the complete subnet route lifecycle including
// advertisement from clients, policy-based approval, and distribution to peers.
// This test ensures that route security policies are properly enforced and that
// only approved routes are distributed to the network.
//
// The test verifies:
// - Route announcements are received and tracked
// - ACL policies control route approval correctly
// - Only approved routes appear in peer network maps
// - Route state persists correctly in the database
func TestSubnetRoutes(t *testing.T) {
// Test implementation...
}
```
**Why add documentation**: Future maintainers need to understand business logic and security requirements encoded in tests.
### Comment Guidelines - Focus on WHY, Not WHAT
```go
// ✅ GOOD: Explains reasoning and business logic
// Wait for route propagation because NodeStore updates are asynchronous
// and happen after poll requests complete processing
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Check that security policies are enforced...
}, timeout, interval, "route approval must respect ACL policies")
// ❌ BAD: Just describes what the code does
// Wait for routes
require.EventuallyWithT(t, func(c *assert.CollectT) {
// Get routes and check length
}, timeout, interval, "checking routes")
```
**Why focus on WHY**: Helps maintainers understand architectural decisions and security requirements.
## EventuallyWithT Pattern for External Calls
### Overview
EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
var err error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err = headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// These assertions are all based on the ListNodes() call
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")
// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// All these assertions are based on the single Status() call
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")
// CORRECT: Variable scoping for sharing between blocks
var routeNode *v1.Node
var nodeKey key.NodePublic
// First EventuallyWithT to get the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
for _, node := range nodes {
if node.GetName() == "router" {
routeNode = node
nodeKey, _ = key.ParseNodePublicUntyped(mem.S(node.GetNodeKey()))
break
}
}
assert.NotNil(c, routeNode, "should find router node")
}, 10*time.Second, 100*time.Millisecond, "router node should exist")
// Second EventuallyWithT using the nodeKey from first block
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
peerStatus, ok := status.Peer[nodeKey]
assert.True(c, ok, "peer should exist in status")
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}, 10*time.Second, 100*time.Millisecond, "routes should be visible to client")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// First external call
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// Second unrelated external call - WRONG!
status, err := client.Status()
assert.NoError(c, err)
assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")
// INCORRECT: Nested EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
// NEVER do this!
assert.EventuallyWithT(t, func(c2 *assert.CollectT) {
status, _ := client.Status()
assert.NotNil(c2, status)
}, 5*time.Second, 100*time.Millisecond, "nested")
}, 10*time.Second, 500*time.Millisecond, "outer")
// INCORRECT: Variable scoping error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes() // This shadows outer 'nodes' variable
assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "get nodes")
// This will fail - nodes is nil because := created a new variable inside the block
require.Len(t, nodes, 2) // COMPILATION ERROR or nil pointer
// INCORRECT: Not wrapping external calls
nodes, err := headscale.ListNodes() // External call not wrapped!
require.NoError(t, err)
```
### Helper Functions for EventuallyWithT
When creating helper functions for use within EventuallyWithT:
```go
// Helper function that accepts CollectT
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, available, approved, primary int) {
assert.Len(c, node.GetAvailableRoutes(), available, "available routes for node %s", node.GetName())
assert.Len(c, node.GetApprovedRoutes(), approved, "approved routes for node %s", node.GetName())
assert.Len(c, node.GetPrimaryRoutes(), primary, "primary routes for node %s", node.GetName())
}
// Usage within EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route counts should match expected")
```
### Operations That Must NOT Be Wrapped
**CRITICAL**: The following operations are **blocking/mutating operations** that change state and MUST NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--accept-routes`)
- `headscale.ApproveRoute()` - Approves routes on server
- `headscale.CreateUser()` - Creates users
- `headscale.CreatePreAuthKey()` - Creates authentication keys
- `headscale.RegisterNode()` - Registers new nodes
- Any `client.Execute()` that modifies configuration
- Any operation that creates, updates, or deletes resources
These operations:
1. Complete synchronously or fail immediately
2. Should not be retried automatically
3. Need explicit error handling with `require.NoError()`
### Correct Pattern for Blocking Operations
```go
// CORRECT: Blocking operation NOT wrapped
status := client.MustStatus()
command := []string{"tailscale", "set", "--advertise-routes=" + expectedRoutes[string(status.Self.ID)]}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
// Then wait for the result with EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetAvailableRoutes(), expectedRoutes[string(status.Self.ID)])
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
// INCORRECT: Blocking operation wrapped (DON'T DO THIS)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
_, _, err = client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.NoError(c, err) // This might retry the command multiple times!
}, 10*time.Second, 100*time.Millisecond, "advertise routes")
```
### Assert vs Require Pattern
When working within EventuallyWithT blocks where you need to prevent panics:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
// For array bounds - use require with t to prevent panic
assert.Len(c, nodes, 6) // Test expectation
require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic")
// For nil pointer access - use require with t before dereferencing
assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation
require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic")
assert.Contains(c,
srs1PeerStatus.PrimaryRoutes.AsSlice(),
pref,
)
}, 5*time.Second, 200*time.Millisecond, "checking route state")
```
**Key Principle**:
- Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried
- Use `require` with `t` (*testing.T) for MUST conditions that prevent panics
- Within EventuallyWithT, both are available - choose based on whether failure would cause a panic
### Common Scenarios
1. **Waiting for route advertisement**:
```go
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetAvailableRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
2. **Checking client sees routes**:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// Check all peers have expected routes
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
assert.Contains(c, peerStatus.AllowedIPs, expectedPrefix)
}
}, 10*time.Second, 100*time.Millisecond, "all peers should see route")
```
3. **Sequential operations**:
```go
// First wait for node to appear
var nodeID uint64
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 1)
nodeID = nodes[0].GetId()
}, 10*time.Second, 100*time.Millisecond, "node should register")
// Then perform operation
_, err := headscale.ApproveRoute(nodeID, "10.0.0.0/24")
require.NoError(t, err)
// Then wait for result
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Contains(c, nodes[0].GetApprovedRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be approved")
```
## Your Core Responsibilities
1. **Test Execution Strategy**: Execute integration tests with appropriate configurations, understanding when to use `--postgres` and timing requirements for different test categories. Follow phase-based testing approach prioritizing route tests.
- **Why this priority**: Route tests are less infrastructure-sensitive and validate core security logic
2. **Systematic Test Analysis**: When tests fail, systematically examine artifacts starting with Headscale server logs, then client logs, then protocol data. Focus on CODE ISSUES first (99% of cases), not infrastructure. Use real-world failure patterns to guide investigation.
- **Why this approach**: Most failures are logic bugs, not environment issues - efficient debugging saves time
3. **Timing & Synchronization Expertise**: Understand asynchronous Headscale operations, particularly route advertisements, NodeStore synchronization at `poll.go:420`, and policy propagation. Fix timing with `EventuallyWithT` while preserving original test expectations.
- **Why preserve expectations**: Test assertions encode business requirements and security policies
- **Key Pattern**: Apply the EventuallyWithT pattern correctly for all external calls as documented above
4. **Root Cause Analysis**: Distinguish between actual code regressions (route approval logic, HA failover architecture), timing issues requiring `EventuallyWithT` patterns, and genuine infrastructure problems (DNS, Docker, container issues).
- **Why this distinction matters**: Different problem types require completely different solution approaches
- **EventuallyWithT Issues**: Often manifest as flaky tests or immediate assertion failures after async operations
5. **Security-Aware Quality Validation**: Ensure tests properly validate end-to-end functionality with realistic timing expectations and proper error handling. Never suggest security bypasses or test expectation changes. Add comprehensive godoc when you understand test business logic.
- **Why security focus**: Integration tests are the last line of defense against security regressions
- **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions
**CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test, never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable.
**EventuallyWithT PRINCIPLE**: Every external call to headscale server or tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages.
**Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related.


@@ -52,15 +52,12 @@ body:
If you are using a container, always provide the headscale version and not only the Docker image version.
Please do not put "latest".
Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
examples:
- **OS**: Ubuntu 24.04
- **Headscale version**: 0.24.3
- **Tailscale version**: 1.80.0
- **Number of nodes**: 20
value: |
- OS:
- Headscale version:


@@ -16,13 +16,15 @@ body:
- type: textarea
attributes:
label: Description
description: A clear and precise description of what new or changed feature you want.
description:
A clear and precise description of what new or changed feature you want.
validations:
required: true
- type: checkboxes
attributes:
label: Contribution
description: Are you willing to contribute to the implementation of this feature?
description:
Are you willing to contribute to the implementation of this feature?
options:
- label: I can write the design doc for this feature
required: false
@@ -31,6 +33,7 @@ body:
- type: textarea
attributes:
label: How can it be implemented?
description: Free text for your ideas on how this feature could be implemented.
description:
Free text for your ideas on how this feature could be implemented.
validations:
required: false


@@ -5,6 +5,8 @@ on:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -92,8 +94,6 @@ jobs:
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run go cross compile
env:
CGO_ENABLED: 0
run:
env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
./cmd/headscale


@@ -62,11 +62,24 @@ jobs:
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run Integration Test
if: always() && steps.changed-files.outputs.files == 'true'
run:
nix develop --command -- hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
uses: Wandalen/wretry.action@e68c23e6309f2871ca8ae4763e7629b9c258e1ea # v3.8.0
if: steps.changed-files.outputs.files == 'true'
with:
# Our integration tests are started like a thundering herd, often
# hitting limits of the various external repositories we depend on
# like docker hub. This will retry jobs every 5 min, 10 times,
# hopefully letting us avoid manual intervention and restarting jobs.
# One could of course argue that we should invest in trying to avoid
# this, but currently it seems like a larger investment to be cleverer
# about this.
# Some of the jobs might still require manual restart as they are really
# slow and this will cause them to eventually be killed by Github actions.
attempt_delay: 300000 # 5 min
attempt_limit: 2
command: |
nix develop --command -- hi run --stats --ts-memory-limit=300 --hs-memory-limit=500 "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always() && steps.changed-files.outputs.files == 'true'
with:


@@ -23,31 +23,16 @@ jobs:
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestACLAutogroupMember
- TestACLAutogroupTagged
- TestACLAutogroupSelf
- TestACLPolicyPropagationOverTime
- TestAPIAuthenticationBypass
- TestAPIAuthenticationBypassCurl
- TestGRPCAuthenticationBypass
- TestCLIWithConfigAuthenticationBypass
- TestAuthKeyLogoutAndReloginSameUser
- TestAuthKeyLogoutAndReloginNewUser
- TestAuthKeyLogoutAndReloginSameUserExpiredKey
- TestAuthKeyDeleteKey
- TestAuthKeyLogoutAndReloginRoutesPreserved
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestOIDC024UserCreation
- TestOIDCAuthenticationWithPKCE
- TestOIDCReloginSameNodeNewUser
- TestOIDCFollowUpUrl
- TestOIDCMultipleOpenedLoginUrls
- TestOIDCReloginSameNodeSameUser
- TestOIDCExpiryAfterRestart
- TestOIDCACLPolicyOnJoin
- TestOIDCReloginSameUserRoutesPreserved
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndReloginSameUser
- TestAuthWebFlowLogoutAndReloginNewUser
- TestAuthWebFlowLogoutAndRelogin
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
@@ -76,7 +61,6 @@ jobs:
- TestTaildrop
- TestUpdateHostnameFromClient
- TestExpireNode
- TestSetNodeExpiryInFuture
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
@@ -95,7 +79,6 @@ jobs:
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
- TestSSHAutogroupSelf
uses: ./.github/workflows/integration-test-template.yml
with:
test: ${{ matrix.test }}


@@ -2,39 +2,12 @@
version: 2
before:
hooks:
- go mod tidy -compat=1.25
- go mod tidy -compat=1.24
- go mod vendor
release:
prerelease: auto
draft: true
header: |
## Upgrade
Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
**It's best to update from one stable version to the next** (e.g., 0.24.0 → 0.25.1 → 0.26.1) in case you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the changelog above for version-specific upgrade instructions and breaking changes.
### Backup Your Database
**Always backup your database before upgrading.** Here's how to backup a SQLite database:
```bash
# Stop headscale
systemctl stop headscale
# Backup sqlite database
cp /var/lib/headscale/db.sqlite /var/lib/headscale/db.sqlite.backup
# Backup sqlite WAL/SHM files (if they exist)
cp /var/lib/headscale/db.sqlite-wal /var/lib/headscale/db.sqlite-wal.backup
cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
# Start headscale (migration will run automatically)
systemctl start headscale
```
builds:
- id: headscale
@@ -50,6 +23,10 @@ builds:
- linux_arm64
flags:
- -mod=readonly
ldflags:
- -s -w
- -X github.com/juanfont/headscale/hscontrol/types.Version={{ .Version }}
- -X github.com/juanfont/headscale/hscontrol/types.GitCommitHash={{ .Commit }}
tags:
- ts2019
@@ -145,8 +122,6 @@ kos:
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
creation_time: "{{.CommitTimestamp}}"
ko_data_creation_time: "{{.CommitTimestamp}}"
- id: ghcr-debug
repositories:


@@ -1,48 +0,0 @@
{
"mcpServers": {
"claude-code-mcp": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@steipete/claude-code-mcp@latest"
],
"env": {}
},
"sequential-thinking": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-sequential-thinking"
],
"env": {}
},
"nixos": {
"type": "stdio",
"command": "uvx",
"args": [
"mcp-nixos"
],
"env": {}
},
"context7": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@upstash/context7-mcp"
],
"env": {}
},
"git": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"@cyanheads/git-mcp-server"
],
"env": {}
}
}
}


@@ -2,62 +2,15 @@
## Next
### Changes
## 0.27.2 (2025-xx-xx)
### Changes
- Fix ACL policy not applied to new OIDC nodes until client restart
[#2890](https://github.com/juanfont/headscale/pull/2890)
- Fix autogroup:self preventing visibility of nodes matched by other ACL rules
[#2882](https://github.com/juanfont/headscale/pull/2882)
- Fix nodes being rejected after pre-authentication key expiration
[#2917](https://github.com/juanfont/headscale/pull/2917)
## 0.27.1 (2025-11-11)
**Minimum supported Tailscale client version: v1.64.0**
### Changes
- Expire nodes with a custom timestamp
[#2828](https://github.com/juanfont/headscale/pull/2828)
- Fix issue where node expiry was reset when tailscaled restarts
[#2875](https://github.com/juanfont/headscale/pull/2875)
- Fix OIDC authentication when multiple login URLs are opened
[#2861](https://github.com/juanfont/headscale/pull/2861)
- Fix node re-registration failing with expired auth keys
[#2859](https://github.com/juanfont/headscale/pull/2859)
- Remove old unused database tables and indices
[#2844](https://github.com/juanfont/headscale/pull/2844)
[#2872](https://github.com/juanfont/headscale/pull/2872)
- Ignore litestream tables during database validation
[#2843](https://github.com/juanfont/headscale/pull/2843)
- Fix exit node visibility to respect ACL rules
[#2855](https://github.com/juanfont/headscale/pull/2855)
- Fix SSH policy becoming empty when unknown user is referenced
[#2874](https://github.com/juanfont/headscale/pull/2874)
- Fix policy validation when using bypass-grpc mode
[#2854](https://github.com/juanfont/headscale/pull/2854)
- Fix autogroup:self interaction with other ACL rules
[#2842](https://github.com/juanfont/headscale/pull/2842)
- Fix flaky DERP map shuffle test
[#2848](https://github.com/juanfont/headscale/pull/2848)
- Use current stable base images for Debian and Alpine containers
[#2827](https://github.com/juanfont/headscale/pull/2827)
## 0.27.0 (2025-10-27)
**Minimum supported Tailscale client version: v1.64.0**
### Database integrity improvements
This release includes a significant database migration that addresses
longstanding issues with the database schema and data integrity that has
accumulated over the years. The migration introduces a `schema.sql` file as the
source of truth for the expected database schema to ensure new migrations that
will cause divergence does not occur again.
This release includes a significant database migration that addresses longstanding
issues with the database schema and data integrity that has accumulated over the
years. The migration introduces a `schema.sql` file as the source of truth for
the expected database schema to ensure new migrations that will cause divergence
does not occur again.
These issues arose from a combination of factors discovered over time: SQLite
foreign keys not being enforced for many early versions, all migrations being
@@ -69,12 +22,10 @@ enforced throughout the migration process.
We are only improving SQLite databases with this change - PostgreSQL databases
are not affected.
Please read the
[PR description](https://github.com/juanfont/headscale/pull/2617) for more
technical details about the issues and solutions.
Please read the [PR description](https://github.com/juanfont/headscale/pull/2617)
for more technical details about the issues and solutions.
**SQLite Database Backup Example:**
```bash
# Stop headscale
systemctl stop headscale
@@ -90,60 +41,12 @@ cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
systemctl start headscale
```
### DERPMap update frequency
The default DERPMap update frequency has been changed from 24 hours to 3 hours.
If you set the `derp.update_frequency` configuration option, it is recommended
to change it to `3h` to ensure that the headscale instance gets the latest
DERPMap updates when upstream is changed.
### Autogroups
This release adds support for the three missing autogroups: `self`
(experimental), `member`, and `tagged`. Please refer to the
[documentation](https://tailscale.com/kb/1018/autogroups/) for a detailed
explanation.
`autogroup:self` is marked as experimental and should be used with caution, but
we need help testing it. Experimental here means two things; first, generating
the packet filter from policies that use `autogroup:self` is very expensive, and
it might perform, or straight up not work on Headscale installations with a
large number of nodes. Second, the implementation might have bugs or edge cases
we are not aware of, meaning that nodes or users might gain _more_ access than
expected. Please report bugs.
### Node store (in memory database)
Under the hood, we have added a new datastructure to store nodes in memory. This
datastructure is called `NodeStore` and aims to reduce the reading and writing
of nodes to the database layer. We have not benchmarked it, but expect it to
improve performance for read heavy workloads. We think of it as, "worst case" we
have moved the bottle neck somewhere else, and "best case" we should see a good
improvement in compute resource usage at the expense of memory usage. We are
quite excited for this change and think it will make it easier for us to improve
the code base over time and make it more correct and efficient.
### BREAKING
- Remove support for 32-bit binaries
[#2692](https://github.com/juanfont/headscale/pull/2692)
- Policy: Zero or empty destination port is no longer allowed
[#2606](https://github.com/juanfont/headscale/pull/2606)
- Stricter hostname validation
[#2383](https://github.com/juanfont/headscale/pull/2383)
- Hostnames must be valid DNS labels (2-63 characters, alphanumeric and
hyphens only, cannot start/end with hyphen)
- **Client Registration (New Nodes)**: Invalid hostnames are automatically
renamed to `invalid-XXXXXX` format
- `my-laptop` → accepted as-is
- `My-Laptop``my-laptop` (lowercased)
- `my_laptop``invalid-a1b2c3` (underscore not allowed)
- `test@host``invalid-d4e5f6` (@ not allowed)
- `laptop-🚀``invalid-j1k2l3` (emoji not allowed)
- **Hostinfo Updates / CLI**: Invalid hostnames are rejected with an error
- Valid names are accepted or lowercased
- Names with invalid characters, too short (<2), too long (>63), or
starting/ending with hyphen are rejected
### Changes
@@ -152,41 +55,18 @@ the code base over time and make it more correct and efficient.
- **IMPORTANT: Backup your SQLite database before upgrading**
- Introduces safer table renaming migration strategy
- Addresses longstanding database integrity issues
- Add flag to directly manipulate the policy in the database
[#2765](https://github.com/juanfont/headscale/pull/2765)
- DERPmap update frequency default changed from 24h to 3h
[#2741](https://github.com/juanfont/headscale/pull/2741)
- DERPmap update mechanism has been improved with retry, and is now failing
conservatively, preserving the old map upon failure.
[#2741](https://github.com/juanfont/headscale/pull/2741)
- Add support for `autogroup:member`, `autogroup:tagged`
[#2572](https://github.com/juanfont/headscale/pull/2572)
- Fix bug where return routes were being removed by policy
[#2767](https://github.com/juanfont/headscale/pull/2767)
- Remove policy v1 code [#2600](https://github.com/juanfont/headscale/pull/2600)
- Refactor Debian/Ubuntu packaging and drop support for Ubuntu 20.04.
[#2614](https://github.com/juanfont/headscale/pull/2614)
- Support client verify for DERP
[#2046](https://github.com/juanfont/headscale/pull/2046)
- Remove redundant check regarding `noise` config
[#2658](https://github.com/juanfont/headscale/pull/2658)
- Refactor OpenID Connect documentation
[#2625](https://github.com/juanfont/headscale/pull/2625)
- Don't crash if config file is missing
[#2656](https://github.com/juanfont/headscale/pull/2656)
- Adds `/robots.txt` endpoint to avoid crawlers
[#2643](https://github.com/juanfont/headscale/pull/2643)
- OIDC: Use group claim from UserInfo
[#2663](https://github.com/juanfont/headscale/pull/2663)
- OIDC: Update user with claims from UserInfo _before_ comparing with allowed
groups, email and domain
[#2663](https://github.com/juanfont/headscale/pull/2663)
- Policy will now reject invalid fields, making it easier to spot spelling
errors [#2764](https://github.com/juanfont/headscale/pull/2764)
- Add FAQ entry on how to recover from an invalid policy in the database
[#2776](https://github.com/juanfont/headscale/pull/2776)
- EXPERIMENTAL: Add support for `autogroup:self`
[#2789](https://github.com/juanfont/headscale/pull/2789)
- Add healthcheck command
[#2659](https://github.com/juanfont/headscale/pull/2659)
## 0.26.1 (2025-06-06)
@@ -253,7 +133,7 @@ new policy code passes all of our tests.
- Error messages should be more descriptive and informative.
- There is still work to be done here, but it is already improved with "typing"
(e.g. only Users can be put in Groups)
- All users in the policy must contain an `@` character.
- All users must contain an `@` character.
- If your user naturally contains an `@`, like an email, this will just work.
- If it's based on usernames, or other identifiers not containing an `@`, an
`@` should be appended at the end. For example, if your user is `john`, it
@@ -343,6 +223,8 @@ working in v1 and not tested might be broken in v2 (and vice versa).
[#2438](https://github.com/juanfont/headscale/pull/2438)
- Add documentation for routes
[#2496](https://github.com/juanfont/headscale/pull/2496)
- Add support for `autogroup:member`, `autogroup:tagged`
[#2572](https://github.com/juanfont/headscale/pull/2572)
## 0.25.1 (2025-02-25)

406
CLAUDE.md

@@ -205,46 +205,139 @@ The architecture supports incremental development:
- **Policy Tests**: ACL rule evaluation and edge cases
- **Performance Tests**: NodeStore and high-frequency operation validation
## Integration Testing System
## Integration Test System
### Overview
Headscale uses Docker-based integration tests with real Tailscale clients to validate end-to-end functionality. The integration test system is complex and requires specialized knowledge for effective execution and debugging.
Integration tests use Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination.
### **MANDATORY: Use the headscale-integration-tester Agent**
**CRITICAL REQUIREMENT**: For ANY integration test execution, analysis, troubleshooting, or validation, you MUST use the `headscale-integration-tester` agent. This agent contains specialized knowledge about:
- Test execution strategies and timing requirements
- Infrastructure vs code issue distinction (99% vs 1% failure patterns)
- Security-critical debugging rules and forbidden practices
- Comprehensive artifact analysis workflows
- Real-world failure patterns from HA debugging experiences
### Quick Reference Commands
### Running Integration Tests
**System Requirements**
```bash
# Check system requirements (always run first)
# Check if your system is ready
go run ./cmd/hi doctor
```
This verifies Docker, Go, required images, and disk space.
# Run single test (recommended for development)
go run ./cmd/hi run "TestName"
**Test Execution Patterns**
```bash
# Run a single test (recommended for development)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork"
# Use PostgreSQL for database-heavy tests
go run ./cmd/hi run "TestName" --postgres
# Run with PostgreSQL backend (for database-heavy tests)
go run ./cmd/hi run "TestExpireNode" --postgres
# Pattern matching for related tests
go run ./cmd/hi run "TestPattern*"
# Run multiple tests with pattern matching
go run ./cmd/hi run "TestSubnet*"
# Run all integration tests (CI/full validation)
go test ./integration -timeout 30m
```
**Critical Notes**:
- Only ONE test can run at a time (Docker port conflicts)
- Tests generate ~100MB of logs per run in `control_logs/`
- Clean environment before each test: `rm -rf control_logs/202507* && docker system prune -f`
**Test Categories & Timing**
- **Fast tests** (< 2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` (12 min duration)
### Test Artifacts Location
All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-ID/` including server logs, client logs, database dumps, MapResponse protocol data, and Prometheus metrics.
### Test Infrastructure
**For all integration test work, use the headscale-integration-tester agent - it contains the complete knowledge needed for effective testing and debugging.**
**Docker Setup**
- Headscale server container with configurable database backend
- Multiple Tailscale client containers with different versions
- Isolated networks per test scenario
- Automatic cleanup after test completion
**Test Artifacts**
All test runs save artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server logs
├── hs-testname-abc123.stdout.log
├── hs-testname-abc123.db # Database snapshot
├── hs-testname-abc123_metrics.txt # Prometheus metrics
├── hs-testname-abc123-mapresponses/ # Protocol debug data
├── ts-client-xyz789.stderr.log # Tailscale client logs
├── ts-client-xyz789.stdout.log
└── ts-client-xyz789_status.json # Client status dump
```
### Test Development Guidelines
**Timing Considerations**
Integration tests involve real network operations and Docker container lifecycle:
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // May fail due to timing
// ✅ Correct: Wait for async operations to complete
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Common Test Patterns**
- **Route Advertisement**: Use `EventuallyWithT` for route propagation
- **Node State Changes**: Wait for NodeStore synchronization
- **ACL Policy Changes**: Allow time for policy recalculation
- **Network Connectivity**: Use ping tests with retries
**Test Data Management**
```go
// Node identification: Don't assume array ordering
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
nodeIDStr := fmt.Sprintf("%d", node.GetId())
if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
// Test the node that should have the route
}
}
```
### Troubleshooting Integration Tests
**Common Failure Patterns**
1. **Timing Issues**: Test assertions run before async operations complete
- **Solution**: Use `EventuallyWithT` with appropriate timeouts
- **Timeout Guidelines**: 3-5s for route operations, 10s for complex scenarios
2. **Infrastructure Problems**: Disk space, Docker issues, network conflicts
- **Check**: `go run ./cmd/hi doctor` for system health
- **Clean**: Remove old test containers and networks
3. **NodeStore Synchronization**: Tests expecting immediate data availability
- **Key Points**: Route advertisements must propagate through poll requests
- **Fix**: Wait for NodeStore updates after Hostinfo changes
4. **Database Backend Differences**: SQLite vs PostgreSQL behavior differences
- **Use**: `--postgres` flag for database-intensive tests
- **Note**: Some timing characteristics differ between backends
**Debugging Failed Tests**
1. **Check test artifacts** in `control_logs/` for detailed logs
2. **Examine MapResponse JSON** files for protocol-level debugging
3. **Review Headscale stderr logs** for server-side error messages
4. **Check Tailscale client status** for network-level issues
**Resource Management**
- Tests require significant disk space (each run ~100MB of logs)
- Docker containers are cleaned up automatically on success
- Failed tests may leave containers running - clean manually if needed
- Use `docker system prune` periodically to reclaim space
### Best Practices for Test Modifications
1. **Always test locally** before committing integration test changes
2. **Use appropriate timeouts** - too short causes flaky tests, too long slows CI
3. **Clean up properly** - ensure tests don't leave persistent state
4. **Handle both success and failure paths** in test scenarios
5. **Document timing requirements** for complex test scenarios
## NodeStore Implementation Details
@@ -259,108 +352,14 @@ All test runs save comprehensive debugging artifacts to `control_logs/TIMESTAMP-
## Testing Guidelines
### Integration Test Patterns
#### **CRITICAL: EventuallyWithT Pattern for External Calls**
**All external calls in integration tests MUST be wrapped in EventuallyWithT blocks** to handle eventual consistency in distributed systems. External calls include:
- `client.Status()` - Getting Tailscale client status
- `client.Curl()` - Making HTTP requests through clients
- `client.Traceroute()` - Running network diagnostics
- `headscale.ListNodes()` - Querying headscale server state
- Any other calls that interact with external systems or network operations
**Key Rules**:
1. **Never use bare `require.NoError(t, err)` with external calls** - Always wrap in EventuallyWithT
2. **Keep related assertions together** - If multiple assertions depend on the same external call, keep them in the same EventuallyWithT block
3. **Split unrelated external calls** - Different external calls should be in separate EventuallyWithT blocks
4. **Never nest EventuallyWithT calls** - Each EventuallyWithT should be at the same level
5. **Declare shared variables at function scope** - Variables used across multiple EventuallyWithT blocks must be declared before first use
**Examples**:
```go
// CORRECT: External call wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// Related assertions using the same status call
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
assert.NotNil(c, peerStatus.PrimaryRoutes)
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedRoutes)
}
}, 5*time.Second, 200*time.Millisecond, "Verifying client status and routes")
// INCORRECT: Bare external call without EventuallyWithT
status, err := client.Status() // ❌ Will fail intermittently
require.NoError(t, err)
// CORRECT: Separate EventuallyWithT for different external calls
// First external call - headscale.ListNodes()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route state changes should propagate to nodes")
// Second external call - client.Status()
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6()})
}
}, 10*time.Second, 500*time.Millisecond, "routes should be visible to client")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes() // ❌ First external call
assert.NoError(c, err)
status, err := client.Status() // ❌ Different external call - should be separate
assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "mixed calls")
// CORRECT: Variable scoping for shared data
var (
srs1, srs2, srs3 *ipnstate.Status
clientStatus *ipnstate.Status
srs1PeerStatus *ipnstate.PeerStatus
)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
srs1 = subRouter1.MustStatus() // = not :=
srs2 = subRouter2.MustStatus()
clientStatus = client.MustStatus()
srs1PeerStatus = clientStatus.Peer[srs1.Self.PublicKey]
// assertions...
}, 5*time.Second, 200*time.Millisecond, "checking router status")
// CORRECT: Wrapping client operations
assert.EventuallyWithT(t, func(c *assert.CollectT) {
result, err := client.Curl(weburl)
assert.NoError(c, err)
assert.Len(c, result, 13)
}, 5*time.Second, 200*time.Millisecond, "Verifying HTTP connectivity")
assert.EventuallyWithT(t, func(c *assert.CollectT) {
tr, err := client.Traceroute(webip)
assert.NoError(c, err)
assertTracerouteViaIPWithCollect(c, tr, expectedRouter.MustIPv4())
}, 5*time.Second, 200*time.Millisecond, "Verifying network path")
```
**Helper Functions**:
- Use `requirePeerSubnetRoutesWithCollect` instead of `requirePeerSubnetRoutes` inside EventuallyWithT
- Use `requireNodeRouteCountWithCollect` instead of `requireNodeRouteCount` inside EventuallyWithT
- Use `assertTracerouteViaIPWithCollect` instead of `assertTracerouteViaIP` inside EventuallyWithT
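The `WithCollect` variants exist because assertions inside `EventuallyWithT` must report through `*assert.CollectT`, letting the block retry instead of failing the test immediately. A purely hypothetical sketch of the shape such a helper takes (the real helpers live in the integration package and differ in detail):
```go
// Hypothetical sketch only - illustrates the *assert.CollectT convention,
// not the actual helpers from the integration package.
func requireRoutesWithCollect(c *assert.CollectT, got, expected []netip.Prefix) {
	// Failures recorded on c cause the enclosing EventuallyWithT to retry.
	assert.Len(c, got, len(expected), "unexpected number of routes")
	for _, prefix := range expected {
		assert.Contains(c, got, prefix)
	}
}
```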
```go
// Node route checking by actual node properties, not array position
var routeNode *v1.Node
for _, node := range nodes {
@@ -376,156 +375,21 @@ for _, node := range nodes {
- Infrastructure issues like disk space can cause test failures unrelated to code changes
- Use `--postgres` flag when testing database-heavy scenarios
## Quality Assurance and Testing Requirements
### **MANDATORY: Always Use Specialized Testing Agents**
**CRITICAL REQUIREMENT**: For ANY task involving testing, quality assurance, review, or validation, you MUST use the appropriate specialized agent at the END of your task list. This ensures comprehensive quality validation and prevents regressions.
**Required Agents for Different Task Types**:
1. **Integration Testing**: Use `headscale-integration-tester` agent for:
- Running integration tests with `cmd/hi`
- Analyzing test failures and artifacts
- Troubleshooting Docker-based test infrastructure
- Validating end-to-end functionality changes
2. **Quality Control**: Use `quality-control-enforcer` agent for:
- Code review and validation
- Ensuring best practices compliance
- Preventing common pitfalls and anti-patterns
- Validating architectural decisions
**Agent Usage Pattern**: Always add the appropriate agent as the FINAL step in any task list to ensure quality validation occurs after all work is complete.
### Integration Test Debugging Reference
Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including:
- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging
**For integration test issues, ALWAYS use the headscale-integration-tester agent - do not attempt manual debugging.**
## EventuallyWithT Pattern for Integration Tests
### Overview
EventuallyWithT is a testing pattern for handling eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous: clients advertise routes, the server processes them, and updates propagate through the network. EventuallyWithT lets tests wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Operations That Must NOT Be Wrapped
The following are **blocking operations** that modify state and should NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--exit-node`)
- Any command that changes configuration or state
- Use `client.MustStatus()` instead of `client.Status()` when you just need the ID for a blocking operation
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Blocking operation NOT wrapped
for _, client := range allClients {
status := client.MustStatus()
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)
}
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err = headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")
// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
for _, peerKey := range status.Peers() {
peerStatus := status.Peer[peerKey]
requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Blocking operation wrapped in EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
status, err := client.Status()
assert.NoError(c, err)
// This is a blocking operation - should NOT be in EventuallyWithT!
command := []string{
"tailscale",
"set",
"--advertise-routes=" + expectedRoutes[string(status.Self.ID)],
}
_, _, err = client.Execute(command)
assert.NoError(c, err)
}, 5*time.Second, 200*time.Millisecond, "wrong pattern")
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
// First external call
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes, 2)
// Second unrelated external call - WRONG!
status, err := client.Status()
assert.NoError(c, err)
assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")
```
## Important Notes
- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting)
- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately
- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting
- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing)
- **Integration Tests**: Require Docker and can consume significant disk space - use the headscale-integration-tester agent
- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management
- **Quality Assurance**: Always use appropriate specialized agents for testing and validation tasks
- **NEVER create gists in the user's name**: Do not use the `create_gist` tool - present information directly in the response instead
## Debugging Integration Tests
Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including:
- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging
When tests fail, check these artifacts first before assuming code issues.

View File

@@ -12,7 +12,7 @@ WORKDIR /go/src/tailscale
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
FROM alpine:3.22
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/

View File

@@ -2,12 +2,13 @@
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.25-trixie
FROM docker.io/golang:1.24-bookworm
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
RUN apt-get --update install --no-install-recommends --yes less jq sqlite3 dnsutils \
RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq sqlite3 dnsutils \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale

View File

@@ -4,7 +4,7 @@
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.
FROM golang:1.25-alpine AS build-env
FROM golang:1.24-alpine AS build-env
WORKDIR /go/src
@@ -36,7 +36,7 @@ RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.22
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/

View File

@@ -1,29 +0,0 @@
package cli
import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(healthCmd)
}
var healthCmd = &cobra.Command{
Use: "health",
Short: "Check the health of the Headscale server",
Long: "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
response, err := client.Health(ctx, &v1.HealthRequest{})
if err != nil {
ErrorOutput(err, "Error checking health", output)
}
SuccessOutput(response, "", output)
},
}

View File

@@ -9,13 +9,13 @@ import (
"strings"
"time"
survey "github.com/AlecAivazis/survey/v2"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/samber/lo"
"github.com/spf13/cobra"
"google.golang.org/grpc/status"
"google.golang.org/protobuf/types/known/timestamppb"
"tailscale.com/types/key"
)
@@ -52,7 +52,6 @@ func init() {
nodeCmd.AddCommand(registerNodeCmd)
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
err = expireNodeCmd.MarkFlagRequired("identifier")
if err != nil {
log.Fatal(err.Error())
@@ -223,6 +222,8 @@ var listNodeRoutesCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
@@ -289,31 +290,9 @@ var expireNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
}
expiry, err := cmd.Flags().GetString("expiry")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting expiry to string: %s", err),
output,
)
return
}
expiryTime := time.Now()
if expiry != "" {
expiryTime, err = time.Parse(time.RFC3339, expiry)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error converting expiry to string: %s", err),
output,
)
return
}
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
@@ -321,7 +300,6 @@ var expireNodeCmd = &cobra.Command{
request := &v1.ExpireNodeRequest{
NodeId: identifier,
Expiry: timestamppb.New(expiryTime),
}
response, err := client.ExpireNode(ctx, request)
@@ -334,6 +312,8 @@ var expireNodeCmd = &cobra.Command{
),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node expired", output)
@@ -353,6 +333,8 @@ var renameNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
@@ -378,6 +360,8 @@ var renameNodeCmd = &cobra.Command{
),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node renamed", output)
@@ -398,6 +382,8 @@ var deleteNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
@@ -415,6 +401,8 @@ var deleteNodeCmd = &cobra.Command{
"Error getting node node: "+status.Convert(err).Message(),
output,
)
return
}
deleteRequest := &v1.DeleteNodeRequest{
@@ -424,10 +412,16 @@ var deleteNodeCmd = &cobra.Command{
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo(fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
))
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the node %s?",
getResponse.GetNode().GetName(),
),
}
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {
@@ -443,6 +437,8 @@ var deleteNodeCmd = &cobra.Command{
"Error deleting node: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(
map[string]string{"Result": "Node deleted"},
@@ -469,6 +465,8 @@ var moveNodeCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
user, err := cmd.Flags().GetUint64("user")
@@ -478,6 +476,8 @@ var moveNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting user: %s", err),
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
@@ -495,6 +495,8 @@ var moveNodeCmd = &cobra.Command{
"Error getting node: "+status.Convert(err).Message(),
output,
)
return
}
moveRequest := &v1.MoveNodeRequest{
@@ -509,6 +511,8 @@ var moveNodeCmd = &cobra.Command{
"Error moving node: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output)
@@ -531,27 +535,31 @@ If you remove IPv4 or IPv6 prefixes from the config,
it can be run to remove the IPs that should no longer
be assigned to nodes.`,
Run: func(cmd *cobra.Command, args []string) {
var err error
output, _ := cmd.Flags().GetString("output")
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("Are you sure that you want to assign/remove IPs to/from nodes?")
prompt := &survey.Confirm{
Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
}
if confirm || force {
err = survey.AskOne(prompt, &confirm)
if err != nil {
return
}
if confirm {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm || force})
changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
if err != nil {
ErrorOutput(
err,
"Error backfilling IPs: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(changes, "Node IPs backfilled successfully", output)
@@ -750,6 +758,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
tagsToSet, err := cmd.Flags().GetStringSlice("tags")
if err != nil {
@@ -758,6 +768,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
output,
)
return
}
// Sending tags to node
@@ -772,6 +784,8 @@ var tagCmd = &cobra.Command{
fmt.Sprintf("Error while sending tags to headscale: %s", err),
output,
)
return
}
if resp != nil {
@@ -801,6 +815,8 @@ var approveRoutesCmd = &cobra.Command{
fmt.Sprintf("Error converting ID to integer: %s", err),
output,
)
return
}
routes, err := cmd.Flags().GetStringSlice("routes")
if err != nil {
@@ -809,6 +825,8 @@ var approveRoutesCmd = &cobra.Command{
fmt.Sprintf("Error retrieving list of routes to add to node, %v", err),
output,
)
return
}
// Sending routes to node
@@ -823,6 +841,8 @@ var approveRoutesCmd = &cobra.Command{
fmt.Sprintf("Error while sending routes to headscale: %s", err),
output,
)
return
}
if resp != nil {

View File

@@ -6,30 +6,21 @@ import (
"os"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"tailscale.com/types/views"
)
const (
bypassFlag = "bypass-grpc-and-access-database-directly"
)
func init() {
rootCmd.AddCommand(policyCmd)
getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(getPolicy)
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
if err := setPolicy.MarkFlagRequired("file"); err != nil {
log.Fatal().Err(err).Msg("")
}
setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(setPolicy)
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
@@ -50,58 +41,21 @@ var getPolicy = &cobra.Command{
Aliases: []string{"show", "view", "fetch"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
var policy string
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
request := &v1.GetPolicyRequest{}
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg.Database,
cfg.BaseDomain,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
pol, err := d.GetPolicy()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading Policy from database: %s", err), output)
}
policy = pol.Data
} else {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.GetPolicyRequest{}
response, err := client.GetPolicy(ctx, request)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
}
policy = response.GetPolicy()
response, err := client.GetPolicy(ctx, request)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
}
// TODO(pallabpain): Maybe print this better?
// This does not pass output as we dont support yaml, json or json-line
// output for this command. It is HuJSON already.
SuccessOutput("", policy, "")
SuccessOutput("", response.GetPolicy(), "")
},
}
@@ -127,57 +81,14 @@ var setPolicy = &cobra.Command{
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
}
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg.Database,
cfg.BaseDomain,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
users, err := d.ListUsers()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to load users for policy validation: %s", err), output)
}
_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
return
}
_, err = d.SetPolicy(string(policyBytes))
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
} else {
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
if _, err := client.SetPolicy(ctx, request); err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
if _, err := client.SetPolicy(ctx, request); err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
SuccessOutput(nil, "Policy updated.", "")

View File

@@ -34,7 +34,6 @@ func init() {
preauthkeysCmd.AddCommand(listPreAuthKeys)
preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
createPreAuthKeyCmd.PersistentFlags().
Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().
@@ -233,43 +232,3 @@ var expirePreAuthKeyCmd = &cobra.Command{
SuccessOutput(response, "Key expired", output)
},
}
var deletePreAuthKeyCmd = &cobra.Command{
Use: "delete KEY",
Short: "Delete a preauthkey",
Aliases: []string{"del", "rm", "d"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
user, err := cmd.Flags().GetUint64("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeletePreAuthKeyRequest{
User: user,
Key: args[0],
}
response, err := client.DeletePreAuthKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Pre Auth Key: %s\n", err),
output,
)
}
SuccessOutput(response, "Key deleted", output)
},
}

View File

@@ -5,7 +5,6 @@ import (
"os"
"runtime"
"slices"
"strings"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
@@ -72,64 +71,25 @@ func initConfig() {
disableUpdateCheck := viper.GetBool("disable_check_updates")
if !disableUpdateCheck && !machineOutput {
versionInfo := types.GetVersionInfo()
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
!versionInfo.Dirty {
types.Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, versionInfo.Version)
res, err := latest.Check(githubTag, types.Version)
if err == nil && res.Outdated {
//nolint
log.Warn().Msgf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
versionInfo.Version,
types.Version,
)
}
}
}
}
var prereleases = []string{"alpha", "beta", "rc", "dev"}
func isPreReleaseVersion(version string) bool {
for _, unstable := range prereleases {
if strings.Contains(version, unstable) {
return true
}
}
return false
}
// filterPreReleasesIfStable returns a function that filters out
// pre-release tags if the current version is stable.
// If the current version is a pre-release, it does not filter anything.
// versionFunc is a function that returns the current version string, it is
// a func for testability.
func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
return func(tag string) bool {
version := versionFunc()
// If we are on a pre-release version, then we do not filter anything
// as we want to recommend the user the latest pre-release.
if isPreReleaseVersion(version) {
return false
}
// If we are on a stable release, filter out pre-releases.
for _, ignore := range prereleases {
if strings.Contains(tag, ignore) {
return true
}
}
return false
}
}
var rootCmd = &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",

View File

@@ -1,293 +0,0 @@
package cli
import (
"testing"
)
func TestFilterPreReleasesIfStable(t *testing.T) {
tests := []struct {
name string
currentVersion string
tag string
expectedFilter bool
description string
}{
{
name: "stable version filters alpha tag",
currentVersion: "0.23.0",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "When on stable release, alpha tags should be filtered",
},
{
name: "stable version filters beta tag",
currentVersion: "0.23.0",
tag: "v0.24.0-beta.2",
expectedFilter: true,
description: "When on stable release, beta tags should be filtered",
},
{
name: "stable version filters rc tag",
currentVersion: "0.23.0",
tag: "v0.24.0-rc.1",
expectedFilter: true,
description: "When on stable release, rc tags should be filtered",
},
{
name: "stable version allows stable tag",
currentVersion: "0.23.0",
tag: "v0.24.0",
expectedFilter: false,
description: "When on stable release, stable tags should not be filtered",
},
{
name: "alpha version allows alpha tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-alpha.2",
expectedFilter: false,
description: "When on alpha release, alpha tags should not be filtered",
},
{
name: "alpha version allows beta tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on alpha release, beta tags should not be filtered",
},
{
name: "alpha version allows rc tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on alpha release, rc tags should not be filtered",
},
{
name: "alpha version allows stable tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on alpha release, stable tags should not be filtered",
},
{
name: "beta version allows alpha tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on beta release, alpha tags should not be filtered",
},
{
name: "beta version allows beta tag",
currentVersion: "0.23.0-beta.2",
tag: "v0.24.0-beta.3",
expectedFilter: false,
description: "When on beta release, beta tags should not be filtered",
},
{
name: "beta version allows rc tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on beta release, rc tags should not be filtered",
},
{
name: "beta version allows stable tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on beta release, stable tags should not be filtered",
},
{
name: "rc version allows alpha tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on rc release, alpha tags should not be filtered",
},
{
name: "rc version allows beta tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on rc release, beta tags should not be filtered",
},
{
name: "rc version allows rc tag",
currentVersion: "0.23.0-rc.2",
tag: "v0.24.0-rc.3",
expectedFilter: false,
description: "When on rc release, rc tags should not be filtered",
},
{
name: "rc version allows stable tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on rc release, stable tags should not be filtered",
},
{
name: "stable version with patch filters alpha",
currentVersion: "0.23.1",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "Stable version with patch number should filter alpha tags",
},
{
name: "stable version with patch allows stable",
currentVersion: "0.23.1",
tag: "v0.24.0",
expectedFilter: false,
description: "Stable version with patch number should allow stable tags",
},
{
name: "tag with alpha substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-alpha.1",
expectedFilter: true,
description: "Tags with alpha in version string should be filtered on stable",
},
{
name: "tag with beta substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-beta.1",
expectedFilter: true,
description: "Tags with beta in version string should be filtered on stable",
},
{
name: "tag with rc substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-rc.1",
expectedFilter: true,
description: "Tags with rc in version string should be filtered on stable",
},
{
name: "empty tag on stable version",
currentVersion: "0.23.0",
tag: "",
expectedFilter: false,
description: "Empty tags should not be filtered",
},
{
name: "dev version allows all tags",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "Dev versions should not filter any tags (pre-release allows all)",
},
{
name: "stable version filters dev tag",
currentVersion: "0.23.0",
tag: "v0.24.0-dev",
expectedFilter: true,
description: "When on stable release, dev tags should be filtered",
},
{
name: "dev version allows dev tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-dev.1",
expectedFilter: false,
description: "When on dev release, dev tags should not be filtered",
},
{
name: "dev version allows stable tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0",
expectedFilter: false,
description: "When on dev release, stable tags should not be filtered",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
if result != tt.expectedFilter {
t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
tt.name,
result,
tt.expectedFilter,
tt.description,
tt.currentVersion,
tt.tag,
)
}
})
}
}
func TestIsPreReleaseVersion(t *testing.T) {
tests := []struct {
name string
version string
expected bool
description string
}{
{
name: "stable version",
version: "0.23.0",
expected: false,
description: "Stable version should not be pre-release",
},
{
name: "alpha version",
version: "0.23.0-alpha.1",
expected: true,
description: "Alpha version should be pre-release",
},
{
name: "beta version",
version: "0.23.0-beta.1",
expected: true,
description: "Beta version should be pre-release",
},
{
name: "rc version",
version: "0.23.0-rc.1",
expected: true,
description: "RC version should be pre-release",
},
{
name: "version with alpha substring",
version: "0.23.0-alphabetical",
expected: true,
description: "Version containing 'alpha' should be pre-release",
},
{
name: "version with beta substring",
version: "0.23.0-betamax",
expected: true,
description: "Version containing 'beta' should be pre-release",
},
{
name: "dev version",
version: "0.23.0-dev",
expected: true,
description: "Dev version should be pre-release",
},
{
name: "empty version",
version: "",
expected: false,
description: "Empty version should not be pre-release",
},
{
name: "version with patch number",
version: "0.23.1",
expected: false,
description: "Stable version with patch should not be pre-release",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isPreReleaseVersion(tt.version)
if result != tt.expected {
t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
tt.name,
result,
tt.expected,
tt.description,
tt.version,
)
}
})
}
}

View File

@@ -6,8 +6,8 @@ import (
"net/url"
"strconv"
survey "github.com/AlecAivazis/survey/v2"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
@@ -161,10 +161,16 @@ var destroyUserCmd = &cobra.Command{
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo(fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
))
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
),
}
err := survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {

View File

@@ -169,14 +169,7 @@ func ErrorOutput(errResult error, override string, outputFormat string) {
Error string `json:"error"`
}
var errorMessage string
if errResult != nil {
errorMessage = errResult.Error()
} else {
errorMessage = override
}
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errorMessage}, override, outputFormat))
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errResult.Error()}, override, outputFormat))
os.Exit(1)
}

View File

@@ -7,7 +7,6 @@ import (
func init() {
rootCmd.AddCommand(versionCmd)
versionCmd.Flags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
}
var versionCmd = &cobra.Command{
@@ -16,9 +15,9 @@ var versionCmd = &cobra.Command{
Long: "The version of headscale.",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
info := types.GetVersionInfo()
SuccessOutput(info, info.String(), output)
SuccessOutput(map[string]string{
"version": types.Version,
"commit": types.GitCommitHash,
}, types.Version, output)
},
}

View File

@@ -104,7 +104,7 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
if statsCollector != nil {
defer statsCollector.Close()
// Start stats collection immediately - no need for complex retry logic
// The new implementation monitors Docker events and will catch containers as they start
if err := statsCollector.StartCollection(ctx, runID, config.Verbose); err != nil {
@@ -138,10 +138,9 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
log.Printf("=================================")
for _, violation := range violations {
log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
}
return fmt.Errorf("test failed: %d container(s) exceeded memory limits", len(violations))
}
}
@@ -202,18 +201,6 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
}
// Pass through all HEADSCALE_INTEGRATION_* environment variables
for _, e := range os.Environ() {
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
// Skip the ones we already set explicitly
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
continue
}
env = append(env, e)
}
}
containerConfig := &container.Config{
Image: "golang:" + config.GoVersion,
Cmd: goTestCmd,

View File

@@ -24,9 +24,9 @@ type RunConfig struct {
KeepOnFailure bool `flag:"keep-on-failure,default=false,Keep containers on test failure"`
LogsDir string `flag:"logs-dir,default=control_logs,Control logs directory"`
Verbose bool `flag:"verbose,default=false,Verbose output"`
Stats bool `flag:"stats,default=false,Collect and display container resource usage statistics"`
HSMemoryLimit float64 `flag:"hs-memory-limit,default=0,Fail test if any Headscale container exceeds this memory limit in MB (0 = disabled)"`
TSMemoryLimit float64 `flag:"ts-memory-limit,default=0,Fail test if any Tailscale container exceeds this memory limit in MB (0 = disabled)"`
Stats bool `flag:"stats,default=false,Collect and display container resource usage statistics"`
HSMemoryLimit float64 `flag:"hs-memory-limit,default=0,Fail test if any Headscale container exceeds this memory limit in MB (0 = disabled)"`
TSMemoryLimit float64 `flag:"ts-memory-limit,default=0,Fail test if any Tailscale container exceeds this memory limit in MB (0 = disabled)"`
}
// runIntegrationTest executes the integration test workflow.
@@ -74,7 +74,7 @@ func detectGoVersion() string {
content, err := os.ReadFile(goModPath)
if err != nil {
return "1.25"
return "1.24"
}
lines := splitLines(string(content))
@@ -89,7 +89,7 @@ func detectGoVersion() string {
}
}
return "1.25"
return "1.24"
}
// splitLines splits a string into lines without using strings.Split.

View File

@@ -3,7 +3,6 @@ package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"sort"
@@ -18,7 +17,7 @@ import (
"github.com/docker/docker/client"
)
// ContainerStats represents statistics for a single container.
// ContainerStats represents statistics for a single container
type ContainerStats struct {
ContainerID string
ContainerName string
@@ -26,14 +25,14 @@ type ContainerStats struct {
mutex sync.RWMutex
}
// StatsSample represents a single stats measurement.
// StatsSample represents a single stats measurement
type StatsSample struct {
Timestamp time.Time
CPUUsage float64 // CPU usage percentage
MemoryMB float64 // Memory usage in MB
}
// StatsCollector manages collection of container statistics.
// StatsCollector manages collection of container statistics
type StatsCollector struct {
client *client.Client
containers map[string]*ContainerStats
@@ -43,7 +42,7 @@ type StatsCollector struct {
collectionStarted bool
}
// NewStatsCollector creates a new stats collector instance.
// NewStatsCollector creates a new stats collector instance
func NewStatsCollector() (*StatsCollector, error) {
cli, err := createDockerClient()
if err != nil {
@@ -57,13 +56,13 @@ func NewStatsCollector() (*StatsCollector, error) {
}, nil
}
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID.
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
sc.mutex.Lock()
defer sc.mutex.Unlock()
if sc.collectionStarted {
return errors.New("stats collection already started")
return fmt.Errorf("stats collection already started")
}
sc.collectionStarted = true
@@ -83,7 +82,7 @@ func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, ver
return nil
}
// StopCollection stops all stats collection.
// StopCollection stops all stats collection
func (sc *StatsCollector) StopCollection() {
// Check if already stopped without holding lock
sc.mutex.RLock()
@@ -95,17 +94,17 @@ func (sc *StatsCollector) StopCollection() {
// Signal stop to all goroutines
close(sc.stopChan)
// Wait for all goroutines to finish
sc.wg.Wait()
// Mark as stopped
sc.mutex.Lock()
sc.collectionStarted = false
sc.mutex.Unlock()
}
// monitorExistingContainers checks for existing containers that match our criteria.
// monitorExistingContainers checks for existing containers that match our criteria
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
@@ -124,14 +123,14 @@ func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID s
}
}
// monitorDockerEvents listens for container start events and begins monitoring relevant containers.
// monitorDockerEvents listens for container start events and begins monitoring relevant containers
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
filter := filters.NewArgs()
filter.Add("type", "container")
filter.Add("event", "start")
eventOptions := events.ListOptions{
Filters: filter,
}
@@ -172,7 +171,7 @@ func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string,
}
}
// shouldMonitorContainer determines if a container should be monitored.
// shouldMonitorContainer determines if a container should be monitored
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool {
// Check if it has the correct run ID label
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
@@ -190,7 +189,7 @@ func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID str
return false
}
// startStatsForContainer begins stats collection for a specific container.
// startStatsForContainer begins stats collection for a specific container
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
containerName = strings.TrimPrefix(containerName, "/")
@@ -216,7 +215,7 @@ func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerI
go sc.collectStatsForContainer(ctx, containerID, verbose)
}
// collectStatsForContainer collects stats for a specific container using Docker API streaming.
// collectStatsForContainer collects stats for a specific container using Docker API streaming
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
defer sc.wg.Done()
@@ -263,7 +262,7 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
// Get container stats reference without holding the main mutex
var containerStats *ContainerStats
var exists bool
sc.mutex.RLock()
containerStats, exists = sc.containers[containerID]
sc.mutex.RUnlock()
@@ -285,12 +284,12 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
}
}
// calculateCPUPercent calculates CPU usage percentage from Docker stats.
// calculateCPUPercent calculates CPU usage percentage from Docker stats
func calculateCPUPercent(prevStats, stats *container.Stats) float64 {
// CPU calculation based on Docker's implementation
cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)
if systemDelta > 0 && cpuDelta >= 0 {
// Calculate CPU percentage: (container CPU delta / system CPU delta) * number of CPUs * 100
numCPUs := float64(len(stats.CPUStats.CPUUsage.PercpuUsage))
@@ -298,14 +297,12 @@ func calculateCPUPercent(prevStats, stats *container.Stats) float64 {
// Fallback: if PercpuUsage is not available, assume 1 CPU
numCPUs = 1.0
}
return (cpuDelta / systemDelta) * numCPUs * 100.0
}
return 0.0
}
// ContainerStatsSummary represents summary statistics for a container.
// ContainerStatsSummary represents summary statistics for a container
type ContainerStatsSummary struct {
ContainerName string
SampleCount int
@@ -313,21 +310,21 @@ type ContainerStatsSummary struct {
Memory StatsSummary
}
// MemoryViolation represents a container that exceeded the memory limit.
// MemoryViolation represents a container that exceeded the memory limit
type MemoryViolation struct {
ContainerName string
MaxMemoryMB float64
LimitMB float64
}
// StatsSummary represents min, max, and average for a metric.
// StatsSummary represents min, max, and average for a metric
type StatsSummary struct {
Min float64
Max float64
Average float64
}
// GetSummary returns a summary of collected statistics.
// GetSummary returns a summary of collected statistics
func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
// Take snapshot of container references without holding main lock long
sc.mutex.RLock()
@@ -358,7 +355,7 @@ func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
// Calculate CPU stats
cpuValues := make([]float64, len(stats))
memoryValues := make([]float64, len(stats))
for i, sample := range stats {
cpuValues[i] = sample.CPUUsage
memoryValues[i] = sample.MemoryMB
@@ -378,7 +375,7 @@ func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
return summaries
}
// calculateStatsSummary calculates min, max, and average for a slice of values.
// calculateStatsSummary calculates min, max, and average for a slice of values
func calculateStatsSummary(values []float64) StatsSummary {
if len(values) == 0 {
return StatsSummary{}
@@ -405,10 +402,10 @@ func calculateStatsSummary(values []float64) StatsSummary {
}
}
// PrintSummary prints the statistics summary to the console.
// PrintSummary prints the statistics summary to the console
func (sc *StatsCollector) PrintSummary() {
summaries := sc.GetSummary()
if len(summaries) == 0 {
log.Printf("No container statistics collected")
return
@@ -416,18 +413,18 @@ func (sc *StatsCollector) PrintSummary() {
log.Printf("Container Resource Usage Summary:")
log.Printf("================================")
for _, summary := range summaries {
log.Printf("Container: %s (%d samples)", summary.ContainerName, summary.SampleCount)
log.Printf(" CPU Usage: Min: %6.2f%% Max: %6.2f%% Avg: %6.2f%%",
log.Printf(" CPU Usage: Min: %6.2f%% Max: %6.2f%% Avg: %6.2f%%",
summary.CPU.Min, summary.CPU.Max, summary.CPU.Average)
log.Printf(" Memory Usage: Min: %6.1f MB Max: %6.1f MB Avg: %6.1f MB",
log.Printf(" Memory Usage: Min: %6.1f MB Max: %6.1f MB Avg: %6.1f MB",
summary.Memory.Min, summary.Memory.Max, summary.Memory.Average)
log.Printf("")
}
}
// CheckMemoryLimits checks if any containers exceeded their memory limits.
// CheckMemoryLimits checks if any containers exceeded their memory limits
func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
if hsLimitMB <= 0 && tsLimitMB <= 0 {
return nil
@@ -458,14 +455,14 @@ func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []Memo
return violations
}
// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations if any.
// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations if any
func (sc *StatsCollector) PrintSummaryAndCheckLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
sc.PrintSummary()
return sc.CheckMemoryLimits(hsLimitMB, tsLimitMB)
}
// Close closes the stats collector and cleans up resources.
// Close closes the stats collector and cleans up resources
func (sc *StatsCollector) Close() error {
sc.StopCollection()
return sc.client.Close()
}
}

View File

@@ -68,7 +68,7 @@ func extractDirectoryFromTar(tarReader io.Reader, targetDir string) error {
continue // Skip potentially dangerous paths
}
targetPath := filepath.Join(targetDir, cleanName)
targetPath := filepath.Join(targetDir, filepath.Base(cleanName))
switch header.Typeflag {
case tar.TypeDir:
@@ -77,11 +77,6 @@ func extractDirectoryFromTar(tarReader io.Reader, targetDir string) error {
return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
}
case tar.TypeReg:
// Ensure parent directories exist
if err := os.MkdirAll(filepath.Dir(targetPath), 0o755); err != nil {
return fmt.Errorf("failed to create parent directories for %s: %w", targetPath, err)
}
// Create file
outFile, err := os.Create(targetPath)
if err != nil {

View File

@@ -1,61 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/creachadair/command"
"github.com/creachadair/flax"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/integration/integrationutil"
)
type MapConfig struct {
Directory string `flag:"directory,Directory to read map responses from"`
}
var mapConfig MapConfig
func main() {
root := command.C{
Name: "mapresponses",
Help: "MapResponses is a tool to map and compare map responses from a directory",
Commands: []*command.C{
{
Name: "online",
Help: "",
Usage: "run [test-pattern] [flags]",
SetFlags: command.Flags(flax.MustBind, &mapConfig),
Run: runOnline,
},
command.HelpCommand(nil),
},
}
env := root.NewEnv(nil).MergeFlags(true)
command.RunOrFail(env, os.Args[1:])
}
// runIntegrationTest executes the integration test workflow.
func runOnline(env *command.Env) error {
if mapConfig.Directory == "" {
return fmt.Errorf("directory is required")
}
resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
if err != nil {
return fmt.Errorf("reading map responses from directory: %w", err)
}
expected := integrationutil.BuildExpectedOnlineMap(resps)
out, err := json.MarshalIndent(expected, "", " ")
if err != nil {
return fmt.Errorf("marshaling expected online map: %w", err)
}
os.Stderr.Write(out)
os.Stderr.Write([]byte("\n"))
return nil
}

View File

@@ -60,9 +60,7 @@ prefixes:
v6: fd7a:115c:a1e0::/48
# Strategy used for allocation of IPs to nodes, available options:
# - sequential (default): assigns the next free IP from the previous given
# IP. A best-effort approach is used and Headscale might leave holes in the
# IP range or fill up existing holes in the IP range.
# - sequential (default): assigns the next free IP from the previous given IP.
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
allocation: sequential
@@ -107,7 +105,7 @@ derp:
# For better connection stability (especially when using an Exit-Node and DNS is not working),
# it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
ipv4: 198.51.100.1
ipv4: 1.2.3.4
ipv6: 2001:db8::1
# List of externally available DERP maps encoded in JSON
@@ -130,7 +128,7 @@ derp:
auto_update_enabled: true
# How often should we check for DERP updates?
update_frequency: 3h
update_frequency: 24h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
@@ -277,9 +275,9 @@ dns:
# `hostname.base_domain` (e.g., _myhost.example.com_).
base_domain: example.com
# Whether to use the local DNS settings of a node or override the local DNS
# settings (default) and force the use of Headscale's DNS configuration.
override_local_dns: true
# Whether to use the local DNS settings of a node (default) or override the
# local DNS settings and force the use of Headscale's DNS configuration.
override_local_dns: false
# List of DNS servers to expose to clients.
nameservers:
@@ -295,7 +293,8 @@ dns:
# Split DNS (see https://tailscale.com/kb/1054/dns/),
# a map of domains and which DNS server to use for each.
split: {}
split:
{}
# foo.bar.com:
# - 1.1.1.1
# darp.headscale.net:
@@ -393,13 +392,11 @@ unix_socket_permission: "0770"
# method: S256
# Logtail configuration
# Logtail is Tailscales logging and auditing infrastructure, it allows the
# control panel to instruct tailscale nodes to log their activity to a remote
# server. To disable logging on the client side, please refer to:
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
# Logtail is Tailscales logging and auditing infrastructure, it allows the control panel
# to instruct tailscale nodes to log their activity to a remote server.
logtail:
# Enable logtail for tailscale nodes of this Headscale instance.
# As there is currently no support for overriding the log server in Headscale, this is
# Enable logtail for this headscales clients.
# As there is currently no support for overriding the log server in headscale, this is
# disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
enabled: false

View File

@@ -1,6 +1,5 @@
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
1: null # Disable DERP region with ID 1
900:
regionid: 900
regioncode: custom
@@ -8,9 +7,9 @@ regions:
nodes:
- name: 900a
regionid: 900
hostname: myderp.example.com
ipv4: 198.51.100.1
ipv6: 2001:db8::1
hostname: myderp.mydomain.no
ipv4: 123.123.123.123
ipv6: "2604:a880:400:d1::828:b001"
stunport: 0
stunonly: false
derpport: 0

View File

@@ -44,15 +44,6 @@ For convenience, we also [build container images with headscale](../setup/instal
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
we have a "docker-issues" channel where you can ask for Docker-specific help to the community.
## What is the recommended update path? Can I skip multiple versions while updating?
Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
installation. Its best to update from one stable version to the next (e.g. 0.24.0 &rarr; 0.25.1 &rarr; 0.26.1) in case
you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version specific
upgrade instructions and breaking changes.
## Scaling / How many clients does Headscale support?
It depends. As often stated, Headscale is not enterprise software and our focus
@@ -143,35 +134,3 @@ in their output of `tailscale status`. Traffic is still filtered according to th
ping` which is always allowed in either direction.
See also <https://tailscale.com/kb/1087/device-visibility>.
## My policy is stored in the database and Headscale refuses to start due to an invalid policy. How can I recover?
Headscale checks if the policy is valid during startup and refuses to start if it detects an error. The error message
indicates which part of the policy is invalid. Follow these steps to fix your policy:
- Dump the policy to a file: `headscale policy get --bypass-grpc-and-access-database-directly > policy.json`
- Edit and fixup `policy.json`. Use the command `headscale policy check --file policy.json` to validate the policy.
- Load the modified policy: `headscale policy set --bypass-grpc-and-access-database-directly --file policy.json`
- Start Headscale as usual.
!!! warning "Full server configuration required"
The above commands to get/set the policy require a complete server configuration file including database settings. A
minimal config to [control Headscale via remote CLI](../ref/remote-cli.md) is not sufficient. You may use `headscale
-c /path/to/config.yaml` to specify the path to an alternative configuration file.
## How can I avoid sending logs to Tailscale Inc?
A Tailscale client [collects logs about its operation and connection attempts with other
clients](https://tailscale.com/kb/1011/log-mesh-traffic#client-logs) and sends them to a central log service operated by
Tailscale Inc.
Headscale, by default, instructs clients to disable log submission to the central log service. This configuration is
applied by a client once it successfully connected with Headscale. See the configuration option `logtail.enabled` in the
[configuration file](../ref/configuration.md) for details.
Alternatively, logging can also be disabled on the client side. This is independent of Headscale and opting out of
client logging disables log submission early during client startup. The configuration is operating system specific and
is usually achieved by setting the environment variable `TS_NO_LOGS_NO_SUPPORT=true` or by passing the flag
`--no-logs-no-support` to `tailscaled`. See
<https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging> for details.
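Purely as an illustration of the two client-side mechanisms mentioned above, here is a minimal, hypothetical Go launcher that starts `tailscaled` with log submission disabled. It is not part of Tailscale's or Headscale's tooling; the binary name and the idea of wrapping `tailscaled` like this are assumptions for the sketch only — in practice you would set the environment variable or flag in your service manager.

```go
// Minimal sketch: start tailscaled with client-side log submission disabled.
// Both opt-out mechanisms from the FAQ above are shown; one of them suffices.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Flag-based opt-out, as described above.
	cmd := exec.Command("tailscaled", "--no-logs-no-support")
	// Environment-based opt-out; redundant here, shown for completeness.
	cmd.Env = append(os.Environ(), "TS_NO_LOGS_NO_SUPPORT=true")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		log.Fatalf("tailscaled exited: %v", err)
	}
}
```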

View File

@@ -19,11 +19,11 @@ provides an overview of Headscale's features and compatibility with the Tailscale
- [x] [Exit nodes](../ref/routes.md#exit-node)
- [x] Dual stack (IPv4 and IPv6)
- [x] Ephemeral nodes
- [x] Embedded [DERP server](../ref/derp.md)
- [x] Embedded [DERP server](https://tailscale.com/kb/1232/derp-servers)
- [x] Access control lists ([GitHub label "policy"](https://github.com/juanfont/headscale/labels/policy%20%F0%9F%93%9D))
- [x] ACL management via API
- [x] Some [Autogroups](https://tailscale.com/kb/1396/targets#autogroups), currently: `autogroup:internet`,
`autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`, `autogroup:self`
`autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`
- [x] [Auto approvers](https://tailscale.com/kb/1337/acl-syntax#auto-approvers) for [subnet
routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)

View File

@@ -9,38 +9,9 @@ When using ACLs, user boundaries are no longer applied. All machines,
regardless of their user, are able to communicate with other hosts as
long as the ACLs permit this exchange.
## ACL Setup
## ACLs use case example
To enable and configure ACLs in Headscale, you need to specify the path to your ACL policy file in the `policy.path` key in `config.yaml`.
Your ACL policy file must be formatted using [huJSON](https://github.com/tailscale/hujson).
Info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).
Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
## Simple Examples
- [**Allow All**](https://tailscale.com/kb/1192/acl-samples#allow-all-default-acl): If you define an ACL file but completely omit the `"acls"` field from its content, Headscale will default to an "allow all" policy. This means all devices connected to your tailnet will be able to communicate freely with each other.
```json
{}
```
- [**Deny All**](https://tailscale.com/kb/1192/acl-samples#deny-all): To prevent all communication within your tailnet, you can include an empty array for the `"acls"` field in your policy file.
```json
{
"acls": []
}
```
## Complex Example
Let's build a more complex example use case for a small business (It may be the place where
Let's build an example use case for a small business (It may be the place where
ACLs are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
@@ -67,6 +38,10 @@ servers.
![ACL implementation example](../images/headscale-acl-network.png)
## ACL setup
ACLs have to be written in [huJSON](https://github.com/tailscale/hujson).
When [registering the servers](../usage/getting-started.md#register-a-node) we
will need to add the flag `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user
that is registering the server should be allowed to do it. Since anyone can add
@@ -74,6 +49,14 @@ tags to a server they can register, the check of the tags is done on headscale
server and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
To use ACLs in headscale, you must edit your `config.yaml` file. In there you will find a `policy.path` parameter. This
will need to point to your ACL file. More info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).
Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
Here are the ACLs to implement the same permissions as above:
```json title="acl.json"
@@ -194,94 +177,13 @@ Here are the ACL's to implement the same permissions as above:
"dst": ["tag:dev-app-servers:80,443"]
},
// Allow users to access their own devices using autogroup:self (see below for more details about performance impact)
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
// We still have to allow internal users to communicate, since nothing guarantees that each user has
// their own users.
{ "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
{ "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
{ "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
{ "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
{ "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
]
}
```
## Autogroups
Headscale supports several autogroups that automatically include users, destinations, or devices with specific properties. Autogroups provide a convenient way to write ACL rules without manually listing individual users or devices.
### `autogroup:internet`
Allows access to the internet through [exit nodes](routes.md#exit-node). Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["group:users"],
"dst": ["autogroup:internet:*"]
}
```
### `autogroup:member`
Includes all users who are direct members of the tailnet. Does not include users from shared devices.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["tag:prod-app-servers:80,443"]
}
```
### `autogroup:tagged`
Includes all devices that have at least one tag.
```json
{
"action": "accept",
"src": ["autogroup:tagged"],
"dst": ["tag:monitoring:9090"]
}
```
### `autogroup:self`
**(EXPERIMENTAL)**
!!! warning "The current implementation of `autogroup:self` is inefficient"
Includes devices where the same user is authenticated on both the source and destination. Does not include tagged devices. Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
```
*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*
If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.
```json
{
  "acls": [
    // The following rules allow internal users to communicate with their
    // own nodes in case autogroup:self is causing performance issues.
    { "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
    { "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
    { "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
    { "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
    { "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
  ]
}
```
### `autogroup:nonroot`
Used in Tailscale SSH rules to allow access to any user except root. Can only be used in the `users` field of SSH rules.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"users": ["autogroup:nonroot"]
}
```

View File

@@ -1,175 +0,0 @@
# DERP
A [DERP (Designated Encrypted Relay for Packets) server](https://tailscale.com/kb/1232/derp-servers) is mainly used to
relay traffic between two nodes in case a direct connection can't be established. Headscale provides an embedded DERP
server to ensure seamless connectivity between nodes.
## Configuration
DERP related settings are configured within the `derp` section of the [configuration file](./configuration.md). The
following sections only use a few of the available settings, check the [example configuration](./configuration.md) for
all available configuration options.
### Enable embedded DERP
Headscale ships with an embedded DERP server which allows you to run your own self-hosted DERP server easily. The embedded
DERP server is disabled by default and needs to be enabled. In addition, you should configure the public IPv4 and public
IPv6 address of your Headscale server for improved connection stability:
```yaml title="config.yaml" hl_lines="3-5"
derp:
server:
enabled: true
ipv4: 198.51.100.1
ipv6: 2001:db8::1
```
Keep in mind that [additional ports are needed to run a DERP server](../setup/requirements.md#ports-in-use). Besides
relaying traffic, it also uses STUN (udp/3478) to help clients discover their public IP addresses and perform NAT
traversal. [Check DERP server connectivity](#check-derp-server-connectivity) to see if everything works.
### Remove Tailscale's DERP servers
Once enabled, Headscale's embedded DERP is added to the list of free-to-use [DERP
servers](https://tailscale.com/kb/1232/derp-servers) offered by Tailscale Inc. To only use Headscale's embedded DERP
server, disable the loading of the default DERP map:
```yaml title="config.yaml" hl_lines="6"
derp:
server:
enabled: true
ipv4: 198.51.100.1
ipv6: 2001:db8::1
urls: []
```
!!! warning "Single point of failure"
Removing Tailscale's DERP servers means that there is now just a single DERP server available for clients. This is a
single point of failure and could hamper connectivity.
[Check DERP server connectivity](#check-derp-server-connectivity) with your embedded DERP server before removing
Tailscale's DERP servers.
### Customize DERP map
The DERP map offered to clients can be customized with a [dedicated YAML-configuration
file](https://github.com/juanfont/headscale/blob/main/derp-example.yaml). This allows you to modify previously loaded DERP
maps fetched via URL or to offer your own, custom DERP servers to nodes.
=== "Remove specific DERP regions"
The free-to-use [DERP servers](https://tailscale.com/kb/1232/derp-servers) are organized into regions via a region
ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
`derp.yaml` disables the New York DERP region (which has the region ID 1):
```yaml title="derp.yaml"
regions:
1: null
```
Use the following configuration to serve the default DERP map (excluding New York) to nodes:
```yaml title="config.yaml" hl_lines="6 7"
derp:
server:
enabled: false
urls:
- https://controlplane.tailscale.com/derpmap/default
paths:
- /etc/headscale/derp.yaml
```
=== "Provide custom DERP servers"
The following sample `derp.yaml` references two custom regions (`custom-east` with ID 900 and `custom-west` with ID 901)
with one custom DERP server in each region. Each DERP server offers DERP relay via HTTPS on tcp/443, support for captive
portal checks via HTTP on tcp/80 and STUN on udp/3478. See the definitions of
[DERPMap](https://pkg.go.dev/tailscale.com/tailcfg#DERPMap),
[DERPRegion](https://pkg.go.dev/tailscale.com/tailcfg#DERPRegion) and
[DERPNode](https://pkg.go.dev/tailscale.com/tailcfg#DERPNode) for all available options.
```yaml title="derp.yaml"
regions:
900:
regionid: 900
regioncode: custom-east
regionname: My region (east)
nodes:
- name: 900a
regionid: 900
hostname: derp900a.example.com
ipv4: 198.51.100.1
ipv6: 2001:db8::1
canport80: true
901:
regionid: 901
regioncode: custom-west
regionname: My Region (west)
nodes:
- name: 901a
regionid: 901
hostname: derp901a.example.com
ipv4: 198.51.100.2
ipv6: 2001:db8::2
canport80: true
```
Use the following configuration to only serve the two DERP servers from the above `derp.yaml`:
```yaml title="config.yaml" hl_lines="5 6"
derp:
server:
enabled: false
urls: []
paths:
- /etc/headscale/derp.yaml
```
Independent of the custom DERP map, you may choose to [enable the embedded DERP server and have it automatically added
to the custom DERP map](#enable-embedded-derp).
### Verify clients
Access to DERP servers can be restricted to nodes that are members of your Tailnet. Relay access is denied for unknown
clients.
=== "Embedded DERP"
Client verification is enabled by default.
```yaml title="config.yaml" hl_lines="3"
derp:
server:
verify_clients: true
```
=== "3rd-party DERP"
Tailscale's `derper` provides two parameters to configure client verification:
- Use the `-verify-client-url` parameter of the `derper` and point it towards the `/verify` endpoint of your
Headscale server (e.g. `https://headscale.example.com/verify`). The DERP server will query your Headscale instance
as soon as a client connects with it to ask whether access should be allowed or denied. Access is allowed if
Headscale knows about the connecting client and denied otherwise.
- The parameter `-verify-client-url-fail-open` controls what should happen when the DERP server can't reach the
Headscale instance. By default, it will allow access if Headscale is unreachable.
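To make the fail-open behaviour easier to picture, here is a rough Go sketch of the decision from the relay's point of view. This is not derper's actual code, and the JSON exchanged with Headscale's `/verify` endpoint is not documented in this section, so the request body and the `allow` field name are placeholders.

```go
// Rough sketch of DERP client verification against Headscale's /verify endpoint.
// Placeholder payload and field names; consult derper and Headscale for the real schema.
package derpverify

import (
	"bytes"
	"encoding/json"
	"net/http"
	"time"
)

// admit asks Headscale whether a connecting client may use the relay. If
// Headscale is unreachable, failOpen decides the outcome, mirroring the
// -verify-client-url-fail-open flag described above.
func admit(verifyURL string, clientInfo any, failOpen bool) bool {
	payload, err := json.Marshal(clientInfo)
	if err != nil {
		return failOpen
	}

	httpClient := &http.Client{Timeout: 5 * time.Second}
	resp, err := httpClient.Post(verifyURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		return failOpen // Headscale unreachable: allow only when failing open
	}
	defer resp.Body.Close()

	var result struct {
		Allow bool `json:"allow"` // placeholder field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return failOpen
	}
	return result.Allow
}
```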
## Check DERP server connectivity
Any Tailscale client may be used to introspect the DERP map and to check for connectivity issues with DERP servers.
- Display DERP map: `tailscale debug derp-map`
- Check connectivity with the embedded DERP[^1]:`tailscale debug derp headscale`
Additional DERP-related metrics and information are available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).
[^1]:
This assumes that the default region code of the [configuration file](./configuration.md) is used.
## Limitations
- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the `/generate_204`
endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations; the main purpose is to assist in node connectivity.

View File

@@ -1,7 +1,7 @@
# DNS
Headscale supports [most DNS features](../about/features.md) from Tailscale. DNS related settings can be configured
within the `dns` section of the [configuration file](./configuration.md).
within `dns` section of the [configuration file](./configuration.md).
## Setting extra DNS records
@@ -23,7 +23,7 @@ hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:30
!!! warning "Limitations"
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.78.3/ipn/ipnlocal/local.go#L4461-L4479).
1. Configure extra DNS records using one of the available configuration options:

View File

@@ -13,7 +13,7 @@ Running headscale behind a reverse proxy is useful when running multiple applica
The reverse proxy MUST be configured to support WebSockets to communicate with Tailscale clients.
WebSockets support is also required when using the Headscale [embedded DERP server](../derp.md). In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
WebSockets support is also required when using the headscale embedded DERP server. In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
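As a concrete (and deliberately minimal) illustration of that requirement, the Go sketch below proxies all traffic, including WebSocket upgrades, to a headscale instance assumed to listen on `127.0.0.1:8080`. It is not one of the documented reverse proxies and omits TLS termination, which a real deployment needs.

```go
// Minimal sketch of a WebSocket-capable reverse proxy in front of headscale.
// Assumes headscale listens on http://127.0.0.1:8080; TLS termination is omitted.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	backend, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}

	// httputil.ReverseProxy forwards HTTP/1.1 Upgrade requests, so the
	// WebSocket connections used by Tailscale clients pass through.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	log.Fatal(http.ListenAndServe(":80", proxy))
}
```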
### Cloudflare

View File

@@ -13,4 +13,3 @@ This page collects third-party tools, client libraries, and scripts related to h
| headscalebacktosqlite | [Github](https://github.com/bigbozza/headscalebacktosqlite) | Migrate headscale from PostgreSQL back to SQLite |
| headscale-pf | [Github](https://github.com/YouSysAdmin/headscale-pf) | Populates user groups based on user groups in Jumpcloud or Authentik |
| headscale-client-go | [Github](https://github.com/hibare/headscale-client-go) | A Go client implementation for the Headscale HTTP API. |
| headscale-zabbix | [Github](https://github.com/dblanque/headscale-zabbix) | A Zabbix Monitoring Template for the Headscale Service. |

View File

@@ -184,7 +184,7 @@ You may refer to users in the Headscale policy via:
## Supported OIDC claims
Headscale uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to
populate and update its local user profile on each login. OIDC claims are read from the ID Token and from the UserInfo
populate and update its local user profile on each login. OIDC claims are read from the ID Token or from the UserInfo
endpoint.
| Headscale profile | OIDC claim | Notes / examples |
@@ -230,6 +230,19 @@ are known to work:
Authelia is fully supported by Headscale.
#### Additional configuration to authorize users based on filters
Authelia (4.39.0 or newer) no longer provides standard OIDC claims such as `email` or `groups` via the ID Token. The
OIDC `email` and `groups` claims are used to [authorize users with filters](#authorize-users-with-filters). This extra
configuration step is **only** needed if you need to authorize access based on one of the following user properties:
- domain
- email address
- group membership
Please follow the instructions from Authelia's documentation on how to [Restore Functionality Prior to Claims
Parameter](https://www.authelia.com/integration/openid-connect/openid-connect-1.0-claims/#restore-functionality-prior-to-claims-parameter).
### Authentik
- Authentik is fully supported by Headscale.
@@ -284,15 +297,13 @@ you need to [authorize access based on group membership](#authorize-users-with-f
- Create a new client scope `groups` for OpenID Connect:
- Configure a `Group Membership` mapper with name `groups` and the token claim name `groups`.
- Add the mapper to at least the UserInfo endpoint.
- Enable the mapper for the ID Token, Access Token and UserInfo endpoint.
- Configure the new client scope for your Headscale client:
- Edit the Headscale client.
- Search for the client scope `group`.
- Add it with assigned type `Default`.
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). How groups need to be specified depends on
Keycloak's `Full group path` option:
- `Full group path` is enabled: groups contain their full path, e.g. `/top/group1`
- `Full group path` is disabled: only the name of the group is used, e.g. `group1`
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). Keep in mind that groups in Keycloak start
with a leading `/`.
### Microsoft Entra ID
@@ -304,6 +315,3 @@ Entra ID is: `https://login.microsoftonline.com/<tenant-UUID>/v2.0`. The followi
- `domain_hint: example.com` to use your own domain
- `prompt: select_account` to force an account picker during login
Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID instead
of the group name.

View File

@@ -67,6 +67,12 @@ headscale apikeys expire --prefix "<PREFIX>"
export HEADSCALE_CLI_API_KEY="<API_KEY_FROM_PREVIOUS_STEP>"
```
!!! bug
Headscale currently requires at least an empty configuration file when environment variables are used to
specify connection details. See [issue 2193](https://github.com/juanfont/headscale/issues/2193) for more
information.
This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
connecting to the local instance.
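Separately from the gRPC remote CLI, the same API key can also be presented as a bearer token against the HTTP API served by headscale's web listener. The sketch below lists nodes that way; the `/api/v1/node` path and the `Authorization: Bearer` header are assumptions based on the generated gateway code elsewhere in this changeset, and the server URL is a placeholder — verify both against your headscale version.

```go
// Minimal sketch: list nodes over headscale's HTTP API using an API key.
// URL, endpoint path and auth header are assumptions; adjust for your setup.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	serverURL := "https://headscale.example.com" // placeholder server URL
	apiKey := os.Getenv("HEADSCALE_CLI_API_KEY") // reuse the key created above

	req, err := http.NewRequest(http.MethodGet, serverURL+"/api/v1/node", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```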

View File

@@ -216,39 +216,6 @@ nodes.
}
```
### Restrict access to exit nodes per user or group
A user can use _any_ of the available exit nodes with `autogroup:internet`. Alternatively, the ACL snippet below assigns
each user a specific exit node while hiding all other exit nodes. The user `alice` can only use exit node `exit1` while
user `bob` can only use exit node `exit2`.
```json title="Assign each user a dedicated exit node"
{
"hosts": {
"exit1": "100.64.0.1/32",
"exit2": "100.64.0.2/32"
},
"acls": [
{
"action": "accept",
"src": ["alice@"],
"dst": ["exit1:*"]
},
{
"action": "accept",
"src": ["bob@"],
"dst": ["exit2:*"]
}
]
}
```
!!! warning
- The above implementation is Headscale specific and will likely be removed once [support for
`via`](https://github.com/juanfont/headscale/issues/2409) is available.
- Beware that a user can also connect to any port of the exit node itself.
### Automatically approve an exit node with auto approvers
The initial setup of an exit node usually requires manual approval on the control server before it can be used by a node

View File

@@ -39,7 +39,6 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
--volume "$(pwd)/run:/var/run/headscale" \
--publish 127.0.0.1:8080:8080 \
--publish 127.0.0.1:9090:9090 \
--health-cmd "CMD headscale health" \
docker.io/headscale/headscale:<VERSION> \
serve
```
@@ -67,8 +66,6 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
- <HEADSCALE_PATH>/lib:/var/lib/headscale
- <HEADSCALE_PATH>/run:/var/run/headscale
command: serve
healthcheck:
test: ["CMD", "headscale", "health"]
```
1. Verify headscale is running:

View File

@@ -7,7 +7,7 @@ Both are available on the [GitHub releases page](https://github.com/juanfont/hea
It is recommended to use our DEB packages to install headscale on a Debian based system as those packages configure a
local user to run headscale, provide a default configuration and ship with a systemd service file. Supported
distributions are Ubuntu 22.04 or newer, Debian 12 or newer.
distributions are Ubuntu 22.04 or newer, Debian 11 or newer.
1. Download the [latest headscale package](https://github.com/juanfont/headscale/releases/latest) for your platform (`.deb` for Ubuntu and Debian).
@@ -57,14 +57,14 @@ managed by systemd.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
sudo wget --output-document=/usr/bin/headscale \
sudo wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
1. Make `headscale` executable:
```shell
sudo chmod +x /usr/bin/headscale
sudo chmod +x /usr/local/bin/headscale
```
1. Add a dedicated local user to run headscale:

View File

@@ -4,35 +4,11 @@ Headscale should just work as long as the following requirements are met:
- A server with a public IP address for headscale. A dual-stack setup with a public IPv4 and a public IPv6 address is
recommended.
- Headscale is served via HTTPS on port 443[^1] and [may use additional ports](#ports-in-use).
- Headscale is served via HTTPS on port 443[^1].
- A reasonably modern Linux or BSD based operating system.
- A dedicated local user account to run headscale.
- A little bit of command line knowledge to configure and operate headscale.
## Ports in use
The ports in use vary with the intended scenario and enabled features. Some of the listed ports may be changed via the
[configuration file](../ref/configuration.md) but we recommend sticking with the default values.
- tcp/80
- Expose publicly: yes
- HTTP, used by Let's Encrypt to verify ownership via the HTTP-01 challenge.
- Only required if the built-in Let's Encrypt client with the HTTP-01 challenge is used. See [TLS](../ref/tls.md) for
details.
- tcp/443
- Expose publicly: yes
- HTTPS, required to make Headscale available to Tailscale clients[^1]
- Required if the [embedded DERP server](../ref/derp.md) is enabled
- udp/3478
- Expose publicly: yes
- STUN, required if the [embedded DERP server](../ref/derp.md) is enabled
- tcp/50443
- Expose publicly: yes
- Only required if the gRPC interface is used to [remote-control Headscale](../ref/remote-cli.md).
- tcp/9090
- Expose publicly: no
- [Metrics and debug endpoint](../ref/debug.md#metrics-and-debug-endpoint)
## Assumptions
The headscale documentation and the provided examples are written with a few assumptions in mind:

View File

@@ -6,23 +6,9 @@ This documentation has the goal of showing how a user can use the official Andro
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
## Connect via normal, interactive login
## Configuring the headscale URL
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an alternate server`
- Enter your server URL (e.g. `https://headscale.example.com`) and follow the instructions
- The client connects automatically as soon as the node registration is complete on headscale. Until then, nothing is
visible in the server logs.
## Connect using a preauthkey
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an alternate server`
- Enter your server URL (e.g. `https://headscale.example.com`). If a login prompt opens, close it and continue
- Open the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an auth key`
- Enter your [preauthkey generated from headscale](../getting-started.md#using-a-preauthkey)
- If needed, tap `Log in` on the main screen. You should now be connected to your headscale.

flake.lock (generated)
View File

@@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1760533177,
"narHash": "sha256-OwM1sFustLHx+xmTymhucZuNhtq98fHIbfO8Swm5L8A=",
"lastModified": 1752012998,
"narHash": "sha256-Q82Ms+FQmgOBkdoSVm+FBpuFoeUAffNerR5yVV7SgT8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "35f590344ff791e6b1d6d6b8f3523467c9217caf",
"rev": "2a2130494ad647f953593c4e84ea4df839fbd68c",
"type": "github"
},
"original": {

View File

@@ -18,8 +18,8 @@
{
overlay = _: prev: let
pkgs = nixpkgs.legacyPackages.${prev.system};
buildGo = pkgs.buildGo125Module;
vendorHash = "sha256-VOi4PGZ8I+2MiwtzxpKc/4smsL5KcH/pHVkjJfAFPJ0=";
buildGo = pkgs.buildGo124Module;
vendorHash = "sha256-83L2NMyOwKCHWqcowStJ7Ze/U9CJYhzleDRLrJNhX2g=";
in {
headscale = buildGo {
pname = "headscale";
@@ -97,10 +97,9 @@
# buildGoModule = buildGo;
# };
# The package uses buildGo125Module, not the convention.
# goreleaser = prev.goreleaser.override {
# buildGoModule = buildGo;
# };
goreleaser = prev.goreleaser.override {
buildGoModule = buildGo;
};
gotestsum = prev.gotestsum.override {
buildGoModule = buildGo;
@@ -125,7 +124,7 @@
overlays = [self.overlay];
inherit system;
};
buildDeps = with pkgs; [git go_1_25 gnumake];
buildDeps = with pkgs; [git go_1_24 gnumake];
devDeps = with pkgs;
buildDeps
++ [

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/apikey.proto

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/device.proto

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/headscale.proto
@@ -11,7 +11,6 @@ import (
protoreflect "google.golang.org/protobuf/reflect/protoreflect"
protoimpl "google.golang.org/protobuf/runtime/protoimpl"
reflect "reflect"
sync "sync"
unsafe "unsafe"
)
@@ -22,94 +21,11 @@ const (
_ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
)
type HealthRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *HealthRequest) Reset() {
*x = HealthRequest{}
mi := &file_headscale_v1_headscale_proto_msgTypes[0]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *HealthRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*HealthRequest) ProtoMessage() {}
func (x *HealthRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_headscale_proto_msgTypes[0]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use HealthRequest.ProtoReflect.Descriptor instead.
func (*HealthRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_headscale_proto_rawDescGZIP(), []int{0}
}
type HealthResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
DatabaseConnectivity bool `protobuf:"varint,1,opt,name=database_connectivity,json=databaseConnectivity,proto3" json:"database_connectivity,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *HealthResponse) Reset() {
*x = HealthResponse{}
mi := &file_headscale_v1_headscale_proto_msgTypes[1]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *HealthResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*HealthResponse) ProtoMessage() {}
func (x *HealthResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_headscale_proto_msgTypes[1]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use HealthResponse.ProtoReflect.Descriptor instead.
func (*HealthResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_headscale_proto_rawDescGZIP(), []int{1}
}
func (x *HealthResponse) GetDatabaseConnectivity() bool {
if x != nil {
return x.DatabaseConnectivity
}
return false
}
var File_headscale_v1_headscale_proto protoreflect.FileDescriptor
const file_headscale_v1_headscale_proto_rawDesc = "" +
"\n" +
"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x19headscale/v1/policy.proto\"\x0f\n" +
"\rHealthRequest\"E\n" +
"\x0eHealthResponse\x123\n" +
"\x15database_connectivity\x18\x01 \x01(\bR\x14databaseConnectivity2\xff\x17\n" +
"\x1cheadscale/v1/headscale.proto\x12\fheadscale.v1\x1a\x1cgoogle/api/annotations.proto\x1a\x17headscale/v1/user.proto\x1a\x1dheadscale/v1/preauthkey.proto\x1a\x17headscale/v1/node.proto\x1a\x19headscale/v1/apikey.proto\x1a\x19headscale/v1/policy.proto2\xa3\x16\n" +
"\x10HeadscaleService\x12h\n" +
"\n" +
"CreateUser\x12\x1f.headscale.v1.CreateUserRequest\x1a .headscale.v1.CreateUserResponse\"\x17\x82\xd3\xe4\x93\x02\x11:\x01*\"\f/api/v1/user\x12\x80\x01\n" +
@@ -119,8 +35,7 @@ const file_headscale_v1_headscale_proto_rawDesc = "" +
"DeleteUser\x12\x1f.headscale.v1.DeleteUserRequest\x1a .headscale.v1.DeleteUserResponse\"\x19\x82\xd3\xe4\x93\x02\x13*\x11/api/v1/user/{id}\x12b\n" +
"\tListUsers\x12\x1e.headscale.v1.ListUsersRequest\x1a\x1f.headscale.v1.ListUsersResponse\"\x14\x82\xd3\xe4\x93\x02\x0e\x12\f/api/v1/user\x12\x80\x01\n" +
"\x10CreatePreAuthKey\x12%.headscale.v1.CreatePreAuthKeyRequest\x1a&.headscale.v1.CreatePreAuthKeyResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/preauthkey\x12\x87\x01\n" +
"\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12}\n" +
"\x10DeletePreAuthKey\x12%.headscale.v1.DeletePreAuthKeyRequest\x1a&.headscale.v1.DeletePreAuthKeyResponse\"\x1a\x82\xd3\xe4\x93\x02\x14*\x12/api/v1/preauthkey\x12z\n" +
"\x10ExpirePreAuthKey\x12%.headscale.v1.ExpirePreAuthKeyRequest\x1a&.headscale.v1.ExpirePreAuthKeyResponse\"$\x82\xd3\xe4\x93\x02\x1e:\x01*\"\x19/api/v1/preauthkey/expire\x12z\n" +
"\x0fListPreAuthKeys\x12$.headscale.v1.ListPreAuthKeysRequest\x1a%.headscale.v1.ListPreAuthKeysResponse\"\x1a\x82\xd3\xe4\x93\x02\x14\x12\x12/api/v1/preauthkey\x12}\n" +
"\x0fDebugCreateNode\x12$.headscale.v1.DebugCreateNodeRequest\x1a%.headscale.v1.DebugCreateNodeResponse\"\x1d\x82\xd3\xe4\x93\x02\x17:\x01*\"\x12/api/v1/debug/node\x12f\n" +
"\aGetNode\x12\x1c.headscale.v1.GetNodeRequest\x1a\x1d.headscale.v1.GetNodeResponse\"\x1e\x82\xd3\xe4\x93\x02\x18\x12\x16/api/v1/node/{node_id}\x12n\n" +
@@ -141,131 +56,109 @@ const file_headscale_v1_headscale_proto_rawDesc = "" +
"\vListApiKeys\x12 .headscale.v1.ListApiKeysRequest\x1a!.headscale.v1.ListApiKeysResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/apikey\x12v\n" +
"\fDeleteApiKey\x12!.headscale.v1.DeleteApiKeyRequest\x1a\".headscale.v1.DeleteApiKeyResponse\"\x1f\x82\xd3\xe4\x93\x02\x19*\x17/api/v1/apikey/{prefix}\x12d\n" +
"\tGetPolicy\x12\x1e.headscale.v1.GetPolicyRequest\x1a\x1f.headscale.v1.GetPolicyResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/policy\x12g\n" +
"\tSetPolicy\x12\x1e.headscale.v1.SetPolicyRequest\x1a\x1f.headscale.v1.SetPolicyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\x1a\x0e/api/v1/policy\x12[\n" +
"\x06Health\x12\x1b.headscale.v1.HealthRequest\x1a\x1c.headscale.v1.HealthResponse\"\x16\x82\xd3\xe4\x93\x02\x10\x12\x0e/api/v1/healthB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
"\tSetPolicy\x12\x1e.headscale.v1.SetPolicyRequest\x1a\x1f.headscale.v1.SetPolicyResponse\"\x19\x82\xd3\xe4\x93\x02\x13:\x01*\x1a\x0e/api/v1/policyB)Z'github.com/juanfont/headscale/gen/go/v1b\x06proto3"
var (
file_headscale_v1_headscale_proto_rawDescOnce sync.Once
file_headscale_v1_headscale_proto_rawDescData []byte
)
func file_headscale_v1_headscale_proto_rawDescGZIP() []byte {
file_headscale_v1_headscale_proto_rawDescOnce.Do(func() {
file_headscale_v1_headscale_proto_rawDescData = protoimpl.X.CompressGZIP(unsafe.Slice(unsafe.StringData(file_headscale_v1_headscale_proto_rawDesc), len(file_headscale_v1_headscale_proto_rawDesc)))
})
return file_headscale_v1_headscale_proto_rawDescData
}
var file_headscale_v1_headscale_proto_msgTypes = make([]protoimpl.MessageInfo, 2)
var file_headscale_v1_headscale_proto_goTypes = []any{
(*HealthRequest)(nil), // 0: headscale.v1.HealthRequest
(*HealthResponse)(nil), // 1: headscale.v1.HealthResponse
(*CreateUserRequest)(nil), // 2: headscale.v1.CreateUserRequest
(*RenameUserRequest)(nil), // 3: headscale.v1.RenameUserRequest
(*DeleteUserRequest)(nil), // 4: headscale.v1.DeleteUserRequest
(*ListUsersRequest)(nil), // 5: headscale.v1.ListUsersRequest
(*CreatePreAuthKeyRequest)(nil), // 6: headscale.v1.CreatePreAuthKeyRequest
(*ExpirePreAuthKeyRequest)(nil), // 7: headscale.v1.ExpirePreAuthKeyRequest
(*DeletePreAuthKeyRequest)(nil), // 8: headscale.v1.DeletePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 9: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateNodeRequest)(nil), // 10: headscale.v1.DebugCreateNodeRequest
(*GetNodeRequest)(nil), // 11: headscale.v1.GetNodeRequest
(*SetTagsRequest)(nil), // 12: headscale.v1.SetTagsRequest
(*SetApprovedRoutesRequest)(nil), // 13: headscale.v1.SetApprovedRoutesRequest
(*RegisterNodeRequest)(nil), // 14: headscale.v1.RegisterNodeRequest
(*DeleteNodeRequest)(nil), // 15: headscale.v1.DeleteNodeRequest
(*ExpireNodeRequest)(nil), // 16: headscale.v1.ExpireNodeRequest
(*RenameNodeRequest)(nil), // 17: headscale.v1.RenameNodeRequest
(*ListNodesRequest)(nil), // 18: headscale.v1.ListNodesRequest
(*MoveNodeRequest)(nil), // 19: headscale.v1.MoveNodeRequest
(*BackfillNodeIPsRequest)(nil), // 20: headscale.v1.BackfillNodeIPsRequest
(*CreateApiKeyRequest)(nil), // 21: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 22: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 23: headscale.v1.ListApiKeysRequest
(*DeleteApiKeyRequest)(nil), // 24: headscale.v1.DeleteApiKeyRequest
(*GetPolicyRequest)(nil), // 25: headscale.v1.GetPolicyRequest
(*SetPolicyRequest)(nil), // 26: headscale.v1.SetPolicyRequest
(*CreateUserResponse)(nil), // 27: headscale.v1.CreateUserResponse
(*RenameUserResponse)(nil), // 28: headscale.v1.RenameUserResponse
(*DeleteUserResponse)(nil), // 29: headscale.v1.DeleteUserResponse
(*ListUsersResponse)(nil), // 30: headscale.v1.ListUsersResponse
(*CreatePreAuthKeyResponse)(nil), // 31: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 32: headscale.v1.ExpirePreAuthKeyResponse
(*DeletePreAuthKeyResponse)(nil), // 33: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 34: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateNodeResponse)(nil), // 35: headscale.v1.DebugCreateNodeResponse
(*GetNodeResponse)(nil), // 36: headscale.v1.GetNodeResponse
(*SetTagsResponse)(nil), // 37: headscale.v1.SetTagsResponse
(*SetApprovedRoutesResponse)(nil), // 38: headscale.v1.SetApprovedRoutesResponse
(*RegisterNodeResponse)(nil), // 39: headscale.v1.RegisterNodeResponse
(*DeleteNodeResponse)(nil), // 40: headscale.v1.DeleteNodeResponse
(*ExpireNodeResponse)(nil), // 41: headscale.v1.ExpireNodeResponse
(*RenameNodeResponse)(nil), // 42: headscale.v1.RenameNodeResponse
(*ListNodesResponse)(nil), // 43: headscale.v1.ListNodesResponse
(*MoveNodeResponse)(nil), // 44: headscale.v1.MoveNodeResponse
(*BackfillNodeIPsResponse)(nil), // 45: headscale.v1.BackfillNodeIPsResponse
(*CreateApiKeyResponse)(nil), // 46: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 47: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 48: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyResponse)(nil), // 49: headscale.v1.DeleteApiKeyResponse
(*GetPolicyResponse)(nil), // 50: headscale.v1.GetPolicyResponse
(*SetPolicyResponse)(nil), // 51: headscale.v1.SetPolicyResponse
(*CreateUserRequest)(nil), // 0: headscale.v1.CreateUserRequest
(*RenameUserRequest)(nil), // 1: headscale.v1.RenameUserRequest
(*DeleteUserRequest)(nil), // 2: headscale.v1.DeleteUserRequest
(*ListUsersRequest)(nil), // 3: headscale.v1.ListUsersRequest
(*CreatePreAuthKeyRequest)(nil), // 4: headscale.v1.CreatePreAuthKeyRequest
(*ExpirePreAuthKeyRequest)(nil), // 5: headscale.v1.ExpirePreAuthKeyRequest
(*ListPreAuthKeysRequest)(nil), // 6: headscale.v1.ListPreAuthKeysRequest
(*DebugCreateNodeRequest)(nil), // 7: headscale.v1.DebugCreateNodeRequest
(*GetNodeRequest)(nil), // 8: headscale.v1.GetNodeRequest
(*SetTagsRequest)(nil), // 9: headscale.v1.SetTagsRequest
(*SetApprovedRoutesRequest)(nil), // 10: headscale.v1.SetApprovedRoutesRequest
(*RegisterNodeRequest)(nil), // 11: headscale.v1.RegisterNodeRequest
(*DeleteNodeRequest)(nil), // 12: headscale.v1.DeleteNodeRequest
(*ExpireNodeRequest)(nil), // 13: headscale.v1.ExpireNodeRequest
(*RenameNodeRequest)(nil), // 14: headscale.v1.RenameNodeRequest
(*ListNodesRequest)(nil), // 15: headscale.v1.ListNodesRequest
(*MoveNodeRequest)(nil), // 16: headscale.v1.MoveNodeRequest
(*BackfillNodeIPsRequest)(nil), // 17: headscale.v1.BackfillNodeIPsRequest
(*CreateApiKeyRequest)(nil), // 18: headscale.v1.CreateApiKeyRequest
(*ExpireApiKeyRequest)(nil), // 19: headscale.v1.ExpireApiKeyRequest
(*ListApiKeysRequest)(nil), // 20: headscale.v1.ListApiKeysRequest
(*DeleteApiKeyRequest)(nil), // 21: headscale.v1.DeleteApiKeyRequest
(*GetPolicyRequest)(nil), // 22: headscale.v1.GetPolicyRequest
(*SetPolicyRequest)(nil), // 23: headscale.v1.SetPolicyRequest
(*CreateUserResponse)(nil), // 24: headscale.v1.CreateUserResponse
(*RenameUserResponse)(nil), // 25: headscale.v1.RenameUserResponse
(*DeleteUserResponse)(nil), // 26: headscale.v1.DeleteUserResponse
(*ListUsersResponse)(nil), // 27: headscale.v1.ListUsersResponse
(*CreatePreAuthKeyResponse)(nil), // 28: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyResponse)(nil), // 29: headscale.v1.ExpirePreAuthKeyResponse
(*ListPreAuthKeysResponse)(nil), // 30: headscale.v1.ListPreAuthKeysResponse
(*DebugCreateNodeResponse)(nil), // 31: headscale.v1.DebugCreateNodeResponse
(*GetNodeResponse)(nil), // 32: headscale.v1.GetNodeResponse
(*SetTagsResponse)(nil), // 33: headscale.v1.SetTagsResponse
(*SetApprovedRoutesResponse)(nil), // 34: headscale.v1.SetApprovedRoutesResponse
(*RegisterNodeResponse)(nil), // 35: headscale.v1.RegisterNodeResponse
(*DeleteNodeResponse)(nil), // 36: headscale.v1.DeleteNodeResponse
(*ExpireNodeResponse)(nil), // 37: headscale.v1.ExpireNodeResponse
(*RenameNodeResponse)(nil), // 38: headscale.v1.RenameNodeResponse
(*ListNodesResponse)(nil), // 39: headscale.v1.ListNodesResponse
(*MoveNodeResponse)(nil), // 40: headscale.v1.MoveNodeResponse
(*BackfillNodeIPsResponse)(nil), // 41: headscale.v1.BackfillNodeIPsResponse
(*CreateApiKeyResponse)(nil), // 42: headscale.v1.CreateApiKeyResponse
(*ExpireApiKeyResponse)(nil), // 43: headscale.v1.ExpireApiKeyResponse
(*ListApiKeysResponse)(nil), // 44: headscale.v1.ListApiKeysResponse
(*DeleteApiKeyResponse)(nil), // 45: headscale.v1.DeleteApiKeyResponse
(*GetPolicyResponse)(nil), // 46: headscale.v1.GetPolicyResponse
(*SetPolicyResponse)(nil), // 47: headscale.v1.SetPolicyResponse
}
var file_headscale_v1_headscale_proto_depIdxs = []int32{
2, // 0: headscale.v1.HeadscaleService.CreateUser:input_type -> headscale.v1.CreateUserRequest
3, // 1: headscale.v1.HeadscaleService.RenameUser:input_type -> headscale.v1.RenameUserRequest
4, // 2: headscale.v1.HeadscaleService.DeleteUser:input_type -> headscale.v1.DeleteUserRequest
5, // 3: headscale.v1.HeadscaleService.ListUsers:input_type -> headscale.v1.ListUsersRequest
6, // 4: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest
7, // 5: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest
8, // 6: headscale.v1.HeadscaleService.DeletePreAuthKey:input_type -> headscale.v1.DeletePreAuthKeyRequest
9, // 7: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
10, // 8: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest
11, // 9: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest
12, // 10: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest
13, // 11: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest
14, // 12: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest
15, // 13: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest
16, // 14: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest
17, // 15: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
18, // 16: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
19, // 17: headscale.v1.HeadscaleService.MoveNode:input_type -> headscale.v1.MoveNodeRequest
20, // 18: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
21, // 19: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
22, // 20: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
23, // 21: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
24, // 22: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
25, // 23: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
26, // 24: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
0, // 25: headscale.v1.HeadscaleService.Health:input_type -> headscale.v1.HealthRequest
27, // 26: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
28, // 27: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
29, // 28: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
30, // 29: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
31, // 30: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
32, // 31: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
33, // 32: headscale.v1.HeadscaleService.DeletePreAuthKey:output_type -> headscale.v1.DeletePreAuthKeyResponse
34, // 33: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
35, // 34: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
36, // 35: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
37, // 36: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
38, // 37: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
39, // 38: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
40, // 39: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
41, // 40: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
42, // 41: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
43, // 42: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
44, // 43: headscale.v1.HeadscaleService.MoveNode:output_type -> headscale.v1.MoveNodeResponse
45, // 44: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
46, // 45: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
47, // 46: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
48, // 47: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
49, // 48: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
50, // 49: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
51, // 50: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
1, // 51: headscale.v1.HeadscaleService.Health:output_type -> headscale.v1.HealthResponse
26, // [26:52] is the sub-list for method output_type
0, // [0:26] is the sub-list for method input_type
0, // 0: headscale.v1.HeadscaleService.CreateUser:input_type -> headscale.v1.CreateUserRequest
1, // 1: headscale.v1.HeadscaleService.RenameUser:input_type -> headscale.v1.RenameUserRequest
2, // 2: headscale.v1.HeadscaleService.DeleteUser:input_type -> headscale.v1.DeleteUserRequest
3, // 3: headscale.v1.HeadscaleService.ListUsers:input_type -> headscale.v1.ListUsersRequest
4, // 4: headscale.v1.HeadscaleService.CreatePreAuthKey:input_type -> headscale.v1.CreatePreAuthKeyRequest
5, // 5: headscale.v1.HeadscaleService.ExpirePreAuthKey:input_type -> headscale.v1.ExpirePreAuthKeyRequest
6, // 6: headscale.v1.HeadscaleService.ListPreAuthKeys:input_type -> headscale.v1.ListPreAuthKeysRequest
7, // 7: headscale.v1.HeadscaleService.DebugCreateNode:input_type -> headscale.v1.DebugCreateNodeRequest
8, // 8: headscale.v1.HeadscaleService.GetNode:input_type -> headscale.v1.GetNodeRequest
9, // 9: headscale.v1.HeadscaleService.SetTags:input_type -> headscale.v1.SetTagsRequest
10, // 10: headscale.v1.HeadscaleService.SetApprovedRoutes:input_type -> headscale.v1.SetApprovedRoutesRequest
11, // 11: headscale.v1.HeadscaleService.RegisterNode:input_type -> headscale.v1.RegisterNodeRequest
12, // 12: headscale.v1.HeadscaleService.DeleteNode:input_type -> headscale.v1.DeleteNodeRequest
13, // 13: headscale.v1.HeadscaleService.ExpireNode:input_type -> headscale.v1.ExpireNodeRequest
14, // 14: headscale.v1.HeadscaleService.RenameNode:input_type -> headscale.v1.RenameNodeRequest
15, // 15: headscale.v1.HeadscaleService.ListNodes:input_type -> headscale.v1.ListNodesRequest
16, // 16: headscale.v1.HeadscaleService.MoveNode:input_type -> headscale.v1.MoveNodeRequest
17, // 17: headscale.v1.HeadscaleService.BackfillNodeIPs:input_type -> headscale.v1.BackfillNodeIPsRequest
18, // 18: headscale.v1.HeadscaleService.CreateApiKey:input_type -> headscale.v1.CreateApiKeyRequest
19, // 19: headscale.v1.HeadscaleService.ExpireApiKey:input_type -> headscale.v1.ExpireApiKeyRequest
20, // 20: headscale.v1.HeadscaleService.ListApiKeys:input_type -> headscale.v1.ListApiKeysRequest
21, // 21: headscale.v1.HeadscaleService.DeleteApiKey:input_type -> headscale.v1.DeleteApiKeyRequest
22, // 22: headscale.v1.HeadscaleService.GetPolicy:input_type -> headscale.v1.GetPolicyRequest
23, // 23: headscale.v1.HeadscaleService.SetPolicy:input_type -> headscale.v1.SetPolicyRequest
24, // 24: headscale.v1.HeadscaleService.CreateUser:output_type -> headscale.v1.CreateUserResponse
25, // 25: headscale.v1.HeadscaleService.RenameUser:output_type -> headscale.v1.RenameUserResponse
26, // 26: headscale.v1.HeadscaleService.DeleteUser:output_type -> headscale.v1.DeleteUserResponse
27, // 27: headscale.v1.HeadscaleService.ListUsers:output_type -> headscale.v1.ListUsersResponse
28, // 28: headscale.v1.HeadscaleService.CreatePreAuthKey:output_type -> headscale.v1.CreatePreAuthKeyResponse
29, // 29: headscale.v1.HeadscaleService.ExpirePreAuthKey:output_type -> headscale.v1.ExpirePreAuthKeyResponse
30, // 30: headscale.v1.HeadscaleService.ListPreAuthKeys:output_type -> headscale.v1.ListPreAuthKeysResponse
31, // 31: headscale.v1.HeadscaleService.DebugCreateNode:output_type -> headscale.v1.DebugCreateNodeResponse
32, // 32: headscale.v1.HeadscaleService.GetNode:output_type -> headscale.v1.GetNodeResponse
33, // 33: headscale.v1.HeadscaleService.SetTags:output_type -> headscale.v1.SetTagsResponse
34, // 34: headscale.v1.HeadscaleService.SetApprovedRoutes:output_type -> headscale.v1.SetApprovedRoutesResponse
35, // 35: headscale.v1.HeadscaleService.RegisterNode:output_type -> headscale.v1.RegisterNodeResponse
36, // 36: headscale.v1.HeadscaleService.DeleteNode:output_type -> headscale.v1.DeleteNodeResponse
37, // 37: headscale.v1.HeadscaleService.ExpireNode:output_type -> headscale.v1.ExpireNodeResponse
38, // 38: headscale.v1.HeadscaleService.RenameNode:output_type -> headscale.v1.RenameNodeResponse
39, // 39: headscale.v1.HeadscaleService.ListNodes:output_type -> headscale.v1.ListNodesResponse
40, // 40: headscale.v1.HeadscaleService.MoveNode:output_type -> headscale.v1.MoveNodeResponse
41, // 41: headscale.v1.HeadscaleService.BackfillNodeIPs:output_type -> headscale.v1.BackfillNodeIPsResponse
42, // 42: headscale.v1.HeadscaleService.CreateApiKey:output_type -> headscale.v1.CreateApiKeyResponse
43, // 43: headscale.v1.HeadscaleService.ExpireApiKey:output_type -> headscale.v1.ExpireApiKeyResponse
44, // 44: headscale.v1.HeadscaleService.ListApiKeys:output_type -> headscale.v1.ListApiKeysResponse
45, // 45: headscale.v1.HeadscaleService.DeleteApiKey:output_type -> headscale.v1.DeleteApiKeyResponse
46, // 46: headscale.v1.HeadscaleService.GetPolicy:output_type -> headscale.v1.GetPolicyResponse
47, // 47: headscale.v1.HeadscaleService.SetPolicy:output_type -> headscale.v1.SetPolicyResponse
24, // [24:48] is the sub-list for method output_type
0, // [0:24] is the sub-list for method input_type
0, // [0:0] is the sub-list for extension type_name
0, // [0:0] is the sub-list for extension extendee
0, // [0:0] is the sub-list for field type_name
@@ -287,13 +180,12 @@ func file_headscale_v1_headscale_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_headscale_proto_rawDesc), len(file_headscale_v1_headscale_proto_rawDesc)),
NumEnums: 0,
NumMessages: 2,
NumMessages: 0,
NumExtensions: 0,
NumServices: 1,
},
GoTypes: file_headscale_v1_headscale_proto_goTypes,
DependencyIndexes: file_headscale_v1_headscale_proto_depIdxs,
MessageInfos: file_headscale_v1_headscale_proto_msgTypes,
}.Build()
File_headscale_v1_headscale_proto = out.File
file_headscale_v1_headscale_proto_goTypes = nil

View File

@@ -227,38 +227,6 @@ func local_request_HeadscaleService_ExpirePreAuthKey_0(ctx context.Context, mars
return msg, metadata, err
}
var filter_HeadscaleService_DeletePreAuthKey_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.DeletePreAuthKey(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_DeletePreAuthKey_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq DeletePreAuthKeyRequest
metadata runtime.ServerMetadata
)
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_DeletePreAuthKey_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.DeletePreAuthKey(ctx, &protoReq)
return msg, metadata, err
}
var filter_HeadscaleService_ListPreAuthKeys_0 = &utilities.DoubleArray{Encoding: map[string]int{}, Base: []int(nil), Check: []int(nil)}
func request_HeadscaleService_ListPreAuthKeys_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
@@ -503,8 +471,6 @@ func local_request_HeadscaleService_DeleteNode_0(ctx context.Context, marshaler
return msg, metadata, err
}
var filter_HeadscaleService_ExpireNode_0 = &utilities.DoubleArray{Encoding: map[string]int{"node_id": 0}, Base: []int{1, 1, 0}, Check: []int{0, 1, 2}}
func request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq ExpireNodeRequest
@@ -519,12 +485,6 @@ func request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler runtim
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ExpireNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := client.ExpireNode(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
@@ -543,12 +503,6 @@ func local_request_HeadscaleService_ExpireNode_0(ctx context.Context, marshaler
if err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "type mismatch, parameter: %s, error: %v", "node_id", err)
}
if err := req.ParseForm(); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
if err := runtime.PopulateQueryParameters(&protoReq, req.Form, filter_HeadscaleService_ExpireNode_0); err != nil {
return nil, metadata, status.Errorf(codes.InvalidArgument, "%v", err)
}
msg, err := server.ExpireNode(ctx, &protoReq)
return msg, metadata, err
}
@@ -855,24 +809,6 @@ func local_request_HeadscaleService_SetPolicy_0(ctx context.Context, marshaler r
return msg, metadata, err
}
func request_HeadscaleService_Health_0(ctx context.Context, marshaler runtime.Marshaler, client HeadscaleServiceClient, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq HealthRequest
metadata runtime.ServerMetadata
)
msg, err := client.Health(ctx, &protoReq, grpc.Header(&metadata.HeaderMD), grpc.Trailer(&metadata.TrailerMD))
return msg, metadata, err
}
func local_request_HeadscaleService_Health_0(ctx context.Context, marshaler runtime.Marshaler, server HeadscaleServiceServer, req *http.Request, pathParams map[string]string) (proto.Message, runtime.ServerMetadata, error) {
var (
protoReq HealthRequest
metadata runtime.ServerMetadata
)
msg, err := server.Health(ctx, &protoReq)
return msg, metadata, err
}
// RegisterHeadscaleServiceHandlerServer registers the http handlers for service HeadscaleService to "mux".
// UnaryRPC :call HeadscaleServiceServer directly.
// StreamingRPC :currently unsupported pending https://github.com/grpc/grpc-go/issues/906.
@@ -999,26 +935,6 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1379,26 +1295,6 @@ func RegisterHeadscaleServiceHandlerServer(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_SetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_Health_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
var stream runtime.ServerTransportStream
ctx = grpc.NewContextWithServerTransportStream(ctx, &stream)
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateIncomingContext(ctx, mux, req, "/headscale.v1.HeadscaleService/Health", runtime.WithHTTPPathPattern("/api/v1/health"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := local_request_HeadscaleService_Health_0(annotatedContext, inboundMarshaler, server, req, pathParams)
md.HeaderMD, md.TrailerMD = metadata.Join(md.HeaderMD, stream.Header()), metadata.Join(md.TrailerMD, stream.Trailer())
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_Health_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
@@ -1541,23 +1437,6 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_ExpirePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodDelete, pattern_HeadscaleService_DeletePreAuthKey_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/DeletePreAuthKey", runtime.WithHTTPPathPattern("/api/v1/preauthkey"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_DeletePreAuthKey_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_DeletePreAuthKey_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_ListPreAuthKeys_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
@@ -1864,23 +1743,6 @@ func RegisterHeadscaleServiceHandlerClient(ctx context.Context, mux *runtime.Ser
}
forward_HeadscaleService_SetPolicy_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
mux.Handle(http.MethodGet, pattern_HeadscaleService_Health_0, func(w http.ResponseWriter, req *http.Request, pathParams map[string]string) {
ctx, cancel := context.WithCancel(req.Context())
defer cancel()
inboundMarshaler, outboundMarshaler := runtime.MarshalerForRequest(mux, req)
annotatedContext, err := runtime.AnnotateContext(ctx, mux, req, "/headscale.v1.HeadscaleService/Health", runtime.WithHTTPPathPattern("/api/v1/health"))
if err != nil {
runtime.HTTPError(ctx, mux, outboundMarshaler, w, req, err)
return
}
resp, md, err := request_HeadscaleService_Health_0(annotatedContext, inboundMarshaler, client, req, pathParams)
annotatedContext = runtime.NewServerMetadataContext(annotatedContext, md)
if err != nil {
runtime.HTTPError(annotatedContext, mux, outboundMarshaler, w, req, err)
return
}
forward_HeadscaleService_Health_0(annotatedContext, mux, outboundMarshaler, w, req, resp, mux.GetForwardResponseOptions()...)
})
return nil
}
@@ -1891,7 +1753,6 @@ var (
pattern_HeadscaleService_ListUsers_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "user"}, ""))
pattern_HeadscaleService_CreatePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ExpirePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "preauthkey", "expire"}, ""))
pattern_HeadscaleService_DeletePreAuthKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_ListPreAuthKeys_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "preauthkey"}, ""))
pattern_HeadscaleService_DebugCreateNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 2, 3}, []string{"api", "v1", "debug", "node"}, ""))
pattern_HeadscaleService_GetNode_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "node", "node_id"}, ""))
@@ -1910,7 +1771,6 @@ var (
pattern_HeadscaleService_DeleteApiKey_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2, 1, 0, 4, 1, 5, 3}, []string{"api", "v1", "apikey", "prefix"}, ""))
pattern_HeadscaleService_GetPolicy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "policy"}, ""))
pattern_HeadscaleService_SetPolicy_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "policy"}, ""))
pattern_HeadscaleService_Health_0 = runtime.MustPattern(runtime.NewPattern(1, []int{2, 0, 2, 1, 2, 2}, []string{"api", "v1", "health"}, ""))
)
var (
@@ -1920,7 +1780,6 @@ var (
forward_HeadscaleService_ListUsers_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_CreatePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ExpirePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DeletePreAuthKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_ListPreAuthKeys_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_DebugCreateNode_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_GetNode_0 = runtime.ForwardResponseMessage
@@ -1939,5 +1798,4 @@ var (
forward_HeadscaleService_DeleteApiKey_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_GetPolicy_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_SetPolicy_0 = runtime.ForwardResponseMessage
forward_HeadscaleService_Health_0 = runtime.ForwardResponseMessage
)

View File

@@ -25,7 +25,6 @@ const (
HeadscaleService_ListUsers_FullMethodName = "/headscale.v1.HeadscaleService/ListUsers"
HeadscaleService_CreatePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/CreatePreAuthKey"
HeadscaleService_ExpirePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/ExpirePreAuthKey"
HeadscaleService_DeletePreAuthKey_FullMethodName = "/headscale.v1.HeadscaleService/DeletePreAuthKey"
HeadscaleService_ListPreAuthKeys_FullMethodName = "/headscale.v1.HeadscaleService/ListPreAuthKeys"
HeadscaleService_DebugCreateNode_FullMethodName = "/headscale.v1.HeadscaleService/DebugCreateNode"
HeadscaleService_GetNode_FullMethodName = "/headscale.v1.HeadscaleService/GetNode"
@@ -44,7 +43,6 @@ const (
HeadscaleService_DeleteApiKey_FullMethodName = "/headscale.v1.HeadscaleService/DeleteApiKey"
HeadscaleService_GetPolicy_FullMethodName = "/headscale.v1.HeadscaleService/GetPolicy"
HeadscaleService_SetPolicy_FullMethodName = "/headscale.v1.HeadscaleService/SetPolicy"
HeadscaleService_Health_FullMethodName = "/headscale.v1.HeadscaleService/Health"
)
// HeadscaleServiceClient is the client API for HeadscaleService service.
@@ -59,7 +57,6 @@ type HeadscaleServiceClient interface {
// --- PreAuthKeys start ---
CreatePreAuthKey(ctx context.Context, in *CreatePreAuthKeyRequest, opts ...grpc.CallOption) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(ctx context.Context, in *ExpirePreAuthKeyRequest, opts ...grpc.CallOption) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(ctx context.Context, in *DebugCreateNodeRequest, opts ...grpc.CallOption) (*DebugCreateNodeResponse, error)
@@ -81,8 +78,6 @@ type HeadscaleServiceClient interface {
// --- Policy start ---
GetPolicy(ctx context.Context, in *GetPolicyRequest, opts ...grpc.CallOption) (*GetPolicyResponse, error)
SetPolicy(ctx context.Context, in *SetPolicyRequest, opts ...grpc.CallOption) (*SetPolicyResponse, error)
// --- Health start ---
Health(ctx context.Context, in *HealthRequest, opts ...grpc.CallOption) (*HealthResponse, error)
}
type headscaleServiceClient struct {
@@ -153,16 +148,6 @@ func (c *headscaleServiceClient) ExpirePreAuthKey(ctx context.Context, in *Expir
return out, nil
}
func (c *headscaleServiceClient) DeletePreAuthKey(ctx context.Context, in *DeletePreAuthKeyRequest, opts ...grpc.CallOption) (*DeletePreAuthKeyResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(DeletePreAuthKeyResponse)
err := c.cc.Invoke(ctx, HeadscaleService_DeletePreAuthKey_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *headscaleServiceClient) ListPreAuthKeys(ctx context.Context, in *ListPreAuthKeysRequest, opts ...grpc.CallOption) (*ListPreAuthKeysResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(ListPreAuthKeysResponse)
@@ -343,16 +328,6 @@ func (c *headscaleServiceClient) SetPolicy(ctx context.Context, in *SetPolicyReq
return out, nil
}
func (c *headscaleServiceClient) Health(ctx context.Context, in *HealthRequest, opts ...grpc.CallOption) (*HealthResponse, error) {
cOpts := append([]grpc.CallOption{grpc.StaticMethod()}, opts...)
out := new(HealthResponse)
err := c.cc.Invoke(ctx, HeadscaleService_Health_FullMethodName, in, out, cOpts...)
if err != nil {
return nil, err
}
return out, nil
}
// HeadscaleServiceServer is the server API for HeadscaleService service.
// All implementations must embed UnimplementedHeadscaleServiceServer
// for forward compatibility.
@@ -365,7 +340,6 @@ type HeadscaleServiceServer interface {
// --- PreAuthKeys start ---
CreatePreAuthKey(context.Context, *CreatePreAuthKeyRequest) (*CreatePreAuthKeyResponse, error)
ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error)
DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error)
ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error)
// --- Node start ---
DebugCreateNode(context.Context, *DebugCreateNodeRequest) (*DebugCreateNodeResponse, error)
@@ -387,8 +361,6 @@ type HeadscaleServiceServer interface {
// --- Policy start ---
GetPolicy(context.Context, *GetPolicyRequest) (*GetPolicyResponse, error)
SetPolicy(context.Context, *SetPolicyRequest) (*SetPolicyResponse, error)
// --- Health start ---
Health(context.Context, *HealthRequest) (*HealthResponse, error)
mustEmbedUnimplementedHeadscaleServiceServer()
}
@@ -417,9 +389,6 @@ func (UnimplementedHeadscaleServiceServer) CreatePreAuthKey(context.Context, *Cr
func (UnimplementedHeadscaleServiceServer) ExpirePreAuthKey(context.Context, *ExpirePreAuthKeyRequest) (*ExpirePreAuthKeyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ExpirePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) DeletePreAuthKey(context.Context, *DeletePreAuthKeyRequest) (*DeletePreAuthKeyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method DeletePreAuthKey not implemented")
}
func (UnimplementedHeadscaleServiceServer) ListPreAuthKeys(context.Context, *ListPreAuthKeysRequest) (*ListPreAuthKeysResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method ListPreAuthKeys not implemented")
}
@@ -474,9 +443,6 @@ func (UnimplementedHeadscaleServiceServer) GetPolicy(context.Context, *GetPolicy
func (UnimplementedHeadscaleServiceServer) SetPolicy(context.Context, *SetPolicyRequest) (*SetPolicyResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method SetPolicy not implemented")
}
func (UnimplementedHeadscaleServiceServer) Health(context.Context, *HealthRequest) (*HealthResponse, error) {
return nil, status.Errorf(codes.Unimplemented, "method Health not implemented")
}
func (UnimplementedHeadscaleServiceServer) mustEmbedUnimplementedHeadscaleServiceServer() {}
func (UnimplementedHeadscaleServiceServer) testEmbeddedByValue() {}
@@ -606,24 +572,6 @@ func _HeadscaleService_ExpirePreAuthKey_Handler(srv interface{}, ctx context.Con
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_DeletePreAuthKey_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(DeletePreAuthKeyRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_DeletePreAuthKey_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).DeletePreAuthKey(ctx, req.(*DeletePreAuthKeyRequest))
}
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_ListPreAuthKeys_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(ListPreAuthKeysRequest)
if err := dec(in); err != nil {
@@ -948,24 +896,6 @@ func _HeadscaleService_SetPolicy_Handler(srv interface{}, ctx context.Context, d
return interceptor(ctx, in, info, handler)
}
func _HeadscaleService_Health_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(HealthRequest)
if err := dec(in); err != nil {
return nil, err
}
if interceptor == nil {
return srv.(HeadscaleServiceServer).Health(ctx, in)
}
info := &grpc.UnaryServerInfo{
Server: srv,
FullMethod: HeadscaleService_Health_FullMethodName,
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(HeadscaleServiceServer).Health(ctx, req.(*HealthRequest))
}
return interceptor(ctx, in, info, handler)
}
// HeadscaleService_ServiceDesc is the grpc.ServiceDesc for HeadscaleService service.
// It's only intended for direct use with grpc.RegisterService,
// and not to be introspected or modified (even as a copy)
@@ -997,10 +927,6 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{
MethodName: "ExpirePreAuthKey",
Handler: _HeadscaleService_ExpirePreAuthKey_Handler,
},
{
MethodName: "DeletePreAuthKey",
Handler: _HeadscaleService_DeletePreAuthKey_Handler,
},
{
MethodName: "ListPreAuthKeys",
Handler: _HeadscaleService_ListPreAuthKeys_Handler,
@@ -1073,10 +999,6 @@ var HeadscaleService_ServiceDesc = grpc.ServiceDesc{
MethodName: "SetPolicy",
Handler: _HeadscaleService_SetPolicy_Handler,
},
{
MethodName: "Health",
Handler: _HeadscaleService_Health_Handler,
},
},
Streams: []grpc.StreamDesc{},
Metadata: "headscale/v1/headscale.proto",

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/node.proto
@@ -729,7 +729,6 @@ func (*DeleteNodeResponse) Descriptor() ([]byte, []int) {
type ExpireNodeRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
NodeId uint64 `protobuf:"varint,1,opt,name=node_id,json=nodeId,proto3" json:"node_id,omitempty"`
Expiry *timestamppb.Timestamp `protobuf:"bytes,2,opt,name=expiry,proto3" json:"expiry,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
@@ -771,13 +770,6 @@ func (x *ExpireNodeRequest) GetNodeId() uint64 {
return 0
}
func (x *ExpireNodeRequest) GetExpiry() *timestamppb.Timestamp {
if x != nil {
return x.Expiry
}
return nil
}
type ExpireNodeResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
Node *Node `protobuf:"bytes,1,opt,name=node,proto3" json:"node,omitempty"`
@@ -1357,10 +1349,9 @@ const file_headscale_v1_node_proto_rawDesc = "" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\",\n" +
"\x11DeleteNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\"\x14\n" +
"\x12DeleteNodeResponse\"`\n" +
"\x12DeleteNodeResponse\",\n" +
"\x11ExpireNodeRequest\x12\x17\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\x122\n" +
"\x06expiry\x18\x02 \x01(\v2\x1a.google.protobuf.TimestampR\x06expiry\"<\n" +
"\anode_id\x18\x01 \x01(\x04R\x06nodeId\"<\n" +
"\x12ExpireNodeResponse\x12&\n" +
"\x04node\x18\x01 \x01(\v2\x12.headscale.v1.NodeR\x04node\"G\n" +
"\x11RenameNodeRequest\x12\x17\n" +
@@ -1448,17 +1439,16 @@ var file_headscale_v1_node_proto_depIdxs = []int32{
1, // 7: headscale.v1.GetNodeResponse.node:type_name -> headscale.v1.Node
1, // 8: headscale.v1.SetTagsResponse.node:type_name -> headscale.v1.Node
1, // 9: headscale.v1.SetApprovedRoutesResponse.node:type_name -> headscale.v1.Node
25, // 10: headscale.v1.ExpireNodeRequest.expiry:type_name -> google.protobuf.Timestamp
1, // 11: headscale.v1.ExpireNodeResponse.node:type_name -> headscale.v1.Node
1, // 12: headscale.v1.RenameNodeResponse.node:type_name -> headscale.v1.Node
1, // 13: headscale.v1.ListNodesResponse.nodes:type_name -> headscale.v1.Node
1, // 14: headscale.v1.MoveNodeResponse.node:type_name -> headscale.v1.Node
1, // 15: headscale.v1.DebugCreateNodeResponse.node:type_name -> headscale.v1.Node
16, // [16:16] is the sub-list for method output_type
16, // [16:16] is the sub-list for method input_type
16, // [16:16] is the sub-list for extension type_name
16, // [16:16] is the sub-list for extension extendee
0, // [0:16] is the sub-list for field type_name
1, // 10: headscale.v1.ExpireNodeResponse.node:type_name -> headscale.v1.Node
1, // 11: headscale.v1.RenameNodeResponse.node:type_name -> headscale.v1.Node
1, // 12: headscale.v1.ListNodesResponse.nodes:type_name -> headscale.v1.Node
1, // 13: headscale.v1.MoveNodeResponse.node:type_name -> headscale.v1.Node
1, // 14: headscale.v1.DebugCreateNodeResponse.node:type_name -> headscale.v1.Node
15, // [15:15] is the sub-list for method output_type
15, // [15:15] is the sub-list for method input_type
15, // [15:15] is the sub-list for extension type_name
15, // [15:15] is the sub-list for extension extendee
0, // [0:15] is the sub-list for field type_name
}
func init() { file_headscale_v1_node_proto_init() }

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/policy.proto

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/preauthkey.proto
@@ -338,94 +338,6 @@ func (*ExpirePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{4}
}
type DeletePreAuthKeyRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"`
Key string `protobuf:"bytes,2,opt,name=key,proto3" json:"key,omitempty"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyRequest) Reset() {
*x = DeletePreAuthKeyRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyRequest) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyRequest) ProtoMessage() {}
func (x *DeletePreAuthKeyRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyRequest.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5}
}
func (x *DeletePreAuthKeyRequest) GetUser() uint64 {
if x != nil {
return x.User
}
return 0
}
func (x *DeletePreAuthKeyRequest) GetKey() string {
if x != nil {
return x.Key
}
return ""
}
type DeletePreAuthKeyResponse struct {
state protoimpl.MessageState `protogen:"open.v1"`
unknownFields protoimpl.UnknownFields
sizeCache protoimpl.SizeCache
}
func (x *DeletePreAuthKeyResponse) Reset() {
*x = DeletePreAuthKeyResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
func (x *DeletePreAuthKeyResponse) String() string {
return protoimpl.X.MessageStringOf(x)
}
func (*DeletePreAuthKeyResponse) ProtoMessage() {}
func (x *DeletePreAuthKeyResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
}
return ms
}
return mi.MessageOf(x)
}
// Deprecated: Use DeletePreAuthKeyResponse.ProtoReflect.Descriptor instead.
func (*DeletePreAuthKeyResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6}
}
type ListPreAuthKeysRequest struct {
state protoimpl.MessageState `protogen:"open.v1"`
User uint64 `protobuf:"varint,1,opt,name=user,proto3" json:"user,omitempty"`
@@ -435,7 +347,7 @@ type ListPreAuthKeysRequest struct {
func (x *ListPreAuthKeysRequest) Reset() {
*x = ListPreAuthKeysRequest{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -447,7 +359,7 @@ func (x *ListPreAuthKeysRequest) String() string {
func (*ListPreAuthKeysRequest) ProtoMessage() {}
func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[7]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[5]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -460,7 +372,7 @@ func (x *ListPreAuthKeysRequest) ProtoReflect() protoreflect.Message {
// Deprecated: Use ListPreAuthKeysRequest.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysRequest) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{7}
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{5}
}
func (x *ListPreAuthKeysRequest) GetUser() uint64 {
@@ -479,7 +391,7 @@ type ListPreAuthKeysResponse struct {
func (x *ListPreAuthKeysResponse) Reset() {
*x = ListPreAuthKeysResponse{}
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
ms.StoreMessageInfo(mi)
}
@@ -491,7 +403,7 @@ func (x *ListPreAuthKeysResponse) String() string {
func (*ListPreAuthKeysResponse) ProtoMessage() {}
func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message {
mi := &file_headscale_v1_preauthkey_proto_msgTypes[8]
mi := &file_headscale_v1_preauthkey_proto_msgTypes[6]
if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
@@ -504,7 +416,7 @@ func (x *ListPreAuthKeysResponse) ProtoReflect() protoreflect.Message {
// Deprecated: Use ListPreAuthKeysResponse.ProtoReflect.Descriptor instead.
func (*ListPreAuthKeysResponse) Descriptor() ([]byte, []int) {
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{8}
return file_headscale_v1_preauthkey_proto_rawDescGZIP(), []int{6}
}
func (x *ListPreAuthKeysResponse) GetPreAuthKeys() []*PreAuthKey {
@@ -547,11 +459,7 @@ const file_headscale_v1_preauthkey_proto_rawDesc = "" +
"\x17ExpirePreAuthKeyRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\"\x1a\n" +
"\x18ExpirePreAuthKeyResponse\"?\n" +
"\x17DeletePreAuthKeyRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\x12\x10\n" +
"\x03key\x18\x02 \x01(\tR\x03key\"\x1a\n" +
"\x18DeletePreAuthKeyResponse\",\n" +
"\x18ExpirePreAuthKeyResponse\",\n" +
"\x16ListPreAuthKeysRequest\x12\x12\n" +
"\x04user\x18\x01 \x01(\x04R\x04user\"W\n" +
"\x17ListPreAuthKeysResponse\x12<\n" +
@@ -569,32 +477,30 @@ func file_headscale_v1_preauthkey_proto_rawDescGZIP() []byte {
return file_headscale_v1_preauthkey_proto_rawDescData
}
var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 9)
var file_headscale_v1_preauthkey_proto_msgTypes = make([]protoimpl.MessageInfo, 7)
var file_headscale_v1_preauthkey_proto_goTypes = []any{
(*PreAuthKey)(nil), // 0: headscale.v1.PreAuthKey
(*CreatePreAuthKeyRequest)(nil), // 1: headscale.v1.CreatePreAuthKeyRequest
(*CreatePreAuthKeyResponse)(nil), // 2: headscale.v1.CreatePreAuthKeyResponse
(*ExpirePreAuthKeyRequest)(nil), // 3: headscale.v1.ExpirePreAuthKeyRequest
(*ExpirePreAuthKeyResponse)(nil), // 4: headscale.v1.ExpirePreAuthKeyResponse
(*DeletePreAuthKeyRequest)(nil), // 5: headscale.v1.DeletePreAuthKeyRequest
(*DeletePreAuthKeyResponse)(nil), // 6: headscale.v1.DeletePreAuthKeyResponse
(*ListPreAuthKeysRequest)(nil), // 7: headscale.v1.ListPreAuthKeysRequest
(*ListPreAuthKeysResponse)(nil), // 8: headscale.v1.ListPreAuthKeysResponse
(*User)(nil), // 9: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 10: google.protobuf.Timestamp
(*ListPreAuthKeysRequest)(nil), // 5: headscale.v1.ListPreAuthKeysRequest
(*ListPreAuthKeysResponse)(nil), // 6: headscale.v1.ListPreAuthKeysResponse
(*User)(nil), // 7: headscale.v1.User
(*timestamppb.Timestamp)(nil), // 8: google.protobuf.Timestamp
}
var file_headscale_v1_preauthkey_proto_depIdxs = []int32{
9, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User
10, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp
10, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp
10, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey
0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
7, // 0: headscale.v1.PreAuthKey.user:type_name -> headscale.v1.User
8, // 1: headscale.v1.PreAuthKey.expiration:type_name -> google.protobuf.Timestamp
8, // 2: headscale.v1.PreAuthKey.created_at:type_name -> google.protobuf.Timestamp
8, // 3: headscale.v1.CreatePreAuthKeyRequest.expiration:type_name -> google.protobuf.Timestamp
0, // 4: headscale.v1.CreatePreAuthKeyResponse.pre_auth_key:type_name -> headscale.v1.PreAuthKey
0, // 5: headscale.v1.ListPreAuthKeysResponse.pre_auth_keys:type_name -> headscale.v1.PreAuthKey
6, // [6:6] is the sub-list for method output_type
6, // [6:6] is the sub-list for method input_type
6, // [6:6] is the sub-list for extension type_name
6, // [6:6] is the sub-list for extension extendee
0, // [0:6] is the sub-list for field type_name
}
func init() { file_headscale_v1_preauthkey_proto_init() }
@@ -609,7 +515,7 @@ func file_headscale_v1_preauthkey_proto_init() {
GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
RawDescriptor: unsafe.Slice(unsafe.StringData(file_headscale_v1_preauthkey_proto_rawDesc), len(file_headscale_v1_preauthkey_proto_rawDesc)),
NumEnums: 0,
NumMessages: 9,
NumMessages: 7,
NumExtensions: 0,
NumServices: 0,
},

View File

@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
// protoc-gen-go v1.36.10
// protoc-gen-go v1.36.6
// protoc (unknown)
// source: headscale/v1/user.proto

View File

@@ -164,29 +164,6 @@
]
}
},
"/api/v1/health": {
"get": {
"summary": "--- Health start ---",
"operationId": "HeadscaleService_Health",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1HealthResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"tags": [
"HeadscaleService"
]
}
},
"/api/v1/node": {
"get": {
"operationId": "HeadscaleService_ListNodes",
@@ -406,13 +383,6 @@
"required": true,
"type": "string",
"format": "uint64"
},
{
"name": "expiry",
"in": "query",
"required": false,
"type": "string",
"format": "date-time"
}
],
"tags": [
@@ -618,41 +588,6 @@
"HeadscaleService"
]
},
"delete": {
"operationId": "HeadscaleService_DeletePreAuthKey",
"responses": {
"200": {
"description": "A successful response.",
"schema": {
"$ref": "#/definitions/v1DeletePreAuthKeyResponse"
}
},
"default": {
"description": "An unexpected error response.",
"schema": {
"$ref": "#/definitions/rpcStatus"
}
}
},
"parameters": [
{
"name": "user",
"in": "query",
"required": false,
"type": "string",
"format": "uint64"
},
{
"name": "key",
"in": "query",
"required": false,
"type": "string"
}
],
"tags": [
"HeadscaleService"
]
},
"post": {
"summary": "--- PreAuthKeys start ---",
"operationId": "HeadscaleService_CreatePreAuthKey",
@@ -1064,9 +999,6 @@
"v1DeleteNodeResponse": {
"type": "object"
},
"v1DeletePreAuthKeyResponse": {
"type": "object"
},
"v1DeleteUserResponse": {
"type": "object"
},
@@ -1124,14 +1056,6 @@
}
}
},
"v1HealthResponse": {
"type": "object",
"properties": {
"databaseConnectivity": {
"type": "boolean"
}
}
},
"v1ListApiKeysResponse": {
"type": "object",
"properties": {

go.mod (149 changes)
View File

@@ -1,59 +1,61 @@
module github.com/juanfont/headscale
go 1.25
go 1.24.0
toolchain go1.24.2
require (
github.com/arl/statsviz v0.7.2
github.com/cenkalti/backoff/v5 v5.0.3
github.com/chasefleming/elem-go v0.31.0
github.com/coder/websocket v1.8.14
github.com/coreos/go-oidc/v3 v3.16.0
github.com/creachadair/command v0.2.0
github.com/AlecAivazis/survey/v2 v2.3.7
github.com/arl/statsviz v0.6.0
github.com/cenkalti/backoff/v5 v5.0.2
github.com/chasefleming/elem-go v0.30.0
github.com/coder/websocket v1.8.13
github.com/coreos/go-oidc/v3 v3.14.1
github.com/creachadair/command v0.1.22
github.com/creachadair/flax v0.0.5
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/docker/docker v28.5.1+incompatible
github.com/docker/docker v28.3.3+incompatible
github.com/fsnotify/fsnotify v1.9.0
github.com/glebarez/sqlite v1.11.0
github.com/go-gormigrate/gormigrate/v2 v2.1.5
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced
github.com/go-gormigrate/gormigrate/v2 v2.1.4
github.com/gofrs/uuid/v5 v5.3.2
github.com/google/go-cmp v0.7.0
github.com/gorilla/mux v1.8.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.0
github.com/jagottsicher/termcolor v1.0.2
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25
github.com/ory/dockertest/v3 v3.12.0
github.com/philip-bui/grpc-zerolog v1.0.1
github.com/pkg/profile v1.7.0
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/common v0.66.1
github.com/pterm/pterm v0.12.82
github.com/puzpuzpuz/xsync/v4 v4.2.0
github.com/prometheus/client_golang v1.22.0
github.com/prometheus/common v0.65.0
github.com/pterm/pterm v0.12.81
github.com/puzpuzpuz/xsync/v4 v4.1.0
github.com/rs/zerolog v1.34.0
github.com/samber/lo v1.52.0
github.com/sasha-s/go-deadlock v0.3.6
github.com/spf13/cobra v1.10.1
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/samber/lo v1.51.0
github.com/sasha-s/go-deadlock v0.3.5
github.com/spf13/cobra v1.9.1
github.com/spf13/viper v1.20.1
github.com/stretchr/testify v1.10.0
github.com/tailscale/hujson v0.0.0-20250226034555-ec1d1c113d33
github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993
github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694
github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97
github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba
golang.org/x/crypto v0.43.0
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b
golang.org/x/net v0.46.0
golang.org/x/oauth2 v0.32.0
golang.org/x/sync v0.17.0
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4
google.golang.org/grpc v1.75.1
google.golang.org/protobuf v1.36.10
golang.org/x/crypto v0.40.0
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0
golang.org/x/net v0.42.0
golang.org/x/oauth2 v0.30.0
golang.org/x/sync v0.16.0
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822
google.golang.org/grpc v1.73.0
google.golang.org/protobuf v1.36.6
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
gopkg.in/yaml.v3 v3.0.1
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.31.0
tailscale.com v1.86.5
zgo.at/zcache/v2 v2.4.1
gorm.io/gorm v1.30.0
tailscale.com v1.84.3
zgo.at/zcache/v2 v2.2.0
zombiezen.com/go/postgrestest v1.0.1
)
@@ -75,17 +77,17 @@ require (
// together, e.g:
// go get modernc.org/libc@v1.55.3 modernc.org/sqlite@v1.33.1
require (
modernc.org/libc v1.66.10 // indirect
modernc.org/libc v1.62.1 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.39.1
modernc.org/memory v1.10.0 // indirect
modernc.org/sqlite v1.37.0
)
require (
atomicgo.dev/cursor v0.2.0 // indirect
atomicgo.dev/keyboard v0.2.9 // indirect
atomicgo.dev/schedule v0.1.0 // indirect
dario.cat/mergo v1.0.2 // indirect
dario.cat/mergo v1.0.1 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
@@ -109,18 +111,17 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/clipperhouse/uax29/v2 v2.2.0 // indirect
github.com/containerd/console v1.0.5 // indirect
github.com/containerd/continuity v0.4.5 // indirect
github.com/containerd/errdefs v0.3.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 // indirect
github.com/creachadair/mds v0.25.10 // indirect
github.com/creachadair/mds v0.24.3 // indirect
github.com/dblohm7/wingoes v0.0.0-20240123200102-b75a8a7d7eb0 // indirect
github.com/digitalocean/go-smbios v0.0.0-20180907143718-390a4f403a8e // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/cli v28.5.1+incompatible // indirect
github.com/docker/go-connections v0.6.0 // indirect
github.com/docker/cli v28.1.1+incompatible // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/felixge/fgprof v0.9.5 // indirect
@@ -129,12 +130,14 @@ require (
github.com/gaissmai/bart v0.18.0 // indirect
github.com/glebarez/go-sqlite v1.22.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-jose/go-jose/v4 v4.1.0 // indirect
github.com/go-json-experiment/json v0.0.0-20250223041408-d3c622f1b874 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-viper/mapstructure/v2 v2.4.0 // indirect
github.com/go-viper/mapstructure/v2 v2.2.1 // indirect
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/golang-jwt/jwt/v5 v5.2.2 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.4 // indirect
@@ -142,10 +145,12 @@ require (
github.com/google/go-github v17.0.0+incompatible // indirect
github.com/google/go-querystring v1.1.0 // indirect
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 // indirect
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d // indirect
github.com/google/pprof v0.0.0-20250501235452-c0086092b71a // indirect
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gookit/color v1.6.0 // indirect
github.com/gookit/color v1.5.4 // indirect
github.com/gorilla/csrf v1.7.3 // indirect
github.com/gorilla/securecookie v1.1.2 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/hashicorp/go-version v1.7.0 // indirect
github.com/hdevalence/ed25519consensus v0.2.0 // indirect
@@ -153,24 +158,26 @@ require (
github.com/inconshreveable/mousetrap v1.1.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/pgx/v5 v5.7.6 // indirect
github.com/jackc/pgx/v5 v5.7.4 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
github.com/jmespath/go-jmespath v0.4.0 // indirect
github.com/jsimonetti/rtnetlink v1.4.1 // indirect
github.com/klauspost/compress v1.18.1 // indirect
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/kr/pretty v0.3.1 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/lib/pq v1.10.9 // indirect
github.com/lithammer/fuzzysearch v1.1.8 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.19 // indirect
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mdlayher/genetlink v1.3.2 // indirect
github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 // indirect
github.com/mdlayher/sdnotify v1.0.0 // indirect
github.com/mdlayher/socket v0.5.0 // indirect
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d // indirect
github.com/miekg/dns v1.1.58 // indirect
github.com/mitchellh/go-ps v1.0.0 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
@@ -179,25 +186,27 @@ require (
github.com/moby/term v0.5.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/ncruces/go-strftime v0.1.9 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/opencontainers/runc v1.3.2 // indirect
github.com/opencontainers/runc v1.3.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 // indirect
github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/prometheus-community/pro-bing v0.4.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/prometheus/procfs v0.15.1 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/rivo/uniseg v0.4.7 // indirect
github.com/rogpeppe/go-internal v1.14.1 // indirect
github.com/safchain/ethtool v0.3.0 // indirect
github.com/sagikazarmark/locafero v0.12.0 // indirect
github.com/sagikazarmark/locafero v0.9.0 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/sourcegraph/conc v0.3.0 // indirect
github.com/spf13/afero v1.14.0 // indirect
github.com/spf13/cast v1.8.0 // indirect
github.com/spf13/pflag v1.0.6 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e // indirect
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 // indirect
@@ -206,8 +215,8 @@ require (
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc // indirect
github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d // indirect
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 // indirect
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da // indirect
github.com/vishvananda/netns v0.0.5 // indirect
github.com/tailscale/wireguard-go v0.0.0-20250304000100-91a0587fb251 // indirect
github.com/vishvananda/netns v0.0.4 // indirect
github.com/x448/float16 v0.8.4 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
@@ -215,22 +224,22 @@ require (
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 // indirect
go.opentelemetry.io/otel v1.37.0 // indirect
go.opentelemetry.io/otel v1.36.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 // indirect
go.opentelemetry.io/otel/metric v1.37.0 // indirect
go.opentelemetry.io/otel/trace v1.37.0 // indirect
go.yaml.in/yaml/v2 v2.4.2 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/otel/sdk v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect
golang.org/x/mod v0.29.0 // indirect
golang.org/x/sys v0.37.0 // indirect
golang.org/x/term v0.36.0 // indirect
golang.org/x/text v0.30.0 // indirect
golang.org/x/time v0.11.0 // indirect
golang.org/x/tools v0.38.0 // indirect
golang.org/x/mod v0.26.0 // indirect
golang.org/x/sys v0.34.0 // indirect
golang.org/x/term v0.33.0 // indirect
golang.org/x/text v0.27.0 // indirect
golang.org/x/time v0.10.0 // indirect
golang.org/x/tools v0.35.0 // indirect
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
golang.zx2c4.com/wireguard/windows v0.5.3 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 // indirect
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 // indirect
)

go.sum (354 changes)
View File

@@ -8,12 +8,14 @@ atomicgo.dev/keyboard v0.2.9 h1:tOsIid3nlPLZ3lwgG8KZMp/SFmr7P0ssEN5JUsm78K8=
atomicgo.dev/keyboard v0.2.9/go.mod h1:BC4w9g00XkxH/f1HXhW2sXmJFOCWbKn9xrOunSFtExQ=
atomicgo.dev/schedule v0.1.0 h1:nTthAbhZS5YZmgYbb2+DH8uQIZcTlIrd4eYr3UQxEjs=
atomicgo.dev/schedule v0.1.0/go.mod h1:xeUa3oAkiuHYh8bKiQBRojqAMq3PXXbJujjb0hw8pEU=
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
dario.cat/mergo v1.0.1 h1:Ra4+bf83h2ztPIQYNP99R6m+Y7KfnARDfID+a+vLl4s=
dario.cat/mergo v1.0.1/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
github.com/AlecAivazis/survey/v2 v2.3.7 h1:6I/u8FvytdGsgonrYsVn2t8t4QiRnh6QSTqkkhIiSjQ=
github.com/AlecAivazis/survey/v2 v2.3.7/go.mod h1:xUTIdE4KCOIjsBAE1JYsUPoCqYdZ1reCfTwbto0Fduo=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/BurntSushi/toml v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
@@ -29,6 +31,8 @@ github.com/MarvinJWendt/testza v0.5.2 h1:53KDo64C1z/h/d/stCYCPY69bt/OSwjq5KpFNwi
github.com/MarvinJWendt/testza v0.5.2/go.mod h1:xu53QFE5sCdjtMCKk8YMQ2MnymimEctc4n3EjyIYvEY=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2 h1:+vx7roKuyA63nhn5WAunQHLTznkw5W8b1Xc0dNjp83s=
github.com/Netflix/go-expect v0.0.0-20220104043353-73e0943537d2/go.mod h1:HBCaDeC1lPdgDeDbhX8XFpy1jqjK0IBG8W5K+xYqA0w=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 h1:TngWCqHvy9oXAN6lEVMRuU21PR1EtLVZJmdB18Gu3Rw=
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8D7ML55dXQrVaamCz2vxCfdQBasLZfHKk=
github.com/akutz/memconn v0.1.0 h1:NawI0TORU4hcOMsMr11g7vwlCdkYeLKXBcxWu2W/P8A=
@@ -37,8 +41,8 @@ github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa h1:LHTHcTQiSGT7V
github.com/alexbrainman/sspi v0.0.0-20231016080023-1a75b4708caa/go.mod h1:cEWa1LVoE5KvSD9ONXsZrj0z6KqySlCCNKHlLzbqAt4=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
github.com/arl/statsviz v0.7.2 h1:xnuIfRiXE4kvxEcfGL+IE3mKH1BXNHuE+eJELIh7oOA=
github.com/arl/statsviz v0.7.2/go.mod h1:XlrbiT7xYT03xaW9JMMfD8KFUhBOESJwfyNJu83PbB0=
github.com/arl/statsviz v0.6.0 h1:jbW1QJkEYQkufd//4NDYRSNBpwJNrdzPahF7ZmoGdyE=
github.com/arl/statsviz v0.6.0/go.mod h1:0toboo+YGSUXDaS4g1D5TVS4dXs7S7YYT5J/qnW2h8s=
github.com/atomicgo/cursor v0.0.1/go.mod h1:cBON2QmmrysudxNBFthvMtN32r3jxVRIvzkUiF/RuIk=
github.com/aws/aws-sdk-go-v2 v1.36.0 h1:b1wM5CcE65Ujwn565qcwgtOTT1aT4ADOHHgglKjG7fk=
github.com/aws/aws-sdk-go-v2 v1.36.0/go.mod h1:5PMILGVKiW32oDzjj6RU52yrNrDPUHcbZQYr1sM7qmM=
@@ -82,12 +86,12 @@ github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cenkalti/backoff/v5 v5.0.2 h1:rIfFVxEf1QsI7E1ZHfp/B4DF/6QBAUhmgkxc0H7Zss8=
github.com/cenkalti/backoff/v5 v5.0.2/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chasefleming/elem-go v0.31.0 h1:vZsuKmKdv6idnUbu3awMruxTiFqZ/ertFJFAyBCkVhI=
github.com/chasefleming/elem-go v0.31.0/go.mod h1:UBmmZfso2LkXA0HZInbcwsmhE/LXFClEcBPNCGeARtA=
github.com/chasefleming/elem-go v0.30.0 h1:BlhV1ekv1RbFiM8XZUQeln1Ikb4D+bu2eDO4agREvok=
github.com/chasefleming/elem-go v0.30.0/go.mod h1:hz73qILBIKnTgOujnSMtEj20/epI+f6vg71RUilJAA4=
github.com/chromedp/cdproto v0.0.0-20230802225258-3cf4e6d46a89/go.mod h1:GKljq0VrfU4D5yc+2qA6OVr8pmO/MBbPEWqWQ/oqGEs=
github.com/chromedp/chromedp v0.9.2/go.mod h1:LkSXJKONWTCHAfQasKFUZI+mxqS4tZqhmtGzzhLsnLs=
github.com/chromedp/sysutil v1.0.0/go.mod h1:kgWmDdq8fTzXYcKIBqIYvRRTnYb9aNS9moAV0xufSww=
@@ -99,10 +103,8 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/cilium/ebpf v0.17.3 h1:FnP4r16PWYSE4ux6zN+//jMcW4nMVRvuTLVTvCjyyjg=
github.com/cilium/ebpf v0.17.3/go.mod h1:G5EDHij8yiLzaqn0WjyfJHvRa+3aDlReIaLVRMvOyJk=
github.com/clipperhouse/uax29/v2 v2.2.0 h1:ChwIKnQN3kcZteTXMgb1wztSgaU+ZemkgWdohwgs8tY=
github.com/clipperhouse/uax29/v2 v2.2.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
github.com/coder/websocket v1.8.13 h1:f3QZdXy7uGVz+4uCJy2nTZyM0yTBj8yANEHhqlXZ9FE=
github.com/coder/websocket v1.8.13/go.mod h1:LNVeNrXQZfe5qhS9ALED3uA+l5pPqvwXg3CKoDBB2gs=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
github.com/containerd/console v1.0.5 h1:R0ymNeydRqH2DmakFNdmjR2k0t7UPuiOV/N/27/qqsc=
github.com/containerd/console v1.0.5/go.mod h1:YynlIjWYF8myEu6sdkwKIvGQq+cOckRm6So2avqoYAk=
@@ -116,21 +118,20 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6AYUslN6c6iuZWTKsKxUFDlpnmilO6R2n0=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-oidc/v3 v3.16.0 h1:qRQUCFstKpXwmEjDQTIbyY/5jF00+asXzSkmkoa/mow=
github.com/coreos/go-oidc/v3 v3.16.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
github.com/coreos/go-oidc/v3 v3.14.1 h1:9ePWwfdwC4QKRlCXsJGou56adA/owXczOzwKdOumLqk=
github.com/coreos/go-oidc/v3 v3.14.1/go.mod h1:HaZ3szPaZ0e4r6ebqvsLWlk2Tn+aejfmrfah6hnSYEU=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbqI9cA=
github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o=
github.com/creachadair/command v0.1.22 h1:WmdrURwZdmPD1jm13SjKooaMoqo7mW1qI2BPCShs154=
github.com/creachadair/command v0.1.22/go.mod h1:YFc+OMGucqTpxwQg/iJnNg8BMNmRPDK60rYy8ckgKwE=
github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE=
github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8=
github.com/creachadair/mds v0.25.2 h1:xc0S0AfDq5GX9KUR5sLvi5XjA61/P6S5e0xFs1vA18Q=
github.com/creachadair/mds v0.25.2/go.mod h1:+s4CFteFRj4eq2KcGHW8Wei3u9NyzSPzNV32EvjyK/Q=
github.com/creachadair/mds v0.25.10 h1:9k9JB35D1xhOCFl0liBhagBBp8fWWkKZrA7UXsfoHtA=
github.com/creachadair/mds v0.25.10/go.mod h1:4hatI3hRM+qhzuAmqPRFvaBM8mONkS7nsLxkcuTYUIs=
github.com/creachadair/mds v0.24.3 h1:X7cM2ymZSyl4IVWnfyXLxRXMJ6awhbcWvtLPhfnTaqI=
github.com/creachadair/mds v0.24.3/go.mod h1:0oeHt9QWu8VfnmskOL4zi2CumjEvB29ScmtOmdrhFeU=
github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
github.com/creachadair/taskgroup v0.13.2/go.mod h1:i3V1Zx7H8RjwljUEeUWYT30Lmb9poewSb2XI1yTwD0g=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr4O4=
github.com/creack/pty v1.1.23 h1:4M6+isWdcStXEf15G/RbrMPOQj1dZ7HPZCGwE4kOeP0=
github.com/creack/pty v1.1.23/go.mod h1:08sCNb52WyoAwi2QDyzUCTgcvVFhUzewun7wtTfvcwE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -145,14 +146,16 @@ github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5Qvfr
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/djherbis/times v1.6.0 h1:w2ctJ92J8fBvWPxugmXIv7Nz7Q3iDMKNx9v5ocVH20c=
github.com/djherbis/times v1.6.0/go.mod h1:gOHeRAz2h+VJNZ5Gmc/o7iD9k4wW7NMVqieYCY99oc0=
github.com/docker/cli v28.5.1+incompatible h1:ESutzBALAD6qyCLqbQSEf1a/U8Ybms5agw59yGVc+yY=
github.com/docker/cli v28.5.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/docker v28.5.1+incompatible h1:Bm8DchhSD2J6PsFzxC35TZo4TLGR2PdW/E69rU45NhM=
github.com/docker/docker v28.5.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.6.0 h1:LlMG9azAe1TqfR7sO+NJttz1gy6KO7VJBh+pMmjSD94=
github.com/docker/go-connections v0.6.0/go.mod h1:AahvXYshr6JgfUJGdDCs2b5EZG/vmaMAntpSFH5BFKE=
github.com/docker/cli v28.1.1+incompatible h1:eyUemzeI45DY7eDPuwUcmDyDj1pM98oD5MdSpiItp8k=
github.com/docker/cli v28.1.1+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/dsnet/try v0.0.3 h1:ptR59SsrcFUYbT/FhAbKTV6iLkeD6O18qfIWRml2fqI=
github.com/dsnet/try v0.0.3/go.mod h1:WBM8tRpUmnXXhY1U6/S8dt6UWdHTQ7y8A5YSkRCkq40=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/felixge/fgprof v0.9.3/go.mod h1:RdbpDgzqYVh/T9fPELJyV7EYJuHB55UTEULNun8eiPw=
@@ -174,25 +177,25 @@ github.com/glebarez/go-sqlite v1.22.0 h1:uAcMJhaA6r3LHMTFgP0SifzgXg46yJkgxqyuyec
github.com/glebarez/go-sqlite v1.22.0/go.mod h1:PlBIdHe0+aUEFn+r2/uthrWq4FxbzugL0L8Li6yQJbc=
github.com/glebarez/sqlite v1.11.0 h1:wSG0irqzP6VurnMEpFGer5Li19RpIRi2qvQz++w0GMw=
github.com/glebarez/sqlite v1.11.0/go.mod h1:h8/o8j5wiAsqSPoWELDUdJXhjAhsVliSn7bWZjOhrgQ=
github.com/go-gormigrate/gormigrate/v2 v2.1.5 h1:1OyorA5LtdQw12cyJDEHuTrEV3GiXiIhS4/QTTa/SM8=
github.com/go-gormigrate/gormigrate/v2 v2.1.5/go.mod h1:mj9ekk/7CPF3VjopaFvWKN2v7fN3D9d3eEOAXRhi/+M=
github.com/go-gormigrate/gormigrate/v2 v2.1.4 h1:KOPEt27qy1cNzHfMZbp9YTmEuzkY4F4wrdsJW9WFk1U=
github.com/go-gormigrate/gormigrate/v2 v2.1.4/go.mod h1:y/6gPAH6QGAgP1UfHMiXcqGeJ88/GRQbfCReE1JJD5Y=
github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced h1:Q311OHjMh/u5E2TITc++WlTP5We0xNseRMkHDyvhW7I=
github.com/go-json-experiment/json v0.0.0-20250813024750-ebf49471dced/go.mod h1:TiCD2a1pcmjd7YnhGH0f/zKNcCD06B029pHhzV23c2M=
github.com/go-jose/go-jose/v4 v4.1.0 h1:cYSYxd3pw5zd2FSXk2vGdn9igQU2PS8MuxrCOCl0FdY=
github.com/go-jose/go-jose/v4 v4.1.0/go.mod h1:GG/vqmYm3Von2nYiB2vGTXzdoNKE5tix5tuc6iAd+sw=
github.com/go-json-experiment/json v0.0.0-20250223041408-d3c622f1b874 h1:F8d1AJ6M9UQCavhwmO6ZsrYLfG8zVFWfEfMS2MXPkSY=
github.com/go-json-experiment/json v0.0.0-20250223041408-d3c622f1b874/go.mod h1:TiCD2a1pcmjd7YnhGH0f/zKNcCD06B029pHhzV23c2M=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/go-viper/mapstructure/v2 v2.4.0 h1:EBsztssimR/CONLSZZ04E8qAkxNYq4Qp9LvH92wZUgs=
github.com/go-viper/mapstructure/v2 v2.4.0/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/go-viper/mapstructure/v2 v2.2.1 h1:ZAaOCxANMuZx5RCeg0mBdEZk7DZasvvZIxtHqx8aGss=
github.com/go-viper/mapstructure/v2 v2.2.1/go.mod h1:oJDH3BJKyqBA2TXFhDsKDGDTlndYOZ6rGS0BRZIxGhM=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737 h1:cf60tHxREO3g1nroKr2osU3JWZsJzkfi7rEg+oAB0Lo=
github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737/go.mod h1:MIS0jDzbU/vuM9MC4YnBITCv+RYuTRq8dJzmCrFsK9g=
github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
@@ -203,6 +206,8 @@ github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466 h1:sQspH8M4niEijh
github.com/godbus/dbus/v5 v5.1.1-0.20230522191255-76236955d466/go.mod h1:ZiQxhyQ+bbbfxUKVvjfO498oPYvtYhZzycal3G/NHmU=
github.com/gofrs/uuid/v5 v5.3.2 h1:2jfO8j3XgSwlz/wHqemAEugfnTlikAYHhnqQ8Xh4fE0=
github.com/gofrs/uuid/v5 v5.3.2/go.mod h1:CDOjlDMVAtN56jqyRUZh58JT31Tiw7/oQyEXZV+9bD8=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.2.2 h1:Rl4B7itRWVtYIHFrSNd7vhTiz9UpLdi6gZhZ3wEeDy8=
github.com/golang-jwt/jwt/v5 v5.2.2/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -221,32 +226,38 @@ github.com/google/go-querystring v1.1.0 h1:AnCroh3fv4ZBgVIf1Iwtovgjaw/GiKJo8M8yD
github.com/google/go-querystring v1.1.0/go.mod h1:Kcdr2DB4koayq7X8pmAG4sNG59So17icRSOU623lUBU=
github.com/google/go-tpm v0.9.4 h1:awZRf9FwOeTunQmHoDYSHJps3ie6f1UlhS1fOdPEt1I=
github.com/google/go-tpm v0.9.4/go.mod h1:h9jEsEECg7gtLis0upRBQU+GhYVH6jMjrFxI8u6bVUY=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806 h1:wG8RYIyctLhdFk6Vl1yPGtSRtwGpVkWyZww1OCil2MI=
github.com/google/nftables v0.2.1-0.20240414091927-5e242ec57806/go.mod h1:Beg6V6zZ3oEn0JuiUQ4wqwuyqqzasOltcoXPtgLbFp4=
github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8IQu3XUZ8Nc/bM9CCZFOyjUNOSygVozoDg=
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d h1:KJIErDwbSHjnp/SGzE5ed8Aol7JsKiI5X7yWKAtzhM0=
github.com/google/pprof v0.0.0-20251007162407-5df77e3f7d1d/go.mod h1:I6V7YzU0XDpsHqbsyrghnFZLO1gwK6NPTNvmetQIk9U=
github.com/google/pprof v0.0.0-20250501235452-c0086092b71a h1:rDA3FfmxwXR+BVKKdz55WwMJ1pD2hJQNW31d+l3mPk4=
github.com/google/pprof v0.0.0-20250501235452-c0086092b71a/go.mod h1:5hDyRhoBCxViHszMt12TnOpEI4VVi+U8Gm9iphldiMA=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gookit/assert v0.1.1 h1:lh3GcawXe/p+cU7ESTZ5Ui3Sm/x8JWpIis4/1aF0mY0=
github.com/gookit/assert v0.1.1/go.mod h1:jS5bmIVQZTIwk42uXl4lyj4iaaxx32tqH16CFj0VX2E=
github.com/gookit/color v1.4.2/go.mod h1:fqRyamkC1W8uxl+lxCQxOT09l/vYfZ+QeiX3rKQHCoQ=
github.com/gookit/color v1.5.0/go.mod h1:43aQb+Zerm/BWh2GnrgOQm7ffz7tvQXEKV6BFMl7wAo=
github.com/gookit/color v1.6.0 h1:JjJXBTk1ETNyqyilJhkTXJYYigHG24TM9Xa2M1xAhRA=
github.com/gookit/color v1.6.0/go.mod h1:9ACFc7/1IpHGBW8RwuDm/0YEnhg3dwwXpoMsmtyHfjs=
github.com/gookit/color v1.5.4 h1:FZmqs7XOyGgCAxmWyPslpiok1k05wmY3SJTytgvYFs0=
github.com/gookit/color v1.5.4/go.mod h1:pZJOeOS8DM43rXbp4AZo1n9zCU2qjpcRko0b6/QJi9w=
github.com/gorilla/csrf v1.7.3 h1:BHWt6FTLZAb2HtWT5KDBf6qgpZzvtbp9QWDRKZMXJC0=
github.com/gorilla/csrf v1.7.3/go.mod h1:F1Fj3KG23WYHE6gozCmBAezKookxbIvUJT+121wTuLk=
github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/securecookie v1.1.2 h1:YCIWL56dvtr73r6715mJs5ZvhtnY73hBvEF8kXD8ePA=
github.com/gorilla/securecookie v1.1.2/go.mod h1:NfCASbcHqRSY+3a8tlWJwsQap2VX5pwzwo4h3eOamfo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.0 h1:+epNPbD5EqgpEMm5wrl4Hqts3jZt8+kYaqUisuuIGTk=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.0/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=
github.com/hashicorp/go-version v1.7.0 h1:5tqGy27NaOTB8yJKUZELlFAS/LTKJkrmONwQKeRZfjY=
github.com/hashicorp/go-version v1.7.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU=
github.com/hdevalence/ed25519consensus v0.2.0/go.mod h1:w3BHWjwJbFU29IRHL1Iqkw3sus+7FctEyM4RqDxYNzo=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec h1:qv2VnGeEQHchGaZ/u7lxST/RaJw+cv273q79D81Xbog=
github.com/hinshun/vt10x v0.0.0-20220119200601-820417d04eec/go.mod h1:Q48J4R4DvxnHolD5P8pOtXigYlRuPLGl6moFx3ulM68=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/ianlancetaylor/demangle v0.0.0-20230524184225-eabc099b10ab/go.mod h1:gx7rwoVhcfuVKG5uya9Hs3Sxj7EIvldVofAWIUtGouw=
github.com/illarion/gonotify/v3 v3.0.2 h1:O7S6vcopHexutmpObkeWsnzMJt/r1hONIEogeVNmJMk=
@@ -259,8 +270,8 @@ github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsI
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761/go.mod h1:5TJZWKEWniPve33vlWYSoGYefn3gLQRzjfDlhSJ9ZKM=
github.com/jackc/pgx/v5 v5.7.6 h1:rWQc5FwZSPX58r1OQmkuaNicxdmExaEz5A2DO2hUuTk=
github.com/jackc/pgx/v5 v5.7.6/go.mod h1:aruU7o91Tc2q2cFp5h4uP3f6ztExVpyVv88Xl/8Vl8M=
github.com/jackc/pgx/v5 v5.7.4 h1:9wKznZrhWa2QiHL+NjTSPP6yjl3451BX3imWDnokYlg=
github.com/jackc/pgx/v5 v5.7.4/go.mod h1:ncY89UGWxg82EykZUwSpUKEfccBGGYq1xjrOpsbsfGQ=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jagottsicher/termcolor v1.0.2 h1:fo0c51pQSuLBN1+yVX2ZE+hE+P7ULb/TY8eRowJnrsM=
@@ -278,10 +289,12 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/jsimonetti/rtnetlink v1.4.1 h1:JfD4jthWBqZMEffc5RjgmlzpYttAVw1sdnmiNaPO3hE=
github.com/jsimonetti/rtnetlink v1.4.1/go.mod h1:xJjT7t59UIZ62GLZbv6PLLo8VFrostJMPBAheR6OM8w=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51 h1:Z9n2FFNUXsshfwJMBgNA0RU6/i7WVaAegv3PtuIHPMs=
github.com/kballard/go-shellquote v0.0.0-20180428030007-95032a82bc51/go.mod h1:CzGEWj7cYgsdH8dAjBGEr58BoE7ScuLd+fwFZ44+/x8=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/compress v1.18.1 h1:bcSGx7UbpBqMChDtsF28Lw6v/G94LPrrbMbdC3JH2co=
github.com/klauspost/compress v1.18.1/go.mod h1:ZQFFVG+MdnR0P+l6wpXgIL4NTtwiKIdBnrBd8Nrxr+0=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.10/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
github.com/klauspost/cpuid/v2 v2.0.12/go.mod h1:g2LTdtYhdyuGPqyWyv7qRAmj1WBqxuObKfj5c0PQa7c=
@@ -308,16 +321,18 @@ github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=
github.com/lithammer/fuzzysearch v1.1.8/go.mod h1:IdqeyBClc3FFqSzYq/MXESsS4S0FsZ5ajtkr5xPLts4=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw=
github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o=
github.com/mdlayher/netlink v1.7.3-0.20250113171957-fbb4dce95f42 h1:A1Cq6Ysb0GM0tpKMbdCXCIfBclan4oHk1Jb+Hrejirg=
@@ -326,6 +341,9 @@ github.com/mdlayher/sdnotify v1.0.0 h1:Ma9XeLVN/l0qpyx1tNeMSeTjCPH6NtuD6/N9XdTlQ
github.com/mdlayher/sdnotify v1.0.0/go.mod h1:HQUmpM4XgYkhDLtd+Uad8ZFK1T9D5+pNxnXQjCeJlGE=
github.com/mdlayher/socket v0.5.0 h1:ilICZmJcQz70vrWVes1MFera4jGiWNocSkykwwoy3XI=
github.com/mdlayher/socket v0.5.0/go.mod h1:WkcBFfvyG8QENs5+hfQPl1X6Jpd2yeLIYgrGFmJiJxI=
github.com/mgutz/ansi v0.0.0-20170206155736-9520e82c474b/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d h1:5PJl274Y63IEHC+7izoQE9x6ikvDFZS2mDVS3drnohI=
github.com/mgutz/ansi v0.0.0-20200706080929-d51e80ef957d/go.mod h1:01TrycV0kFyexm33Z7vhZRXopbI8J3TDReVlkTgMUxE=
github.com/miekg/dns v1.1.58 h1:ca2Hdkz+cDg/7eNF6V56jjzuZ4aCAE+DbVkILdQWG/4=
github.com/miekg/dns v1.1.58/go.mod h1:Ypv+3b/KadlvW9vJfXOTf300O4UqaHFzFCuHz+rPkBY=
github.com/mitchellh/go-ps v1.0.0 h1:i6ampVEEF4wQFF+bkYfwYgY+F/uYJDktmvLPf7qIgjc=
@@ -344,8 +362,8 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
github.com/ncruces/go-strftime v1.0.0/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/ncruces/go-strftime v0.1.9 h1:bY0MQC28UADQmHmaF5dgpLmImcShSi2kHU9XLdhx/f4=
github.com/ncruces/go-strftime v0.1.9/go.mod h1:Fwc5htZGVVkseilnfgOVb9mKy6w1naJmn9CehxcKcls=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646 h1:zYyBkD/k9seD2A7fsi6Oo2LfFZAehjjQMERAvZLEDnQ=
github.com/nfnt/resize v0.0.0-20180221191011-83c6a9932646/go.mod h1:jpp1/29i3P1S/RLdc7JQKbRpFeM1dOBd8T9ki5s+AY8=
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25 h1:9bCMuD3TcnjeqjPT2gSlha4asp8NvgcFRYExCaikCxk=
@@ -354,16 +372,16 @@ github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/opencontainers/runc v1.3.2 h1:GUwgo0Fx9M/pl2utaSYlJfdBcXAB/CZXDxe322lvJ3Y=
github.com/opencontainers/runc v1.3.2/go.mod h1:F7UQQEsxcjUNnFpT1qPLHZBKYP7yWwk6hq8suLy9cl0=
github.com/opencontainers/runc v1.3.0 h1:cvP7xbEvD0QQAs0nZKLzkVog2OPZhI/V2w3WmTmUSXI=
github.com/opencontainers/runc v1.3.0/go.mod h1:9wbWt42gV+KRxKRVVugNP6D5+PQciRbenB4fLVsqGPs=
github.com/orisano/pixelmatch v0.0.0-20220722002657-fb0b55479cde/go.mod h1:nZgzbfBr3hhjoZnS66nKrHmduYNpc34ny7RK4z5/HM0=
github.com/ory/dockertest/v3 v3.12.0 h1:3oV9d0sDzlSQfHtIaB5k6ghUCVMVLpAY8hwrqoCyRCw=
github.com/ory/dockertest/v3 v3.12.0/go.mod h1:aKNDTva3cp8dwOWwb9cWuX84aH5akkxXRvO7KCwWVjE=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/petermattis/goid v0.0.0-20250813065127-a731cc31b4fe/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490 h1:QTvNkZ5ylY0PGgA+Lih+GdboMLY/G9SEGLMEGVjTVA4=
github.com/petermattis/goid v0.0.0-20250904145737-900bdf8bb490/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/petermattis/goid v0.0.0-20240813172612-4fcff4a6cae7/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a h1:S+AGcmAESQ0pXCUNnRH7V+bOUIgkSX5qVt2cNKCrm0Q=
github.com/petermattis/goid v0.0.0-20250319124200-ccd6737f222a/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/philip-bui/grpc-zerolog v1.0.1 h1:EMacvLRUd2O1K0eWod27ZP5CY1iTNkhBDLSN+Q4JEvA=
github.com/philip-bui/grpc-zerolog v1.0.1/go.mod h1:qXbiq/2X4ZUMMshsqlWyTHOcw7ns+GZmlqZZN05ZHcQ=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
@@ -380,14 +398,14 @@ github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRI
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus-community/pro-bing v0.4.0 h1:YMbv+i08gQz97OZZBwLyvmmQEEzyfyrrjEaAchdy3R4=
github.com/prometheus-community/pro-bing v0.4.0/go.mod h1:b7wRYZtCcPmt4Sz319BykUU241rWLe1VFXyiyWK/dH4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/pterm/pterm v0.12.27/go.mod h1:PhQ89w4i95rhgE+xedAoqous6K9X+r6aSOI2eFF7DZI=
github.com/pterm/pterm v0.12.29/go.mod h1:WI3qxgvoQFFGKGjGnJR849gU0TsEOvKn5Q8LlY1U7lg=
github.com/pterm/pterm v0.12.30/go.mod h1:MOqLIyMOgmTDz9yorcYbcw+HsgoZo3BQfg2wtl3HEFE=
@@ -395,13 +413,15 @@ github.com/pterm/pterm v0.12.31/go.mod h1:32ZAWZVXD7ZfG0s8qqHXePte42kdz8ECtRyEej
github.com/pterm/pterm v0.12.33/go.mod h1:x+h2uL+n7CP/rel9+bImHD5lF3nM9vJj80k9ybiiTTE=
github.com/pterm/pterm v0.12.36/go.mod h1:NjiL09hFhT/vWjQHSj1athJpx6H8cjpHXNAK5bUw8T8=
github.com/pterm/pterm v0.12.40/go.mod h1:ffwPLwlbXxP+rxT0GsgDTzS3y3rmpAO1NMjUkGTYf8s=
github.com/pterm/pterm v0.12.82 h1:+D9wYhCaeaK0FIQoZtqbNQuNpe2lB2tajKKsTd5paVQ=
github.com/pterm/pterm v0.12.82/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw=
github.com/puzpuzpuz/xsync/v4 v4.2.0 h1:dlxm77dZj2c3rxq0/XNvvUKISAmovoXF4a4qM6Wvkr0=
github.com/puzpuzpuz/xsync/v4 v4.2.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/pterm/pterm v0.12.81 h1:ju+j5I2++FO1jBKMmscgh5h5DPFDFMB7epEjSoKehKA=
github.com/pterm/pterm v0.12.81/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw=
github.com/puzpuzpuz/xsync/v4 v4.1.0 h1:x9eHRl4QhZFIPJ17yl4KKW9xLyVWbb3/Yq4SXpjF71U=
github.com/puzpuzpuz/xsync/v4 v4.1.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
@@ -411,28 +431,29 @@ github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/safchain/ethtool v0.3.0 h1:gimQJpsI6sc1yIqP/y8GYgiXn/NjgvpM0RNoWLVVmP0=
github.com/safchain/ethtool v0.3.0/go.mod h1:SA9BwrgyAqNo7M+uaL6IYbxpm5wk3L7Mm6ocLW+CJUs=
github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sasha-s/go-deadlock v0.3.6 h1:TR7sfOnZ7x00tWPfD397Peodt57KzMDo+9Ae9rMiUmw=
github.com/sasha-s/go-deadlock v0.3.6/go.mod h1:CUqNyyvMxTyjFqDT7MRg9mb4Dv/btmGTqSR+rky/UXo=
github.com/sagikazarmark/locafero v0.9.0 h1:GbgQGNtTrEmddYDSAH9QLRyfAHY12md+8YFTqyMTC9k=
github.com/sagikazarmark/locafero v0.9.0/go.mod h1:UBUyz37V+EdMS3hDF3QWIiVr/2dPrx49OMO0Bn0hJqk=
github.com/samber/lo v1.51.0 h1:kysRYLbHy/MB7kQZf5DSN50JHmMsNEdeY24VzJFu7wI=
github.com/samber/lo v1.51.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sasha-s/go-deadlock v0.3.5 h1:tNCOEEDG6tBqrNDOX35j/7hL5FcFViG6awUGROb2NsU=
github.com/sasha-s/go-deadlock v0.3.5/go.mod h1:bugP6EGbdGYObIlx7pUZtWqlvo8k9H6vCBBsiChJQ5U=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
github.com/spf13/afero v1.15.0/go.mod h1:NC2ByUVxtQs4b3sIUphxK0NioZnmxgyCrfzeuq8lxMg=
github.com/spf13/cast v1.10.0 h1:h2x0u2shc1QuLHfxi+cTJvs30+ZAHOGRic8uyGTDWxY=
github.com/spf13/cast v1.10.0/go.mod h1:jNfB8QC9IA6ZuY2ZjDp0KtFO2LZZlg4S/7bzP6qqeHo=
github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
github.com/spf13/pflag v1.0.10/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.21.0 h1:x5S+0EU27Lbphp4UKm1C+1oQO+rKx36vfCoaVebLFSU=
github.com/spf13/viper v1.21.0/go.mod h1:P0lhsswPGWD/1lZJ9ny3fYnVqxiegrlNrEmgLjbTCAY=
github.com/sourcegraph/conc v0.3.0 h1:OQTbbt6P72L20UqAkXXuLOj79LfEanQ+YQFNpLA9ySo=
github.com/sourcegraph/conc v0.3.0/go.mod h1:Sdozi7LEKbFPqYX2/J+iBAM6HpqSLTASQIKqDmF7Mt0=
github.com/spf13/afero v1.14.0 h1:9tH6MapGnn/j0eb0yIXiLjERO8RB6xIVZRDCX7PtqWA=
github.com/spf13/afero v1.14.0/go.mod h1:acJQ8t0ohCGuMN3O+Pv0V0hgMxNYDlvdk+VTfyZmbYo=
github.com/spf13/cast v1.8.0 h1:gEN9K4b8Xws4EX0+a0reLmhq8moKn7ntRlQYgjPeCDk=
github.com/spf13/cast v1.8.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
github.com/spf13/cobra v1.9.1 h1:CXSaggrXdbHK9CF+8ywj8Amf7PBRmPCOJugH954Nnlo=
github.com/spf13/cobra v1.9.1/go.mod h1:nDyEzZ8ogv936Cinf6g1RU9MRY64Ir93oCnqb9wxYW0=
github.com/spf13/pflag v1.0.6 h1:jFzHGLGAlb3ruxLB8MhbI6A8+AQX/2eW4qeyNZXNp2o=
github.com/spf13/pflag v1.0.6/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.20.1 h1:ZMi+z/lvLyPSCoNtFCpqjy0S4kPbirhpTMwl8BkW9X4=
github.com/spf13/viper v1.20.1/go.mod h1:P9Mdzt1zoHIG8m2eZQinpiBjo6kCmZSKBClNNqjJvu4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
@@ -443,8 +464,8 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tailscale/certstore v0.1.1-0.20231202035212-d3fa0460f47e h1:PtWT87weP5LWHEY//SWsYkSO3RWRZo4OSWagh3YD2vQ=
@@ -465,16 +486,14 @@ github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d h1:mnqtPWYyvNiPU9l
github.com/tailscale/setec v0.0.0-20250305161714-445cadbbca3d/go.mod h1:9BzmlFc3OLqLzLTF/5AY+BMs+clxMqyhSGzgXIm8mNI=
github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694 h1:95eIP97c88cqAFU/8nURjgI9xxPbD+Ci6mY/a79BI/w=
github.com/tailscale/squibble v0.0.0-20250108170732-a4ca58afa694/go.mod h1:veguaG8tVg1H/JG5RfpoUW41I+O8ClPElo/fTYr8mMk=
github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993 h1:FyiiAvDAxpB0DrW2GW3KOVfi3YFOtsQUEeFWbf55JJU=
github.com/tailscale/squibble v0.0.0-20251030164342-4d5df9caa993/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97 h1:JJkDnrAhHvOCttk8z9xeZzcDlzzkRA7+Duxj9cwOyxk=
github.com/tailscale/tailsql v0.0.0-20250421235516-02f85f087b97/go.mod h1:9jS8HxwsP2fU4ESZ7DZL+fpH/U66EVlVMzdgznH12RM=
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976 h1:UBPHPtv8+nEAy2PD8RyAhOYvau1ek0HDJqLS/Pysi14=
github.com/tailscale/web-client-prebuilt v0.0.0-20250124233751-d4cd19a26976/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6/go.mod h1:ZXRML051h7o4OcI0d3AaILDIad/Xw0IkXaHM17dic1Y=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da h1:jVRUZPRs9sqyKlYHHzHjAqKN+6e/Vog6NpHYeNPJqOw=
github.com/tailscale/wireguard-go v0.0.0-20250716170648-1d0488a3d7da/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/wireguard-go v0.0.0-20250304000100-91a0587fb251 h1:h/41LFTrwMxB9Xvvug0kRdQCU5TlV1+pAMQw0ZtDE3U=
github.com/tailscale/wireguard-go v0.0.0-20250304000100-91a0587fb251/go.mod h1:BOm5fXUBFM+m9woLNBoxI9TaBXXhGNP50LX/TGIvGb4=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e h1:zOGKqN5D5hHhiYUp091JqK7DPCqSARyUfduhGUY8Bek=
github.com/tailscale/xnet v0.0.0-20240729143630-8497ac4dab2e/go.mod h1:orPd6JZXXRyuDusYilywte7k094d7dycXXU5YnWsrwg=
github.com/tc-hib/winres v0.2.1 h1:YDE0FiP0VmtRaDn7+aaChp1KiF4owBiJa5l964l5ujA=
@@ -488,8 +507,8 @@ github.com/u-root/u-root v0.14.0/go.mod h1:hAyZorapJe4qzbLWlAkmSVCJGbfoU9Pu4jpJ1
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701 h1:pyC9PaHYZFgEKFdlp3G8RaCKgVpHZnecvArXvPXcFkM=
github.com/u-root/uio v0.0.0-20240224005618-d2acac8f3701/go.mod h1:P3a5rG4X7tI17Nn3aOIAYr5HbIMukwXG0urG0WuL8OA=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/vishvananda/netns v0.0.5 h1:DfiHV+j8bA32MFM7bfEunvT8IAqQ/NzSJHtcmW5zdEY=
github.com/vishvananda/netns v0.0.5/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
github.com/vishvananda/netns v0.0.4 h1:Oeaw1EM2JMxD51g9uhtC0D7erkIjgmj8+JZc26m1YX8=
github.com/vishvananda/netns v0.0.4/go.mod h1:SpkAiCQRtJ6TvvxPnOSyH3BMl6unz3xZlaprSwhNNJM=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
@@ -502,70 +521,80 @@ github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQ
github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e h1:JVG44RsyaB9T2KIHavMF/ppJZNG9ZpyihvCd0w101no=
github.com/xo/terminfo v0.0.0-20220910002029-abceb7e1c41e/go.mod h1:RbqR21r5mrJuqunuUZ/Dhy/avygyECGrLceyNeo4LiM=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0 h1:yd02MEjBdJkG3uabWP9apV+OuWRIXGDuJEUJbOHmCFU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.58.0/go.mod h1:umTcuxiv1n/s/S6/c2AT/g2CQ7u5C59sHDNmfSwgz7Q=
go.opentelemetry.io/otel v1.37.0 h1:9zhNfelUvx0KBfu/gb+ZgeAfAgtWrfHJZcAqFC228wQ=
go.opentelemetry.io/otel v1.37.0/go.mod h1:ehE/umFRLnuLa/vSccNq9oS1ErUlkkK71gMcN34UG8I=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0 h1:nRVXXvf78e00EwY6Wp0YII8ww2JVWshZ20HfTlE11AM=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.36.0/go.mod h1:r49hO7CgrxY9Voaj3Xe8pANWtr0Oq916d0XAmOoCZAQ=
go.opentelemetry.io/otel/metric v1.37.0 h1:mvwbQS5m0tbmqML4NqK+e3aDiO02vsf/WgbsdpcPoZE=
go.opentelemetry.io/otel/metric v1.37.0/go.mod h1:04wGrZurHYKOc+RKeye86GwKiTb9FKm1WHtO+4EVr2E=
go.opentelemetry.io/otel/sdk v1.37.0 h1:ItB0QUqnjesGRvNcmAcU0LyvkVyGJ2xftD29bWdDvKI=
go.opentelemetry.io/otel/sdk v1.37.0/go.mod h1:VredYzxUvuo2q3WRcDnKDjbdvmO0sCzOvVAiY+yUkAg=
go.opentelemetry.io/otel/sdk/metric v1.37.0 h1:90lI228XrB9jCMuSdA0673aubgRobVZFhbjxHHspCPc=
go.opentelemetry.io/otel/sdk/metric v1.37.0/go.mod h1:cNen4ZWfiD37l5NhS+Keb5RXVWZWpRE+9WyVCpbo5ps=
go.opentelemetry.io/otel/trace v1.37.0 h1:HLdcFNbRQBE2imdSEgm/kwqmQj1Or1l/7bW6mxVK7z4=
go.opentelemetry.io/otel/trace v1.37.0/go.mod h1:TlgrlQ+PtQO5XFerSPUYG0JSgGyryXewPGyayAWSBS0=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.35.0 h1:1RriWBmCKgkeHEhM7a2uMjMUfP7MsOF5JpUCaEqEI9o=
go.opentelemetry.io/otel/sdk/metric v1.35.0/go.mod h1:is6XYCUMpcKi+ZsOvfluY5YstFnhW0BidkR+gL+qN+w=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.6.0 h1:jQjP+AQyTf+Fe7OKj/MfkDrmK4MNVtw2NpXsf9fefDI=
go.opentelemetry.io/proto/otlp v1.6.0/go.mod h1:cicgGehlFuNdgZkcALOCh3VE6K/u2tAjzlRhDwmVpZc=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 h1:Tl++JLUCe4sxGu8cTpDzRLd3tN7US4hOxG5YpKCzkek=
go4.org/mem v0.0.0-20240501181205-ae6ca9944745/go.mod h1:reUoABIJ9ikfM5sgtSF3Wushcza7+WeD01VB9Lirh3g=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba h1:0b9z3AuHCjxk0x/opv64kcgZLBseWJUpBw5I82+2U4M=
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/WfdlivPbZJsZdgWZlrGope/Y=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.43.0 h1:dduJYIi3A3KOfdGOHX8AVZ/jGiyPa3IbBozJ5kNuE04=
golang.org/x/crypto v0.43.0/go.mod h1:BFbav4mRNlXJL4wNeejLpWxB7wMbc79PdRGhWKncxR0=
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b h1:18qgiDvlvH7kk8Ioa8Ov+K6xCi0GMvmGfGW0sgd/SYA=
golang.org/x/exp v0.0.0-20251009144603-d2f985daa21b/go.mod h1:j/pmGrbnkbPtQfxEe5D0VQhZC6qKbfKifgD0oM7sR70=
golang.org/x/crypto v0.40.0 h1:r4x+VvoG5Fm+eJcxMaY8CQM7Lb0l1lsmjGBQ6s8BfKM=
golang.org/x/crypto v0.40.0/go.mod h1:Qr1vMER5WyS2dfPHAlsOj01wgLbsyWtFn/aY+5+ZdxY=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0 h1:R84qjqJb5nVJMxqWYb3np9L5ZsaDtB+a39EqjV0JSUM=
golang.org/x/exp v0.0.0-20250408133849-7e4ce0ab07d0/go.mod h1:S9Xr4PYopiDyqSyp5NjCrhFrqg6A5zA2E/iPHPhqnS8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w=
golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g=
golang.org/x/image v0.24.0 h1:AN7zRgVsbvmTfNyqIbbOraYL8mSwcKncEj8ofjgzcMQ=
golang.org/x/image v0.24.0/go.mod h1:4b/ITuLfqYq1hqZcjofwctIhi7sZh2WaCjvsBNjjya8=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.29.0 h1:HV8lRxZC4l2cr3Zq1LvtOsi/ThTgWnUk/y64QSs8GwA=
golang.org/x/mod v0.29.0/go.mod h1:NyhrlYXJ2H4eJiRy/WDBO6HMqZQ6q9nk4JzS3NuCK+w=
golang.org/x/mod v0.26.0 h1:EGMPT//Ezu+ylkCijjPc+f4Aih7sZvaAr+O3EHBxvZg=
golang.org/x/mod v0.26.0/go.mod h1:/j6NAhSk8iQ723BGAUyoAcn7SlD7s15Dp9Nd/SfeaFQ=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.46.0 h1:giFlY12I07fugqwPuWJi68oOnpfqFnJIJzaIIm2JVV4=
golang.org/x/net v0.46.0/go.mod h1:Q9BGdFy1y4nkUwiLvT5qtyhAnEHgnQ/zd8PfU6nc210=
golang.org/x/oauth2 v0.32.0 h1:jsCblLleRMDrxMN29H3z/k1KliIvpLgCkE6R8FXXNgY=
golang.org/x/oauth2 v0.32.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/net v0.42.0 h1:jzkYrhi3YQWD6MLBJcsklgQsoAcw89EcZbJw8Z614hs=
golang.org/x/net v0.42.0/go.mod h1:FF1RA5d3u7nAYA4z2TkclSCKh68eSXtiFwcWQpPXdt8=
golang.org/x/oauth2 v0.30.0 h1:dnDm7JmhM45NNpd8FDDeLhK6FwqbOf4MLCM9zb1BOHI=
golang.org/x/oauth2 v0.30.0/go.mod h1:B++QgG3ZKulg6sRPGD/mqlHQs5rB3Ml9erfeDY7xKlU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.16.0 h1:ycBJEhp9p4vXvUZNszeOq0kGTPghopOL8q0fq3vstxw=
golang.org/x/sync v0.16.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -586,8 +615,8 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.37.0 h1:fdNQudmxPjkdUTPnLn5mdQv7Zwvbvpaxqs831goi9kQ=
golang.org/x/sys v0.37.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.34.0 h1:H5Y5sJ2L2JRdyv7ROF1he/lPdvFsd0mJHFw2ThKHxLA=
golang.org/x/sys v0.34.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -595,40 +624,43 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.36.0 h1:zMPR+aF8gfksFprF/Nc/rd1wRS1EI6nDBGyWAvDzx2Q=
golang.org/x/term v0.36.0/go.mod h1:Qu394IJq6V6dCBRgwqshf3mPF85AqzYEzofzRdZkWss=
golang.org/x/term v0.33.0 h1:NuFncQrRcaRvVmgRkvM3j/F00gWIAlcmlB8ACEKmGIg=
golang.org/x/term v0.33.0/go.mod h1:s18+ql9tYWp1IfpV9DmCtQDDSRBUjKaw9M1eAv5UeF0=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.30.0 h1:yznKA/E9zq54KzlzBEAWn1NXSQ8DIp/NYMy88xJjl4k=
golang.org/x/text v0.30.0/go.mod h1:yDdHFIX9t+tORqspjENWgzaCVXgk0yYnYuSZ8UzzBVM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/text v0.27.0 h1:4fGWRpyh641NLlecmyl4LOe6yDdfaYNrGb2zdfo4JV4=
golang.org/x/text v0.27.0/go.mod h1:1D28KMCvyooCX9hBiosv5Tz/+YLxj0j7XhWjpSUF7CU=
golang.org/x/time v0.10.0 h1:3usCWA8tQn0L8+hFJQNgzpWbd89begxN66o1Ojdn5L4=
golang.org/x/time v0.10.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.38.0 h1:Hx2Xv8hISq8Lm16jvBZ2VQf+RLmbd7wVUsALibYI/IQ=
golang.org/x/tools v0.38.0/go.mod h1:yEsQ/d/YK8cjh0L6rZlY8tgtlKiBNTL14pGDJPJpYQs=
golang.org/x/tools v0.35.0 h1:mBffYraMEf7aa0sB+NuKnuCy8qI/9Bughn8dC2Gu5r0=
golang.org/x/tools v0.35.0/go.mod h1:NKdj5HkL/73byiZSJjqJgKn3ep7KjFkBOkR/Hps3VPw=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4 h1:8XJ4pajGwOlasW+L13MnEGA8W4115jJySQtVfS2/IBU=
google.golang.org/genproto/googleapis/api v0.0.0-20250929231259-57b25ae835d4/go.mod h1:NnuHhy+bxcg30o7FnVAZbXsPHUDQ9qKWAQKCD7VxFtk=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4 h1:i8QOKZfYg6AbGVZzUAY3LrNWCKF8O6zFisU9Wl9RER4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250929231259-57b25ae835d4/go.mod h1:HSkG/KdJWusxU1F6CNrwNDjBMgisKxGnc5dAZfT0mjQ=
google.golang.org/grpc v1.75.1 h1:/ODCNEuf9VghjgO3rqLcfg8fiOP0nSluljWFlDxELLI=
google.golang.org/grpc v1.75.1/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822 h1:oWVWY3NzT7KJppx2UKhKmzPq4SRe0LdCijVRwvGeikY=
google.golang.org/genproto/googleapis/api v0.0.0-20250603155806-513f23925822/go.mod h1:h3c4v36UTKzUiuaOKQ6gr3S+0hovBtUrXzTG/i3+XEc=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822 h1:fc6jSaCT0vBduLYZHYrBBNY4dsWuvgyff9noRNDdBeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250603155806-513f23925822/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.73.0 h1:VIWSmpI2MegBtTuFt5/JWy2oXxtjJ/e89Z70ImfD2ok=
google.golang.org/grpc v1.73.0/go.mod h1:50sbHOUqWoCQGI8V2HQLJM0B+LMlIUjNSZmow7EVBQc=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
@@ -644,8 +676,8 @@ gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/postgres v1.6.0 h1:2dxzU8xJ+ivvqTRph34QX+WrRaJlmfyPqXmoGVjMBa4=
gorm.io/driver/postgres v1.6.0/go.mod h1:vUw0mrGgrTK+uPHEhAdV4sfFELrByKVGnaVRkXDhtWo=
gorm.io/gorm v1.31.0 h1:0VlycGreVhK7RF/Bwt51Fk8v0xLiiiFdbGDPIZQ7mJY=
gorm.io/gorm v1.31.0/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs=
gorm.io/gorm v1.30.0 h1:qbT5aPv1UH8gI99OsRlvDToLxW5zR7FzS9acZDOZcgs=
gorm.io/gorm v1.30.0/go.mod h1:8Z33v652h4//uMA76KjeDH8mJXPm1QNCYrMeatR0DOE=
gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU=
gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU=
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 h1:2gap+Kh/3F47cO6hAu3idFvsJ0ue6TRcEi2IUkv/F8k=
@@ -654,37 +686,35 @@ honnef.co/go/tools v0.6.1 h1:R094WgE8K4JirYjBaOpz/AvTyUu/3wbmAoskKN/pxTI=
honnef.co/go/tools v0.6.1/go.mod h1:3puzxxljPCe8RGJX7BIy1plGbxEOZni5mR2aXe3/uk4=
howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=
howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
modernc.org/cc/v4 v4.26.5 h1:xM3bX7Mve6G8K8b+T11ReenJOT+BmVqQj0FY5T4+5Y4=
modernc.org/cc/v4 v4.26.5/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.28.1 h1:wPKYn5EC/mYTqBO373jKjvX2n+3+aK7+sICCv4Fjy1A=
modernc.org/ccgo/v4 v4.28.1/go.mod h1:uD+4RnfrVgE6ec9NGguUNdhqzNIeeomeXf6CL0GTE5Q=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/cc/v4 v4.25.2 h1:T2oH7sZdGvTaie0BRNFbIYsabzCxUQg8nLqCdQ2i0ic=
modernc.org/cc/v4 v4.25.2/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.25.1 h1:TFSzPrAGmDsdnhT9X2UrcPMI3N/mJ9/X9ykKXwLhDsU=
modernc.org/ccgo/v4 v4.25.1/go.mod h1:njjuAYiPflywOOrm3B7kCB444ONP5pAVr8PIEoE0uDw=
modernc.org/fileutil v1.3.0 h1:gQ5SIzK3H9kdfai/5x41oQiKValumqNTDXMvKo62HvE=
modernc.org/fileutil v1.3.0/go.mod h1:XatxS8fZi3pS8/hKG2GH/ArUogfxjpEKs3Ku3aK4JyQ=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.66.10 h1:yZkb3YeLx4oynyR+iUsXsybsX4Ubx7MQlSYEw4yj59A=
modernc.org/libc v1.66.10/go.mod h1:8vGSEwvoUoltr4dlywvHqjtAqHBaw0j1jI7iFBTAr2I=
modernc.org/libc v1.62.1 h1:s0+fv5E3FymN8eJVmnk0llBe6rOxCu/DEU+XygRbS8s=
modernc.org/libc v1.62.1/go.mod h1:iXhATfJQLjG3NWy56a6WVU73lWOcdYVxsvwCgoPljuo=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
modernc.org/memory v1.11.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/memory v1.10.0 h1:fzumd51yQ1DxcOxSO+S6X7+QTuVU+n8/Aj7swYjFfC4=
modernc.org/memory v1.10.0/go.mod h1:/JP4VbVC+K5sU2wZi9bHoq2MAkCnrt2r98UGeSK7Mjw=
modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.39.1 h1:H+/wGFzuSCIEVCvXYVHX5RQglwhMOvtHSv+VtidL2r4=
modernc.org/sqlite v1.39.1/go.mod h1:9fjQZ0mB1LLP0GYrp39oOJXx/I2sxEnZtzCmEQIKvGE=
modernc.org/sqlite v1.37.0 h1:s1TMe7T3Q3ovQiK2Ouz4Jwh7dw4ZDqbebSDTlSJdfjI=
modernc.org/sqlite v1.37.0/go.mod h1:5YiWv+YviqGMuGw4V+PNplcyaJ5v+vQd7TQOgkACoJM=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
modernc.org/token v1.1.0/go.mod h1:UGzOrNV1mAFSEB63lOFHIpNRUVMvYTc6yu1SMY/XTDM=
software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k=
software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI=
tailscale.com v1.86.5 h1:yBtWFjuLYDmxVnfnvPbZNZcKADCYgNfMd0rUAOA9XCs=
tailscale.com v1.86.5/go.mod h1:Lm8dnzU2i/Emw15r6sl3FRNp/liSQ/nYw6ZSQvIdZ1M=
zgo.at/zcache/v2 v2.4.1 h1:Dfjoi8yI0Uq7NCc4lo2kaQJJmp9Mijo21gef+oJstbY=
zgo.at/zcache/v2 v2.4.1/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
tailscale.com v1.84.3 h1:Ur9LMedSgicwbqpy5xn7t49G8490/s6rqAJOk5Q5AYE=
tailscale.com v1.84.3/go.mod h1:6/S63NMAhmncYT/1zIPDJkvCuZwMw+JnUuOfSPNazpo=
zgo.at/zcache/v2 v2.2.0 h1:K29/IPjMniZfveYE+IRXfrl11tMzHkIPuyGrfVZ2fGo=
zgo.at/zcache/v2 v2.2.0/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4=
zombiezen.com/go/postgrestest v1.0.1/go.mod h1:marlZezr+k2oSJrvXHnZUs1olHqpE9czlz8ZYkVxliQ=

View File

@@ -17,7 +17,6 @@ import (
"syscall"
"time"
"github.com/cenkalti/backoff/v5"
"github.com/davecgh/go-spew/spew"
"github.com/gorilla/mux"
grpcRuntime "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
@@ -100,7 +99,7 @@ type Headscale struct {
authProvider AuthProvider
mapBatcher mapper.Batcher
clientStreamsOpen sync.WaitGroup
pollNetMapStreamWG sync.WaitGroup
}
var (
@@ -129,29 +128,28 @@ func NewHeadscale(cfg *types.Config) (*Headscale, error) {
}
app := Headscale{
cfg: cfg,
noisePrivateKey: noisePrivateKey,
clientStreamsOpen: sync.WaitGroup{},
state: s,
cfg: cfg,
noisePrivateKey: noisePrivateKey,
pollNetMapStreamWG: sync.WaitGroup{},
state: s,
}
// Initialize ephemeral garbage collector
ephemeralGC := db.NewEphemeralGarbageCollector(func(ni types.NodeID) {
node, ok := app.state.GetNodeByID(ni)
if !ok {
log.Error().Uint64("node.id", ni.Uint64()).Msg("Ephemeral node deletion failed")
log.Debug().Caller().Uint64("node.id", ni.Uint64()).Msg("Ephemeral node deletion failed because node not found in NodeStore")
node, err := app.state.GetNodeByID(ni)
if err != nil {
log.Err(err).Uint64("node.id", ni.Uint64()).Msgf("failed to get ephemeral node for deletion")
return
}
policyChanged, err := app.state.DeleteNode(node)
if err != nil {
log.Error().Err(err).Uint64("node.id", ni.Uint64()).Str("node.name", node.Hostname()).Msg("Ephemeral node deletion failed")
log.Err(err).Uint64("node.id", ni.Uint64()).Msgf("failed to delete ephemeral node")
return
}
app.Change(policyChanged)
log.Debug().Caller().Uint64("node.id", ni.Uint64()).Str("node.name", node.Hostname()).Msg("Ephemeral node deleted because garbage collection timeout reached")
log.Debug().Uint64("node.id", ni.Uint64()).Msgf("deleted ephemeral node")
})
app.ephemeralGC = ephemeralGC
@@ -286,23 +284,11 @@ func (h *Headscale) scheduledTasks(ctx context.Context) {
case <-derpTickerChan:
log.Info().Msg("Fetching DERPMap updates")
derpMap, err := backoff.Retry(ctx, func() (*tailcfg.DERPMap, error) {
derpMap, err := derp.GetDERPMap(h.cfg.DERP)
if err != nil {
return nil, err
}
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
region, _ := h.DERPServer.GenerateRegion()
derpMap.Regions[region.RegionID] = &region
}
return derpMap, nil
}, backoff.WithBackOff(backoff.NewExponentialBackOff()))
if err != nil {
log.Error().Err(err).Msg("failed to build new DERPMap, retrying later")
continue
derpMap := derp.GetDERPMap(h.cfg.DERP)
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
region, _ := h.DERPServer.GenerateRegion()
derpMap.Regions[region.RegionID] = &region
}
h.state.SetDERPMap(derpMap)
h.Change(change.DERPSet)
@@ -385,32 +371,42 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
Str("client_address", req.RemoteAddr).
Msg("HTTP authentication invoked")
authHeader := req.Header.Get("Authorization")
writeUnauthorized := func(statusCode int) {
writer.WriteHeader(statusCode)
if _, err := writer.Write([]byte("Unauthorized")); err != nil {
log.Error().Err(err).Msg("writing HTTP response failed")
}
}
authHeader := req.Header.Get("authorization")
if !strings.HasPrefix(authHeader, AuthPrefix) {
log.Error().
Caller().
Str("client_address", req.RemoteAddr).
Msg(`missing "Bearer " prefix in "Authorization" header`)
writeUnauthorized(http.StatusUnauthorized)
writer.WriteHeader(http.StatusUnauthorized)
_, err := writer.Write([]byte("Unauthorized"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
return
}
valid, err := h.state.ValidateAPIKey(strings.TrimPrefix(authHeader, AuthPrefix))
if err != nil {
log.Info().
log.Error().
Caller().
Err(err).
Str("client_address", req.RemoteAddr).
Msg("failed to validate token")
writeUnauthorized(http.StatusUnauthorized)
writer.WriteHeader(http.StatusInternalServerError)
_, err := writer.Write([]byte("Unauthorized"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
return
}
@@ -418,7 +414,16 @@ func (h *Headscale) httpAuthenticationMiddleware(next http.Handler) http.Handler
log.Info().
Str("client_address", req.RemoteAddr).
Msg("invalid token")
writeUnauthorized(http.StatusUnauthorized)
writer.WriteHeader(http.StatusUnauthorized)
_, err := writer.Write([]byte("Unauthorized"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write response")
}
return
}
@@ -444,9 +449,7 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router {
router.HandleFunc(ts2021UpgradePath, h.NoiseUpgradeHandler).
Methods(http.MethodPost, http.MethodGet)
router.HandleFunc("/robots.txt", h.RobotsHandler).Methods(http.MethodGet)
router.HandleFunc("/health", h.HealthHandler).Methods(http.MethodGet)
router.HandleFunc("/version", h.VersionHandler).Methods(http.MethodGet)
router.HandleFunc("/key", h.KeyHandler).Methods(http.MethodGet)
router.HandleFunc("/register/{registration_id}", h.authProvider.RegisterHandler).
Methods(http.MethodGet)
@@ -484,12 +487,11 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *mux.Router {
// Serve launches the HTTP and gRPC server service Headscale and the API.
func (h *Headscale) Serve() error {
var err error
capver.CanOldCodeBeCleanedUp()
if profilingEnabled {
if profilingPath != "" {
err = os.MkdirAll(profilingPath, os.ModePerm)
err := os.MkdirAll(profilingPath, os.ModePerm)
if err != nil {
log.Fatal().Err(err).Msg("failed to create profiling directory")
}
@@ -504,8 +506,7 @@ func (h *Headscale) Serve() error {
spew.Dump(h.cfg)
}
versionInfo := types.GetVersionInfo()
log.Info().Str("version", versionInfo.Version).Str("commit", versionInfo.Commit).Msg("Starting Headscale")
log.Info().Str("version", types.Version).Str("commit", types.GitCommitHash).Msg("Starting Headscale")
log.Info().
Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)).
Msg("Clients with a lower minimum version will be rejected")
@@ -514,39 +515,40 @@ func (h *Headscale) Serve() error {
h.mapBatcher.Start()
defer h.mapBatcher.Close()
// TODO(kradalby): fix state part.
if h.cfg.DERP.ServerEnabled {
// When embedded DERP is enabled we always need a STUN server
if h.cfg.DERP.STUNAddr == "" {
return errSTUNAddressNotSet
}
region, err := h.DERPServer.GenerateRegion()
if err != nil {
return fmt.Errorf("generating DERP region for embedded server: %w", err)
}
if h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
h.state.DERPMap().Regions[region.RegionID] = &region
}
go h.DERPServer.ServeSTUN()
}
derpMap, err := derp.GetDERPMap(h.cfg.DERP)
if err != nil {
return fmt.Errorf("failed to get DERPMap: %w", err)
}
if h.cfg.DERP.ServerEnabled && h.cfg.DERP.AutomaticallyAddEmbeddedDerpRegion {
region, _ := h.DERPServer.GenerateRegion()
derpMap.Regions[region.RegionID] = &region
}
if len(derpMap.Regions) == 0 {
if len(h.state.DERPMap().Regions) == 0 {
return errEmptyInitialDERPMap
}
h.state.SetDERPMap(derpMap)
// Start ephemeral node garbage collector and schedule all nodes
// that are already in the database and ephemeral. If they are still
// around between restarts, they will reconnect and the GC will
// be cancelled.
go h.ephemeralGC.Start()
ephmNodes := h.state.ListEphemeralNodes()
for _, node := range ephmNodes.All() {
h.ephemeralGC.Schedule(node.ID(), h.cfg.EphemeralNodeInactivityTimeout)
ephmNodes, err := h.state.ListEphemeralNodes()
if err != nil {
return fmt.Errorf("failed to list ephemeral nodes: %w", err)
}
for _, node := range ephmNodes {
h.ephemeralGC.Schedule(node.ID, h.cfg.EphemeralNodeInactivityTimeout)
}
if h.cfg.DNSConfig.ExtraRecordsPath != "" {
@@ -776,14 +778,23 @@ func (h *Headscale) Serve() error {
continue
}
changes, err := h.state.ReloadPolicy()
changed, err := h.state.ReloadPolicy()
if err != nil {
log.Error().Err(err).Msgf("reloading policy")
continue
}
h.Change(changes...)
if changed {
log.Info().
Msg("ACL policy successfully reloaded, notifying nodes of change")
err = h.state.AutoApproveNodes()
if err != nil {
log.Error().Err(err).Msg("failed to approve routes after new policy")
}
h.Change(change.PolicySet)
}
default:
info := func(msg string) { log.Info().Msg(msg) }
log.Info().
@@ -807,11 +818,10 @@ func (h *Headscale) Serve() error {
log.Error().Err(err).Msg("failed to shutdown http")
}
info("closing batcher")
h.mapBatcher.Close()
info("closing node notifier")
info("waiting for netmap stream to close")
h.clientStreamsOpen.Wait()
h.pollNetMapStreamWG.Wait()
info("shutting down grpc server (socket)")
grpcSocket.GracefulStop()
@@ -837,11 +847,11 @@ func (h *Headscale) Serve() error {
info("closing socket listener")
socketListener.Close()
// Close state connections
info("closing state and database")
// Close db connections
info("closing database connection")
err = h.state.Close()
if err != nil {
log.Error().Err(err).Msg("failed to close state")
log.Error().Err(err).Msg("failed to close db")
}
log.Info().
@@ -994,6 +1004,6 @@ func readOrCreatePrivateKey(path string) (*key.MachinePrivate, error) {
// Change is used to send changes to nodes.
// All changes should be enqueued here; empty ones are automatically
// ignored.
func (h *Headscale) Change(cs ...change.ChangeSet) {
h.mapBatcher.AddWork(cs...)
func (h *Headscale) Change(c change.ChangeSet) {
h.mapBatcher.AddWork(c)
}
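
For orientation, here is a minimal sketch of how a caller is expected to use this entry point, assuming the variadic signature shown in the hunk above; notifyRegistered is a hypothetical helper, while change.NodeAdded and types.NodeID appear elsewhere in this diff.

```go
// notifyRegistered is illustrative only: it enqueues one ChangeSet per newly
// registered node. Empty ChangeSets are ignored by the batcher, so callers
// do not need to guard every call site.
func notifyRegistered(h *Headscale, ids []types.NodeID) {
	cs := make([]change.ChangeSet, 0, len(ids))
	for _, id := range ids {
		cs = append(cs, change.NodeAdded(id))
	}
	h.Change(cs...)
}
```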

View File

@@ -1,7 +1,6 @@
package hscontrol
import (
"cmp"
"context"
"errors"
"fmt"
@@ -11,8 +10,8 @@ import (
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/juanfont/headscale/hscontrol/types/change"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
@@ -26,105 +25,57 @@ type AuthProvider interface {
func (h *Headscale) handleRegister(
ctx context.Context,
req tailcfg.RegisterRequest,
regReq tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
// Check for logout/expiry FIRST, before checking auth key.
// Tailscale clients may send logout requests with BOTH a past expiry AND an auth key.
// A past expiry takes precedence - it's a logout regardless of other fields.
if !req.Expiry.IsZero() && req.Expiry.Before(time.Now()) {
log.Debug().
Str("node.key", req.NodeKey.ShortString()).
Time("expiry", req.Expiry).
Bool("has_auth", req.Auth != nil).
Msg("Detected logout attempt with past expiry")
node, err := h.state.GetNodeByNodeKey(regReq.NodeKey)
if err != nil && !errors.Is(err, gorm.ErrRecordNotFound) {
return nil, fmt.Errorf("looking up node in database: %w", err)
}
// This is a logout attempt (expiry in the past)
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
log.Debug().
Uint64("node.id", node.ID().Uint64()).
Str("node.name", node.Hostname()).
Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()).
Msg("Found existing node for logout, calling handleLogout")
resp, err := h.handleLogout(node, req, machineKey)
if node != nil {
// If an existing node is trying to register with an auth key,
// we need to validate the auth key even for existing nodes
if regReq.Auth != nil && regReq.Auth.AuthKey != "" {
resp, err := h.handleRegisterWithAuthKey(regReq, machineKey)
if err != nil {
return nil, fmt.Errorf("handling logout: %w", err)
// Preserve HTTPError types so they can be handled properly by the HTTP layer
var httpErr HTTPError
if errors.As(err, &httpErr) {
return nil, httpErr
}
return nil, fmt.Errorf("handling register with auth key for existing node: %w", err)
}
if resp != nil {
return resp, nil
}
} else {
log.Warn().
Str("node.key", req.NodeKey.ShortString()).
Msg("Logout attempt but node not found in NodeStore")
return resp, nil
}
}
// If the register request does not contain an Auth struct, it means we are logging
// out an existing node (legacy logout path for clients that send Auth=nil).
if req.Auth == nil {
// If the register request presents a NodeKey that is currently in use, we will
// check if the node needs to be sent to re-auth, or if the node is logging out.
// We do not look up nodes by [key.MachinePublic] as it might belong to multiple
// nodes, separated by users and this path is handling expiring/logout paths.
if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
// When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero.
// Return the current node state without modification.
// See: https://github.com/juanfont/headscale/issues/2862
if req.Expiry.IsZero() && node.Expiry().Valid() && !node.IsExpired() {
return nodeToRegisterResponse(node), nil
}
resp, err := h.handleLogout(node, req, machineKey)
if err != nil {
return nil, fmt.Errorf("handling existing node: %w", err)
}
// If resp is not nil, we have a response to return to the node.
// If resp is nil, we should proceed and see if the node is trying to re-auth.
if resp != nil {
return resp, nil
}
} else {
// If the register request is not attempting to register a node, and
// we cannot match it with an existing node, we consider that unexpected
// as only registered nodes should attempt to log out.
log.Debug().
Str("node.key", req.NodeKey.ShortString()).
Str("machine.key", machineKey.ShortString()).
Bool("unexpected", true).
Msg("received register request with no auth, and no existing node")
resp, err := h.handleExistingNode(node, regReq, machineKey)
if err != nil {
return nil, fmt.Errorf("handling existing node: %w", err)
}
return resp, nil
}
// If the [tailcfg.RegisterRequest] has a Followup URL, it means that the
// node has already started the registration process and we should wait for
// it to finish the original registration.
if req.Followup != "" {
return h.waitForFollowup(ctx, req, machineKey)
if regReq.Followup != "" {
return h.waitForFollowup(ctx, regReq)
}
// Pre authenticated keys are handled slightly differently than interactive
// logins as they can be completed fully synchronously, so we can respond to
// the node with the result while it is waiting.
if isAuthKey(req) {
resp, err := h.handleRegisterWithAuthKey(req, machineKey)
if regReq.Auth != nil && regReq.Auth.AuthKey != "" {
resp, err := h.handleRegisterWithAuthKey(regReq, machineKey)
if err != nil {
// Preserve HTTPError types so they can be handled properly by the HTTP layer
var httpErr HTTPError
if errors.As(err, &httpErr) {
return nil, httpErr
}
return nil, fmt.Errorf("handling register with auth key: %w", err)
}
return resp, nil
}
resp, err := h.handleRegisterInteractive(req, machineKey)
resp, err := h.handleRegisterInteractive(regReq, machineKey)
if err != nil {
return nil, fmt.Errorf("handling register interactive: %w", err)
}
@@ -132,112 +83,57 @@ func (h *Headscale) handleRegister(
return resp, nil
}
// handleLogout checks if the [tailcfg.RegisterRequest] is a
// logout attempt from a node. If the node is not attempting to
func (h *Headscale) handleLogout(
node types.NodeView,
req tailcfg.RegisterRequest,
func (h *Headscale) handleExistingNode(
node *types.Node,
regReq tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
// Fail closed if it looks like this is an attempt to modify a node where
// the node key and the machine key the noise session was started with do
// not align.
if node.MachineKey() != machineKey {
if node.MachineKey != machineKey {
return nil, NewHTTPError(http.StatusUnauthorized, "node exist with different machine key", nil)
}
// Note: We do NOT return early if req.Auth is set, because Tailscale clients
// may send logout requests with BOTH a past expiry AND an auth key.
// A past expiry indicates logout, regardless of whether Auth is present.
// The expiry check below will handle the logout logic.
expired := node.IsExpired()
// If the node is expired and this is not a re-authentication attempt,
// force the client to re-authenticate.
// TODO(kradalby): I wonder if this is a path we ever hit?
if node.IsExpired() {
log.Trace().Str("node.name", node.Hostname()).
Uint64("node.id", node.ID().Uint64()).
Interface("reg.req", req).
Bool("unexpected", true).
Msg("Node key expired, forcing re-authentication")
return &tailcfg.RegisterResponse{
NodeKeyExpired: true,
MachineAuthorized: false,
AuthURL: "", // Client will need to re-authenticate
}, nil
}
if !expired && !regReq.Expiry.IsZero() {
requestExpiry := regReq.Expiry
// If we get here, the node is not currently expired, and not trying to
// do an auth.
// The node is likely logging out, but before we run that logic, we will validate
// that the node is not attempting to tamper/extend their expiry.
// If it is not, we will expire the node or in the case of an ephemeral node, delete it.
// The client is trying to extend their key, this is not allowed.
if req.Expiry.After(time.Now()) {
return nil, NewHTTPError(http.StatusBadRequest, "extending key is not allowed", nil)
}
// If the request expiry is in the past, we consider it a logout.
// Zero expiry is handled in handleRegister() before calling this function.
if req.Expiry.Before(time.Now()) {
log.Debug().
Uint64("node.id", node.ID().Uint64()).
Str("node.name", node.Hostname()).
Bool("is_ephemeral", node.IsEphemeral()).
Bool("has_authkey", node.AuthKey().Valid()).
Time("req.expiry", req.Expiry).
Msg("Processing logout request with past expiry")
if node.IsEphemeral() {
log.Info().
Uint64("node.id", node.ID().Uint64()).
Str("node.name", node.Hostname()).
Msg("Deleting ephemeral node during logout")
c, err := h.state.DeleteNode(node)
if err != nil {
return nil, fmt.Errorf("deleting ephemeral node: %w", err)
}
h.Change(c)
return &tailcfg.RegisterResponse{
NodeKeyExpired: true,
MachineAuthorized: false,
}, nil
// The client is trying to extend their key, this is not allowed.
if requestExpiry.After(time.Now()) {
return nil, NewHTTPError(http.StatusBadRequest, "extending key is not allowed", nil)
}
log.Debug().
Uint64("node.id", node.ID().Uint64()).
Str("node.name", node.Hostname()).
Msg("Node is not ephemeral, setting expiry instead of deleting")
}
// If the request expiry is in the past, we consider it a logout.
if requestExpiry.Before(time.Now()) {
if node.IsEphemeral() {
c, err := h.state.DeleteNode(node)
if err != nil {
return nil, fmt.Errorf("deleting ephemeral node: %w", err)
}
// Update the internal state with the node's new expiry, meaning it is
// logged out.
updatedNode, c, err := h.state.SetNodeExpiry(node.ID(), req.Expiry)
if err != nil {
return nil, fmt.Errorf("setting node expiry: %w", err)
}
h.Change(c)
h.Change(c)
return nil, nil
}
}
return nodeToRegisterResponse(updatedNode), nil
_, c, err := h.state.SetNodeExpiry(node.ID, requestExpiry)
if err != nil {
return nil, fmt.Errorf("setting node expiry: %w", err)
}
h.Change(c)
}
return nodeToRegisterResponse(node), nil
}
// isAuthKey reports if the register request is a registration request
// using a pre auth key.
func isAuthKey(req tailcfg.RegisterRequest) bool {
return req.Auth != nil && req.Auth.AuthKey != ""
}
func nodeToRegisterResponse(node types.NodeView) *tailcfg.RegisterResponse {
func nodeToRegisterResponse(node *types.Node) *tailcfg.RegisterResponse {
return &tailcfg.RegisterResponse{
// TODO(kradalby): Only send for user-owned nodes
// and not tagged nodes when tags is working.
User: node.UserView().TailscaleUser(),
Login: node.UserView().TailscaleLogin(),
User: *node.User.TailscaleUser(),
Login: *node.User.TailscaleLogin(),
NodeKeyExpired: node.IsExpired(),
// Headscale does not implement the concept of machine authorization
@@ -249,10 +145,9 @@ func nodeToRegisterResponse(node types.NodeView) *tailcfg.RegisterResponse {
func (h *Headscale) waitForFollowup(
ctx context.Context,
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
regReq tailcfg.RegisterRequest,
) (*tailcfg.RegisterResponse, error) {
fu, err := url.Parse(req.Followup)
fu, err := url.Parse(regReq.Followup)
if err != nil {
return nil, NewHTTPError(http.StatusUnauthorized, "invalid followup URL", err)
}
@@ -268,68 +163,21 @@ func (h *Headscale) waitForFollowup(
return nil, NewHTTPError(http.StatusUnauthorized, "registration timed out", err)
case node := <-reg.Registered:
if node == nil {
// registration is expired in the cache, instruct the client to try a new registration
return h.reqToNewRegisterResponse(req, machineKey)
return nil, NewHTTPError(http.StatusUnauthorized, "node not found", nil)
}
return nodeToRegisterResponse(node.View()), nil
return nodeToRegisterResponse(node), nil
}
}
// if the follow-up registration isn't found anymore, instruct the client to try a new registration
return h.reqToNewRegisterResponse(req, machineKey)
}
// reqToNewRegisterResponse refreshes the registration flow by creating a new
// registration ID and returning the corresponding AuthURL so the client can
// restart the authentication process.
func (h *Headscale) reqToNewRegisterResponse(
req tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
newRegID, err := types.NewRegistrationID()
if err != nil {
return nil, NewHTTPError(http.StatusInternalServerError, "failed to generate registration ID", err)
}
// Ensure we have a valid hostname
hostname := util.EnsureHostname(
req.Hostinfo,
machineKey.String(),
req.NodeKey.String(),
)
// Ensure we have valid hostinfo
hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
hostinfo.Hostname = hostname
nodeToRegister := types.NewRegisterNode(
types.Node{
Hostname: hostname,
MachineKey: machineKey,
NodeKey: req.NodeKey,
Hostinfo: hostinfo,
LastSeen: ptr.To(time.Now()),
},
)
if !req.Expiry.IsZero() {
nodeToRegister.Node.Expiry = &req.Expiry
}
log.Info().Msgf("New followup node registration using key: %s", newRegID)
h.state.SetRegistrationCacheEntry(newRegID, nodeToRegister)
return &tailcfg.RegisterResponse{
AuthURL: h.authProvider.AuthURL(newRegID),
}, nil
return nil, NewHTTPError(http.StatusNotFound, "followup registration not found", nil)
}
func (h *Headscale) handleRegisterWithAuthKey(
req tailcfg.RegisterRequest,
regReq tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
node, changed, err := h.state.HandleNodeFromPreAuthKey(
req,
node, changed, policyChanged, err := h.state.HandleNodeFromPreAuthKey(
regReq,
machineKey,
)
if err != nil {
@@ -344,8 +192,8 @@ func (h *Headscale) handleRegisterWithAuthKey(
return nil, err
}
// If node is not valid, it means an ephemeral node was deleted during logout
if !node.Valid() {
// If node is nil, it means an ephemeral node was deleted during logout
if node == nil {
h.Change(changed)
return nil, nil
}
@@ -363,41 +211,32 @@ func (h *Headscale) handleRegisterWithAuthKey(
// eventbus.
// TODO(kradalby): This needs to be run as part of the batcher maybe?
// now since we don't update the node/pol here anymore
routesChange, err := h.state.AutoApproveRoutes(node)
if err != nil {
return nil, fmt.Errorf("auto approving routes: %w", err)
routeChange := h.state.AutoApproveRoutes(node)
if _, _, err := h.state.SaveNode(node); err != nil {
return nil, fmt.Errorf("saving auto approved routes to node: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
h.Change(changed, routesChange)
if routeChange && changed.Empty() {
changed = change.NodeAdded(node.ID)
}
h.Change(changed)
// TODO(kradalby): I think this is covered above, but we need to validate that.
// // If policy changed due to node registration, send a separate policy change
// if policyChanged {
// policyChange := change.PolicyChange()
// h.Change(policyChange)
// }
// If policy changed due to node registration, send a separate policy change
if policyChanged {
policyChange := change.PolicyChange()
h.Change(policyChange)
}
resp := &tailcfg.RegisterResponse{
return &tailcfg.RegisterResponse{
MachineAuthorized: true,
NodeKeyExpired: node.IsExpired(),
User: node.UserView().TailscaleUser(),
Login: node.UserView().TailscaleLogin(),
}
log.Trace().
Caller().
Interface("reg.resp", resp).
Interface("reg.req", req).
Str("node.name", node.Hostname()).
Uint64("node.id", node.ID().Uint64()).
Msg("RegisterResponse")
return resp, nil
User: *node.User.TailscaleUser(),
Login: *node.User.TailscaleLogin(),
}, nil
}
func (h *Headscale) handleRegisterInteractive(
req tailcfg.RegisterRequest,
regReq tailcfg.RegisterRequest,
machineKey key.MachinePublic,
) (*tailcfg.RegisterResponse, error) {
registrationId, err := types.NewRegistrationID()
@@ -405,42 +244,19 @@ func (h *Headscale) handleRegisterInteractive(
return nil, fmt.Errorf("generating registration ID: %w", err)
}
// Ensure we have a valid hostname
hostname := util.EnsureHostname(
req.Hostinfo,
machineKey.String(),
req.NodeKey.String(),
)
// Ensure we have valid hostinfo
hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
if req.Hostinfo == nil {
log.Warn().
Str("machine.key", machineKey.ShortString()).
Str("node.key", req.NodeKey.ShortString()).
Str("generated.hostname", hostname).
Msg("Received registration request with nil hostinfo, generated default hostname")
} else if req.Hostinfo.Hostname == "" {
log.Warn().
Str("machine.key", machineKey.ShortString()).
Str("node.key", req.NodeKey.ShortString()).
Str("generated.hostname", hostname).
Msg("Received registration request with empty hostname, generated default")
}
hostinfo.Hostname = hostname
nodeToRegister := types.NewRegisterNode(
types.Node{
Hostname: hostname,
nodeToRegister := types.RegisterNode{
Node: types.Node{
Hostname: regReq.Hostinfo.Hostname,
MachineKey: machineKey,
NodeKey: req.NodeKey,
Hostinfo: hostinfo,
NodeKey: regReq.NodeKey,
Hostinfo: regReq.Hostinfo,
LastSeen: ptr.To(time.Now()),
},
)
Registered: make(chan *types.Node),
}
if !req.Expiry.IsZero() {
nodeToRegister.Node.Expiry = &req.Expiry
if !regReq.Expiry.IsZero() {
nodeToRegister.Node.Expiry = &regReq.Expiry
}
h.state.SetRegistrationCacheEntry(
@@ -448,8 +264,6 @@ func (h *Headscale) handleRegisterInteractive(
nodeToRegister,
)
log.Info().Msgf("Starting node registration using key: %s", registrationId)
return &tailcfg.RegisterResponse{
AuthURL: h.authProvider.AuthURL(registrationId),
}, nil
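
To summarise the branching spread across the hunk above, here is a hedged sketch of how a register request appears to be classified (the real handler also covers the legacy Auth=nil logout path); registrationKind is a hypothetical helper, while the field checks are lifted from the diff.

```go
// registrationKind is a hypothetical helper that mirrors the order of checks
// visible in handleRegister above; it is not part of this change.
func registrationKind(req tailcfg.RegisterRequest) string {
	switch {
	case !req.Expiry.IsZero() && req.Expiry.Before(time.Now()):
		return "logout" // a past expiry takes precedence, even with an auth key
	case req.Followup != "":
		return "followup" // client is polling an already started registration
	case req.Auth != nil && req.Auth.AuthKey != "":
		return "preauthkey" // handled synchronously while the client waits
	default:
		return "interactive" // client is handed an AuthURL to visit
	}
}
```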

File diff suppressed because it is too large

View File

@@ -1,6 +1,6 @@
package capver
// Generated DO NOT EDIT
//Generated DO NOT EDIT
import "tailscale.com/tailcfg"
@@ -37,15 +37,16 @@ var tailscaleToCapVer = map[string]tailcfg.CapabilityVersion{
"v1.84.2": 116,
}
var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
90: "v1.64.0",
95: "v1.66.0",
97: "v1.68.0",
102: "v1.70.0",
104: "v1.72.0",
106: "v1.74.0",
109: "v1.78.0",
113: "v1.80.0",
115: "v1.82.0",
116: "v1.84.0",
90: "v1.64.0",
95: "v1.66.0",
97: "v1.68.0",
102: "v1.70.0",
104: "v1.72.0",
106: "v1.74.0",
109: "v1.78.0",
113: "v1.80.0",
115: "v1.82.0",
116: "v1.84.0",
}
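
As a small usage illustration, the generated table above maps a client capability version to a human-readable release, e.g. for the startup log in the Serve() hunk earlier; releaseForCapVer is a hypothetical helper and the 109 → "v1.78.0" pair is taken directly from the map.

```go
// releaseForCapVer is illustrative only; it looks up the generated map above.
func releaseForCapVer(v tailcfg.CapabilityVersion) string {
	if rel, ok := capVerToTailscaleVer[v]; ok {
		return rel
	}
	return "unknown"
}

// releaseForCapVer(109) returns "v1.78.0".
```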

View File

@@ -260,7 +260,7 @@ func NewHeadscaleDatabase(
log.Error().Err(err).Msg("Error creating route")
} else {
log.Info().
Uint64("node.id", route.NodeID).
Uint64("node_id", route.NodeID).
Str("prefix", prefix.String()).
Msg("Route migrated")
}
@@ -870,23 +870,23 @@ AND auth_key_id NOT IN (
// Copy data directly using SQL
dataCopySQL := []string{
`INSERT INTO users (id, name, display_name, email, provider_identifier, provider, profile_pic_url, created_at, updated_at, deleted_at)
SELECT id, name, display_name, email, provider_identifier, provider, profile_pic_url, created_at, updated_at, deleted_at
SELECT id, name, display_name, email, provider_identifier, provider, profile_pic_url, created_at, updated_at, deleted_at
FROM users_old`,
`INSERT INTO pre_auth_keys (id, key, user_id, reusable, ephemeral, used, tags, expiration, created_at)
SELECT id, key, user_id, reusable, ephemeral, used, tags, expiration, created_at
SELECT id, key, user_id, reusable, ephemeral, used, tags, expiration, created_at
FROM pre_auth_keys_old`,
`INSERT INTO api_keys (id, prefix, hash, expiration, last_seen, created_at)
SELECT id, prefix, hash, expiration, last_seen, created_at
SELECT id, prefix, hash, expiration, last_seen, created_at
FROM api_keys_old`,
`INSERT INTO nodes (id, machine_key, node_key, disco_key, endpoints, host_info, ipv4, ipv6, hostname, given_name, user_id, register_method, forced_tags, auth_key_id, last_seen, expiry, approved_routes, created_at, updated_at, deleted_at)
SELECT id, machine_key, node_key, disco_key, endpoints, host_info, ipv4, ipv6, hostname, given_name, user_id, register_method, forced_tags, auth_key_id, last_seen, expiry, approved_routes, created_at, updated_at, deleted_at
SELECT id, machine_key, node_key, disco_key, endpoints, host_info, ipv4, ipv6, hostname, given_name, user_id, register_method, forced_tags, auth_key_id, last_seen, expiry, approved_routes, created_at, updated_at, deleted_at
FROM nodes_old`,
`INSERT INTO policies (id, data, created_at, updated_at, deleted_at)
SELECT id, data, created_at, updated_at, deleted_at
SELECT id, data, created_at, updated_at, deleted_at
FROM policies_old`,
}
@@ -932,61 +932,6 @@ AND auth_key_id NOT IN (
},
Rollback: func(db *gorm.DB) error { return nil },
},
{
// Drop all tables that are no longer in use but have existed at some point.
// They are potentially still present from broken migrations in the past.
ID: "202510311551",
Migrate: func(tx *gorm.DB) error {
for _, oldTable := range []string{"namespaces", "machines", "shared_machines", "kvs", "pre_auth_key_acl_tags", "routes"} {
err := tx.Migrator().DropTable(oldTable)
if err != nil {
log.Trace().Str("table", oldTable).
Err(err).
Msg("Error dropping old table, continuing...")
}
}
return nil
},
Rollback: func(tx *gorm.DB) error {
return nil
},
},
{
// Drop all indices that are no longer in use but have existed at some point.
// They are potentially still present from broken migrations in the past.
// They should all be cleaned up by the db engine, but we are a bit
// conservative to ensure all our previous mess is cleaned up.
ID: "202511101554-drop-old-idx",
Migrate: func(tx *gorm.DB) error {
for _, oldIdx := range []struct{ name, table string }{
{"idx_namespaces_deleted_at", "namespaces"},
{"idx_routes_deleted_at", "routes"},
{"idx_shared_machines_deleted_at", "shared_machines"},
} {
err := tx.Migrator().DropIndex(oldIdx.table, oldIdx.name)
if err != nil {
log.Trace().
Str("index", oldIdx.name).
Str("table", oldIdx.table).
Err(err).
Msg("Error dropping old index, continuing...")
}
}
return nil
},
Rollback: func(tx *gorm.DB) error {
return nil
},
},
// Migrations **above** this point will be REMOVED in version **0.29.0**
// This is to clean up a lot of old migrations that are seldom used
// and carry a lot of technical debt.
// Any new migrations should be added after the comment below and follow
// the rules it sets out.
// From this point, the following rules must be followed:
// - NEVER use gorm.AutoMigrate, write the exact migration steps needed
// - AutoMigrate depends on the struct staying exactly the same, which it won't over time.
@@ -1017,17 +962,7 @@ AND auth_key_id NOT IN (
ctx, cancel := context.WithTimeout(context.Background(), contextTimeoutSecs*time.Second)
defer cancel()
opts := squibble.DigestOptions{
IgnoreTables: []string{
// Litestream tables, these are inserted by
// litestream and not part of our schema
// https://litestream.io/how-it-works
"_litestream_lock",
"_litestream_seq",
},
}
if err := squibble.Validate(ctx, sqlConn, dbSchema, &opts); err != nil {
if err := squibble.Validate(ctx, sqlConn, dbSchema); err != nil {
return nil, fmt.Errorf("validating schema: %w", err)
}
}
@@ -1196,7 +1131,7 @@ func runMigrations(cfg types.DatabaseConfig, dbConn *gorm.DB, migrations *gormig
}
for _, migrationID := range migrationIDs {
log.Trace().Caller().Str("migration_id", migrationID).Msg("Running migration")
log.Trace().Str("migration_id", migrationID).Msg("Running migration")
needsFKDisabled := migrationsRequiringFKDisabled[migrationID]
if needsFKDisabled {

View File

@@ -275,7 +275,7 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
return errors.New("backfilling IPs: ip allocator was nil")
}
log.Trace().Caller().Msgf("starting to backfill IPs")
log.Trace().Msgf("starting to backfill IPs")
nodes, err := ListNodes(tx)
if err != nil {
@@ -283,7 +283,7 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
}
for _, node := range nodes {
log.Trace().Caller().Uint64("node.id", node.ID.Uint64()).Str("node.name", node.Hostname).Msg("IP backfill check started because node found in database")
log.Trace().Uint64("node.id", node.ID.Uint64()).Msg("checking if need backfill")
changed := false
// IPv4 prefix is set, but node ip is missing, alloc
@@ -325,11 +325,7 @@ func (db *HSDatabase) BackfillNodeIPs(i *IPAllocator) ([]string, error) {
}
if changed {
// Use Updates() with Select() to only update IP fields, avoiding overwriting
// other fields like Expiry. We need Select() because Updates() alone skips
// zero values, but we DO want to update IPv4/IPv6 to nil when removing them.
// See issue #2862.
err := tx.Model(node).Select("ipv4", "ipv6").Updates(node).Error
err := tx.Save(node).Error
if err != nil {
return fmt.Errorf("saving node(%d) after adding IPs: %w", node.ID, err)
}

View File

@@ -5,20 +5,19 @@ import (
"errors"
"fmt"
"net/netip"
"regexp"
"slices"
"sort"
"strconv"
"strings"
"sync"
"testing"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/types/ptr"
)
@@ -28,8 +27,6 @@ const (
NodeGivenNameTrimSize = 2
)
var invalidDNSRegex = regexp.MustCompile("[^a-z0-9-.]+")
var (
ErrNodeNotFound = errors.New("node not found")
ErrNodeRouteIsNotAvailable = errors.New("route is not available on node")
@@ -37,6 +34,9 @@ var (
"node not found in registration cache",
)
ErrCouldNotConvertNodeInterface = errors.New("failed to convert node interface")
ErrDifferentRegisteredUser = errors.New(
"node was previously registered with a different user",
)
)
// ListPeers returns peers of node, regardless of any Policy or if the node is expired.
@@ -233,17 +233,6 @@ func SetApprovedRoutes(
return nil
}
// When approving exit routes, ensure both IPv4 and IPv6 are included
// If either 0.0.0.0/0 or ::/0 is being approved, both should be approved
hasIPv4Exit := slices.Contains(routes, tsaddr.AllIPv4())
hasIPv6Exit := slices.Contains(routes, tsaddr.AllIPv6())
if hasIPv4Exit && !hasIPv6Exit {
routes = append(routes, tsaddr.AllIPv6())
} else if hasIPv6Exit && !hasIPv4Exit {
routes = append(routes, tsaddr.AllIPv4())
}
b, err := json.Marshal(routes)
if err != nil {
return err
@@ -271,22 +260,24 @@ func SetLastSeen(tx *gorm.DB, nodeID types.NodeID, lastSeen time.Time) error {
}
// RenameNode takes a Node struct and a new GivenName for the nodes
// and renames it. Validation should be done in the state layer before calling this function.
// and renames it. If the name is not unique, it will return an error.
func RenameNode(tx *gorm.DB,
nodeID types.NodeID, newName string,
) error {
if err := util.ValidateHostname(newName); err != nil {
err := util.CheckForFQDNRules(
newName,
)
if err != nil {
return fmt.Errorf("renaming node: %w", err)
}
// Check if the new name is unique
var count int64
if err := tx.Model(&types.Node{}).Where("given_name = ? AND id != ?", newName, nodeID).Count(&count).Error; err != nil {
return fmt.Errorf("failed to check name uniqueness: %w", err)
uniq, err := isUniqueName(tx, newName)
if err != nil {
return fmt.Errorf("checking if name is unique: %w", err)
}
if count > 0 {
return errors.New("name is not unique")
if !uniq {
return fmt.Errorf("name is not unique: %s", newName)
}
if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("given_name", newName).Error; err != nil {
@@ -342,19 +333,108 @@ func (hsdb *HSDatabase) DeleteEphemeralNode(
})
}
// RegisterNodeForTest is used only for testing purposes to register a node directly in the database.
// Production code should use state.HandleNodeFromAuthPath or state.HandleNodeFromPreAuthKey.
func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *netip.Addr) (*types.Node, error) {
if !testing.Testing() {
panic("RegisterNodeForTest can only be called during tests")
}
// HandleNodeFromAuthPath is called from the OIDC or CLI auth path
// with a registrationID to register or reauthenticate a node.
// If the node found in the registration cache is not already registered,
// it will be registered with the user and the node will be removed from the cache.
// If the node is already registered, the expiry will be updated.
// The node, and a boolean indicating if it was a new node or not, will be returned.
func (hsdb *HSDatabase) HandleNodeFromAuthPath(
registrationID types.RegistrationID,
userID types.UserID,
nodeExpiry *time.Time,
registrationMethod string,
ipv4 *netip.Addr,
ipv6 *netip.Addr,
) (*types.Node, change.ChangeSet, error) {
var nodeChange change.ChangeSet
node, err := Write(hsdb.DB, func(tx *gorm.DB) (*types.Node, error) {
if reg, ok := hsdb.regCache.Get(registrationID); ok {
if node, _ := GetNodeByNodeKey(tx, reg.Node.NodeKey); node == nil {
user, err := GetUserByID(tx, userID)
if err != nil {
return nil, fmt.Errorf(
"failed to find user in register node from auth callback, %w",
err,
)
}
log.Debug().
Str("registration_id", registrationID.String()).
Str("username", user.Username()).
Str("registrationMethod", registrationMethod).
Str("expiresAt", fmt.Sprintf("%v", nodeExpiry)).
Msg("Registering node from API/CLI or auth callback")
// TODO(kradalby): This looks quite wrong? why ID 0?
// Why not always?
// Registration of expired node with different user
if reg.Node.ID != 0 &&
reg.Node.UserID != user.ID {
return nil, ErrDifferentRegisteredUser
}
reg.Node.UserID = user.ID
reg.Node.User = *user
reg.Node.RegisterMethod = registrationMethod
if nodeExpiry != nil {
reg.Node.Expiry = nodeExpiry
}
node, err := RegisterNode(
tx,
reg.Node,
ipv4, ipv6,
)
if err == nil {
hsdb.regCache.Delete(registrationID)
}
// Signal to waiting clients that the machine has been registered.
select {
case reg.Registered <- node:
default:
}
close(reg.Registered)
nodeChange = change.NodeAdded(node.ID)
return node, err
} else {
// If the node is already registered, this is a refresh.
err := NodeSetExpiry(tx, node.ID, *nodeExpiry)
if err != nil {
return nil, err
}
nodeChange = change.KeyExpiry(node.ID)
return node, nil
}
}
return nil, ErrNodeNotFoundRegistrationCache
})
return node, nodeChange, err
}
func (hsdb *HSDatabase) RegisterNode(node types.Node, ipv4 *netip.Addr, ipv6 *netip.Addr) (*types.Node, error) {
return Write(hsdb.DB, func(tx *gorm.DB) (*types.Node, error) {
return RegisterNode(tx, node, ipv4, ipv6)
})
}
// RegisterNode is executed from the CLI to register a new Node using its MachineKey.
func RegisterNode(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *netip.Addr) (*types.Node, error) {
log.Debug().
Str("node", node.Hostname).
Str("machine_key", node.MachineKey.ShortString()).
Str("node_key", node.NodeKey.ShortString()).
Str("user", node.User.Username()).
Msg("Registering test node")
Msg("Registering node")
// If a new node is registered with the same machine key, to the same user,
// update the existing node.
@@ -365,13 +445,8 @@ func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *n
node.ID = oldNode.ID
node.GivenName = oldNode.GivenName
node.ApprovedRoutes = oldNode.ApprovedRoutes
// Don't overwrite the provided IPs with old ones when they exist
if ipv4 == nil {
ipv4 = oldNode.IPv4
}
if ipv6 == nil {
ipv6 = oldNode.IPv6
}
ipv4 = oldNode.IPv4
ipv6 = oldNode.IPv6
}
// If the node exists and it already has IP(s), we just save it
@@ -388,7 +463,7 @@ func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *n
Str("machine_key", node.MachineKey.ShortString()).
Str("node_key", node.NodeKey.ShortString()).
Str("user", node.User.Username()).
Msg("Test node authorized again")
Msg("Node authorized again")
return &node, nil
}
@@ -396,16 +471,8 @@ func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *n
node.IPv4 = ipv4
node.IPv6 = ipv6
var err error
node.Hostname, err = util.NormaliseHostname(node.Hostname)
if err != nil {
newHostname := util.InvalidString()
log.Info().Err(err).Str("invalid-hostname", node.Hostname).Str("new-hostname", newHostname).Msgf("Invalid hostname, replacing")
node.Hostname = newHostname
}
if node.GivenName == "" {
givenName, err := EnsureUniqueGivenName(tx, node.Hostname)
givenName, err := ensureUniqueGivenName(tx, node.Hostname)
if err != nil {
return nil, fmt.Errorf("failed to ensure unique given name: %w", err)
}
@@ -420,7 +487,7 @@ func RegisterNodeForTest(tx *gorm.DB, node types.Node, ipv4 *netip.Addr, ipv6 *n
log.Trace().
Caller().
Str("node", node.Hostname).
Msg("Test node registered with the database")
Msg("Node registered with the database")
return &node, nil
}
@@ -452,11 +519,15 @@ func NodeSetMachineKey(
}).Error
}
func generateGivenName(suppliedName string, randomSuffix bool) (string, error) {
// Strip invalid DNS characters for givenName
suppliedName = strings.ToLower(suppliedName)
suppliedName = invalidDNSRegex.ReplaceAllString(suppliedName, "")
// NodeSave saves a node object to the database, prefer to use a specific save method rather
// than this. It is intended to be used when we are changing or.
// TODO(kradalby): Remove this func, just use Save.
func NodeSave(tx *gorm.DB, node *types.Node) error {
return tx.Save(node).Error
}
func generateGivenName(suppliedName string, randomSuffix bool) (string, error) {
suppliedName = util.ConvertWithFQDNRules(suppliedName)
if len(suppliedName) > util.LabelHostnameLength {
return "", types.ErrHostnameTooLong
}
@@ -489,8 +560,7 @@ func isUniqueName(tx *gorm.DB, name string) (bool, error) {
return len(nodes) == 0, nil
}
// EnsureUniqueGivenName generates a unique given name for a node based on its hostname.
func EnsureUniqueGivenName(
func ensureUniqueGivenName(
tx *gorm.DB,
name string,
) (string, error) {
@@ -516,6 +586,41 @@ func EnsureUniqueGivenName(
return givenName, nil
}
// ExpireExpiredNodes checks for nodes that have expired since the last check
// and returns a time to be used for the next check, a StateUpdate
// containing the expired nodes, and a boolean indicating if any nodes were found.
func ExpireExpiredNodes(tx *gorm.DB,
lastCheck time.Time,
) (time.Time, []change.ChangeSet, bool) {
// use the time of the start of the function to ensure we
// don't miss some nodes by returning it _after_ we have
// checked everything.
started := time.Now()
expired := make([]*tailcfg.PeerChange, 0)
var updates []change.ChangeSet
nodes, err := ListNodes(tx)
if err != nil {
return time.Unix(0, 0), nil, false
}
for _, node := range nodes {
if node.IsExpired() && node.Expiry.After(lastCheck) {
expired = append(expired, &tailcfg.PeerChange{
NodeID: tailcfg.NodeID(node.ID),
KeyExpiry: node.Expiry,
})
updates = append(updates, change.KeyExpiry(node.ID))
}
}
if len(expired) > 0 {
return started, updates, true
}
return started, nil, false
}
// EphemeralGarbageCollector is a garbage collector that will delete nodes after
// a certain amount of time.
// It is used to delete ephemeral nodes that have disconnected and should be
@@ -676,23 +781,19 @@ func (hsdb *HSDatabase) CreateRegisteredNodeForTest(user *types.User, hostname .
node := hsdb.CreateNodeForTest(user, hostname...)
// Allocate IPs for the test node using the database's IP allocator
// This is a simplified allocation for testing - in production this would use State.ipAlloc
ipv4, ipv6, err := hsdb.allocateTestIPs(node.ID)
if err != nil {
panic(fmt.Sprintf("failed to allocate IPs for test node: %v", err))
}
var registeredNode *types.Node
err = hsdb.DB.Transaction(func(tx *gorm.DB) error {
var err error
registeredNode, err = RegisterNodeForTest(tx, *node, ipv4, ipv6)
err := hsdb.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNode(tx, *node, nil, nil)
return err
})
if err != nil {
panic(fmt.Sprintf("failed to register test node: %v", err))
}
registeredNode, err := hsdb.GetNodeByID(node.ID)
if err != nil {
panic(fmt.Sprintf("failed to get registered test node: %v", err))
}
return registeredNode
}
@@ -741,23 +842,3 @@ func (hsdb *HSDatabase) CreateRegisteredNodesForTest(user *types.User, count int
return nodes
}
// allocateTestIPs allocates sequential test IPs for nodes during testing.
func (hsdb *HSDatabase) allocateTestIPs(nodeID types.NodeID) (*netip.Addr, *netip.Addr, error) {
if !testing.Testing() {
panic("allocateTestIPs can only be called during tests")
}
// Use simple sequential allocation for tests
// IPv4: 100.64.0.x (where x is nodeID)
// IPv6: fd7a:115c:a1e0::x (where x is nodeID)
if nodeID > 254 {
return nil, nil, fmt.Errorf("test node ID %d too large for simple IP allocation", nodeID)
}
ipv4 := netip.AddrFrom4([4]byte{100, 64, 0, byte(nodeID)})
ipv6 := netip.AddrFrom16([16]byte{0xfd, 0x7a, 0x11, 0x5c, 0xa1, 0xe0, 0, 0, 0, 0, 0, 0, 0, 0, 0, byte(nodeID)})
return &ipv4, &ipv6, nil
}

View File

@@ -292,57 +292,12 @@ func TestHeadscale_generateGivenName(t *testing.T) {
func TestAutoApproveRoutes(t *testing.T) {
tests := []struct {
name string
acl string
routes []netip.Prefix
want []netip.Prefix
want2 []netip.Prefix
expectChange bool // whether to expect route changes
name string
acl string
routes []netip.Prefix
want []netip.Prefix
want2 []netip.Prefix
}{
{
name: "no-auto-approvers-empty-policy",
acl: `
{
"groups": {
"group:admins": ["test@"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["group:admins:*"]
}
]
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
want: []netip.Prefix{}, // Should be empty - no auto-approvers
want2: []netip.Prefix{}, // Should be empty - no auto-approvers
expectChange: false, // No changes expected
},
{
name: "no-auto-approvers-explicit-empty",
acl: `
{
"groups": {
"group:admins": ["test@"]
},
"acls": [
{
"action": "accept",
"src": ["group:admins"],
"dst": ["group:admins:*"]
}
],
"autoApprovers": {
"routes": {},
"exitNode": []
}
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
want: []netip.Prefix{}, // Should be empty - explicitly empty auto-approvers
want2: []netip.Prefix{}, // Should be empty - explicitly empty auto-approvers
expectChange: false, // No changes expected
},
{
name: "2068-approve-issue-sub-kube",
acl: `
@@ -361,9 +316,8 @@ func TestAutoApproveRoutes(t *testing.T) {
}
}
}`,
routes: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
want: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
expectChange: true, // Routes should be approved
routes: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
want: []netip.Prefix{netip.MustParsePrefix("10.42.7.0/24")},
},
{
name: "2068-approve-issue-sub-exit-tag",
@@ -407,7 +361,6 @@ func TestAutoApproveRoutes(t *testing.T) {
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
},
expectChange: true, // Routes should be approved
},
}
@@ -468,40 +421,28 @@ func TestAutoApproveRoutes(t *testing.T) {
require.NoError(t, err)
require.NotNil(t, pm)
newRoutes1, changed1 := policy.ApproveRoutesWithPolicy(pm, node.View(), node.ApprovedRoutes, tt.routes)
assert.Equal(t, tt.expectChange, changed1)
changed1 := policy.AutoApproveRoutes(pm, &node)
assert.True(t, changed1)
if changed1 {
err = SetApprovedRoutes(adb.DB, node.ID, newRoutes1)
require.NoError(t, err)
}
err = adb.DB.Save(&node).Error
require.NoError(t, err)
newRoutes2, changed2 := policy.ApproveRoutesWithPolicy(pm, nodeTagged.View(), nodeTagged.ApprovedRoutes, tt.routes)
if changed2 {
err = SetApprovedRoutes(adb.DB, nodeTagged.ID, newRoutes2)
require.NoError(t, err)
}
_ = policy.AutoApproveRoutes(pm, &nodeTagged)
err = adb.DB.Save(&nodeTagged).Error
require.NoError(t, err)
node1ByID, err := adb.GetNodeByID(1)
require.NoError(t, err)
// For empty auto-approvers tests, handle nil vs empty slice comparison
expectedRoutes1 := tt.want
if len(expectedRoutes1) == 0 {
expectedRoutes1 = nil
}
if diff := cmp.Diff(expectedRoutes1, node1ByID.AllApprovedRoutes(), util.Comparers...); diff != "" {
if diff := cmp.Diff(tt.want, node1ByID.SubnetRoutes(), util.Comparers...); diff != "" {
t.Errorf("unexpected enabled routes (-want +got):\n%s", diff)
}
node2ByID, err := adb.GetNodeByID(2)
require.NoError(t, err)
expectedRoutes2 := tt.want2
if len(expectedRoutes2) == 0 {
expectedRoutes2 = nil
}
if diff := cmp.Diff(expectedRoutes2, node2ByID.AllApprovedRoutes(), util.Comparers...); diff != "" {
if diff := cmp.Diff(tt.want2, node2ByID.SubnetRoutes(), util.Comparers...); diff != "" {
t.Errorf("unexpected enabled routes (-want +got):\n%s", diff)
}
})
@@ -640,7 +581,7 @@ func TestListEphemeralNodes(t *testing.T) {
assert.Equal(t, nodeEph.Hostname, ephemeralNodes[0].Hostname)
}
func TestNodeNaming(t *testing.T) {
func TestRenameNode(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
@@ -672,26 +613,6 @@ func TestNodeNaming(t *testing.T) {
Hostinfo: &tailcfg.Hostinfo{},
}
// Using non-ASCII characters in the hostname can
// break your network, so they should be replaced when registering
// a node.
// https://github.com/juanfont/headscale/issues/2343
nodeInvalidHostname := types.Node{
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "我的电脑",
UserID: user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
}
nodeShortHostname := types.Node{
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "a",
UserID: user2.ID,
RegisterMethod: util.RegisterMethodAuthKey,
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
@@ -699,16 +620,12 @@ func TestNodeNaming(t *testing.T) {
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node, nil, nil)
_, err := RegisterNode(tx, node, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, nodeInvalidHostname, ptr.To(mpp("100.64.0.66/32").Addr()), nil)
_, err = RegisterNodeForTest(tx, nodeShortHostname, ptr.To(mpp("100.64.0.67/32").Addr()), nil)
_, err = RegisterNode(tx, node2, nil, nil)
return err
})
require.NoError(t, err)
@@ -716,12 +633,10 @@ func TestNodeNaming(t *testing.T) {
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
assert.Len(t, nodes, 2)
t.Logf("node1 %s %s", nodes[0].Hostname, nodes[0].GivenName)
t.Logf("node2 %s %s", nodes[1].Hostname, nodes[1].GivenName)
t.Logf("node3 %s %s", nodes[2].Hostname, nodes[2].GivenName)
t.Logf("node4 %s %s", nodes[3].Hostname, nodes[3].GivenName)
assert.Equal(t, nodes[0].Hostname, nodes[0].GivenName)
assert.NotEqual(t, nodes[1].Hostname, nodes[1].GivenName)
@@ -733,10 +648,6 @@ func TestNodeNaming(t *testing.T) {
assert.Len(t, nodes[1].Hostname, 4)
assert.Len(t, nodes[0].GivenName, 4)
assert.Len(t, nodes[1].GivenName, 13)
assert.Contains(t, nodes[2].Hostname, "invalid-") // invalid chars
assert.Contains(t, nodes[2].GivenName, "invalid-")
assert.Contains(t, nodes[3].Hostname, "invalid-") // too short
assert.Contains(t, nodes[3].GivenName, "invalid-")
// Nodes can be renamed to a unique name
err = db.Write(func(tx *gorm.DB) error {
@@ -746,7 +657,7 @@ func TestNodeNaming(t *testing.T) {
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
assert.Len(t, nodes, 2)
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
@@ -758,7 +669,7 @@ func TestNodeNaming(t *testing.T) {
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 4)
assert.Len(t, nodes, 2)
assert.Equal(t, "test", nodes[0].Hostname)
assert.Equal(t, "newname", nodes[0].GivenName)
assert.Equal(t, "test", nodes[1].GivenName)
@@ -768,149 +679,6 @@ func TestNodeNaming(t *testing.T) {
return RenameNode(tx, nodes[0].ID, "test")
})
assert.ErrorContains(t, err, "name is not unique")
// Rename invalid chars
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[2].ID, "我的电脑")
})
assert.ErrorContains(t, err, "invalid characters")
// Rename too short
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[3].ID, "a")
})
assert.ErrorContains(t, err, "at least 2 characters")
// Rename with emoji
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "hostname-with-💩")
})
assert.ErrorContains(t, err, "invalid characters")
// Rename with only emoji
err = db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, "🚀")
})
assert.ErrorContains(t, err, "invalid characters")
}
func TestRenameNodeComprehensive(t *testing.T) {
db, err := newSQLiteTestDB()
if err != nil {
t.Fatalf("creating db: %s", err)
}
user, err := db.CreateUser(types.User{Name: "test"})
require.NoError(t, err)
node := types.Node{
ID: 0,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: user.ID,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{},
}
err = db.DB.Save(&node).Error
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node, nil, nil)
return err
})
require.NoError(t, err)
nodes, err := db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 1)
tests := []struct {
name string
newName string
wantErr string
}{
{
name: "uppercase_rejected",
newName: "User2-Host",
wantErr: "must be lowercase",
},
{
name: "underscore_rejected",
newName: "test_node",
wantErr: "invalid characters",
},
{
name: "at_sign_uppercase_rejected",
newName: "Test@Host",
wantErr: "must be lowercase",
},
{
name: "at_sign_rejected",
newName: "test@host",
wantErr: "invalid characters",
},
{
name: "chinese_chars_with_dash_rejected",
newName: "server-北京-01",
wantErr: "invalid characters",
},
{
name: "chinese_only_rejected",
newName: "我的电脑",
wantErr: "invalid characters",
},
{
name: "emoji_with_text_rejected",
newName: "laptop-🚀",
wantErr: "invalid characters",
},
{
name: "mixed_chinese_emoji_rejected",
newName: "测试💻机器",
wantErr: "invalid characters",
},
{
name: "only_emojis_rejected",
newName: "🎉🎊",
wantErr: "invalid characters",
},
{
name: "only_at_signs_rejected",
newName: "@@@",
wantErr: "invalid characters",
},
{
name: "starts_with_dash_rejected",
newName: "-test",
wantErr: "cannot start or end with a hyphen",
},
{
name: "ends_with_dash_rejected",
newName: "test-",
wantErr: "cannot start or end with a hyphen",
},
{
name: "too_long_hostname_rejected",
newName: "this-is-a-very-long-hostname-that-exceeds-sixty-three-characters-limit",
wantErr: "must not exceed 63 characters",
},
{
name: "too_short_hostname_rejected",
newName: "a",
wantErr: "at least 2 characters",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := db.Write(func(tx *gorm.DB) error {
return RenameNode(tx, nodes[0].ID, tt.newName)
})
assert.ErrorContains(t, err, tt.wantErr)
})
}
}
func TestListPeers(t *testing.T) {
@@ -953,11 +721,11 @@ func TestListPeers(t *testing.T) {
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node1, nil, nil)
_, err := RegisterNode(tx, node1, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
_, err = RegisterNode(tx, node2, nil, nil)
return err
})
@@ -971,13 +739,13 @@ func TestListPeers(t *testing.T) {
// No parameter means no filter, should return all peers
nodes, err = db.ListPeers(1)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, 1, len(nodes))
assert.Equal(t, "test2", nodes[0].Hostname)
// Empty node list should return all peers
nodes, err = db.ListPeers(1, types.NodeIDs{}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, 1, len(nodes))
assert.Equal(t, "test2", nodes[0].Hostname)
// No match in IDs should return empty list and no error
@@ -988,13 +756,13 @@ func TestListPeers(t *testing.T) {
// Partial match in IDs
nodes, err = db.ListPeers(1, types.NodeIDs{2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, 1, len(nodes))
assert.Equal(t, "test2", nodes[0].Hostname)
// Several matched IDs, but node ID is still filtered out
nodes, err = db.ListPeers(1, types.NodeIDs{1, 2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, 1, len(nodes))
assert.Equal(t, "test2", nodes[0].Hostname)
}
@@ -1038,11 +806,11 @@ func TestListNodes(t *testing.T) {
require.NoError(t, err)
err = db.DB.Transaction(func(tx *gorm.DB) error {
_, err := RegisterNodeForTest(tx, node1, nil, nil)
_, err := RegisterNode(tx, node1, nil, nil)
if err != nil {
return err
}
_, err = RegisterNodeForTest(tx, node2, nil, nil)
_, err = RegisterNode(tx, node2, nil, nil)
return err
})
@@ -1056,14 +824,14 @@ func TestListNodes(t *testing.T) {
// No parameter means no filter, should return all nodes
nodes, err = db.ListNodes()
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, 2, len(nodes))
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
// Empty node list should return all nodes
nodes, err = db.ListNodes(types.NodeIDs{}...)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, 2, len(nodes))
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
@@ -1075,13 +843,13 @@ func TestListNodes(t *testing.T) {
// Partial match in IDs
nodes, err = db.ListNodes(types.NodeIDs{2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 1)
assert.Equal(t, 1, len(nodes))
assert.Equal(t, "test2", nodes[0].Hostname)
// Several matched IDs
nodes, err = db.ListNodes(types.NodeIDs{1, 2, 3}...)
require.NoError(t, err)
assert.Len(t, nodes, 2)
assert.Equal(t, 2, len(nodes))
assert.Equal(t, "test1", nodes[0].Hostname)
assert.Equal(t, "test2", nodes[1].Hostname)
}

View File

@@ -5,7 +5,6 @@ import (
"encoding/hex"
"errors"
"fmt"
"slices"
"strings"
"time"
@@ -48,9 +47,8 @@ func CreatePreAuthKey(
return nil, err
}
// Remove duplicates and sort for consistency
// Remove duplicates
aclTags = set.SetOf(aclTags).Slice()
slices.Sort(aclTags)
// TODO(kradalby): factor out and create a reusable tag validation,
// check if there is one in Tailscale's lib.
@@ -126,18 +124,9 @@ func GetPreAuthKey(tx *gorm.DB, key string) (*types.PreAuthKey, error) {
}
// DestroyPreAuthKey destroys a preauthkey. Returns error if the PreAuthKey
// does not exist. This also clears the auth_key_id on any nodes that reference
// this key.
// does not exist.
func DestroyPreAuthKey(tx *gorm.DB, pak types.PreAuthKey) error {
return tx.Transaction(func(db *gorm.DB) error {
// First, clear the foreign key reference on any nodes using this key
if err := db.Model(&types.Node{}).
Where("auth_key_id = ?", pak.ID).
Update("auth_key_id", nil).Error; err != nil {
return fmt.Errorf("failed to clear auth_key_id on nodes: %w", err)
}
// Then delete the pre-auth key
if result := db.Unscoped().Delete(pak); result.Error != nil {
return result.Error
}
@@ -152,20 +141,13 @@ func (hsdb *HSDatabase) ExpirePreAuthKey(k *types.PreAuthKey) error {
})
}
func (hsdb *HSDatabase) DeletePreAuthKey(k *types.PreAuthKey) error {
return hsdb.Write(func(tx *gorm.DB) error {
return DestroyPreAuthKey(tx, *k)
})
}
// UsePreAuthKey marks a PreAuthKey as used.
func UsePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error {
err := tx.Model(k).Update("used", true).Error
if err != nil {
k.Used = true
if err := tx.Save(k).Error; err != nil {
return fmt.Errorf("failed to update key used status in the database: %w", err)
}
k.Used = true
return nil
}

View File

@@ -1,45 +0,0 @@
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
INSERT INTO migrations VALUES('202312101416');
INSERT INTO migrations VALUES('202312101430');
INSERT INTO migrations VALUES('202402151347');
INSERT INTO migrations VALUES('2024041121742');
INSERT INTO migrations VALUES('202406021630');
INSERT INTO migrations VALUES('202409271400');
INSERT INTO migrations VALUES('202407191627');
INSERT INTO migrations VALUES('202408181235');
INSERT INTO migrations VALUES('202501221827');
INSERT INTO migrations VALUES('202501311657');
INSERT INTO migrations VALUES('202502070949');
INSERT INTO migrations VALUES('202502131714');
INSERT INTO migrations VALUES('202502171819');
INSERT INTO migrations VALUES('202505091439');
INSERT INTO migrations VALUES('202505141324');
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
DELETE FROM sqlite_sequence;
INSERT INTO sqlite_sequence VALUES('nodes',0);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
-- Create all the old tables we have had and ensure they are cleaned up.
CREATE TABLE `namespaces` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `machines` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `kvs` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `shared_machines` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE TABLE `pre_auth_key_acl_tags` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `routes` (`id` text,`deleted_at` datetime,PRIMARY KEY (`id`));
CREATE INDEX `idx_routes_deleted_at` ON `routes`(`deleted_at`);
CREATE INDEX `idx_namespaces_deleted_at` ON `namespaces`(`deleted_at`);
CREATE INDEX `idx_shared_machines_deleted_at` ON `shared_machines`(`deleted_at`);
COMMIT;

View File

@@ -1,14 +0,0 @@
CREATE TABLE `migrations` (`id` text,PRIMARY KEY (`id`));
CREATE TABLE `users` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`name` text,`display_name` text,`email` text,`provider_identifier` text,`provider` text,`profile_pic_url` text);
CREATE INDEX `idx_users_deleted_at` ON `users`(`deleted_at`);
CREATE TABLE `pre_auth_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`key` text,`user_id` integer,`reusable` numeric,`ephemeral` numeric DEFAULT false,`used` numeric DEFAULT false,`tags` text,`created_at` datetime,`expiration` datetime,CONSTRAINT `fk_pre_auth_keys_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE SET NULL);
CREATE TABLE `api_keys` (`id` integer PRIMARY KEY AUTOINCREMENT,`prefix` text,`hash` blob,`created_at` datetime,`expiration` datetime,`last_seen` datetime);
CREATE UNIQUE INDEX `idx_api_keys_prefix` ON `api_keys`(`prefix`);
CREATE TABLE IF NOT EXISTS "nodes" (`id` integer PRIMARY KEY AUTOINCREMENT,`machine_key` text,`node_key` text,`disco_key` text,`endpoints` text,`host_info` text,`ipv4` text,`ipv6` text,`hostname` text,`given_name` varchar(63),`user_id` integer,`register_method` text,`forced_tags` text,`auth_key_id` integer,`expiry` datetime,`last_seen` datetime,`approved_routes` text,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,CONSTRAINT `fk_nodes_user` FOREIGN KEY (`user_id`) REFERENCES `users`(`id`) ON DELETE CASCADE,CONSTRAINT `fk_nodes_auth_key` FOREIGN KEY (`auth_key_id`) REFERENCES `pre_auth_keys`(`id`));
CREATE TABLE `policies` (`id` integer PRIMARY KEY AUTOINCREMENT,`created_at` datetime,`updated_at` datetime,`deleted_at` datetime,`data` text);
CREATE INDEX `idx_policies_deleted_at` ON `policies`(`deleted_at`);
CREATE UNIQUE INDEX idx_provider_identifier ON users (provider_identifier) WHERE provider_identifier IS NOT NULL;
CREATE UNIQUE INDEX idx_name_provider_identifier ON users (name,provider_identifier);
CREATE UNIQUE INDEX idx_name_no_provider_identifier ON users (name) WHERE provider_identifier IS NULL;
CREATE TABLE _litestream_seq (id INTEGER PRIMARY KEY, seq INTEGER);
CREATE TABLE _litestream_lock (id INTEGER);

View File

@@ -10,7 +10,7 @@ import (
)
// Got from https://github.com/xdg-go/strum/blob/main/types.go
var textUnmarshalerType = reflect.TypeFor[encoding.TextUnmarshaler]()
var textUnmarshalerType = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
func isTextUnmarshaler(rv reflect.Value) bool {
return rv.Type().Implements(textUnmarshalerType)
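
The one-line change above swaps between `reflect.TypeFor[encoding.TextUnmarshaler]()` and the older `reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()` idiom. Both expressions produce the same `reflect.Type` for the interface; `TypeFor` is simply the Go 1.22+ generic spelling. A quick self-contained check:

```go
package main

import (
	"encoding"
	"fmt"
	"reflect"
	"time"
)

func main() {
	// Pre-Go 1.22 idiom: take the type of a nil pointer to the interface
	// and dereference it to get the interface type itself.
	oldStyle := reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()

	// Go 1.22+ generic helper that expresses the same thing directly.
	newStyle := reflect.TypeFor[encoding.TextUnmarshaler]()

	fmt.Println(oldStyle == newStyle) // true

	// Either form works with Implements; for example, *time.Time has
	// an UnmarshalText method.
	fmt.Println(reflect.TypeOf(&time.Time{}).Implements(newStyle)) // true
}
```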

View File

@@ -1,134 +0,0 @@
package db
import (
"database/sql"
"testing"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
)
// TestUserUpdatePreservesUnchangedFields verifies that updating a user
// preserves fields that aren't modified. This test validates the fix
// for using Updates() instead of Save() in UpdateUser-like operations.
func TestUserUpdatePreservesUnchangedFields(t *testing.T) {
database := dbForTest(t)
// Create a user with all fields set
initialUser := types.User{
Name: "testuser",
DisplayName: "Test User Display",
Email: "test@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-123",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
require.NotNil(t, createdUser)
// Verify initial state
assert.Equal(t, "testuser", createdUser.Name)
assert.Equal(t, "Test User Display", createdUser.DisplayName)
assert.Equal(t, "test@example.com", createdUser.Email)
assert.True(t, createdUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-123", createdUser.ProviderIdentifier.String)
// Simulate what UpdateUser does: load user, modify one field, save
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify ONLY DisplayName
user.DisplayName = "Updated Display Name"
// This is the line being tested - currently uses Save() which writes ALL fields, potentially overwriting unchanged ones
err = tx.Save(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Read user back from database
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify that DisplayName was updated
assert.Equal(t, "Updated Display Name", updatedUser.DisplayName)
// CRITICAL: Verify that other fields were NOT overwritten
// With Save(), these assertions should pass because the user object
// was loaded from DB and has all fields populated.
// But if Updates() is used, these will also pass (and it's safer).
assert.Equal(t, "testuser", updatedUser.Name, "Name should be preserved")
assert.Equal(t, "test@example.com", updatedUser.Email, "Email should be preserved")
assert.True(t, updatedUser.ProviderIdentifier.Valid, "ProviderIdentifier should be preserved")
assert.Equal(t, "provider-123", updatedUser.ProviderIdentifier.String, "ProviderIdentifier value should be preserved")
}
// TestUserUpdateWithUpdatesMethod tests that using Updates() instead of Save()
// works correctly and only updates modified fields.
func TestUserUpdateWithUpdatesMethod(t *testing.T) {
database := dbForTest(t)
// Create a user
initialUser := types.User{
Name: "testuser",
DisplayName: "Original Display",
Email: "original@example.com",
ProviderIdentifier: sql.NullString{
String: "provider-abc",
Valid: true,
},
}
createdUser, err := database.CreateUser(initialUser)
require.NoError(t, err)
// Update using Updates() method
_, err = Write(database.DB, func(tx *gorm.DB) (*types.User, error) {
user, err := GetUserByID(tx, types.UserID(createdUser.ID))
if err != nil {
return nil, err
}
// Modify multiple fields
user.DisplayName = "New Display"
user.Email = "new@example.com"
// Use Updates() instead of Save()
err = tx.Updates(user).Error
if err != nil {
return nil, err
}
return user, nil
})
require.NoError(t, err)
// Verify changes
updatedUser, err := Read(database.DB, func(rx *gorm.DB) (*types.User, error) {
return GetUserByID(rx, types.UserID(createdUser.ID))
})
require.NoError(t, err)
// Verify updated fields
assert.Equal(t, "New Display", updatedUser.DisplayName)
assert.Equal(t, "new@example.com", updatedUser.Email)
// Verify preserved fields
assert.Equal(t, "testuser", updatedUser.Name)
assert.True(t, updatedUser.ProviderIdentifier.Valid)
assert.Equal(t, "provider-abc", updatedUser.ProviderIdentifier.String)
}
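
The two tests above revolve around GORM's `Save` versus `Updates` semantics: `Save` writes every column of the struct, zero values included, while `Updates` with a struct writes only non-zero fields. A stripped-down sketch of that difference, using an illustrative `User` type rather than the Headscale model:

```go
package db

import "gorm.io/gorm"

type User struct {
	ID          uint
	Name        string
	DisplayName string
	Email       string
}

// saveAll writes every field of u, including zero values. If u was not
// loaded from the database first, unset fields will overwrite stored data.
func saveAll(tx *gorm.DB, u *User) error {
	return tx.Save(u).Error
}

// updateChanged writes only the non-zero fields of u, so an empty Email on
// the struct leaves the stored Email untouched. The flip side: a field
// cannot be cleared to its zero value this way without using a map or
// Select to name it explicitly.
func updateChanged(tx *gorm.DB, u *User) error {
	return tx.Updates(u).Error
}
```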

View File

@@ -26,7 +26,8 @@ func (hsdb *HSDatabase) CreateUser(user types.User) (*types.User, error) {
// CreateUser creates a new User. Returns error if could not be created
// or another user already exists.
func CreateUser(tx *gorm.DB, user types.User) (*types.User, error) {
if err := util.ValidateHostname(user.Name); err != nil {
err := util.ValidateUsername(user.Name)
if err != nil {
return nil, err
}
if err := tx.Create(&user).Error; err != nil {
@@ -92,7 +93,8 @@ func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
if err != nil {
return err
}
if err = util.ValidateHostname(newName); err != nil {
err = util.ValidateUsername(newName)
if err != nil {
return err
}
@@ -102,8 +104,7 @@ func RenameUser(tx *gorm.DB, uid types.UserID, newName string) error {
oldUser.Name = newName
err = tx.Updates(&oldUser).Error
if err != nil {
if err := tx.Save(&oldUser).Error; err != nil {
return err
}
@@ -197,20 +198,19 @@ func ListNodesByUser(tx *gorm.DB, uid types.UserID) (types.Nodes, error) {
}
// AssignNodeToUser assigns a Node to a user.
// Note: Validation should be done in the state layer before calling this function.
func AssignNodeToUser(tx *gorm.DB, nodeID types.NodeID, uid types.UserID) error {
// Check if the user exists
var userExists bool
if err := tx.Model(&types.User{}).Select("count(*) > 0").Where("id = ?", uid).Find(&userExists).Error; err != nil {
return fmt.Errorf("failed to check if user exists: %w", err)
node, err := GetNodeByID(tx, nodeID)
if err != nil {
return err
}
if !userExists {
return ErrUserNotFound
user, err := GetUserByID(tx, uid)
if err != nil {
return err
}
if err := tx.Model(&types.Node{}).Where("id = ?", nodeID).Update("user_id", uid).Error; err != nil {
return fmt.Errorf("failed to assign node to user: %w", err)
node.User = *user
node.UserID = user.ID
if result := tx.Save(&node); result.Error != nil {
return result.Error
}
return nil

View File

@@ -4,83 +4,58 @@ import (
"encoding/json"
"fmt"
"net/http"
"strings"
"os"
"github.com/arl/statsviz"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/client_golang/prometheus/promhttp"
"tailscale.com/tailcfg"
"tailscale.com/tsweb"
)
func (h *Headscale) debugHTTPServer() *http.Server {
debugMux := http.NewServeMux()
debug := tsweb.Debugger(debugMux)
// State overview endpoint
debug.Handle("overview", "State overview", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
overview := h.state.DebugOverviewJSON()
overviewJSON, err := json.MarshalIndent(overview, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(overviewJSON)
} else {
// Default to text/plain for backward compatibility
overview := h.state.DebugOverview()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(overview))
}
}))
// Configuration endpoint
debug.Handle("config", "Current configuration", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
config := h.state.DebugConfig()
configJSON, err := json.MarshalIndent(config, "", " ")
config, err := json.MarshalIndent(h.cfg, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(configJSON)
w.Write(config)
}))
// Policy endpoint
debug.Handle("policy", "Current policy", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
policy, err := h.state.DebugPolicy()
if err != nil {
httpError(w, err)
return
}
// Policy data is HuJSON, which is a superset of JSON
// Set content type based on Accept header preference
acceptHeader := r.Header.Get("Accept")
if strings.Contains(acceptHeader, "application/json") {
switch h.cfg.Policy.Mode {
case types.PolicyModeDB:
p, err := h.state.GetPolicy()
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
} else {
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(p.Data))
case types.PolicyModeFile:
// Read the file directly for debug purposes
absPath := util.AbsolutePathFromConfigPath(h.cfg.Policy.Path)
pol, err := os.ReadFile(absPath)
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(pol)
default:
httpError(w, fmt.Errorf("unsupported policy mode: %s", h.cfg.Policy.Mode))
}
w.WriteHeader(http.StatusOK)
w.Write([]byte(policy))
}))
debug.Handle("filter", "Current filter", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
filter, _ := h.state.Filter()
// Filter rules endpoint
debug.Handle("filter", "Current filter rules", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
filter, err := h.state.DebugFilter()
if err != nil {
httpError(w, err)
return
}
filterJSON, err := json.MarshalIndent(filter, "", " ")
if err != nil {
httpError(w, err)
@@ -90,11 +65,25 @@ func (h *Headscale) debugHTTPServer() *http.Server {
w.WriteHeader(http.StatusOK)
w.Write(filterJSON)
}))
debug.Handle("ssh", "SSH Policy per node", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
nodes, err := h.state.ListNodes()
if err != nil {
httpError(w, err)
return
}
// SSH policies endpoint
debug.Handle("ssh", "SSH policies per node", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
sshPolicies := h.state.DebugSSHPolicies()
sshJSON, err := json.MarshalIndent(sshPolicies, "", " ")
sshPol := make(map[string]*tailcfg.SSHPolicy)
for _, node := range nodes {
pol, err := h.state.SSHPolicy(node.View())
if err != nil {
httpError(w, err)
return
}
sshPol[fmt.Sprintf("id:%d hostname:%s givenname:%s", node.ID, node.Hostname, node.GivenName)] = pol
}
sshJSON, err := json.MarshalIndent(sshPol, "", " ")
if err != nil {
httpError(w, err)
return
@@ -103,169 +92,33 @@ func (h *Headscale) debugHTTPServer() *http.Server {
w.WriteHeader(http.StatusOK)
w.Write(sshJSON)
}))
debug.Handle("derpmap", "Current DERPMap", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
dm := h.state.DERPMap()
// DERP map endpoint
debug.Handle("derp", "DERP map configuration", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
derpInfo := h.state.DebugDERPJSON()
derpJSON, err := json.MarshalIndent(derpInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(derpJSON)
} else {
// Default to text/plain for backward compatibility
derpInfo := h.state.DebugDERPMap()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(derpInfo))
}
}))
// NodeStore endpoint
debug.Handle("nodestore", "NodeStore information", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
nodeStoreNodes := h.state.DebugNodeStoreJSON()
nodeStoreJSON, err := json.MarshalIndent(nodeStoreNodes, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(nodeStoreJSON)
} else {
// Default to text/plain for backward compatibility
nodeStoreInfo := h.state.DebugNodeStore()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(nodeStoreInfo))
}
}))
// Registration cache endpoint
debug.Handle("registration-cache", "Registration cache information", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cacheInfo := h.state.DebugRegistrationCache()
cacheJSON, err := json.MarshalIndent(cacheInfo, "", " ")
dmJSON, err := json.MarshalIndent(dm, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(cacheJSON)
w.Write(dmJSON)
}))
// Routes endpoint
debug.Handle("routes", "Primary routes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
routes := h.state.DebugRoutes()
routesJSON, err := json.MarshalIndent(routes, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(routesJSON)
} else {
// Default to text/plain for backward compatibility
routes := h.state.DebugRoutesString()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(routes))
}
}))
// Policy manager endpoint
debug.Handle("policy-manager", "Policy manager state", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
policyManagerInfo := h.state.DebugPolicyManagerJSON()
policyManagerJSON, err := json.MarshalIndent(policyManagerInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(policyManagerJSON)
} else {
// Default to text/plain for backward compatibility
policyManagerInfo := h.state.DebugPolicyManager()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(policyManagerInfo))
}
}))
debug.Handle("mapresponses", "Map responses for all nodes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
res, err := h.mapBatcher.DebugMapResponses()
if err != nil {
httpError(w, err)
return
}
if res == nil {
w.WriteHeader(http.StatusOK)
w.Write([]byte("HEADSCALE_DEBUG_DUMP_MAPRESPONSE_PATH not set"))
return
}
resJSON, err := json.MarshalIndent(res, "", " ")
if err != nil {
httpError(w, err)
return
}
debug.Handle("registration-cache", "Pending registrations", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// TODO(kradalby): This should be replaced with a proper state method that returns registration info
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(resJSON)
w.Write([]byte("{}")) // For now, return empty object
}))
// Batcher endpoint
debug.Handle("batcher", "Batcher connected nodes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Check Accept header to determine response format
acceptHeader := r.Header.Get("Accept")
wantsJSON := strings.Contains(acceptHeader, "application/json")
if wantsJSON {
batcherInfo := h.debugBatcherJSON()
batcherJSON, err := json.MarshalIndent(batcherInfo, "", " ")
if err != nil {
httpError(w, err)
return
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
w.Write(batcherJSON)
} else {
// Default to text/plain for backward compatibility
batcherInfo := h.debugBatcher()
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(batcherInfo))
}
debug.Handle("routes", "Routes", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(h.state.PrimaryRoutesString()))
}))
debug.Handle("policy-manager", "Policy Manager", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write([]byte(h.state.PolicyDebugString()))
}))
err := statsviz.Register(debugMux)
@@ -285,124 +138,3 @@ func (h *Headscale) debugHTTPServer() *http.Server {
return debugHTTPServer
}
// debugBatcher returns debug information about the batcher's connected nodes.
func (h *Headscale) debugBatcher() string {
var sb strings.Builder
sb.WriteString("=== Batcher Connected Nodes ===\n\n")
totalNodes := 0
connectedCount := 0
// Collect nodes and sort them by ID
type nodeStatus struct {
id types.NodeID
connected bool
activeConnections int
}
var nodes []nodeStatus
// Try to get detailed debug info if we have a LockFreeBatcher
if batcher, ok := h.mapBatcher.(*mapper.LockFreeBatcher); ok {
debugInfo := batcher.Debug()
for nodeID, info := range debugInfo {
nodes = append(nodes, nodeStatus{
id: nodeID,
connected: info.Connected,
activeConnections: info.ActiveConnections,
})
totalNodes++
if info.Connected {
connectedCount++
}
}
} else {
// Fallback to basic connection info
connectedMap := h.mapBatcher.ConnectedMap()
connectedMap.Range(func(nodeID types.NodeID, connected bool) bool {
nodes = append(nodes, nodeStatus{
id: nodeID,
connected: connected,
activeConnections: 0,
})
totalNodes++
if connected {
connectedCount++
}
return true
})
}
// Sort by node ID
for i := 0; i < len(nodes); i++ {
for j := i + 1; j < len(nodes); j++ {
if nodes[i].id > nodes[j].id {
nodes[i], nodes[j] = nodes[j], nodes[i]
}
}
}
// Output sorted nodes
for _, node := range nodes {
status := "disconnected"
if node.connected {
status = "connected"
}
if node.activeConnections > 0 {
sb.WriteString(fmt.Sprintf("Node %d:\t%s (%d connections)\n", node.id, status, node.activeConnections))
} else {
sb.WriteString(fmt.Sprintf("Node %d:\t%s\n", node.id, status))
}
}
sb.WriteString(fmt.Sprintf("\nSummary: %d connected, %d total\n", connectedCount, totalNodes))
return sb.String()
}
// DebugBatcherInfo represents batcher connection information in a structured format.
type DebugBatcherInfo struct {
ConnectedNodes map[string]DebugBatcherNodeInfo `json:"connected_nodes"` // NodeID -> node connection info
TotalNodes int `json:"total_nodes"`
}
// DebugBatcherNodeInfo represents connection information for a single node.
type DebugBatcherNodeInfo struct {
Connected bool `json:"connected"`
ActiveConnections int `json:"active_connections"`
}
// debugBatcherJSON returns structured debug information about the batcher's connected nodes.
func (h *Headscale) debugBatcherJSON() DebugBatcherInfo {
info := DebugBatcherInfo{
ConnectedNodes: make(map[string]DebugBatcherNodeInfo),
TotalNodes: 0,
}
// Try to get detailed debug info if we have a LockFreeBatcher
if batcher, ok := h.mapBatcher.(*mapper.LockFreeBatcher); ok {
debugInfo := batcher.Debug()
for nodeID, debugData := range debugInfo {
info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
Connected: debugData.Connected,
ActiveConnections: debugData.ActiveConnections,
}
info.TotalNodes++
}
} else {
// Fallback to basic connection info
connectedMap := h.mapBatcher.ConnectedMap()
connectedMap.Range(func(nodeID types.NodeID, connected bool) bool {
info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
Connected: connected,
ActiveConnections: 0,
}
info.TotalNodes++
return true
})
}
return info
}
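
Most of the debug endpoints above repeat the same branch on the `Accept` header: JSON when the client asks for it, plain text otherwise. A small sketch of that pattern factored into a reusable wrapper; the helper name and signature are not part of the diff, only an illustration of the negotiation logic:

```go
package main

import (
	"encoding/json"
	"net/http"
	"strings"
)

// debugHandler serves jsonValue as indented JSON when the client sends an
// Accept header containing application/json, and textValue as text/plain
// otherwise, mirroring the branches in the debug endpoints above.
func debugHandler(jsonValue func() any, textValue func() string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if strings.Contains(r.Header.Get("Accept"), "application/json") {
			body, err := json.MarshalIndent(jsonValue(), "", "  ")
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			w.Header().Set("Content-Type", "application/json")
			w.Write(body)
			return
		}

		w.Header().Set("Content-Type", "text/plain")
		w.Write([]byte(textValue()))
	}
}

func main() {
	mux := http.NewServeMux()
	mux.Handle("/debug/example", debugHandler(
		func() any { return map[string]int{"nodes": 3} },
		func() string { return "nodes: 3\n" },
	))
	http.ListenAndServe("localhost:8080", mux)
}
```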

View File

@@ -1,23 +1,16 @@
package derp
import (
"cmp"
"context"
"encoding/json"
"hash/crc64"
"io"
"maps"
"math/rand"
"net/http"
"net/url"
"os"
"reflect"
"slices"
"sync"
"time"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/spf13/viper"
"github.com/rs/zerolog/log"
"gopkg.in/yaml.v3"
"tailscale.com/tailcfg"
)
@@ -83,98 +76,56 @@ func mergeDERPMaps(derpMaps []*tailcfg.DERPMap) *tailcfg.DERPMap {
maps.Copy(result.Regions, derpMap.Regions)
}
for id, region := range result.Regions {
if region == nil {
delete(result.Regions, id)
}
}
return &result
}
func GetDERPMap(cfg types.DERPConfig) (*tailcfg.DERPMap, error) {
func GetDERPMap(cfg types.DERPConfig) *tailcfg.DERPMap {
var derpMaps []*tailcfg.DERPMap
if cfg.DERPMap != nil {
derpMaps = append(derpMaps, cfg.DERPMap)
}
for _, addr := range cfg.URLs {
derpMap, err := loadDERPMapFromURL(addr)
for _, path := range cfg.Paths {
log.Debug().
Str("func", "GetDERPMap").
Str("path", path).
Msg("Loading DERPMap from path")
derpMap, err := loadDERPMapFromPath(path)
if err != nil {
return nil, err
log.Error().
Str("func", "GetDERPMap").
Str("path", path).
Err(err).
Msg("Could not load DERP map from path")
break
}
derpMaps = append(derpMaps, derpMap)
}
for _, path := range cfg.Paths {
derpMap, err := loadDERPMapFromPath(path)
for _, addr := range cfg.URLs {
derpMap, err := loadDERPMapFromURL(addr)
log.Debug().
Str("func", "GetDERPMap").
Str("url", addr.String()).
Msg("Loading DERPMap from path")
if err != nil {
return nil, err
log.Error().
Str("func", "GetDERPMap").
Str("url", addr.String()).
Err(err).
Msg("Could not load DERP map from path")
break
}
derpMaps = append(derpMaps, derpMap)
}
derpMap := mergeDERPMaps(derpMaps)
shuffleDERPMap(derpMap)
return derpMap, nil
}
func shuffleDERPMap(dm *tailcfg.DERPMap) {
if dm == nil || len(dm.Regions) == 0 {
return
}
// Collect region IDs and sort them to ensure deterministic iteration order.
// Map iteration order is non-deterministic in Go, which would cause the
// shuffle to be non-deterministic even with a fixed seed.
ids := make([]int, 0, len(dm.Regions))
for id := range dm.Regions {
ids = append(ids, id)
}
slices.Sort(ids)
for _, id := range ids {
region := dm.Regions[id]
if len(region.Nodes) == 0 {
continue
}
dm.Regions[id] = shuffleRegionNoClone(region)
}
}
var crc64Table = crc64.MakeTable(crc64.ISO)
var (
derpRandomOnce sync.Once
derpRandomInst *rand.Rand
derpRandomMu sync.Mutex
)
func derpRandom() *rand.Rand {
derpRandomMu.Lock()
defer derpRandomMu.Unlock()
derpRandomOnce.Do(func() {
seed := cmp.Or(viper.GetString("dns.base_domain"), time.Now().String())
rnd := rand.New(rand.NewSource(0))
rnd.Seed(int64(crc64.Checksum([]byte(seed), crc64Table)))
derpRandomInst = rnd
})
return derpRandomInst
}
func resetDerpRandomForTesting() {
derpRandomMu.Lock()
defer derpRandomMu.Unlock()
derpRandomOnce = sync.Once{}
derpRandomInst = nil
}
func shuffleRegionNoClone(r *tailcfg.DERPRegion) *tailcfg.DERPRegion {
derpRandom().Shuffle(len(r.Nodes), reflect.Swapper(r.Nodes))
return r
log.Trace().Interface("derpMap", derpMap).Msg("DERPMap loaded")
return derpMap
}
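
One side of the GetDERPMap hunk seeds a shared `rand.Rand` from a CRC-64 checksum of `dns.base_domain`, so instances configured with the same base domain shuffle DERP nodes into the same order (and region IDs are iterated in sorted order, since Go map iteration would otherwise defeat the fixed seed). A standalone sketch of that deterministic shuffle, with illustrative data:

```go
package main

import (
	"fmt"
	"hash/crc64"
	"math/rand"
)

// seededShuffle shuffles items in place, deterministically for a given seed
// string: the same seed always yields the same permutation.
func seededShuffle(seed string, items []string) {
	sum := crc64.Checksum([]byte(seed), crc64.MakeTable(crc64.ISO))
	rnd := rand.New(rand.NewSource(int64(sum)))
	rnd.Shuffle(len(items), func(i, j int) {
		items[i], items[j] = items[j], items[i]
	})
}

func main() {
	a := []string{"derp1f", "derp1g", "derp1h", "derp1i"}
	b := []string{"derp1f", "derp1g", "derp1h", "derp1i"}

	seededShuffle("example.com", a)
	seededShuffle("example.com", b)

	fmt.Println(a) // same permutation...
	fmt.Println(b) // ...because the seed is the same
}
```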

View File

@@ -1,350 +0,0 @@
package derp
import (
"testing"
"github.com/google/go-cmp/cmp"
"github.com/spf13/viper"
"tailscale.com/tailcfg"
)
func TestShuffleDERPMapDeterministic(t *testing.T) {
tests := []struct {
name string
baseDomain string
derpMap *tailcfg.DERPMap
expected *tailcfg.DERPMap
}{
{
name: "single region with 4 nodes",
baseDomain: "test1.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "nyc",
RegionName: "New York City",
Nodes: []*tailcfg.DERPNode{
{Name: "1f", RegionID: 1, HostName: "derp1f.tailscale.com"},
{Name: "1g", RegionID: 1, HostName: "derp1g.tailscale.com"},
{Name: "1h", RegionID: 1, HostName: "derp1h.tailscale.com"},
{Name: "1i", RegionID: 1, HostName: "derp1i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "nyc",
RegionName: "New York City",
Nodes: []*tailcfg.DERPNode{
{Name: "1g", RegionID: 1, HostName: "derp1g.tailscale.com"},
{Name: "1f", RegionID: 1, HostName: "derp1f.tailscale.com"},
{Name: "1i", RegionID: 1, HostName: "derp1i.tailscale.com"},
{Name: "1h", RegionID: 1, HostName: "derp1h.tailscale.com"},
},
},
},
},
},
{
name: "multiple regions with nodes",
baseDomain: "test2.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
10: {
RegionID: 10,
RegionCode: "sea",
RegionName: "Seattle",
Nodes: []*tailcfg.DERPNode{
{Name: "10b", RegionID: 10, HostName: "derp10b.tailscale.com"},
{Name: "10c", RegionID: 10, HostName: "derp10c.tailscale.com"},
{Name: "10d", RegionID: 10, HostName: "derp10d.tailscale.com"},
},
},
2: {
RegionID: 2,
RegionCode: "sfo",
RegionName: "San Francisco",
Nodes: []*tailcfg.DERPNode{
{Name: "2d", RegionID: 2, HostName: "derp2d.tailscale.com"},
{Name: "2e", RegionID: 2, HostName: "derp2e.tailscale.com"},
{Name: "2f", RegionID: 2, HostName: "derp2f.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
10: {
RegionID: 10,
RegionCode: "sea",
RegionName: "Seattle",
Nodes: []*tailcfg.DERPNode{
{Name: "10d", RegionID: 10, HostName: "derp10d.tailscale.com"},
{Name: "10c", RegionID: 10, HostName: "derp10c.tailscale.com"},
{Name: "10b", RegionID: 10, HostName: "derp10b.tailscale.com"},
},
},
2: {
RegionID: 2,
RegionCode: "sfo",
RegionName: "San Francisco",
Nodes: []*tailcfg.DERPNode{
{Name: "2d", RegionID: 2, HostName: "derp2d.tailscale.com"},
{Name: "2e", RegionID: 2, HostName: "derp2e.tailscale.com"},
{Name: "2f", RegionID: 2, HostName: "derp2f.tailscale.com"},
},
},
},
},
},
{
name: "large region with many nodes",
baseDomain: "test3.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
},
{
name: "same region different base domain",
baseDomain: "different.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
},
},
},
},
},
{
name: "same dataset with another base domain",
baseDomain: "another.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
},
{
name: "same dataset with yet another base domain",
baseDomain: "yetanother.example.com",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
},
},
},
},
expected: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
4: {
RegionID: 4,
RegionCode: "fra",
RegionName: "Frankfurt",
Nodes: []*tailcfg.DERPNode{
{Name: "4i", RegionID: 4, HostName: "derp4i.tailscale.com"},
{Name: "4h", RegionID: 4, HostName: "derp4h.tailscale.com"},
{Name: "4f", RegionID: 4, HostName: "derp4f.tailscale.com"},
{Name: "4g", RegionID: 4, HostName: "derp4g.tailscale.com"},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
viper.Set("dns.base_domain", tt.baseDomain)
defer viper.Reset()
resetDerpRandomForTesting()
testMap := tt.derpMap.View().AsStruct()
shuffleDERPMap(testMap)
if diff := cmp.Diff(tt.expected, testMap); diff != "" {
t.Errorf("Shuffled DERP map doesn't match expected (-expected +actual):\n%s", diff)
}
})
}
}
func TestShuffleDERPMapEdgeCases(t *testing.T) {
tests := []struct {
name string
derpMap *tailcfg.DERPMap
}{
{
name: "nil derp map",
derpMap: nil,
},
{
name: "empty derp map",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{},
},
},
{
name: "region with no nodes",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "empty",
RegionName: "Empty Region",
Nodes: []*tailcfg.DERPNode{},
},
},
},
},
{
name: "region with single node",
derpMap: &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "single",
RegionName: "Single Node Region",
Nodes: []*tailcfg.DERPNode{
{Name: "1a", RegionID: 1, HostName: "derp1a.tailscale.com"},
},
},
},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
shuffleDERPMap(tt.derpMap)
})
}
}
func TestShuffleDERPMapWithoutBaseDomain(t *testing.T) {
viper.Reset()
resetDerpRandomForTesting()
derpMap := &tailcfg.DERPMap{
Regions: map[int]*tailcfg.DERPRegion{
1: {
RegionID: 1,
RegionCode: "test",
RegionName: "Test Region",
Nodes: []*tailcfg.DERPNode{
{Name: "1a", RegionID: 1, HostName: "derp1a.test.com"},
{Name: "1b", RegionID: 1, HostName: "derp1b.test.com"},
{Name: "1c", RegionID: 1, HostName: "derp1c.test.com"},
{Name: "1d", RegionID: 1, HostName: "derp1d.test.com"},
},
},
},
}
original := derpMap.View().AsStruct()
shuffleDERPMap(derpMap)
if len(derpMap.Regions) != 1 || len(derpMap.Regions[1].Nodes) != 4 {
t.Error("Shuffle corrupted DERP map structure")
}
originalNodes := make(map[string]bool)
for _, node := range original.Regions[1].Nodes {
originalNodes[node.Name] = true
}
shuffledNodes := make(map[string]bool)
for _, node := range derpMap.Regions[1].Nodes {
shuffledNodes[node.Name] = true
}
if diff := cmp.Diff(originalNodes, shuffledNodes); diff != "" {
t.Errorf("Shuffle changed node set (-original +shuffled):\n%s", diff)
}
}

View File

@@ -20,7 +20,6 @@ import (
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"tailscale.com/derp"
"tailscale.com/envknob"
"tailscale.com/net/stun"
"tailscale.com/net/wsconn"
"tailscale.com/tailcfg"
@@ -36,11 +35,6 @@ const (
DerpVerifyScheme = "headscale-derp-verify"
)
// debugUseDERPIP is a debug-only flag that causes the DERP server to resolve
// hostnames to IP addresses when generating the DERP region configuration.
// This is useful for integration testing where DNS resolution may be unreliable.
var debugUseDERPIP = envknob.Bool("HEADSCALE_DEBUG_DERP_USE_IP")
type DERPServer struct {
serverURL string
key key.NodePrivate
@@ -76,10 +70,7 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
}
var host string
var port int
var portStr string
// Extract hostname and port from URL
host, portStr, err = net.SplitHostPort(serverURL.Host)
host, portStr, err := net.SplitHostPort(serverURL.Host)
if err != nil {
if serverURL.Scheme == "https" {
host = serverURL.Host
@@ -95,19 +86,6 @@ func (d *DERPServer) GenerateRegion() (tailcfg.DERPRegion, error) {
}
}
// If debug flag is set, resolve hostname to IP address
if debugUseDERPIP {
ips, err := net.LookupIP(host)
if err != nil {
log.Error().Caller().Err(err).Msgf("Failed to resolve DERP hostname %s to IP, using hostname", host)
} else if len(ips) > 0 {
// Use the first IP address
ipStr := ips[0].String()
log.Info().Caller().Msgf("HEADSCALE_DEBUG_DERP_USE_IP: Resolved %s to %s", host, ipStr)
host = ipStr
}
}
localDERPregion := tailcfg.DERPRegion{
RegionID: d.cfg.ServerRegionID,
RegionCode: d.cfg.ServerRegionCode,
@@ -161,7 +139,7 @@ func (d *DERPServer) DERPHandler(
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
return
@@ -199,7 +177,7 @@ func (d *DERPServer) serveWebsocket(writer http.ResponseWriter, req *http.Reques
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
return
@@ -229,7 +207,7 @@ func (d *DERPServer) servePlain(writer http.ResponseWriter, req *http.Request) {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
return
@@ -245,7 +223,7 @@ func (d *DERPServer) servePlain(writer http.ResponseWriter, req *http.Request) {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
return
@@ -284,7 +262,7 @@ func DERPProbeHandler(
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
}
}
@@ -298,7 +276,7 @@ func DERPProbeHandler(
// An example implementation is found here https://derp.tailscale.com/bootstrap-dns
// The coordination server is included automatically, since the local DERP uses the same DNS name as d.serverURL.
func DERPBootstrapDNSHandler(
derpMap tailcfg.DERPMapView,
derpMap *tailcfg.DERPMap,
) func(http.ResponseWriter, *http.Request) {
return func(
writer http.ResponseWriter,
@@ -309,18 +287,18 @@ func DERPBootstrapDNSHandler(
resolvCtx, cancel := context.WithTimeout(req.Context(), time.Minute)
defer cancel()
var resolver net.Resolver
for _, region := range derpMap.Regions().All() {
for _, node := range region.Nodes().All() { // we don't care if we override some nodes
addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName())
for _, region := range derpMap.Regions {
for _, node := range region.Nodes { // we don't care if we override some nodes
addrs, err := resolver.LookupIP(resolvCtx, "ip", node.HostName)
if err != nil {
log.Trace().
Caller().
Err(err).
Msgf("bootstrap DNS lookup failed %q", node.HostName())
Msgf("bootstrap DNS lookup failed %q", node.HostName)
continue
}
dnsEntries[node.HostName()] = addrs
dnsEntries[node.HostName] = addrs
}
}
writer.Header().Set("Content-Type", "application/json")
@@ -330,7 +308,7 @@ func DERPBootstrapDNSHandler(
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
}
}
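
The bootstrap-DNS handler above resolves every DERP node hostname and writes a hostname-to-addresses map as JSON, skipping lookups that fail. A minimal sketch of the same shape, which also shows what a client of the endpoint would decode (hostnames and the listen address are illustrative):

```go
package main

import (
	"context"
	"encoding/json"
	"net"
	"net/http"
	"time"
)

// bootstrapDNS resolves each hostname and returns the same structure the
// handler above encodes: hostname -> resolved IPs. Failed lookups are
// skipped, matching the handler's behaviour.
func bootstrapDNS(ctx context.Context, hostnames []string) map[string][]net.IP {
	ctx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()

	var resolver net.Resolver
	entries := make(map[string][]net.IP)
	for _, host := range hostnames {
		addrs, err := resolver.LookupIP(ctx, "ip", host)
		if err != nil {
			continue
		}
		entries[host] = addrs
	}
	return entries
}

func handler(w http.ResponseWriter, r *http.Request) {
	entries := bootstrapDNS(r.Context(), []string{"derp1f.tailscale.com"})
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(entries)
}

func main() {
	http.HandleFunc("/bootstrap-dns", handler)
	http.ListenAndServe("localhost:8080", nil)
}
```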

View File

@@ -15,6 +15,7 @@ import (
"strings"
"time"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog/log"
"github.com/samber/lo"
"google.golang.org/grpc/codes"
@@ -24,7 +25,6 @@ import (
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/types/views"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/state"
@@ -59,10 +59,9 @@ func (api headscaleV1APIServer) CreateUser(
return nil, status.Errorf(codes.Internal, "failed to create user: %s", err)
}
c := change.UserAdded(types.UserID(user.ID))
// TODO(kradalby): Both of these might be policy changes, find a better way to merge.
if !policyChanged.Empty() {
c := change.UserAdded(types.UserID(user.ID))
if policyChanged {
c.Change = change.Policy
}
@@ -80,13 +79,15 @@ func (api headscaleV1APIServer) RenameUser(
return nil, err
}
_, c, err := api.h.state.RenameUser(types.UserID(oldUser.ID), request.GetNewName())
_, policyChanged, err := api.h.state.RenameUser(types.UserID(oldUser.ID), request.GetNewName())
if err != nil {
return nil, err
}
// Send policy update notifications if needed
api.h.Change(c)
if policyChanged {
api.h.Change(change.PolicyChange())
}
newUser, err := api.h.state.GetUserByName(request.GetNewName())
if err != nil {
@@ -206,27 +207,6 @@ func (api headscaleV1APIServer) ExpirePreAuthKey(
return &v1.ExpirePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) DeletePreAuthKey(
ctx context.Context,
request *v1.DeletePreAuthKeyRequest,
) (*v1.DeletePreAuthKeyResponse, error) {
preAuthKey, err := api.h.state.GetPreAuthKey(request.Key)
if err != nil {
return nil, err
}
if uint64(preAuthKey.User.ID) != request.GetUser() {
return nil, fmt.Errorf("preauth key does not belong to user")
}
err = api.h.state.DeletePreAuthKey(preAuthKey)
if err != nil {
return nil, err
}
return &v1.DeletePreAuthKeyResponse{}, nil
}
func (api headscaleV1APIServer) ListPreAuthKeys(
ctx context.Context,
request *v1.ListPreAuthKeysRequest,
@@ -258,7 +238,6 @@ func (api headscaleV1APIServer) RegisterNode(
request *v1.RegisterNodeRequest,
) (*v1.RegisterNodeResponse, error) {
log.Trace().
Caller().
Str("user", request.GetUser()).
Str("registration_id", request.GetKey()).
Msg("Registering node")
@@ -294,13 +273,13 @@ func (api headscaleV1APIServer) RegisterNode(
// ensure we send an update.
// This works, but might be another good candidate for doing some sort of
// eventbus.
routeChange, err := api.h.state.AutoApproveRoutes(node)
_ = api.h.state.AutoApproveRoutes(node)
_, _, err = api.h.state.SaveNode(node)
if err != nil {
return nil, fmt.Errorf("auto approving routes: %w", err)
return nil, fmt.Errorf("saving auto approved routes to node: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
api.h.Change(nodeChange, routeChange)
api.h.Change(nodeChange)
return &v1.RegisterNodeResponse{Node: node.Proto()}, nil
}
@@ -309,13 +288,17 @@ func (api headscaleV1APIServer) GetNode(
ctx context.Context,
request *v1.GetNodeRequest,
) (*v1.GetNodeResponse, error) {
node, ok := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if !ok {
return nil, status.Errorf(codes.NotFound, "node not found")
node, err := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if err != nil {
return nil, err
}
resp := node.Proto()
// Populate the online field based on
// currently connected nodes.
resp.Online = api.h.mapBatcher.IsConnected(node.ID)
return &v1.GetNodeResponse{Node: resp}, nil
}
@@ -340,8 +323,7 @@ func (api headscaleV1APIServer) SetTags(
api.h.Change(nodeChange)
log.Trace().
Caller().
Str("node", node.Hostname()).
Str("node", node.Hostname).
Strs("tags", request.GetTags()).
Msg("Changing tags of node")
@@ -352,13 +334,7 @@ func (api headscaleV1APIServer) SetApprovedRoutes(
ctx context.Context,
request *v1.SetApprovedRoutesRequest,
) (*v1.SetApprovedRoutesResponse, error) {
log.Debug().
Caller().
Uint64("node.id", request.GetNodeId()).
Strs("requestedRoutes", request.GetRoutes()).
Msg("gRPC SetApprovedRoutes called")
var newApproved []netip.Prefix
var routes []netip.Prefix
for _, route := range request.GetRoutes() {
prefix, err := netip.ParsePrefix(route)
if err != nil {
@@ -368,35 +344,31 @@ func (api headscaleV1APIServer) SetApprovedRoutes(
// If the prefix is an exit route, add both. The client expects both
// to annotate the node as an exit node.
if prefix == tsaddr.AllIPv4() || prefix == tsaddr.AllIPv6() {
newApproved = append(newApproved, tsaddr.AllIPv4(), tsaddr.AllIPv6())
routes = append(routes, tsaddr.AllIPv4(), tsaddr.AllIPv6())
} else {
newApproved = append(newApproved, prefix)
routes = append(routes, prefix)
}
}
tsaddr.SortPrefixes(newApproved)
newApproved = slices.Compact(newApproved)
tsaddr.SortPrefixes(routes)
routes = slices.Compact(routes)
node, nodeChange, err := api.h.state.SetApprovedRoutes(types.NodeID(request.GetNodeId()), newApproved)
node, nodeChange, err := api.h.state.SetApprovedRoutes(types.NodeID(request.GetNodeId()), routes)
if err != nil {
return nil, status.Error(codes.InvalidArgument, err.Error())
}
routeChange := api.h.state.SetNodeRoutes(node.ID, node.SubnetRoutes()...)
// Always propagate node changes from SetApprovedRoutes
api.h.Change(nodeChange)
proto := node.Proto()
// Populate SubnetRoutes with PrimaryRoutes to ensure it includes only the
// routes that are actively served from the node (per architectural requirement in types/node.go)
primaryRoutes := api.h.state.GetNodePrimaryRoutes(node.ID())
proto.SubnetRoutes = util.PrefixesToString(primaryRoutes)
// If routes changed, propagate those changes too
if !routeChange.Empty() {
api.h.Change(routeChange)
}
log.Debug().
Caller().
Uint64("node.id", node.ID().Uint64()).
Strs("approvedRoutes", util.PrefixesToString(node.ApprovedRoutes().AsSlice())).
Strs("primaryRoutes", util.PrefixesToString(primaryRoutes)).
Strs("finalSubnetRoutes", proto.SubnetRoutes).
Msg("gRPC SetApprovedRoutes completed")
proto := node.Proto()
proto.SubnetRoutes = util.PrefixesToString(api.h.state.GetNodePrimaryRoutes(node.ID))
return &v1.SetApprovedRoutesResponse{Node: proto}, nil
}
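
The SetApprovedRoutes hunk expands an exit route into both the IPv4 and IPv6 catch-all prefixes before sorting and de-duplicating, since clients only mark a node as an exit node when both are advertised. A small sketch of just that normalization step, under an illustrative function name:

```go
package main

import (
	"fmt"
	"net/netip"
	"slices"

	"tailscale.com/net/tsaddr"
)

// normalizeApprovedRoutes parses route strings and expands an exit route
// (0.0.0.0/0 or ::/0) into both catch-all prefixes, as in the hunk above,
// then sorts and removes duplicates.
func normalizeApprovedRoutes(routeStrs []string) ([]netip.Prefix, error) {
	var routes []netip.Prefix
	for _, s := range routeStrs {
		prefix, err := netip.ParsePrefix(s)
		if err != nil {
			return nil, err
		}
		if prefix == tsaddr.AllIPv4() || prefix == tsaddr.AllIPv6() {
			routes = append(routes, tsaddr.AllIPv4(), tsaddr.AllIPv6())
		} else {
			routes = append(routes, prefix)
		}
	}

	tsaddr.SortPrefixes(routes)
	return slices.Compact(routes), nil
}

func main() {
	routes, _ := normalizeApprovedRoutes([]string{"10.0.0.0/24", "0.0.0.0/0"})
	fmt.Println(routes) // exit route expanded to 0.0.0.0/0 and ::/0, plus 10.0.0.0/24
}
```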
@@ -418,9 +390,9 @@ func (api headscaleV1APIServer) DeleteNode(
ctx context.Context,
request *v1.DeleteNodeRequest,
) (*v1.DeleteNodeResponse, error) {
node, ok := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if !ok {
return nil, status.Errorf(codes.NotFound, "node not found")
node, err := api.h.state.GetNodeByID(types.NodeID(request.GetNodeId()))
if err != nil {
return nil, err
}
nodeChange, err := api.h.state.DeleteNode(node)
@@ -437,12 +409,9 @@ func (api headscaleV1APIServer) ExpireNode(
ctx context.Context,
request *v1.ExpireNodeRequest,
) (*v1.ExpireNodeResponse, error) {
expiry := time.Now()
if request.GetExpiry() != nil {
expiry = request.GetExpiry().AsTime()
}
now := time.Now()
node, nodeChange, err := api.h.state.SetNodeExpiry(types.NodeID(request.GetNodeId()), expiry)
node, nodeChange, err := api.h.state.SetNodeExpiry(types.NodeID(request.GetNodeId()), now)
if err != nil {
return nil, err
}
@@ -451,9 +420,8 @@ func (api headscaleV1APIServer) ExpireNode(
api.h.Change(nodeChange)
log.Trace().
Caller().
Str("node", node.Hostname()).
Time("expiry", *node.AsStruct().Expiry).
Str("node", node.Hostname).
Time("expiry", *node.Expiry).
Msg("node expired")
return &v1.ExpireNodeResponse{Node: node.Proto()}, nil
@@ -472,8 +440,7 @@ func (api headscaleV1APIServer) RenameNode(
api.h.Change(nodeChange)
log.Trace().
Caller().
Str("node", node.Hostname()).
Str("node", node.Hostname).
Str("new_name", request.GetNewName()).
Msg("node renamed")
@@ -488,45 +455,58 @@ func (api headscaleV1APIServer) ListNodes(
// the filtering of nodes by user, vs nodes as a whole can
// probably be done once.
// TODO(kradalby): This should be done in one tx.
IsConnected := api.h.mapBatcher.ConnectedMap()
if request.GetUser() != "" {
user, err := api.h.state.GetUserByName(request.GetUser())
if err != nil {
return nil, err
}
nodes := api.h.state.ListNodesByUser(types.UserID(user.ID))
nodes, err := api.h.state.ListNodesByUser(types.UserID(user.ID))
if err != nil {
return nil, err
}
response := nodesToProto(api.h.state, nodes)
response := nodesToProto(api.h.state, IsConnected, nodes)
return &v1.ListNodesResponse{Nodes: response}, nil
}
nodes := api.h.state.ListNodes()
nodes, err := api.h.state.ListNodes()
if err != nil {
return nil, err
}
response := nodesToProto(api.h.state, nodes)
sort.Slice(nodes, func(i, j int) bool {
return nodes[i].ID < nodes[j].ID
})
response := nodesToProto(api.h.state, IsConnected, nodes)
return &v1.ListNodesResponse{Nodes: response}, nil
}
func nodesToProto(state *state.State, nodes views.Slice[types.NodeView]) []*v1.Node {
response := make([]*v1.Node, nodes.Len())
for index, node := range nodes.All() {
func nodesToProto(state *state.State, IsConnected *xsync.MapOf[types.NodeID, bool], nodes types.Nodes) []*v1.Node {
response := make([]*v1.Node, len(nodes))
for index, node := range nodes {
resp := node.Proto()
// Populate the online field based on
// currently connected nodes.
if val, ok := IsConnected.Load(node.ID); ok && val {
resp.Online = true
}
var tags []string
for _, tag := range node.RequestTags() {
if state.NodeCanHaveTag(node, tag) {
if state.NodeCanHaveTag(node.View(), tag) {
tags = append(tags, tag)
}
}
resp.ValidTags = lo.Uniq(append(tags, node.ForcedTags().AsSlice()...))
resp.SubnetRoutes = util.PrefixesToString(append(state.GetNodePrimaryRoutes(node.ID()), node.ExitRoutes()...))
resp.ValidTags = lo.Uniq(append(tags, node.ForcedTags...))
resp.SubnetRoutes = util.PrefixesToString(append(state.GetNodePrimaryRoutes(node.ID), node.ExitRoutes()...))
response[index] = resp
}
sort.Slice(response, func(i, j int) bool {
return response[i].Id < response[j].Id
})
return response
}
@@ -550,7 +530,7 @@ func (api headscaleV1APIServer) BackfillNodeIPs(
ctx context.Context,
request *v1.BackfillNodeIPsRequest,
) (*v1.BackfillNodeIPsResponse, error) {
log.Trace().Caller().Msg("Backfill called")
log.Trace().Msg("Backfill called")
if !request.Confirmed {
return nil, errors.New("not confirmed, aborting")
@@ -694,15 +674,17 @@ func (api headscaleV1APIServer) SetPolicy(
// a scenario where they might be allowed if the server has no nodes
// yet, but it should help for the general case and for hot reloading
// configurations.
nodes := api.h.state.ListNodes()
_, err := api.h.state.SetPolicy([]byte(p))
nodes, err := api.h.state.ListNodes()
if err != nil {
return nil, fmt.Errorf("loading nodes from database to validate policy: %w", err)
}
changed, err := api.h.state.SetPolicy([]byte(p))
if err != nil {
return nil, fmt.Errorf("setting policy: %w", err)
}
if nodes.Len() > 0 {
_, err = api.h.state.SSHPolicy(nodes.At(0))
if len(nodes) > 0 {
_, err = api.h.state.SSHPolicy(nodes[0].View())
if err != nil {
return nil, fmt.Errorf("verifying SSH rules: %w", err)
}
@@ -713,20 +695,14 @@ func (api headscaleV1APIServer) SetPolicy(
return nil, err
}
// Always reload policy to ensure route re-evaluation, even if policy content hasn't changed.
// This ensures that routes are re-evaluated for auto-approval in cases where routes
// were manually disabled but could now be auto-approved with the current policy.
cs, err := api.h.state.ReloadPolicy()
if err != nil {
return nil, fmt.Errorf("reloading policy: %w", err)
}
// Only send update if the packet filter has changed.
if changed {
err = api.h.state.AutoApproveNodes()
if err != nil {
return nil, err
}
if len(cs) > 0 {
api.h.Change(cs...)
} else {
log.Debug().
Caller().
Msg("No policy changes to distribute because ReloadPolicy returned empty changeset")
api.h.Change(change.PolicyChange())
}
response := &v1.SetPolicyResponse{
@@ -734,10 +710,6 @@ func (api headscaleV1APIServer) SetPolicy(
UpdatedAt: timestamppb.New(updated.UpdatedAt),
}
log.Debug().
Caller().
Msg("gRPC SetPolicy completed successfully because response prepared")
return response, nil
}
@@ -760,12 +732,12 @@ func (api headscaleV1APIServer) DebugCreateNode(
Caller().
Interface("route-prefix", routes).
Interface("route-str", request.GetRoutes()).
Msg("Creating routes for node")
Msg("")
hostinfo := tailcfg.Hostinfo{
RoutableIPs: routes,
OS: "TestOS",
Hostname: request.GetName(),
Hostname: "DebugTestNode",
}
registrationId, err := types.RegistrationIDFromString(request.GetKey())
@@ -773,8 +745,8 @@ func (api headscaleV1APIServer) DebugCreateNode(
return nil, err
}
newNode := types.NewRegisterNode(
types.Node{
newNode := types.RegisterNode{
Node: types.Node{
NodeKey: key.NewNode().Public(),
MachineKey: key.NewMachine().Public(),
Hostname: request.GetName(),
@@ -785,10 +757,10 @@ func (api headscaleV1APIServer) DebugCreateNode(
Hostinfo: &hostinfo,
},
)
Registered: make(chan *types.Node),
}
log.Debug().
Caller().
Str("registration_id", registrationId.String()).
Msg("adding debug machine via CLI, appending to registration cache")
@@ -797,24 +769,4 @@ func (api headscaleV1APIServer) DebugCreateNode(
return &v1.DebugCreateNodeResponse{Node: newNode.Node.Proto()}, nil
}
func (api headscaleV1APIServer) Health(
ctx context.Context,
request *v1.HealthRequest,
) (*v1.HealthResponse, error) {
var healthErr error
response := &v1.HealthResponse{}
if err := api.h.state.PingDB(ctx); err != nil {
healthErr = fmt.Errorf("database ping failed: %w", err)
} else {
response.DatabaseConnectivity = true
}
if healthErr != nil {
log.Error().Err(healthErr).Msg("Health check failed")
}
return response, healthErr
}
func (api headscaleV1APIServer) mustEmbedUnimplementedHeadscaleServiceServer() {}

View File

@@ -91,22 +91,16 @@ func (h *Headscale) handleVerifyRequest(
var derpAdmitClientRequest tailcfg.DERPAdmitClientRequest
if err := json.Unmarshal(body, &derpAdmitClientRequest); err != nil {
return NewHTTPError(http.StatusBadRequest, "Bad Request: invalid JSON", fmt.Errorf("cannot parse derpAdmitClientRequest: %w", err))
return fmt.Errorf("cannot parse derpAdmitClientRequest: %w", err)
}
nodes := h.state.ListNodes()
// Check if any node has the requested NodeKey
var nodeKeyFound bool
for _, node := range nodes.All() {
if node.NodeKey() == derpAdmitClientRequest.NodePublic {
nodeKeyFound = true
break
}
nodes, err := h.state.ListNodes()
if err != nil {
return fmt.Errorf("cannot list nodes: %w", err)
}
resp := &tailcfg.DERPAdmitClientResponse{
Allow: nodeKeyFound,
Allow: nodes.ContainsNodeKey(derpAdmitClientRequest.NodePublic),
}
return json.NewEncoder(writer).Encode(resp)
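
The verify handler above decodes a `tailcfg.DERPAdmitClientRequest`, checks whether the node key is known, and replies with a `tailcfg.DERPAdmitClientResponse`. A reduced sketch of the same exchange against an in-memory key set (the set and listen address are illustrative):

```go
package main

import (
	"encoding/json"
	"net/http"

	"tailscale.com/tailcfg"
	"tailscale.com/types/key"
)

// admitHandler allows a DERP client only if its node key is known to the
// control plane, mirroring the handler in the hunk above.
func admitHandler(known map[key.NodePublic]bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var req tailcfg.DERPAdmitClientRequest
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			http.Error(w, "invalid JSON", http.StatusBadRequest)
			return
		}

		resp := tailcfg.DERPAdmitClientResponse{
			Allow: known[req.NodePublic],
		}
		json.NewEncoder(w).Encode(resp)
	}
}

func main() {
	known := map[key.NodePublic]bool{key.NewNode().Public(): true}
	http.Handle("/verify", admitHandler(known))
	http.ListenAndServe("localhost:8080", nil)
}
```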
@@ -186,39 +180,6 @@ func (h *Headscale) HealthHandler(
respond(nil)
}
func (h *Headscale) RobotsHandler(
writer http.ResponseWriter,
req *http.Request,
) {
writer.Header().Set("Content-Type", "text/plain")
writer.WriteHeader(http.StatusOK)
_, err := writer.Write([]byte("User-agent: *\nDisallow: /"))
if err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write HTTP response")
}
}
// VersionHandler returns version information about the Headscale server
// Listens in /version.
func (h *Headscale) VersionHandler(
writer http.ResponseWriter,
req *http.Request,
) {
writer.Header().Set("Content-Type", "application/json")
writer.WriteHeader(http.StatusOK)
versionInfo := types.GetVersionInfo()
if err := json.NewEncoder(writer).Encode(versionInfo); err != nil {
log.Error().
Caller().
Err(err).
Msg("Failed to write version response")
}
}
var codeStyleRegisterWebAPI = styles.Props{
styles.Display: "block",
styles.Padding: "20px",

View File

@@ -1,7 +1,6 @@
package mapper
import (
"errors"
"fmt"
"time"
@@ -9,7 +8,6 @@ import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/types/change"
"github.com/puzpuzpuz/xsync/v4"
"github.com/rs/zerolog/log"
"tailscale.com/tailcfg"
"tailscale.com/types/ptr"
)
@@ -20,13 +18,12 @@ type batcherFunc func(cfg *types.Config, state *state.State) Batcher
type Batcher interface {
Start()
Close()
AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion) error
RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool
AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, isRouter bool, version tailcfg.CapabilityVersion) error
RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse, isRouter bool)
IsConnected(id types.NodeID) bool
ConnectedMap() *xsync.Map[types.NodeID, bool]
AddWork(c ...change.ChangeSet)
AddWork(c change.ChangeSet)
MapResponseFromChange(id types.NodeID, c change.ChangeSet) (*tailcfg.MapResponse, error)
DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error)
}
func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *LockFreeBatcher {
@@ -37,7 +34,7 @@ func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *LockFreeB
// The size of this channel is arbitrarily chosen; the sizing should be revisited.
workCh: make(chan work, workers*200),
nodes: xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
nodes: xsync.NewMap[types.NodeID, *nodeConn](),
connected: xsync.NewMap[types.NodeID, *time.Time](),
pendingChanges: xsync.NewMap[types.NodeID, []change.ChangeSet](),
}
@@ -48,7 +45,6 @@ func NewBatcherAndMapper(cfg *types.Config, state *state.State) Batcher {
m := newMapper(cfg, state)
b := NewBatcher(cfg.Tuning.BatchChangeDelay, cfg.Tuning.BatcherWorkers, m)
m.batcher = b
return b
}
@@ -74,10 +70,8 @@ func generateMapResponse(nodeID types.NodeID, version tailcfg.CapabilityVersion,
return nil, fmt.Errorf("mapper is nil for nodeID %d", nodeID)
}
var (
mapResp *tailcfg.MapResponse
err error
)
var mapResp *tailcfg.MapResponse
var err error
switch c.Change {
case change.DERP:
@@ -88,46 +82,20 @@ func generateMapResponse(nodeID types.NodeID, version tailcfg.CapabilityVersion,
// TODO(kradalby): This can potentially be a peer update of the old and new subnet router.
mapResp, err = mapper.fullMapResponse(nodeID, version)
} else {
// Trust the change type for online/offline status to avoid race conditions
// between NodeStore updates and change processing
onlineStatus := c.Change == change.NodeCameOnline
mapResp, err = mapper.peerChangedPatchResponse(nodeID, []*tailcfg.PeerChange{
{
NodeID: c.NodeID.NodeID(),
Online: ptr.To(onlineStatus),
Online: ptr.To(c.Change == change.NodeCameOnline),
},
})
}
case change.NodeNewOrUpdate:
// If the node is the one being updated, we send a self update that preserves peer information
// to ensure the node sees changes to its own properties (e.g., hostname/DNS name changes)
// without losing its view of peer status during rapid reconnection cycles
if c.IsSelfUpdate(nodeID) {
mapResp, err = mapper.selfMapResponse(nodeID, version)
} else {
mapResp, err = mapper.peerChangeResponse(nodeID, version, c.NodeID)
}
mapResp, err = mapper.fullMapResponse(nodeID, version)
case change.NodeRemove:
mapResp, err = mapper.peerRemovedResponse(nodeID, c.NodeID)
case change.NodeKeyExpiry:
// If the node is the one whose key is expiring, we send a "full" self update
// as nodes will ignore patch updates about themselves (?).
if c.IsSelfUpdate(nodeID) {
mapResp, err = mapper.selfMapResponse(nodeID, version)
// mapResp, err = mapper.fullMapResponse(nodeID, version)
} else {
mapResp, err = mapper.peerChangedPatchResponse(nodeID, []*tailcfg.PeerChange{
{
NodeID: c.NodeID.NodeID(),
KeyExpiry: c.NodeExpiry,
},
})
}
default:
// The following change types will always hit this default case:
// change.Full, change.Policy
@@ -151,16 +119,11 @@ func generateMapResponse(nodeID types.NodeID, version tailcfg.CapabilityVersion,
// handleNodeChange generates and sends a [tailcfg.MapResponse] for a given node and [change.ChangeSet].
func handleNodeChange(nc nodeConnection, mapper *mapper, c change.ChangeSet) error {
if nc == nil {
return errors.New("nodeConnection is nil")
return fmt.Errorf("nodeConnection is nil")
}
nodeID := nc.nodeID()
log.Debug().Caller().Uint64("node.id", nodeID.Uint64()).Str("change.type", c.Change.String()).Msg("Node change processing started because change notification received")
var data *tailcfg.MapResponse
var err error
data, err = generateMapResponse(nodeID, nc.version(), mapper, c)
data, err := generateMapResponse(nodeID, nc.version(), mapper, c)
if err != nil {
return fmt.Errorf("generating map response for node %d: %w", nodeID, err)
}
@@ -171,8 +134,7 @@ func handleNodeChange(nc nodeConnection, mapper *mapper, c change.ChangeSet) err
}
// Send the map response
err = nc.send(data)
if err != nil {
if err := nc.send(data); err != nil {
return fmt.Errorf("sending map response to node %d: %w", nodeID, err)
}
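The switch above decides the wire shape per change kind: online/offline transitions become peer patches, removals become PeersRemoved, and policy or full changes fall back to a full map. A hedged sketch of queueing such a change through the batcher follows; the field names are taken from this diff, while `b` and `id` are assumed to be in scope.

```go
// Illustrative: announce that a node came online. Peers receive a patch with
// Online=true, whereas Full/Policy changes would instead trigger full map responses.
b.AddWork(change.ChangeSet{
	NodeID: id,
	Change: change.NodeCameOnline,
})
```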

View File

@@ -2,7 +2,6 @@ package mapper
import (
"context"
"crypto/rand"
"fmt"
"sync"
"sync/atomic"
@@ -22,7 +21,8 @@ type LockFreeBatcher struct {
mapper *mapper
workers int
nodes *xsync.Map[types.NodeID, *multiChannelNodeConn]
// Lock-free concurrent maps
nodes *xsync.Map[types.NodeID, *nodeConn]
connected *xsync.Map[types.NodeID, *time.Time]
// Work queue channel
@@ -32,6 +32,7 @@ type LockFreeBatcher struct {
// Batching state
pendingChanges *xsync.Map[types.NodeID, []change.ChangeSet]
batchMutex sync.RWMutex
// Metrics
totalNodes atomic.Int64
@@ -44,104 +45,87 @@ type LockFreeBatcher struct {
// AddNode registers a new node connection with the batcher and sends an initial map response.
// It creates or updates the node's connection data, validates the initial map generation,
// and notifies other nodes that this node has come online.
func (b *LockFreeBatcher) AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion) error {
addNodeStart := time.Now()
// TODO(kradalby): See if we can move the isRouter argument somewhere else.
func (b *LockFreeBatcher) AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, isRouter bool, version tailcfg.CapabilityVersion) error {
// First validate that we can generate initial map before doing anything else
fullSelfChange := change.FullSelf(id)
// Generate connection ID
connID := generateConnectionID()
// Create new connection entry
now := time.Now()
newEntry := &connectionEntry{
id: connID,
c: c,
version: version,
created: now,
}
// Initialize last used timestamp
newEntry.lastUsed.Store(now.Unix())
// Get or create multiChannelNodeConn - this reuses existing offline nodes for rapid reconnection
nodeConn, loaded := b.nodes.LoadOrStore(id, newMultiChannelNodeConn(id, b.mapper))
if !loaded {
b.totalNodes.Add(1)
}
// Add connection to the list (lock-free)
nodeConn.addConnection(newEntry)
// Use the worker pool for controlled concurrency instead of direct generation
initialMap, err := b.MapResponseFromChange(id, change.FullSelf(id))
// TODO(kradalby): This should not be generated here, but rather in MapResponseFromChange.
// As it stands, the node connection's goroutine does the processing itself,
// which can lead to uncontrolled concurrency.
// When we use MapResponseFromChange, it will be processed by the same worker pool, causing
// it to be processed in a more controlled manner.
initialMap, err := generateMapResponse(id, version, b.mapper, fullSelfChange)
if err != nil {
log.Error().Uint64("node.id", id.Uint64()).Err(err).Msg("Initial map generation failed")
nodeConn.removeConnectionByChannel(c)
return fmt.Errorf("failed to generate initial map for node %d: %w", id, err)
}
// Use a blocking send with timeout for initial map since the channel should be ready
// and we want to avoid the race condition where the receiver isn't ready yet
select {
case c <- initialMap:
// Success
case <-time.After(5 * time.Second):
log.Error().Uint64("node.id", id.Uint64()).Err(fmt.Errorf("timeout")).Msg("Initial map send timeout")
log.Debug().Caller().Uint64("node.id", id.Uint64()).Dur("timeout.duration", 5*time.Second).
Msg("Initial map send timed out because channel was blocked or receiver not ready")
nodeConn.removeConnectionByChannel(c)
return fmt.Errorf("failed to send initial map to node %d: timeout", id)
// Only after validation succeeds, create or update node connection
newConn := newNodeConn(id, c, version, b.mapper)
var conn *nodeConn
if existing, loaded := b.nodes.LoadOrStore(id, newConn); loaded {
// Update existing connection
existing.updateConnection(c, version)
conn = existing
} else {
b.totalNodes.Add(1)
conn = newConn
}
// Update connection status
// Mark as connected only after validation succeeds
b.connected.Store(id, nil) // nil = connected
// Node will automatically receive updates through the normal flow
// The initial full map already contains all current state
log.Info().Uint64("node.id", id.Uint64()).Bool("isRouter", isRouter).Msg("Node connected to batcher")
log.Debug().Caller().Uint64("node.id", id.Uint64()).Dur("total.duration", time.Since(addNodeStart)).
Int("active.connections", nodeConn.getActiveConnectionCount()).
Msg("Node connection established in batcher because AddNode completed successfully")
// Send the validated initial map
if initialMap != nil {
if err := conn.send(initialMap); err != nil {
// Clean up the connection state on send failure
b.nodes.Delete(id)
b.connected.Delete(id)
return fmt.Errorf("failed to send initial map to node %d: %w", id, err)
}
// Notify other nodes that this node came online
b.addWork(change.ChangeSet{NodeID: id, Change: change.NodeCameOnline, IsSubnetRouter: isRouter})
}
return nil
}
// RemoveNode disconnects a node from the batcher, marking it as offline and cleaning up its state.
// It validates the connection channel matches one of the current connections, closes that specific connection,
// and keeps the node entry alive for rapid reconnections instead of aggressive deletion.
// Reports if the node still has active connections after removal.
func (b *LockFreeBatcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool {
nodeConn, exists := b.nodes.Load(id)
if !exists {
log.Debug().Caller().Uint64("node.id", id.Uint64()).Msg("RemoveNode called for non-existent node because node not found in batcher")
return false
// It validates the connection channel matches the current one, closes the connection,
// and notifies other nodes that this node has gone offline.
func (b *LockFreeBatcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse, isRouter bool) {
// Check if this is the current connection and mark it as closed
if existing, ok := b.nodes.Load(id); ok {
if !existing.matchesChannel(c) {
log.Debug().Uint64("node.id", id.Uint64()).Msg("RemoveNode called for non-current connection, ignoring")
return // Not the current connection, not an error
}
// Mark the connection as closed to prevent further sends
if connData := existing.connData.Load(); connData != nil {
connData.closed.Store(true)
}
}
// Remove specific connection
removed := nodeConn.removeConnectionByChannel(c)
if !removed {
log.Debug().Caller().Uint64("node.id", id.Uint64()).Msg("RemoveNode: channel not found because connection already removed or invalid")
return false
}
log.Info().Uint64("node.id", id.Uint64()).Bool("isRouter", isRouter).Msg("Node disconnected from batcher, marking as offline")
// Check if node has any remaining active connections
if nodeConn.hasActiveConnections() {
log.Debug().Caller().Uint64("node.id", id.Uint64()).
Int("active.connections", nodeConn.getActiveConnectionCount()).
Msg("Node connection removed but keeping online because other connections remain")
return true // Node still has active connections
}
// No active connections - keep the node entry alive for rapid reconnections
// The node will get a fresh full map when it reconnects
log.Debug().Caller().Uint64("node.id", id.Uint64()).Msg("Node disconnected from batcher because all connections removed, keeping entry for rapid reconnection")
// Remove node and mark disconnected atomically
b.nodes.Delete(id)
b.connected.Store(id, ptr.To(time.Now()))
b.totalNodes.Add(-1)
return false
// Notify other nodes that this node went offline
b.addWork(change.ChangeSet{NodeID: id, Change: change.NodeWentOffline, IsSubnetRouter: isRouter})
}
// AddWork queues a change to be processed by the batcher.
func (b *LockFreeBatcher) AddWork(c ...change.ChangeSet) {
b.addWork(c...)
// Critical changes are processed immediately, while others are batched for efficiency.
func (b *LockFreeBatcher) AddWork(c change.ChangeSet) {
b.addWork(c)
}
func (b *LockFreeBatcher) Start() {
@@ -152,57 +136,41 @@ func (b *LockFreeBatcher) Start() {
func (b *LockFreeBatcher) Close() {
if b.cancel != nil {
b.cancel()
b.cancel = nil
}
// Only close workCh once
select {
case <-b.workCh:
// Channel is already closed
default:
close(b.workCh)
}
// Close the underlying channels supplying the data to the clients.
b.nodes.Range(func(nodeID types.NodeID, conn *multiChannelNodeConn) bool {
conn.close()
return true
})
close(b.workCh)
}
func (b *LockFreeBatcher) doWork() {
log.Debug().Msg("batcher doWork loop started")
defer log.Debug().Msg("batcher doWork loop stopped")
for i := range b.workers {
go b.worker(i + 1)
}
// Create a cleanup ticker for removing truly disconnected nodes
cleanupTicker := time.NewTicker(5 * time.Minute)
defer cleanupTicker.Stop()
for {
select {
case <-b.tick.C:
// Process batched changes
b.processBatchedChanges()
case <-cleanupTicker.C:
// Clean up nodes that have been offline for too long
b.cleanupOfflineNodes()
case <-b.ctx.Done():
log.Info().Msg("batcher context done, stopping to feed workers")
return
}
}
}
func (b *LockFreeBatcher) worker(workerID int) {
log.Debug().Int("workerID", workerID).Msg("batcher worker started")
defer log.Debug().Int("workerID", workerID).Msg("batcher worker stopped")
for {
select {
case w, ok := <-b.workCh:
if !ok {
log.Debug().Int("worker.id", workerID).Msgf("worker channel closing, shutting down worker %d", workerID)
return
}
startTime := time.Now()
b.workProcessed.Add(1)
// If the resultCh is set, it means that this is a work request
@@ -212,23 +180,20 @@ func (b *LockFreeBatcher) worker(workerID int) {
if w.resultCh != nil {
var result workResult
if nc, exists := b.nodes.Load(w.nodeID); exists {
var err error
result.mapResponse, err = generateMapResponse(nc.nodeID(), nc.version(), b.mapper, w.c)
result.err = err
result.mapResponse, result.err = generateMapResponse(nc.nodeID(), nc.version(), b.mapper, w.c)
if result.err != nil {
b.workErrors.Add(1)
log.Error().Err(result.err).
Int("worker.id", workerID).
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Str("change", w.c.Change.String()).
Msg("failed to generate map response for synchronous work")
}
} else {
result.err = fmt.Errorf("node %d not found", w.nodeID)
b.workErrors.Add(1)
log.Error().Err(result.err).
Int("worker.id", workerID).
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Msg("node not found for synchronous work")
}
@@ -240,6 +205,15 @@ func (b *LockFreeBatcher) worker(workerID int) {
return
}
duration := time.Since(startTime)
if duration > 100*time.Millisecond {
log.Warn().
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Str("change", w.c.Change.String()).
Dur("duration", duration).
Msg("slow synchronous work processing")
}
continue
}
@@ -247,30 +221,71 @@ func (b *LockFreeBatcher) worker(workerID int) {
// that should be processed and sent to the node instead of
// returned to the caller.
if nc, exists := b.nodes.Load(w.nodeID); exists {
// Apply change to node - this will handle offline nodes gracefully
// and queue work for when they reconnect
// Check if this connection is still active before processing
if connData := nc.connData.Load(); connData != nil && connData.closed.Load() {
log.Debug().
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Str("change", w.c.Change.String()).
Msg("skipping work for closed connection")
continue
}
err := nc.change(w.c)
if err != nil {
b.workErrors.Add(1)
log.Error().Err(err).
Int("worker.id", workerID).
Int("workerID", workerID).
Uint64("node.id", w.c.NodeID.Uint64()).
Str("change", w.c.Change.String()).
Msg("failed to apply change")
}
} else {
log.Debug().
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Str("change", w.c.Change.String()).
Msg("node not found for asynchronous work - node may have disconnected")
}
duration := time.Since(startTime)
if duration > 100*time.Millisecond {
log.Warn().
Int("workerID", workerID).
Uint64("node.id", w.nodeID.Uint64()).
Str("change", w.c.Change.String()).
Dur("duration", duration).
Msg("slow asynchronous work processing")
}
case <-b.ctx.Done():
log.Debug().Int("workder.id", workerID).Msg("batcher context is done, exiting worker")
return
}
}
}
func (b *LockFreeBatcher) addWork(c ...change.ChangeSet) {
b.addToBatch(c...)
func (b *LockFreeBatcher) addWork(c change.ChangeSet) {
// For critical changes that need immediate processing, send directly
if b.shouldProcessImmediately(c) {
if c.SelfUpdateOnly {
b.queueWork(work{c: c, nodeID: c.NodeID, resultCh: nil})
return
}
b.nodes.Range(func(nodeID types.NodeID, _ *nodeConn) bool {
if c.NodeID == nodeID && !c.AlsoSelf() {
return true
}
b.queueWork(work{c: c, nodeID: nodeID, resultCh: nil})
return true
})
return
}
// For non-critical changes, add to batch
b.addToBatch(c)
}
// queueWork safely queues work.
// queueWork safely queues work
func (b *LockFreeBatcher) queueWork(w work) {
b.workQueuedCount.Add(1)
@@ -283,42 +298,46 @@ func (b *LockFreeBatcher) queueWork(w work) {
}
}
// addToBatch adds a change to the pending batch.
func (b *LockFreeBatcher) addToBatch(c ...change.ChangeSet) {
// Short circuit if any of the changes is a full update, which
// means we can skip sending individual changes.
if change.HasFull(c) {
b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool {
b.pendingChanges.Store(nodeID, []change.ChangeSet{{Change: change.Full}})
// shouldProcessImmediately determines if a change should bypass batching
func (b *LockFreeBatcher) shouldProcessImmediately(c change.ChangeSet) bool {
// Process these changes immediately to avoid delaying critical functionality
switch c.Change {
case change.Full, change.NodeRemove, change.NodeCameOnline, change.NodeWentOffline, change.Policy:
return true
default:
return false
}
}
// addToBatch adds a change to the pending batch
func (b *LockFreeBatcher) addToBatch(c change.ChangeSet) {
b.batchMutex.Lock()
defer b.batchMutex.Unlock()
if c.SelfUpdateOnly {
changes, _ := b.pendingChanges.LoadOrStore(c.NodeID, []change.ChangeSet{})
changes = append(changes, c)
b.pendingChanges.Store(c.NodeID, changes)
return
}
b.nodes.Range(func(nodeID types.NodeID, _ *nodeConn) bool {
if c.NodeID == nodeID && !c.AlsoSelf() {
return true
})
return
}
all, self := change.SplitAllAndSelf(c)
for _, changeSet := range self {
changes, _ := b.pendingChanges.LoadOrStore(changeSet.NodeID, []change.ChangeSet{})
changes = append(changes, changeSet)
b.pendingChanges.Store(changeSet.NodeID, changes)
return
}
b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool {
rel := change.RemoveUpdatesForSelf(nodeID, all)
}
changes, _ := b.pendingChanges.LoadOrStore(nodeID, []change.ChangeSet{})
changes = append(changes, rel...)
changes = append(changes, c)
b.pendingChanges.Store(nodeID, changes)
return true
})
}
// processBatchedChanges processes all pending batched changes.
// processBatchedChanges processes all pending batched changes
func (b *LockFreeBatcher) processBatchedChanges() {
b.batchMutex.Lock()
defer b.batchMutex.Unlock()
if b.pendingChanges == nil {
return
}
@@ -336,69 +355,16 @@ func (b *LockFreeBatcher) processBatchedChanges() {
// Clear the pending changes for this node
b.pendingChanges.Delete(nodeID)
return true
})
}
// cleanupOfflineNodes removes nodes that have been offline for too long to prevent memory leaks.
// TODO(kradalby): reevaluate if we want to keep this.
func (b *LockFreeBatcher) cleanupOfflineNodes() {
cleanupThreshold := 15 * time.Minute
now := time.Now()
var nodesToCleanup []types.NodeID
// Find nodes that have been offline for too long
b.connected.Range(func(nodeID types.NodeID, disconnectTime *time.Time) bool {
if disconnectTime != nil && now.Sub(*disconnectTime) > cleanupThreshold {
// Double-check the node doesn't have active connections
if nodeConn, exists := b.nodes.Load(nodeID); exists {
if !nodeConn.hasActiveConnections() {
nodesToCleanup = append(nodesToCleanup, nodeID)
}
}
}
return true
})
// Clean up the identified nodes
for _, nodeID := range nodesToCleanup {
log.Info().Uint64("node.id", nodeID.Uint64()).
Dur("offline_duration", cleanupThreshold).
Msg("Cleaning up node that has been offline for too long")
b.nodes.Delete(nodeID)
b.connected.Delete(nodeID)
b.totalNodes.Add(-1)
}
if len(nodesToCleanup) > 0 {
log.Info().Int("cleaned_nodes", len(nodesToCleanup)).
Msg("Completed cleanup of long-offline nodes")
}
}
// IsConnected is lock-free read that checks if a node has any active connections.
// IsConnected is lock-free read.
func (b *LockFreeBatcher) IsConnected(id types.NodeID) bool {
// First check if we have active connections for this node
if nodeConn, exists := b.nodes.Load(id); exists {
if nodeConn.hasActiveConnections() {
return true
}
if val, ok := b.connected.Load(id); ok {
// nil means connected
return val == nil
}
// Check disconnected timestamp with grace period
val, ok := b.connected.Load(id)
if !ok {
return false
}
// nil means connected
if val == nil {
return true
}
return false
}
@@ -406,26 +372,9 @@ func (b *LockFreeBatcher) IsConnected(id types.NodeID) bool {
func (b *LockFreeBatcher) ConnectedMap() *xsync.Map[types.NodeID, bool] {
ret := xsync.NewMap[types.NodeID, bool]()
// First, add all nodes with active connections
b.nodes.Range(func(id types.NodeID, nodeConn *multiChannelNodeConn) bool {
if nodeConn.hasActiveConnections() {
ret.Store(id, true)
}
return true
})
// Then add all entries from the connected map
b.connected.Range(func(id types.NodeID, val *time.Time) bool {
// Only add if not already added as connected above
if _, exists := ret.Load(id); !exists {
if val == nil {
// nil means connected
ret.Store(id, true)
} else {
// timestamp means disconnected
ret.Store(id, false)
}
}
// nil means connected
ret.Store(id, val == nil)
return true
})
@@ -449,269 +398,94 @@ func (b *LockFreeBatcher) MapResponseFromChange(id types.NodeID, c change.Change
}
}
// connectionEntry represents a single connection to a node.
type connectionEntry struct {
id string // unique connection ID
c chan<- *tailcfg.MapResponse
version tailcfg.CapabilityVersion
created time.Time
lastUsed atomic.Int64 // Unix timestamp of last successful send
// connectionData holds the channel and connection parameters.
type connectionData struct {
c chan<- *tailcfg.MapResponse
version tailcfg.CapabilityVersion
closed atomic.Bool // Track if this connection has been closed
}
// multiChannelNodeConn manages multiple concurrent connections for a single node.
type multiChannelNodeConn struct {
// nodeConn describes the node connection and its associated data.
type nodeConn struct {
id types.NodeID
mapper *mapper
mutex sync.RWMutex
connections []*connectionEntry
// Atomic pointer to connection data - allows lock-free updates
connData atomic.Pointer[connectionData]
updateCount atomic.Int64
}
// generateConnectionID generates a unique connection identifier.
func generateConnectionID() string {
bytes := make([]byte, 8)
rand.Read(bytes)
return fmt.Sprintf("%x", bytes)
}
// newMultiChannelNodeConn creates a new multi-channel node connection.
func newMultiChannelNodeConn(id types.NodeID, mapper *mapper) *multiChannelNodeConn {
return &multiChannelNodeConn{
func newNodeConn(id types.NodeID, c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion, mapper *mapper) *nodeConn {
nc := &nodeConn{
id: id,
mapper: mapper,
}
}
func (mc *multiChannelNodeConn) close() {
mc.mutex.Lock()
defer mc.mutex.Unlock()
for _, conn := range mc.connections {
close(conn.c)
// Initialize connection data
data := &connectionData{
c: c,
version: version,
}
nc.connData.Store(data)
return nc
}
// addConnection adds a new connection.
func (mc *multiChannelNodeConn) addConnection(entry *connectionEntry) {
mutexWaitStart := time.Now()
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", entry.c)).Str("conn.id", entry.id).
Msg("addConnection: waiting for mutex - POTENTIAL CONTENTION POINT")
mc.mutex.Lock()
mutexWaitDur := time.Since(mutexWaitStart)
defer mc.mutex.Unlock()
mc.connections = append(mc.connections, entry)
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", entry.c)).Str("conn.id", entry.id).
Int("total_connections", len(mc.connections)).
Dur("mutex_wait_time", mutexWaitDur).
Msg("Successfully added connection after mutex wait")
}
// removeConnectionByChannel removes a connection by matching channel pointer.
func (mc *multiChannelNodeConn) removeConnectionByChannel(c chan<- *tailcfg.MapResponse) bool {
mc.mutex.Lock()
defer mc.mutex.Unlock()
for i, entry := range mc.connections {
if entry.c == c {
// Remove this connection
mc.connections = append(mc.connections[:i], mc.connections[i+1:]...)
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", c)).
Int("remaining_connections", len(mc.connections)).
Msg("Successfully removed connection")
return true
}
// updateConnection atomically updates connection parameters.
func (nc *nodeConn) updateConnection(c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion) {
newData := &connectionData{
c: c,
version: version,
}
return false
nc.connData.Store(newData)
}
// hasActiveConnections checks if the node has any active connections.
func (mc *multiChannelNodeConn) hasActiveConnections() bool {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
return len(mc.connections) > 0
}
// getActiveConnectionCount returns the number of active connections.
func (mc *multiChannelNodeConn) getActiveConnectionCount() int {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
return len(mc.connections)
}
// send broadcasts data to all active connections for the node.
func (mc *multiChannelNodeConn) send(data *tailcfg.MapResponse) error {
// matchesChannel checks if the given channel matches current connection.
func (nc *nodeConn) matchesChannel(c chan<- *tailcfg.MapResponse) bool {
data := nc.connData.Load()
if data == nil {
return nil
return false
}
mc.mutex.Lock()
defer mc.mutex.Unlock()
if len(mc.connections) == 0 {
// During rapid reconnection, nodes may temporarily have no active connections
// This is not an error - the node will receive a full map when it reconnects
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).
Msg("send: skipping send to node with no active connections (likely rapid reconnection)")
return nil // Return success instead of error
}
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).
Int("total_connections", len(mc.connections)).
Msg("send: broadcasting to all connections")
var lastErr error
successCount := 0
var failedConnections []int // Track failed connections for removal
// Send to all connections
for i, conn := range mc.connections {
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", conn.c)).
Str("conn.id", conn.id).Int("connection_index", i).
Msg("send: attempting to send to connection")
if err := conn.send(data); err != nil {
lastErr = err
failedConnections = append(failedConnections, i)
log.Warn().Err(err).
Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", conn.c)).
Str("conn.id", conn.id).Int("connection_index", i).
Msg("send: connection send failed")
} else {
successCount++
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).Str("chan", fmt.Sprintf("%p", conn.c)).
Str("conn.id", conn.id).Int("connection_index", i).
Msg("send: successfully sent to connection")
}
}
// Remove failed connections (in reverse order to maintain indices)
for i := len(failedConnections) - 1; i >= 0; i-- {
idx := failedConnections[i]
log.Debug().Caller().Uint64("node.id", mc.id.Uint64()).
Str("conn.id", mc.connections[idx].id).
Msg("send: removing failed connection")
mc.connections = append(mc.connections[:idx], mc.connections[idx+1:]...)
}
mc.updateCount.Add(1)
log.Debug().Uint64("node.id", mc.id.Uint64()).
Int("successful_sends", successCount).
Int("failed_connections", len(failedConnections)).
Int("remaining_connections", len(mc.connections)).
Msg("send: completed broadcast")
// Success if at least one send succeeded
if successCount > 0 {
return nil
}
return fmt.Errorf("node %d: all connections failed, last error: %w", mc.id, lastErr)
// Compare channel pointers directly
return data.c == c
}
// send sends data to a single connection entry with timeout-based stale connection detection.
func (entry *connectionEntry) send(data *tailcfg.MapResponse) error {
// version atomically reads the capability version from the connection data.
func (nc *nodeConn) version() tailcfg.CapabilityVersion {
data := nc.connData.Load()
if data == nil {
return nil
}
// Use a short timeout to detect stale connections where the client isn't reading the channel.
// This is critical for detecting Docker containers that are forcefully terminated
// but still have channels that appear open.
select {
case entry.c <- data:
// Update last used timestamp on successful send
entry.lastUsed.Store(time.Now().Unix())
return nil
case <-time.After(50 * time.Millisecond):
// Connection is likely stale - client isn't reading from channel
// This catches the case where Docker containers are killed but channels remain open
return fmt.Errorf("connection %s: timeout sending to channel (likely stale connection)", entry.id)
}
}
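The 50 ms timeout is what turns a wedged receiver into a detectable failure instead of a permanently blocked goroutine. A small test-style sketch of the same select pattern, assuming the usual testing/time/tailcfg imports; it is illustrative and not part of this diff.

```go
func TestStaleChannelDetection(t *testing.T) {
	ch := make(chan *tailcfg.MapResponse) // unbuffered and nobody is reading

	select {
	case ch <- &tailcfg.MapResponse{}:
		t.Fatal("send should block when the receiver is not reading")
	case <-time.After(50 * time.Millisecond):
		// The timeout fires, mirroring how entry.send reports a stale connection.
	}
}
```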
// nodeID returns the node ID.
func (mc *multiChannelNodeConn) nodeID() types.NodeID {
return mc.id
}
// version returns the capability version from the first active connection.
// All connections for a node should have the same version in practice.
func (mc *multiChannelNodeConn) version() tailcfg.CapabilityVersion {
mc.mutex.RLock()
defer mc.mutex.RUnlock()
if len(mc.connections) == 0 {
return 0
}
return mc.connections[0].version
return data.version
}
// change applies a change to all active connections for the node.
func (mc *multiChannelNodeConn) change(c change.ChangeSet) error {
return handleNodeChange(mc, mc.mapper, c)
func (nc *nodeConn) nodeID() types.NodeID {
return nc.id
}
// DebugNodeInfo contains debug information about a node's connections.
type DebugNodeInfo struct {
Connected bool `json:"connected"`
ActiveConnections int `json:"active_connections"`
func (nc *nodeConn) change(c change.ChangeSet) error {
return handleNodeChange(nc, nc.mapper, c)
}
// Debug returns a pre-baked map of node debug information for the debug interface.
func (b *LockFreeBatcher) Debug() map[types.NodeID]DebugNodeInfo {
result := make(map[types.NodeID]DebugNodeInfo)
// send sends data to the node's channel.
// The node will pick it up and send it to the HTTP handler.
func (nc *nodeConn) send(data *tailcfg.MapResponse) error {
connData := nc.connData.Load()
if connData == nil {
return fmt.Errorf("node %d: no connection data", nc.id)
}
// Get all nodes with their connection status using immediate connection logic
// (no grace period) for debug purposes
b.nodes.Range(func(id types.NodeID, nodeConn *multiChannelNodeConn) bool {
nodeConn.mutex.RLock()
activeConnCount := len(nodeConn.connections)
nodeConn.mutex.RUnlock()
// Check if connection has been closed
if connData.closed.Load() {
return fmt.Errorf("node %d: connection closed", nc.id)
}
// Use immediate connection status: if active connections exist, node is connected
// If not, check the connected map for nil (connected) vs timestamp (disconnected)
connected := false
if activeConnCount > 0 {
connected = true
} else {
// Check connected map for immediate status
if val, ok := b.connected.Load(id); ok && val == nil {
connected = true
}
}
result[id] = DebugNodeInfo{
Connected: connected,
ActiveConnections: activeConnCount,
}
return true
})
// Add all entries from the connected map to capture both connected and disconnected nodes
b.connected.Range(func(id types.NodeID, val *time.Time) bool {
// Only add if not already processed above
if _, exists := result[id]; !exists {
// Use immediate connection status for debug (no grace period)
connected := (val == nil) // nil means connected, timestamp means disconnected
result[id] = DebugNodeInfo{
Connected: connected,
ActiveConnections: 0,
}
}
return true
})
return result
}
func (b *LockFreeBatcher) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
return b.mapper.debugMapResponses()
// TODO(kradalby): We might need some sort of timeout here if the client is not reading
// the channel. That might mean that we are sending to a node that has gone offline, but
// the channel is still open.
connData.c <- data
nc.updateCount.Add(1)
return nil
}
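Throughout this file the `connected` map stores a `*time.Time` per node, where nil means currently connected and a non-nil value records when the node went offline. A hedged sketch of reading that convention; the printing is purely illustrative.

```go
// Illustrative: interpret the nil-vs-timestamp convention of the connected map.
if disconnectedAt, ok := b.connected.Load(id); ok {
	if disconnectedAt == nil {
		fmt.Printf("node %d is connected\n", id)
	} else {
		fmt.Printf("node %d went offline at %s\n", id, disconnectedAt.Format(time.RFC3339))
	}
}
```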

File diff suppressed because it is too large

View File

@@ -1,7 +1,6 @@
package mapper
import (
"errors"
"net/netip"
"sort"
"time"
@@ -13,29 +12,16 @@ import (
"tailscale.com/util/multierr"
)
// MapResponseBuilder provides a fluent interface for building tailcfg.MapResponse.
// MapResponseBuilder provides a fluent interface for building tailcfg.MapResponse
type MapResponseBuilder struct {
resp *tailcfg.MapResponse
mapper *mapper
nodeID types.NodeID
capVer tailcfg.CapabilityVersion
errs []error
debugType debugType
}
type debugType string
const (
fullResponseDebug debugType = "full"
selfResponseDebug debugType = "self"
patchResponseDebug debugType = "patch"
removeResponseDebug debugType = "remove"
changeResponseDebug debugType = "change"
derpResponseDebug debugType = "derp"
)
// NewMapResponseBuilder creates a new builder with basic fields set.
// NewMapResponseBuilder creates a new builder with basic fields set
func (m *mapper) NewMapResponseBuilder(nodeID types.NodeID) *MapResponseBuilder {
now := time.Now()
return &MapResponseBuilder{
@@ -49,37 +35,37 @@ func (m *mapper) NewMapResponseBuilder(nodeID types.NodeID) *MapResponseBuilder
}
}
// addError adds an error to the builder's error list.
// addError adds an error to the builder's error list
func (b *MapResponseBuilder) addError(err error) {
if err != nil {
b.errs = append(b.errs, err)
}
}
// hasErrors returns true if the builder has accumulated any errors.
// hasErrors returns true if the builder has accumulated any errors
func (b *MapResponseBuilder) hasErrors() bool {
return len(b.errs) > 0
}
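Because every With… step records failures through addError instead of returning them, a chain never short-circuits and Build reports everything at once. A hedged sketch of the calling pattern, with `m`, `nodeID`, and `capVer` assumed to be in scope; the chain mirrors fullMapResponse later in this diff.

```go
// Illustrative: errors from any step surface only at Build(), as a combined multierr.
resp, err := m.NewMapResponseBuilder(nodeID).
	WithCapabilityVersion(capVer).
	WithSelfNode(). // may record "node not found"
	WithDERPMap().
	WithDomain().
	WithDNSConfig().
	Build()
if err != nil {
	return nil, err // every failure recorded during the chain, wrapped together
}
_ = resp
```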
// WithCapabilityVersion sets the capability version for the response.
// WithCapabilityVersion sets the capability version for the response
func (b *MapResponseBuilder) WithCapabilityVersion(capVer tailcfg.CapabilityVersion) *MapResponseBuilder {
b.capVer = capVer
return b
}
// WithSelfNode adds the requesting node to the response.
// WithSelfNode adds the requesting node to the response
func (b *MapResponseBuilder) WithSelfNode() *MapResponseBuilder {
nv, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(errors.New("node not found"))
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
b.addError(err)
return b
}
_, matchers := b.mapper.state.Filter()
tailnode, err := tailNode(
nv, b.capVer, b.mapper.state,
node.View(), b.capVer, b.mapper.state,
func(id types.NodeID) []netip.Prefix {
return policy.ReduceRoutes(nv, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
return policy.ReduceRoutes(node.View(), b.mapper.state.GetNodePrimaryRoutes(id), matchers)
},
b.mapper.cfg)
if err != nil {
@@ -88,38 +74,29 @@ func (b *MapResponseBuilder) WithSelfNode() *MapResponseBuilder {
}
b.resp.Node = tailnode
return b
}
func (b *MapResponseBuilder) WithDebugType(t debugType) *MapResponseBuilder {
if debugDumpMapResponsePath != "" {
b.debugType = t
}
return b
}
// WithDERPMap adds the DERP map to the response.
// WithDERPMap adds the DERP map to the response
func (b *MapResponseBuilder) WithDERPMap() *MapResponseBuilder {
b.resp.DERPMap = b.mapper.state.DERPMap().AsStruct()
b.resp.DERPMap = b.mapper.state.DERPMap()
return b
}
// WithDomain adds the domain configuration.
// WithDomain adds the domain configuration
func (b *MapResponseBuilder) WithDomain() *MapResponseBuilder {
b.resp.Domain = b.mapper.cfg.Domain()
return b
}
// WithCollectServicesDisabled sets the collect services flag to false.
// WithCollectServicesDisabled sets the collect services flag to false
func (b *MapResponseBuilder) WithCollectServicesDisabled() *MapResponseBuilder {
b.resp.CollectServices.Set(false)
return b
}
// WithDebugConfig adds debug configuration
// It disables log tailing if the mapper's LogTail is not enabled.
// It disables log tailing if the mapper's LogTail is not enabled
func (b *MapResponseBuilder) WithDebugConfig() *MapResponseBuilder {
b.resp.Debug = &tailcfg.Debug{
DisableLogTail: !b.mapper.cfg.LogTail.Enabled,
@@ -127,81 +104,72 @@ func (b *MapResponseBuilder) WithDebugConfig() *MapResponseBuilder {
return b
}
// WithSSHPolicy adds SSH policy configuration for the requesting node.
// WithSSHPolicy adds SSH policy configuration for the requesting node
func (b *MapResponseBuilder) WithSSHPolicy() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(errors.New("node not found"))
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
b.addError(err)
return b
}
sshPolicy, err := b.mapper.state.SSHPolicy(node)
sshPolicy, err := b.mapper.state.SSHPolicy(node.View())
if err != nil {
b.addError(err)
return b
}
b.resp.SSHPolicy = sshPolicy
return b
}
// WithDNSConfig adds DNS configuration for the requesting node.
// WithDNSConfig adds DNS configuration for the requesting node
func (b *MapResponseBuilder) WithDNSConfig() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(errors.New("node not found"))
return b
}
b.resp.DNSConfig = generateDNSConfig(b.mapper.cfg, node)
return b
}
// WithUserProfiles adds user profiles for the requesting node and given peers.
func (b *MapResponseBuilder) WithUserProfiles(peers views.Slice[types.NodeView]) *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(errors.New("node not found"))
return b
}
b.resp.UserProfiles = generateUserProfiles(node, peers)
return b
}
// WithPacketFilters adds packet filter rules based on policy.
func (b *MapResponseBuilder) WithPacketFilters() *MapResponseBuilder {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
b.addError(errors.New("node not found"))
return b
}
// FilterForNode returns rules already reduced to only those relevant for this node.
// For autogroup:self policies, it returns per-node compiled rules.
// For global policies, it returns the global filter reduced for this node.
filter, err := b.mapper.state.FilterForNode(node)
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
b.addError(err)
return b
}
b.resp.DNSConfig = generateDNSConfig(b.mapper.cfg, node)
return b
}
// WithUserProfiles adds user profiles for the requesting node and given peers
func (b *MapResponseBuilder) WithUserProfiles(peers types.Nodes) *MapResponseBuilder {
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
b.addError(err)
return b
}
b.resp.UserProfiles = generateUserProfiles(node, peers)
return b
}
// WithPacketFilters adds packet filter rules based on policy
func (b *MapResponseBuilder) WithPacketFilters() *MapResponseBuilder {
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
b.addError(err)
return b
}
filter, _ := b.mapper.state.Filter()
// CapVer 81: 2023-11-17: MapResponse.PacketFilters (incremental packet filter updates)
// Currently, we do not send incremental packet filters; however, using the
// new PacketFilters field and "base" allows us to send a full update when we
// have to send an empty list, avoiding the hack in the else block.
b.resp.PacketFilters = map[string][]tailcfg.FilterRule{
"base": filter,
"base": policy.ReduceFilterRules(node.View(), filter),
}
return b
}
// WithPeers adds full peer list with policy filtering (for full map response).
func (b *MapResponseBuilder) WithPeers(peers views.Slice[types.NodeView]) *MapResponseBuilder {
// WithPeers adds full peer list with policy filtering (for full map response)
func (b *MapResponseBuilder) WithPeers(peers types.Nodes) *MapResponseBuilder {
tailPeers, err := b.buildTailPeers(peers)
if err != nil {
b.addError(err)
@@ -209,12 +177,12 @@ func (b *MapResponseBuilder) WithPeers(peers views.Slice[types.NodeView]) *MapRe
}
b.resp.Peers = tailPeers
return b
}
// WithPeerChanges adds changed peers with policy filtering (for incremental updates).
func (b *MapResponseBuilder) WithPeerChanges(peers views.Slice[types.NodeView]) *MapResponseBuilder {
// WithPeerChanges adds changed peers with policy filtering (for incremental updates)
func (b *MapResponseBuilder) WithPeerChanges(peers types.Nodes) *MapResponseBuilder {
tailPeers, err := b.buildTailPeers(peers)
if err != nil {
b.addError(err)
@@ -222,39 +190,31 @@ func (b *MapResponseBuilder) WithPeerChanges(peers views.Slice[types.NodeView])
}
b.resp.PeersChanged = tailPeers
return b
}
// buildTailPeers converts views.Slice[types.NodeView] to []tailcfg.Node with policy filtering and sorting.
func (b *MapResponseBuilder) buildTailPeers(peers views.Slice[types.NodeView]) ([]*tailcfg.Node, error) {
node, ok := b.mapper.state.GetNodeByID(b.nodeID)
if !ok {
return nil, errors.New("node not found")
}
// Get unreduced matchers for peer relationship determination.
// MatchersForNode returns unreduced matchers that include all rules where the node
// could be either source or destination. This is different from FilterForNode which
// returns reduced rules for packet filtering (only rules where node is destination).
matchers, err := b.mapper.state.MatchersForNode(node)
// buildTailPeers converts types.Nodes to []tailcfg.Node with policy filtering and sorting
func (b *MapResponseBuilder) buildTailPeers(peers types.Nodes) ([]*tailcfg.Node, error) {
node, err := b.mapper.state.GetNodeByID(b.nodeID)
if err != nil {
return nil, err
}
filter, matchers := b.mapper.state.Filter()
// If there are filter rules present, see if there are any nodes that cannot
// access each other at all and remove them from the peers.
var changedViews views.Slice[types.NodeView]
if len(matchers) > 0 {
changedViews = policy.ReduceNodes(node, peers, matchers)
if len(filter) > 0 {
changedViews = policy.ReduceNodes(node.View(), peers.ViewSlice(), matchers)
} else {
changedViews = peers
changedViews = peers.ViewSlice()
}
tailPeers, err := tailNodes(
changedViews, b.capVer, b.mapper.state,
func(id types.NodeID) []netip.Prefix {
return policy.ReduceRoutes(node, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
return policy.ReduceRoutes(node.View(), b.mapper.state.GetNodePrimaryRoutes(id), matchers)
},
b.mapper.cfg)
if err != nil {
@@ -269,30 +229,30 @@ func (b *MapResponseBuilder) buildTailPeers(peers views.Slice[types.NodeView]) (
return tailPeers, nil
}
// WithPeerChangedPatch adds peer change patches.
// WithPeerChangedPatch adds peer change patches
func (b *MapResponseBuilder) WithPeerChangedPatch(changes []*tailcfg.PeerChange) *MapResponseBuilder {
b.resp.PeersChangedPatch = changes
return b
}
// WithPeersRemoved adds removed peer IDs.
// WithPeersRemoved adds removed peer IDs
func (b *MapResponseBuilder) WithPeersRemoved(removedIDs ...types.NodeID) *MapResponseBuilder {
var tailscaleIDs []tailcfg.NodeID
for _, id := range removedIDs {
tailscaleIDs = append(tailscaleIDs, id.NodeID())
}
b.resp.PeersRemoved = tailscaleIDs
return b
}
// Build finalizes the response and returns marshaled bytes
func (b *MapResponseBuilder) Build() (*tailcfg.MapResponse, error) {
func (b *MapResponseBuilder) Build(messages ...string) (*tailcfg.MapResponse, error) {
if len(b.errs) > 0 {
return nil, multierr.New(b.errs...)
}
if debugDumpMapResponsePath != "" {
writeDebugMapResponse(b.resp, b.debugType, b.nodeID)
writeDebugMapResponse(b.resp, b.nodeID)
}
return b.resp, nil

View File

@@ -18,17 +18,17 @@ func TestMapResponseBuilder_Basic(t *testing.T) {
Enabled: true,
},
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID)
// Test basic builder creation
assert.NotNil(t, builder)
assert.Equal(t, nodeID, builder.nodeID)
@@ -45,13 +45,13 @@ func TestMapResponseBuilder_WithCapabilityVersion(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
capVer := tailcfg.CapabilityVersion(42)
builder := m.NewMapResponseBuilder(nodeID).
WithCapabilityVersion(capVer)
assert.Equal(t, capVer, builder.capVer)
assert.False(t, builder.hasErrors())
}
@@ -62,18 +62,18 @@ func TestMapResponseBuilder_WithDomain(t *testing.T) {
ServerURL: "https://test.example.com",
BaseDomain: domain,
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithDomain()
assert.Equal(t, domain, builder.resp.Domain)
assert.False(t, builder.hasErrors())
}
@@ -85,12 +85,12 @@ func TestMapResponseBuilder_WithCollectServicesDisabled(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithCollectServicesDisabled()
value, isSet := builder.resp.CollectServices.Get()
assert.True(t, isSet)
assert.False(t, value)
@@ -99,22 +99,22 @@ func TestMapResponseBuilder_WithCollectServicesDisabled(t *testing.T) {
func TestMapResponseBuilder_WithDebugConfig(t *testing.T) {
tests := []struct {
name string
name string
logTailEnabled bool
expected bool
expected bool
}{
{
name: "LogTail enabled",
name: "LogTail enabled",
logTailEnabled: true,
expected: false, // DisableLogTail should be false when LogTail is enabled
expected: false, // DisableLogTail should be false when LogTail is enabled
},
{
name: "LogTail disabled",
name: "LogTail disabled",
logTailEnabled: false,
expected: true, // DisableLogTail should be true when LogTail is disabled
expected: true, // DisableLogTail should be true when LogTail is disabled
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
cfg := &types.Config{
@@ -127,12 +127,12 @@ func TestMapResponseBuilder_WithDebugConfig(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithDebugConfig()
require.NotNil(t, builder.resp.Debug)
assert.Equal(t, tt.expected, builder.resp.Debug.DisableLogTail)
assert.False(t, builder.hasErrors())
@@ -147,22 +147,22 @@ func TestMapResponseBuilder_WithPeerChangedPatch(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
changes := []*tailcfg.PeerChange{
{
NodeID: 123,
NodeID: 123,
DERPRegion: 1,
},
{
NodeID: 456,
NodeID: 456,
DERPRegion: 2,
},
}
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch(changes)
assert.Equal(t, changes, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
@@ -174,14 +174,14 @@ func TestMapResponseBuilder_WithPeersRemoved(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
removedID1 := types.NodeID(123)
removedID2 := types.NodeID(456)
builder := m.NewMapResponseBuilder(nodeID).
WithPeersRemoved(removedID1, removedID2)
expected := []tailcfg.NodeID{
removedID1.NodeID(),
removedID2.NodeID(),
@@ -197,25 +197,25 @@ func TestMapResponseBuilder_ErrorHandling(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
// Simulate an error in the builder
builder := m.NewMapResponseBuilder(nodeID)
builder.addError(assert.AnError)
// All subsequent calls should continue to work and accumulate errors
result := builder.
WithDomain().
WithCollectServicesDisabled().
WithDebugConfig()
assert.True(t, result.hasErrors())
assert.Len(t, result.errs, 1)
assert.Equal(t, assert.AnError, result.errs[0])
// Build should return the error
data, err := result.Build()
data, err := result.Build("none")
assert.Nil(t, data)
assert.Error(t, err)
}
@@ -229,22 +229,22 @@ func TestMapResponseBuilder_ChainedCalls(t *testing.T) {
Enabled: false,
},
}
mockState := &state.State{}
m := &mapper{
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
capVer := tailcfg.CapabilityVersion(99)
builder := m.NewMapResponseBuilder(nodeID).
WithCapabilityVersion(capVer).
WithDomain().
WithCollectServicesDisabled().
WithDebugConfig()
// Verify all fields are set correctly
assert.Equal(t, capVer, builder.capVer)
assert.Equal(t, domain, builder.resp.Domain)
@@ -263,16 +263,16 @@ func TestMapResponseBuilder_MultipleWithPeersRemoved(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
removedID1 := types.NodeID(100)
removedID2 := types.NodeID(200)
// Test calling WithPeersRemoved multiple times
builder := m.NewMapResponseBuilder(nodeID).
WithPeersRemoved(removedID1).
WithPeersRemoved(removedID2)
// Second call should overwrite the first
expected := []tailcfg.NodeID{removedID2.NodeID()}
assert.Equal(t, expected, builder.resp.PeersRemoved)
@@ -286,12 +286,12 @@ func TestMapResponseBuilder_EmptyPeerChangedPatch(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch([]*tailcfg.PeerChange{})
assert.Empty(t, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
@@ -303,12 +303,12 @@ func TestMapResponseBuilder_NilPeerChangedPatch(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
builder := m.NewMapResponseBuilder(nodeID).
WithPeerChangedPatch(nil)
assert.Nil(t, builder.resp.PeersChangedPatch)
assert.False(t, builder.hasErrors())
}
@@ -320,28 +320,28 @@ func TestMapResponseBuilder_MultipleErrors(t *testing.T) {
cfg: cfg,
state: mockState,
}
nodeID := types.NodeID(1)
// Create a builder and add multiple errors
builder := m.NewMapResponseBuilder(nodeID)
builder.addError(assert.AnError)
builder.addError(assert.AnError)
builder.addError(nil) // This should be ignored
// All subsequent calls should continue to work
result := builder.
WithDomain().
WithCollectServicesDisabled()
assert.True(t, result.hasErrors())
assert.Len(t, result.errs, 2) // nil error should be ignored
// Build should return a multierr
data, err := result.Build()
data, err := result.Build("none")
assert.Nil(t, data)
assert.Error(t, err)
// The error should contain information about multiple errors
assert.Contains(t, err.Error(), "multiple errors")
}
}

View File

@@ -9,7 +9,6 @@ import (
"os"
"path"
"slices"
"strconv"
"strings"
"time"
@@ -19,7 +18,6 @@ import (
"tailscale.com/envknob"
"tailscale.com/tailcfg"
"tailscale.com/types/dnstype"
"tailscale.com/types/views"
)
const (
@@ -70,18 +68,16 @@ func newMapper(
}
func generateUserProfiles(
node types.NodeView,
peers views.Slice[types.NodeView],
node *types.Node,
peers types.Nodes,
) []tailcfg.UserProfile {
userMap := make(map[uint]*types.User)
ids := make([]uint, 0, len(userMap))
user := node.User()
userMap[user.ID] = &user
ids = append(ids, user.ID)
for _, peer := range peers.All() {
peerUser := peer.User()
userMap[peerUser.ID] = &peerUser
ids = append(ids, peerUser.ID)
userMap[node.User.ID] = &node.User
ids = append(ids, node.User.ID)
for _, peer := range peers {
userMap[peer.User.ID] = &peer.User
ids = append(ids, peer.User.ID)
}
slices.Sort(ids)
@@ -98,7 +94,7 @@ func generateUserProfiles(
func generateDNSConfig(
cfg *types.Config,
node types.NodeView,
node *types.Node,
) *tailcfg.DNSConfig {
if cfg.TailcfgDNSConfig == nil {
return nil
@@ -118,12 +114,12 @@ func generateDNSConfig(
//
// This will produce a resolver like:
// `https://dns.nextdns.io/<nextdns-id>?device_name=node-name&device_model=linux&device_ip=100.64.0.1`
func addNextDNSMetadata(resolvers []*dnstype.Resolver, node types.NodeView) {
func addNextDNSMetadata(resolvers []*dnstype.Resolver, node *types.Node) {
for _, resolver := range resolvers {
if strings.HasPrefix(resolver.Addr, nextDNSDoHPrefix) {
attrs := url.Values{
"device_name": []string{node.Hostname()},
"device_model": []string{node.Hostinfo().OS()},
"device_name": []string{node.Hostname},
"device_model": []string{node.Hostinfo.OS},
}
if len(node.IPs()) > 0 {
@@ -139,11 +135,14 @@ func addNextDNSMetadata(resolvers []*dnstype.Resolver, node types.NodeView) {
func (m *mapper) fullMapResponse(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
messages ...string,
) (*tailcfg.MapResponse, error) {
peers := m.state.ListPeers(nodeID)
peers, err := m.listPeers(nodeID)
if err != nil {
return nil, err
}
return m.NewMapResponseBuilder(nodeID).
WithDebugType(fullResponseDebug).
WithCapabilityVersion(capVer).
WithSelfNode().
WithDERPMap().
@@ -155,34 +154,13 @@ func (m *mapper) fullMapResponse(
WithUserProfiles(peers).
WithPacketFilters().
WithPeers(peers).
Build()
}
func (m *mapper) selfMapResponse(
nodeID types.NodeID,
capVer tailcfg.CapabilityVersion,
) (*tailcfg.MapResponse, error) {
ma, err := m.NewMapResponseBuilder(nodeID).
WithDebugType(selfResponseDebug).
WithCapabilityVersion(capVer).
WithSelfNode().
Build()
if err != nil {
return nil, err
}
// Set the peers to nil, to ensure the node does not think
// its getting a new list.
ma.Peers = nil
return ma, err
Build(messages...)
}
func (m *mapper) derpMapResponse(
nodeID types.NodeID,
) (*tailcfg.MapResponse, error) {
return m.NewMapResponseBuilder(nodeID).
WithDebugType(derpResponseDebug).
WithDERPMap().
Build()
}
@@ -194,7 +172,6 @@ func (m *mapper) peerChangedPatchResponse(
changed []*tailcfg.PeerChange,
) (*tailcfg.MapResponse, error) {
return m.NewMapResponseBuilder(nodeID).
WithDebugType(patchResponseDebug).
WithPeerChangedPatch(changed).
Build()
}
@@ -205,11 +182,14 @@ func (m *mapper) peerChangeResponse(
capVer tailcfg.CapabilityVersion,
changedNodeID types.NodeID,
) (*tailcfg.MapResponse, error) {
peers := m.state.ListPeers(nodeID, changedNodeID)
peers, err := m.listPeers(nodeID, changedNodeID)
if err != nil {
return nil, err
}
return m.NewMapResponseBuilder(nodeID).
WithDebugType(changeResponseDebug).
WithCapabilityVersion(capVer).
WithSelfNode().
WithUserProfiles(peers).
WithPeerChanges(peers).
Build()
@@ -221,23 +201,42 @@ func (m *mapper) peerRemovedResponse(
removedNodeID types.NodeID,
) (*tailcfg.MapResponse, error) {
return m.NewMapResponseBuilder(nodeID).
WithDebugType(removeResponseDebug).
WithPeersRemoved(removedNodeID).
Build()
}
func writeDebugMapResponse(
resp *tailcfg.MapResponse,
t debugType,
nodeID types.NodeID,
messages ...string,
) {
body, err := json.MarshalIndent(resp, "", " ")
data := map[string]any{
"Messages": messages,
"MapResponse": resp,
}
responseType := "keepalive"
switch {
case len(resp.Peers) > 0:
responseType = "full"
case resp.Peers == nil && resp.PeersChanged == nil && resp.PeersChangedPatch == nil && resp.DERPMap == nil && !resp.KeepAlive:
responseType = "self"
case len(resp.PeersChanged) > 0:
responseType = "changed"
case len(resp.PeersChangedPatch) > 0:
responseType = "patch"
case len(resp.PeersRemoved) > 0:
responseType = "removed"
}
body, err := json.MarshalIndent(data, "", " ")
if err != nil {
panic(err)
}
perms := fs.FileMode(debugMapResponsePerm)
mPath := path.Join(debugDumpMapResponsePath, fmt.Sprintf("%d", nodeID))
mPath := path.Join(debugDumpMapResponsePath, nodeID.String())
err = os.MkdirAll(mPath, perms)
if err != nil {
panic(err)
@@ -247,7 +246,7 @@ func writeDebugMapResponse(
mapResponsePath := path.Join(
mPath,
fmt.Sprintf("%s-%s.json", now, t),
fmt.Sprintf("%s-%s.json", now, responseType),
)
log.Trace().Msgf("Writing MapResponse to %s", mapResponsePath)
@@ -257,70 +256,26 @@ func writeDebugMapResponse(
}
}
// routeFilterFunc is a function that takes a node ID and returns a list of
// netip.Prefixes that are allowed for that node. It is used to filter routes
// from the primary route manager to the node.
type routeFilterFunc func(id types.NodeID) []netip.Prefix
func (m *mapper) debugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
if debugDumpMapResponsePath == "" {
return nil, nil
}
return ReadMapResponsesFromDirectory(debugDumpMapResponsePath)
}
func ReadMapResponsesFromDirectory(dir string) (map[types.NodeID][]tailcfg.MapResponse, error) {
nodes, err := os.ReadDir(dir)
// listPeers returns the peers of a node, regardless of any policy or whether the node is expired.
// If no peer IDs are given, all peers are returned.
// If at least one peer ID is given, only these peer nodes will be returned.
func (m *mapper) listPeers(nodeID types.NodeID, peerIDs ...types.NodeID) (types.Nodes, error) {
peers, err := m.state.ListPeers(nodeID, peerIDs...)
if err != nil {
return nil, err
}
result := make(map[types.NodeID][]tailcfg.MapResponse)
for _, node := range nodes {
if !node.IsDir() {
continue
}
nodeIDu, err := strconv.ParseUint(node.Name(), 10, 64)
if err != nil {
log.Error().Err(err).Msgf("Parsing node ID from dir %s", node.Name())
continue
}
nodeID := types.NodeID(nodeIDu)
files, err := os.ReadDir(path.Join(dir, node.Name()))
if err != nil {
log.Error().Err(err).Msgf("Reading dir %s", node.Name())
continue
}
slices.SortStableFunc(files, func(a, b fs.DirEntry) int {
return strings.Compare(a.Name(), b.Name())
})
for _, file := range files {
if file.IsDir() || !strings.HasSuffix(file.Name(), ".json") {
continue
}
body, err := os.ReadFile(path.Join(dir, node.Name(), file.Name()))
if err != nil {
log.Error().Err(err).Msgf("Reading file %s", file.Name())
continue
}
var resp tailcfg.MapResponse
err = json.Unmarshal(body, &resp)
if err != nil {
log.Error().Err(err).Msgf("Unmarshalling file %s", file.Name())
continue
}
result[nodeID] = append(result[nodeID], resp)
}
// TODO(kradalby): Add back online via batcher. This was removed
// to avoid a circular dependency between the mapper and the notification.
for _, peer := range peers {
online := m.batcher.IsConnected(peer.ID)
peer.IsOnline = &online
}
return result, nil
return peers, nil
}
// routeFilterFunc is a function that takes a node ID and returns a list of
// netip.Prefixes that are allowed for that node. It is used to filter routes
// from the primary route manager to the node.
type routeFilterFunc func(id types.NodeID) []netip.Prefix
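When debug dumping is enabled, each node ID gets its own directory of timestamped JSON files, which the ReadMapResponsesFromDirectory helper shown above can read back. A hedged sketch of loading a dump directory for offline inspection, assuming that helper is present; the path is only an example.

```go
// Illustrative: load previously dumped map responses for analysis.
responses, err := mapper.ReadMapResponsesFromDirectory("/tmp/mapresponses")
if err != nil {
	return err
}
for nodeID, resps := range responses {
	fmt.Printf("node %d: %d recorded map responses\n", nodeID, len(resps))
}
```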

View File

@@ -71,7 +71,7 @@ func TestDNSConfigMapResponse(t *testing.T) {
&types.Config{
TailcfgDNSConfig: &dnsConfigOrig,
},
nodeInShared1.View(),
nodeInShared1,
)
if diff := cmp.Diff(tt.want, got, cmpopts.EquateEmpty()); diff != "" {

View File

@@ -133,12 +133,13 @@ func tailNode(
tNode.CapMap[tailcfg.NodeAttrRandomizeClientPort] = []tailcfg.RawMessage{}
}
// Set LastSeen only for offline nodes to avoid confusing Tailscale clients
// during rapid reconnection cycles. Online nodes should not have LastSeen set
// as this can make clients interpret them as "not online" despite Online=true.
if node.LastSeen().Valid() && node.IsOnline().Valid() && !node.IsOnline().Get() {
lastSeen := node.LastSeen().Get()
tNode.LastSeen = &lastSeen
if !node.IsOnline().Valid() || !node.IsOnline().Get() {
// LastSeen is only set when node is
// not connected to the control server.
if node.LastSeen().Valid() {
lastSeen := node.LastSeen().Get()
tNode.LastSeen = &lastSeen
}
}
return &tNode, nil

View File

@@ -108,12 +108,11 @@ func TestTailNode(t *testing.T) {
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.0.0.0/10"),
},
},
ApprovedRoutes: []netip.Prefix{tsaddr.AllIPv4(), tsaddr.AllIPv6(), netip.MustParsePrefix("192.168.0.0/24")},
ApprovedRoutes: []netip.Prefix{tsaddr.AllIPv4(), netip.MustParsePrefix("192.168.0.0/24")},
CreatedAt: created,
},
dnsConfig: &tailcfg.DNSConfig{},
@@ -151,7 +150,6 @@ func TestTailNode(t *testing.T) {
Hostinfo: hiview(tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{
tsaddr.AllIPv4(),
tsaddr.AllIPv6(),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.0.0.0/10"),
},
@@ -160,6 +158,7 @@ func TestTailNode(t *testing.T) {
Tags: []string{},
LastSeen: &lastSeen,
MachineAuthorized: true,
CapMap: tailcfg.NodeCapMap{

View File

@@ -13,6 +13,7 @@ import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"golang.org/x/net/http2"
"gorm.io/gorm"
"tailscale.com/control/controlbase"
"tailscale.com/control/controlhttp/controlhttpserver"
"tailscale.com/tailcfg"
@@ -175,8 +176,8 @@ func rejectUnsupported(
Int("client_cap_ver", int(version)).
Str("minimum_version", capver.TailscaleVersion(capver.MinSupportedCapabilityVersion)).
Str("client_version", capver.TailscaleVersion(version)).
Str("node.key", nkey.ShortString()).
Str("machine.key", mkey.ShortString()).
Str("node_key", nkey.ShortString()).
Str("machine_key", mkey.ShortString()).
Msg("unsupported client connected")
http.Error(writer, unsupportedClientError(version).Error(), http.StatusBadRequest)
@@ -282,7 +283,7 @@ func (ns *noiseServer) NoiseRegistrationHandler(
writer.WriteHeader(http.StatusOK)
if err := json.NewEncoder(writer).Encode(registerResponse); err != nil {
log.Error().Caller().Err(err).Msg("NoiseRegistrationHandler: failed to encode RegisterResponse")
log.Error().Err(err).Msg("NoiseRegistrationHandler: failed to encode RegisterResponse")
return
}
@@ -295,11 +296,16 @@ func (ns *noiseServer) NoiseRegistrationHandler(
// getAndValidateNode retrieves the node from the database using the NodeKey
// and validates that it matches the MachineKey from the Noise session.
func (ns *noiseServer) getAndValidateNode(mapRequest tailcfg.MapRequest) (types.NodeView, error) {
nv, ok := ns.headscale.state.GetNodeByNodeKey(mapRequest.NodeKey)
if !ok {
return types.NodeView{}, NewHTTPError(http.StatusNotFound, "node not found", nil)
node, err := ns.headscale.state.GetNodeByNodeKey(mapRequest.NodeKey)
if err != nil {
if errors.Is(err, gorm.ErrRecordNotFound) {
return types.NodeView{}, NewHTTPError(http.StatusNotFound, "node not found", nil)
}
return types.NodeView{}, NewHTTPError(http.StatusInternalServerError, fmt.Sprintf("lookup node: %s", err), nil)
}
nv := node.View()
// Validate that the MachineKey in the Noise session matches the one associated with the NodeKey.
if ns.machineKey != nv.MachineKey() {
return types.NodeView{}, NewHTTPError(http.StatusNotFound, "node key in request does not match the one associated with this machine key", nil)

View File

@@ -42,6 +42,10 @@ var (
errOIDCAllowedUsers = errors.New(
"authenticated principal does not match any allowed user",
)
errOIDCInvalidNodeState = errors.New(
"requested node state key expired before authorisation completed",
)
errOIDCNodeKeyMissing = errors.New("could not get node key from cache")
)
// RegistrationInfo contains both machine key and verifier information for OIDC validation.
@@ -104,8 +108,16 @@ func (a *AuthProviderOIDC) AuthURL(registrationID types.RegistrationID) string {
registrationID.String())
}
// RegisterHandler registers the OIDC callback handler with the given router.
// It puts NodeKey in cache so the callback can retrieve it using the oidc state param.
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
if a.cfg.UseExpiryFromToken {
return idTokenExpiration
}
return time.Now().Add(a.cfg.Expiry)
}
// RegisterOIDC redirects to the OIDC provider for authentication
// Puts NodeKey in cache so the callback can retrieve it using the oidc state param
// Listens in /register/:registration_id.
func (a *AuthProviderOIDC) RegisterHandler(
writer http.ResponseWriter,
@@ -169,7 +181,7 @@ func (a *AuthProviderOIDC) RegisterHandler(
a.registrationCache.Set(state, registrationInfo)
authURL := a.oauth2Config.AuthCodeURL(state, extras...)
log.Debug().Caller().Msgf("Redirecting to %s for authentication", authURL)
log.Debug().Msgf("Redirecting to %s for authentication", authURL)
http.Redirect(writer, req, authURL, http.StatusFound)
}
@@ -201,8 +213,7 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
stateCookieName := getCookieName("state", state)
cookieState, err := req.Cookie(stateCookieName)
cookieState, err := req.Cookie("state")
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "state not found", err))
return
@@ -224,13 +235,8 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
httpError(writer, err)
return
}
if idToken.Nonce == "" {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found in IDToken", err))
return
}
nonceCookieName := getCookieName("nonce", idToken.Nonce)
nonce, err := req.Cookie(nonceCookieName)
nonce, err := req.Cookie("nonce")
if err != nil {
httpError(writer, NewHTTPError(http.StatusBadRequest, "nonce not found", err))
return
@@ -248,35 +254,6 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
// Fetch user information (email, groups, name, etc) from the userinfo endpoint
// https://openid.net/specs/openid-connect-core-1_0.html#UserInfo
var userinfo *oidc.UserInfo
userinfo, err = a.oidcProvider.UserInfo(req.Context(), oauth2.StaticTokenSource(oauth2Token))
if err != nil {
util.LogErr(err, "could not get userinfo; only using claims from id token")
}
// The oidc.UserInfo type only decodes some fields (Subject, Profile, Email, EmailVerified).
// We are interested in other fields too (e.g. groups are required for allowedGroups) so we
// decode into our own OIDCUserInfo type using the underlying claims struct.
var userinfo2 types.OIDCUserInfo
if userinfo != nil && userinfo.Claims(&userinfo2) == nil && userinfo2.Sub == claims.Sub {
// Update the user with the userinfo claims (with id token claims as fallback).
// TODO(kradalby): there might be more interesting fields here that we have not found yet.
claims.Email = cmp.Or(userinfo2.Email, claims.Email)
claims.EmailVerified = cmp.Or(userinfo2.EmailVerified, claims.EmailVerified)
claims.Username = cmp.Or(userinfo2.PreferredUsername, claims.Username)
claims.Name = cmp.Or(userinfo2.Name, claims.Name)
claims.ProfilePictureURL = cmp.Or(userinfo2.Picture, claims.ProfilePictureURL)
if userinfo2.Groups != nil {
claims.Groups = userinfo2.Groups
}
} else {
util.LogErr(err, "could not get userinfo; only using claims from id token")
}
// The user claims are now updated from the userinfo endpoint so we can verify the user
// against allowed emails, email domains, and groups.
if err := validateOIDCAllowedDomains(a.cfg.AllowedDomains, &claims); err != nil {
httpError(writer, err)
return
@@ -292,7 +269,31 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
return
}
user, _, err := a.createOrUpdateUserFromClaim(&claims)
var userinfo *oidc.UserInfo
userinfo, err = a.oidcProvider.UserInfo(req.Context(), oauth2.StaticTokenSource(oauth2Token))
if err != nil {
util.LogErr(err, "could not get userinfo; only checking claim")
}
// If the userinfo is available, we can check if the subject matches the
// claims, then use some of the userinfo fields to update the user.
// https://openid.net/specs/openid-connect-core-1_0.html#UserInfo
if userinfo != nil && userinfo.Subject == claims.Sub {
claims.Email = cmp.Or(claims.Email, userinfo.Email)
claims.EmailVerified = cmp.Or(claims.EmailVerified, types.FlexibleBoolean(userinfo.EmailVerified))
// The userinfo has some extra fields that we can use to update the user but they are only
// available in the underlying claims struct.
// TODO(kradalby): there might be more interesting fields here that we have not found yet.
var userinfo2 types.OIDCUserInfo
if err := userinfo.Claims(&userinfo2); err == nil {
claims.Username = cmp.Or(claims.Username, userinfo2.PreferredUsername)
claims.Name = cmp.Or(claims.Name, userinfo2.Name)
claims.ProfilePictureURL = cmp.Or(claims.ProfilePictureURL, userinfo2.Picture)
}
}
user, policyChanged, err := a.createOrUpdateUserFromClaim(&claims)
if err != nil {
log.Error().
Err(err).
@@ -305,12 +306,17 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
log.Error().
Caller().
Err(werr).
Msg("Failed to write HTTP response")
Msg("Failed to write response")
}
return
}
// Send policy update notifications if needed
if policyChanged {
a.h.Change(change.PolicyChange())
}
// TODO(kradalby): Is this comment right?
// If the node exists, then the node should be reauthenticated,
// if the node does not exist, and the machine key exists, then
@@ -322,12 +328,6 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
verb := "Reauthenticated"
newNode, err := a.handleRegistration(user, *registrationId, nodeExpiry)
if err != nil {
if errors.Is(err, db.ErrNodeNotFoundRegistrationCache) {
log.Debug().Caller().Str("registration_id", registrationId.String()).Msg("registration session expired before authorization completed")
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", err))
return
}
httpError(writer, err)
return
}
@@ -346,7 +346,7 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
writer.Header().Set("Content-Type", "text/html; charset=utf-8")
writer.WriteHeader(http.StatusOK)
if _, err := writer.Write(content.Bytes()); err != nil {
util.LogErr(err, "Failed to write HTTP response")
util.LogErr(err, "Failed to write response")
}
return
@@ -357,14 +357,6 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil))
}
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
if a.cfg.UseExpiryFromToken {
return idTokenExpiration
}
return time.Now().Add(a.cfg.Expiry)
}
func extractCodeAndStateParamFromRequest(
req *http.Request,
) (string, string, error) {
@@ -486,19 +478,19 @@ func (a *AuthProviderOIDC) getRegistrationIDFromState(state string) *types.Regis
func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
claims *types.OIDCClaims,
) (*types.User, change.ChangeSet, error) {
) (*types.User, bool, error) {
var user *types.User
var err error
var newUser bool
var c change.ChangeSet
var policyChanged bool
user, err = a.h.state.GetUserByOIDCIdentifier(claims.Identifier())
if err != nil && !errors.Is(err, db.ErrUserNotFound) {
return nil, change.EmptySet, fmt.Errorf("creating or updating user: %w", err)
return nil, false, fmt.Errorf("creating or updating user: %w", err)
}
// if the user is still not found, create a new empty user.
// TODO(kradalby): This context is not inherited from the request, which is probably not ideal.
// However, we need a context to use the OIDC provider.
// TODO(kradalby): This might cause us to not have an ID below which
// is a problem.
if user == nil {
newUser = true
user = &types.User{}
@@ -507,21 +499,21 @@ func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
user.FromClaim(claims)
if newUser {
user, c, err = a.h.state.CreateUser(*user)
user, policyChanged, err = a.h.state.CreateUser(*user)
if err != nil {
return nil, change.EmptySet, fmt.Errorf("creating user: %w", err)
return nil, false, fmt.Errorf("creating user: %w", err)
}
} else {
_, c, err = a.h.state.UpdateUser(types.UserID(user.ID), func(u *types.User) error {
_, policyChanged, err = a.h.state.UpdateUser(types.UserID(user.ID), func(u *types.User) error {
*u = *user
return nil
})
if err != nil {
return nil, change.EmptySet, fmt.Errorf("updating user: %w", err)
return nil, false, fmt.Errorf("updating user: %w", err)
}
}
return user, c, nil
return user, policyChanged, nil
}
func (a *AuthProviderOIDC) handleRegistration(
@@ -550,13 +542,18 @@ func (a *AuthProviderOIDC) handleRegistration(
// ensure we send an update.
// This works, but might be another good candidate for doing some sort of
// eventbus.
routesChange, err := a.h.state.AutoApproveRoutes(node)
_ = a.h.state.AutoApproveRoutes(node)
_, policyChange, err := a.h.state.SaveNode(node)
if err != nil {
return false, fmt.Errorf("auto approving routes: %w", err)
return false, fmt.Errorf("saving auto approved routes to node: %w", err)
}
// Send both changes. Empty changes are ignored by Change().
a.h.Change(nodeChange, routesChange)
// Policy updates are full and take precedence over node changes.
if !policyChange.Empty() {
a.h.Change(policyChange)
} else {
a.h.Change(nodeChange)
}
return !nodeChange.Empty(), nil
}
@@ -578,11 +575,6 @@ func renderOIDCCallbackTemplate(
return &content, nil
}
// getCookieName generates a unique cookie name based on a cookie value.
func getCookieName(baseName, value string) string {
return fmt.Sprintf("%s_%s", baseName, value[:6])
}
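For reference, a small hedged sketch of how this per-value cookie naming behaves; the literal state value below is made up purely for illustration.

```go
// Hypothetical value for illustration only; in practice it comes from
// util.GenerateRandomStringURLSafe.
state := "Xy9kQwAbCdEf"
name := getCookieName("state", state)
// name == "state_Xy9kQw", so concurrent logins get distinct cookies
// instead of overwriting a single shared "state" cookie.
```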
func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string, error) {
val, err := util.GenerateRandomStringURLSafe(64)
if err != nil {
@@ -591,7 +583,7 @@ func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string,
c := &http.Cookie{
Path: "/oidc/callback",
Name: getCookieName(name, val),
Name: name,
Value: val,
MaxAge: int(time.Hour.Seconds()),
Secure: r.TLS != nil,

View File

@@ -7,7 +7,6 @@ import (
"github.com/juanfont/headscale/hscontrol/util"
"go4.org/netipx"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
)
@@ -92,12 +91,3 @@ func (m *Match) SrcsOverlapsPrefixes(prefixes ...netip.Prefix) bool {
func (m *Match) DestsOverlapsPrefixes(prefixes ...netip.Prefix) bool {
return slices.ContainsFunc(prefixes, m.dests.OverlapsPrefix)
}
// DestsIsTheInternet reports if the destination is equal to "the internet"
// which is an IPSet that represents "autogroup:internet" and is special
// cased for exit nodes.
func (m Match) DestsIsTheInternet() bool {
return m.dests.Equal(util.TheInternet()) ||
m.dests.ContainsPrefix(tsaddr.AllIPv4()) ||
m.dests.ContainsPrefix(tsaddr.AllIPv6())
}

View File

@@ -13,12 +13,6 @@ import (
type PolicyManager interface {
// Filter returns the current filter rules for the entire tailnet and the associated matchers.
Filter() ([]tailcfg.FilterRule, []matcher.Match)
// FilterForNode returns filter rules for a specific node, handling autogroup:self
FilterForNode(node types.NodeView) ([]tailcfg.FilterRule, error)
// MatchersForNode returns matchers for peer relationship determination (unreduced)
MatchersForNode(node types.NodeView) ([]matcher.Match, error)
// BuildPeerMap constructs peer relationship maps for the given nodes
BuildPeerMap(nodes views.Slice[types.NodeView]) map[types.NodeID][]types.NodeView
SSHPolicy(types.NodeView) (*tailcfg.SSHPolicy, error)
SetPolicy([]byte) (bool, error)
SetUsers(users []types.User) (bool, error)

View File

@@ -7,9 +7,9 @@ import (
"github.com/juanfont/headscale/hscontrol/policy/matcher"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/samber/lo"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/types/views"
)
@@ -78,74 +78,99 @@ func BuildPeerMap(
return ret
}
// ApproveRoutesWithPolicy checks if the node can approve the announced routes
// and returns the new list of approved routes.
// The approved routes will include:
// 1. ALL previously approved routes (regardless of whether they're still advertised)
// 2. New routes from announcedRoutes that can be auto-approved by policy
// This ensures that:
// - Previously approved routes are ALWAYS preserved (auto-approval never removes routes)
// - New routes can be auto-approved according to policy
// - Routes can only be removed by explicit admin action (not by auto-approval).
func ApproveRoutesWithPolicy(pm PolicyManager, nv types.NodeView, currentApproved, announcedRoutes []netip.Prefix) ([]netip.Prefix, bool) {
if pm == nil {
return currentApproved, false
}
// ReduceFilterRules takes a node and a set of rules and removes all rules and destinations
// that are not relevant to that particular node.
func ReduceFilterRules(node types.NodeView, rules []tailcfg.FilterRule) []tailcfg.FilterRule {
ret := []tailcfg.FilterRule{}
// Start with ALL currently approved routes - we never remove approved routes
newApproved := make([]netip.Prefix, len(currentApproved))
copy(newApproved, currentApproved)
for _, rule := range rules {
// record if the rule is actually relevant for the given node.
var dests []tailcfg.NetPortRange
DEST_LOOP:
for _, dest := range rule.DstPorts {
expanded, err := util.ParseIPSet(dest.IP, nil)
// Fail closed, if we can't parse it, then we should not allow
// access.
if err != nil {
continue DEST_LOOP
}
// Then, check for new routes that can be auto-approved
for _, route := range announcedRoutes {
// Skip if already approved
if slices.Contains(newApproved, route) {
continue
if node.InIPSet(expanded) {
dests = append(dests, dest)
continue DEST_LOOP
}
// If the node exposes routes, ensure they are not removed
// when the filters are reduced.
if node.Hostinfo().Valid() {
routableIPs := node.Hostinfo().RoutableIPs()
if routableIPs.Len() > 0 {
for _, routableIP := range routableIPs.All() {
if expanded.OverlapsPrefix(routableIP) {
dests = append(dests, dest)
continue DEST_LOOP
}
}
}
}
// Also check approved subnet routes - nodes should have access
// to subnets they're approved to route traffic for.
subnetRoutes := node.SubnetRoutes()
for _, subnetRoute := range subnetRoutes {
if expanded.OverlapsPrefix(subnetRoute) {
dests = append(dests, dest)
continue DEST_LOOP
}
}
}
// Check if this new route can be auto-approved by policy
canApprove := pm.NodeCanApproveRoute(nv, route)
if canApprove {
if len(dests) > 0 {
ret = append(ret, tailcfg.FilterRule{
SrcIPs: rule.SrcIPs,
DstPorts: dests,
IPProto: rule.IPProto,
})
}
}
return ret
}
// AutoApproveRoutes approves any route that can be autoapproved from
// the nodes perspective according to the given policy.
// It reports true if any routes were approved.
// Note: This function now takes a pointer to the actual node to modify ApprovedRoutes.
func AutoApproveRoutes(pm PolicyManager, node *types.Node) bool {
if pm == nil {
return false
}
nodeView := node.View()
var newApproved []netip.Prefix
for _, route := range nodeView.AnnouncedRoutes() {
if pm.NodeCanApproveRoute(nodeView, route) {
newApproved = append(newApproved, route)
}
}
// Sort and deduplicate
tsaddr.SortPrefixes(newApproved)
newApproved = slices.Compact(newApproved)
newApproved = lo.Filter(newApproved, func(route netip.Prefix, index int) bool {
return route.IsValid()
})
// Only modify ApprovedRoutes if we have new routes to approve.
// This prevents clearing existing approved routes when nodes
// temporarily don't have announced routes during policy changes.
if len(newApproved) > 0 {
combined := append(newApproved, node.ApprovedRoutes...)
tsaddr.SortPrefixes(combined)
combined = slices.Compact(combined)
combined = lo.Filter(combined, func(route netip.Prefix, index int) bool {
return route.IsValid()
})
// Sort the current approved for comparison
sortedCurrent := make([]netip.Prefix, len(currentApproved))
copy(sortedCurrent, currentApproved)
tsaddr.SortPrefixes(sortedCurrent)
// Only update if the routes actually changed
if !slices.Equal(sortedCurrent, newApproved) {
// Log what changed
var added, kept []netip.Prefix
for _, route := range newApproved {
if !slices.Contains(sortedCurrent, route) {
added = append(added, route)
} else {
kept = append(kept, route)
}
// Only update if the routes actually changed
if !slices.Equal(node.ApprovedRoutes, combined) {
node.ApprovedRoutes = combined
return true
}
if len(added) > 0 {
log.Debug().
Uint64("node.id", nv.ID().Uint64()).
Str("node.name", nv.Hostname()).
Strs("routes.added", util.PrefixesToString(added)).
Strs("routes.kept", util.PrefixesToString(kept)).
Int("routes.total", len(newApproved)).
Msg("Routes auto-approved by policy")
}
return newApproved, true
}
return newApproved, false
return false
}
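To make the contract described above concrete, here is a minimal calling sketch. It assumes a PolicyManager pm, a node view nv, and a *types.Node node are already in hand; the prefixes are illustrative only, not taken from any real policy.

```go
// Minimal sketch: pm, nv, and node come from elsewhere.
current := []netip.Prefix{netip.MustParsePrefix("10.0.0.0/24")}
announced := []netip.Prefix{
	netip.MustParsePrefix("10.0.0.0/24"),
	netip.MustParsePrefix("192.168.0.0/24"), // may be auto-approved by policy
}

approved, changed := ApproveRoutesWithPolicy(pm, nv, current, announced)
if changed {
	// approved still contains 10.0.0.0/24 plus any newly auto-approved
	// routes; previously approved routes are never removed here.
	node.ApprovedRoutes = approved
}
```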

View File

@@ -1,339 +0,0 @@
package policy
import (
"fmt"
"net/netip"
"testing"
policyv2 "github.com/juanfont/headscale/hscontrol/policy/v2"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/types/key"
"tailscale.com/types/ptr"
"tailscale.com/types/views"
)
func TestApproveRoutesWithPolicy_NeverRemovesApprovedRoutes(t *testing.T) {
user1 := types.User{
Model: gorm.Model{ID: 1},
Name: "testuser@",
}
user2 := types.User{
Model: gorm.Model{ID: 2},
Name: "otheruser@",
}
users := []types.User{user1, user2}
node1 := &types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "test-node",
UserID: user1.ID,
User: user1,
RegisterMethod: util.RegisterMethodAuthKey,
IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")),
ForcedTags: []string{"tag:test"},
}
node2 := &types.Node{
ID: 2,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "other-node",
UserID: user2.ID,
User: user2,
RegisterMethod: util.RegisterMethodAuthKey,
IPv4: ptr.To(netip.MustParseAddr("100.64.0.2")),
}
// Create a policy that auto-approves specific routes
policyJSON := `{
"groups": {
"group:test": ["testuser@"]
},
"tagOwners": {
"tag:test": ["testuser@"]
},
"acls": [
{
"action": "accept",
"src": ["*"],
"dst": ["*:*"]
}
],
"autoApprovers": {
"routes": {
"10.0.0.0/8": ["testuser@", "tag:test"],
"10.1.0.0/24": ["testuser@"],
"10.2.0.0/24": ["testuser@"],
"192.168.0.0/24": ["tag:test"]
}
}
}`
pm, err := policyv2.NewPolicyManager([]byte(policyJSON), users, views.SliceOf([]types.NodeView{node1.View(), node2.View()}))
assert.NoError(t, err)
tests := []struct {
name string
node *types.Node
currentApproved []netip.Prefix
announcedRoutes []netip.Prefix
wantApproved []netip.Prefix
wantChanged bool
description string
}{
{
name: "previously_approved_route_no_longer_advertised_should_remain",
node: node1,
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Only this one is still advertised
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"), // Should still be here!
},
wantChanged: false,
description: "Previously approved routes should never be removed even when no longer advertised",
},
{
name: "add_new_auto_approved_route_keeps_old_approved",
node: node1,
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.5.0.0/24"), // This was manually approved
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.1.0.0/24"), // New route that should be auto-approved
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.1.0.0/24"), // New auto-approved route (subset of 10.0.0.0/8)
netip.MustParsePrefix("10.5.0.0/24"), // Old approved route kept
},
wantChanged: true,
description: "New auto-approved routes should be added while keeping old approved routes",
},
{
name: "no_announced_routes_keeps_all_approved",
node: node1,
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.16.0.0/16"),
},
announcedRoutes: []netip.Prefix{}, // No routes announced
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("172.16.0.0/16"),
netip.MustParsePrefix("192.168.0.0/24"),
},
wantChanged: false,
description: "All approved routes should remain when no routes are announced",
},
{
name: "no_changes_when_announced_equals_approved",
node: node1,
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: false,
description: "No changes should occur when announced routes match approved routes",
},
{
name: "auto_approve_multiple_new_routes",
node: node1,
currentApproved: []netip.Prefix{
netip.MustParsePrefix("172.16.0.0/24"), // This was manually approved
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.2.0.0/24"), // Should be auto-approved (subset of 10.0.0.0/8)
netip.MustParsePrefix("192.168.0.0/24"), // Should be auto-approved for tag:test
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.2.0.0/24"), // New auto-approved
netip.MustParsePrefix("172.16.0.0/24"), // Original kept
netip.MustParsePrefix("192.168.0.0/24"), // New auto-approved
},
wantChanged: true,
description: "Multiple new routes should be auto-approved while keeping existing approved routes",
},
{
name: "node_without_permission_no_auto_approval",
node: node2, // Different node without the tag
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"), // This requires tag:test
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Only the original approved route
},
wantChanged: false,
description: "Routes should not be auto-approved for nodes without proper permissions",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
gotApproved, gotChanged := ApproveRoutesWithPolicy(pm, tt.node.View(), tt.currentApproved, tt.announcedRoutes)
assert.Equal(t, tt.wantChanged, gotChanged, "changed flag mismatch: %s", tt.description)
// Sort for comparison since ApproveRoutesWithPolicy sorts the results
tsaddr.SortPrefixes(tt.wantApproved)
assert.Equal(t, tt.wantApproved, gotApproved, "approved routes mismatch: %s", tt.description)
// Verify that all previously approved routes are still present
for _, prevRoute := range tt.currentApproved {
assert.Contains(t, gotApproved, prevRoute,
"previously approved route %s was removed - this should never happen", prevRoute)
}
})
}
}
func TestApproveRoutesWithPolicy_NilAndEmptyCases(t *testing.T) {
// Create a basic policy for edge case testing
aclPolicy := `
{
"acls": [
{"action": "accept", "src": ["*"], "dst": ["*:*"]},
],
"autoApprovers": {
"routes": {
"10.1.0.0/24": ["test@"],
},
},
}`
pmfs := PolicyManagerFuncsForTest([]byte(aclPolicy))
tests := []struct {
name string
currentApproved []netip.Prefix
announcedRoutes []netip.Prefix
wantApproved []netip.Prefix
wantChanged bool
}{
{
name: "nil_policy_manager",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: false,
},
{
name: "nil_current_approved",
currentApproved: nil,
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.1.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.1.0.0/24"),
},
wantChanged: true,
},
{
name: "nil_announced_routes",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: nil,
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: false,
},
{
name: "duplicate_approved_routes",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("10.0.0.0/24"), // Duplicate
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.1.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("10.1.0.0/24"),
},
wantChanged: true,
},
{
name: "empty_slices",
currentApproved: []netip.Prefix{},
announcedRoutes: []netip.Prefix{},
wantApproved: []netip.Prefix{},
wantChanged: false,
},
}
for _, tt := range tests {
for i, pmf := range pmfs {
t.Run(fmt.Sprintf("%s-policy-index%d", tt.name, i), func(t *testing.T) {
// Create test user
user := types.User{
Model: gorm.Model{ID: 1},
Name: "test",
}
users := []types.User{user}
// Create test node
node := types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: user.ID,
User: user,
RegisterMethod: util.RegisterMethodAuthKey,
IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")),
ApprovedRoutes: tt.currentApproved,
}
nodes := types.Nodes{&node}
// Create policy manager or use nil if specified
var pm PolicyManager
var err error
if tt.name != "nil_policy_manager" {
pm, err = pmf(users, nodes.ViewSlice())
assert.NoError(t, err)
} else {
pm = nil
}
gotApproved, gotChanged := ApproveRoutesWithPolicy(pm, node.View(), tt.currentApproved, tt.announcedRoutes)
assert.Equal(t, tt.wantChanged, gotChanged, "changed flag mismatch")
// Handle nil vs empty slice comparison
if tt.wantApproved == nil {
assert.Nil(t, gotApproved, "expected nil approved routes")
} else {
tsaddr.SortPrefixes(tt.wantApproved)
assert.Equal(t, tt.wantApproved, gotApproved, "approved routes mismatch")
}
})
}
}
}

View File

@@ -1,361 +0,0 @@
package policy
import (
"fmt"
"net/netip"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
"tailscale.com/tailcfg"
"tailscale.com/types/key"
"tailscale.com/types/ptr"
)
func TestApproveRoutesWithPolicy_NeverRemovesRoutes(t *testing.T) {
// Test policy that allows specific routes to be auto-approved
aclPolicy := `
{
"groups": {
"group:admins": ["test@"],
},
"acls": [
{"action": "accept", "src": ["*"], "dst": ["*:*"]},
],
"autoApprovers": {
"routes": {
"10.0.0.0/24": ["test@"],
"192.168.0.0/24": ["group:admins"],
"172.16.0.0/16": ["tag:approved"],
},
},
"tagOwners": {
"tag:approved": ["test@"],
},
}`
tests := []struct {
name string
currentApproved []netip.Prefix
announcedRoutes []netip.Prefix
nodeHostname string
nodeUser string
nodeTags []string
wantApproved []netip.Prefix
wantChanged bool
wantRemovedRoutes []netip.Prefix // Routes that should NOT be in the result
}{
{
name: "previously_approved_route_no_longer_advertised_remains",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"), // Only this one still advertised
},
nodeUser: "test",
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Should remain!
netip.MustParsePrefix("192.168.0.0/24"),
},
wantChanged: false,
wantRemovedRoutes: []netip.Prefix{}, // Nothing should be removed
},
{
name: "add_new_auto_approved_route_keeps_existing",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Still advertised
netip.MustParsePrefix("192.168.0.0/24"), // New route
},
nodeUser: "test",
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"), // Auto-approved via group
},
wantChanged: true,
},
{
name: "no_announced_routes_keeps_all_approved",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"),
netip.MustParsePrefix("172.16.0.0/16"),
},
announcedRoutes: []netip.Prefix{}, // No routes announced anymore
nodeUser: "test",
wantApproved: []netip.Prefix{
netip.MustParsePrefix("172.16.0.0/16"),
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("192.168.0.0/24"),
},
wantChanged: false,
},
{
name: "manually_approved_route_not_in_policy_remains",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("203.0.113.0/24"), // Not in auto-approvers
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Can be auto-approved
},
nodeUser: "test",
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // New auto-approved
netip.MustParsePrefix("203.0.113.0/24"), // Manual approval preserved
},
wantChanged: true,
},
{
name: "tagged_node_gets_tag_approved_routes",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("172.16.0.0/16"), // Tag-approved route
},
nodeUser: "test",
nodeTags: []string{"tag:approved"},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("172.16.0.0/16"), // New tag-approved
netip.MustParsePrefix("10.0.0.0/24"), // Previous approval preserved
},
wantChanged: true,
},
{
name: "complex_scenario_multiple_changes",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Will not be advertised
netip.MustParsePrefix("203.0.113.0/24"), // Manual, not advertised
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"), // New, auto-approvable
netip.MustParsePrefix("172.16.0.0/16"), // New, not approvable (no tag)
netip.MustParsePrefix("198.51.100.0/24"), // New, not in policy
},
nodeUser: "test",
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"), // Kept despite not advertised
netip.MustParsePrefix("192.168.0.0/24"), // New auto-approved
netip.MustParsePrefix("203.0.113.0/24"), // Kept despite not advertised
},
wantChanged: true,
},
}
pmfs := PolicyManagerFuncsForTest([]byte(aclPolicy))
for _, tt := range tests {
for i, pmf := range pmfs {
t.Run(fmt.Sprintf("%s-policy-index%d", tt.name, i), func(t *testing.T) {
// Create test user
user := types.User{
Model: gorm.Model{ID: 1},
Name: tt.nodeUser,
}
users := []types.User{user}
// Create test node
node := types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: tt.nodeHostname,
UserID: user.ID,
User: user,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tt.announcedRoutes,
},
IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")),
ApprovedRoutes: tt.currentApproved,
ForcedTags: tt.nodeTags,
}
nodes := types.Nodes{&node}
// Create policy manager
pm, err := pmf(users, nodes.ViewSlice())
require.NoError(t, err)
require.NotNil(t, pm)
// Test ApproveRoutesWithPolicy
gotApproved, gotChanged := ApproveRoutesWithPolicy(
pm,
node.View(),
tt.currentApproved,
tt.announcedRoutes,
)
// Check change flag
assert.Equal(t, tt.wantChanged, gotChanged, "change flag mismatch")
// Check approved routes match expected
if diff := cmp.Diff(tt.wantApproved, gotApproved, util.Comparers...); diff != "" {
t.Logf("Want: %v", tt.wantApproved)
t.Logf("Got: %v", gotApproved)
t.Errorf("unexpected approved routes (-want +got):\n%s", diff)
}
// Verify all previously approved routes are still present
for _, prevRoute := range tt.currentApproved {
assert.Contains(t, gotApproved, prevRoute,
"previously approved route %s was removed - this should NEVER happen", prevRoute)
}
// Verify no routes were incorrectly removed
for _, removedRoute := range tt.wantRemovedRoutes {
assert.NotContains(t, gotApproved, removedRoute,
"route %s should have been removed but wasn't", removedRoute)
}
})
}
}
}
func TestApproveRoutesWithPolicy_EdgeCases(t *testing.T) {
aclPolicy := `
{
"acls": [
{"action": "accept", "src": ["*"], "dst": ["*:*"]},
],
"autoApprovers": {
"routes": {
"10.0.0.0/8": ["test@"],
},
},
}`
tests := []struct {
name string
currentApproved []netip.Prefix
announcedRoutes []netip.Prefix
wantApproved []netip.Prefix
wantChanged bool
}{
{
name: "nil_current_approved",
currentApproved: nil,
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: true,
},
{
name: "empty_current_approved",
currentApproved: []netip.Prefix{},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: true,
},
{
name: "duplicate_routes_handled",
currentApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
netip.MustParsePrefix("10.0.0.0/24"), // Duplicate
},
announcedRoutes: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantApproved: []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
},
wantChanged: true, // Duplicates are removed, so it's a change
},
}
pmfs := PolicyManagerFuncsForTest([]byte(aclPolicy))
for _, tt := range tests {
for i, pmf := range pmfs {
t.Run(fmt.Sprintf("%s-policy-index%d", tt.name, i), func(t *testing.T) {
// Create test user
user := types.User{
Model: gorm.Model{ID: 1},
Name: "test",
}
users := []types.User{user}
node := types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: user.ID,
User: user,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tt.announcedRoutes,
},
IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")),
ApprovedRoutes: tt.currentApproved,
}
nodes := types.Nodes{&node}
pm, err := pmf(users, nodes.ViewSlice())
require.NoError(t, err)
gotApproved, gotChanged := ApproveRoutesWithPolicy(
pm,
node.View(),
tt.currentApproved,
tt.announcedRoutes,
)
assert.Equal(t, tt.wantChanged, gotChanged)
if diff := cmp.Diff(tt.wantApproved, gotApproved, util.Comparers...); diff != "" {
t.Errorf("unexpected approved routes (-want +got):\n%s", diff)
}
})
}
}
}
func TestApproveRoutesWithPolicy_NilPolicyManagerCase(t *testing.T) {
user := types.User{
Model: gorm.Model{ID: 1},
Name: "test",
}
currentApproved := []netip.Prefix{
netip.MustParsePrefix("10.0.0.0/24"),
}
announcedRoutes := []netip.Prefix{
netip.MustParsePrefix("192.168.0.0/24"),
}
node := types.Node{
ID: 1,
MachineKey: key.NewMachine().Public(),
NodeKey: key.NewNode().Public(),
Hostname: "testnode",
UserID: user.ID,
User: user,
RegisterMethod: util.RegisterMethodAuthKey,
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: announcedRoutes,
},
IPv4: ptr.To(netip.MustParseAddr("100.64.0.1")),
ApprovedRoutes: currentApproved,
}
// With nil policy manager, should return current approved unchanged
gotApproved, gotChanged := ApproveRoutesWithPolicy(nil, node.View(), currentApproved, announcedRoutes)
assert.False(t, gotChanged)
assert.Equal(t, currentApproved, gotApproved)
}

File diff suppressed because it is too large

View File

@@ -1,71 +0,0 @@
package policyutil
import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"tailscale.com/tailcfg"
)
// ReduceFilterRules takes a node and a set of global filter rules and removes all rules
// and destinations that are not relevant to that particular node.
//
// IMPORTANT: This function is designed for global filters only. Per-node filters
// (from autogroup:self policies) are already node-specific and should not be passed
// to this function. Use PolicyManager.FilterForNode() instead, which handles both cases.
func ReduceFilterRules(node types.NodeView, rules []tailcfg.FilterRule) []tailcfg.FilterRule {
ret := []tailcfg.FilterRule{}
for _, rule := range rules {
// record if the rule is actually relevant for the given node.
var dests []tailcfg.NetPortRange
DEST_LOOP:
for _, dest := range rule.DstPorts {
expanded, err := util.ParseIPSet(dest.IP, nil)
// Fail closed, if we can't parse it, then we should not allow
// access.
if err != nil {
continue DEST_LOOP
}
if node.InIPSet(expanded) {
dests = append(dests, dest)
continue DEST_LOOP
}
// If the node exposes routes, ensure they are not removed
// when the filters are reduced.
if node.Hostinfo().Valid() {
routableIPs := node.Hostinfo().RoutableIPs()
if routableIPs.Len() > 0 {
for _, routableIP := range routableIPs.All() {
if expanded.OverlapsPrefix(routableIP) {
dests = append(dests, dest)
continue DEST_LOOP
}
}
}
}
// Also check approved subnet routes - nodes should have access
// to subnets they're approved to route traffic for.
subnetRoutes := node.SubnetRoutes()
for _, subnetRoute := range subnetRoutes {
if expanded.OverlapsPrefix(subnetRoute) {
dests = append(dests, dest)
continue DEST_LOOP
}
}
}
if len(dests) > 0 {
ret = append(ret, tailcfg.FilterRule{
SrcIPs: rule.SrcIPs,
DstPorts: dests,
IPProto: rule.IPProto,
})
}
}
return ret
}
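A short usage sketch of the flow the comment above describes: the global, tailnet-wide filter is computed once and then reduced per node. It assumes a PolicyManager pm and a types.NodeView node already exist; the helper names beyond that are only what is shown in this diff.

```go
// Sketch: pm and node are assumed to exist already.
global, _ := pm.Filter()                              // tailnet-wide rules and matchers
perNode := policyutil.ReduceFilterRules(node, global) // only what this node needs

// perNode keeps rules whose destinations overlap the node's own IPs,
// its advertised RoutableIPs, or its approved subnet routes; everything
// else is dropped before being sent to the client.
_ = perNode
```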

View File

@@ -1,841 +0,0 @@
package policyutil_test
import (
"encoding/json"
"fmt"
"net/netip"
"testing"
"github.com/google/go-cmp/cmp"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/policy/policyutil"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
"tailscale.com/util/must"
)
var ap = func(ipStr string) *netip.Addr {
ip := netip.MustParseAddr(ipStr)
return &ip
}
var p = func(prefStr string) netip.Prefix {
ip := netip.MustParsePrefix(prefStr)
return ip
}
// hsExitNodeDestForTest is the list of destination IP ranges that are allowed when
// we use headscale "autogroup:internet".
var hsExitNodeDestForTest = []tailcfg.NetPortRange{
{IP: "0.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "8.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "11.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "12.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "16.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "32.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "64.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "96.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "100.0.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "100.128.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "101.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "102.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "104.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "112.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "128.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "160.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "168.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "169.0.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "169.128.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "169.192.0.0/11", Ports: tailcfg.PortRangeAny},
{IP: "169.224.0.0/12", Ports: tailcfg.PortRangeAny},
{IP: "169.240.0.0/13", Ports: tailcfg.PortRangeAny},
{IP: "169.248.0.0/14", Ports: tailcfg.PortRangeAny},
{IP: "169.252.0.0/15", Ports: tailcfg.PortRangeAny},
{IP: "169.255.0.0/16", Ports: tailcfg.PortRangeAny},
{IP: "170.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "172.0.0.0/12", Ports: tailcfg.PortRangeAny},
{IP: "172.32.0.0/11", Ports: tailcfg.PortRangeAny},
{IP: "172.64.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "172.128.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "173.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "174.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "176.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "192.0.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "192.128.0.0/11", Ports: tailcfg.PortRangeAny},
{IP: "192.160.0.0/13", Ports: tailcfg.PortRangeAny},
{IP: "192.169.0.0/16", Ports: tailcfg.PortRangeAny},
{IP: "192.170.0.0/15", Ports: tailcfg.PortRangeAny},
{IP: "192.172.0.0/14", Ports: tailcfg.PortRangeAny},
{IP: "192.176.0.0/12", Ports: tailcfg.PortRangeAny},
{IP: "192.192.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "193.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "194.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "196.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "200.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "208.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "224.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "2000::/3", Ports: tailcfg.PortRangeAny},
}
func TestTheInternet(t *testing.T) {
internetSet := util.TheInternet()
internetPrefs := internetSet.Prefixes()
for i := range internetPrefs {
if internetPrefs[i].String() != hsExitNodeDestForTest[i].IP {
t.Errorf(
"prefix from internet set %q != hsExit list %q",
internetPrefs[i].String(),
hsExitNodeDestForTest[i].IP,
)
}
}
if len(internetPrefs) != len(hsExitNodeDestForTest) {
t.Fatalf(
"expected same length of prefixes, internet: %d, hsExit: %d",
len(internetPrefs),
len(hsExitNodeDestForTest),
)
}
}
func TestReduceFilterRules(t *testing.T) {
users := types.Users{
types.User{Model: gorm.Model{ID: 1}, Name: "mickael"},
types.User{Model: gorm.Model{ID: 2}, Name: "user1"},
types.User{Model: gorm.Model{ID: 3}, Name: "user2"},
types.User{Model: gorm.Model{ID: 4}, Name: "user100"},
types.User{Model: gorm.Model{ID: 5}, Name: "user3"},
}
tests := []struct {
name string
node *types.Node
peers types.Nodes
pol string
want []tailcfg.FilterRule
}{
{
name: "host1-can-reach-host2-no-rules",
pol: `
{
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"100.64.0.1"
],
"dst": [
"100.64.0.2:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0:ab12:4843:2222:6273:2221"),
User: users[0],
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0:ab12:4843:2222:6273:2222"),
User: users[0],
},
},
want: []tailcfg.FilterRule{},
},
{
name: "1604-subnet-routers-are-preserved",
pol: `
{
"groups": {
"group:admins": [
"user1@"
]
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:admins"
],
"dst": [
"group:admins:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:admins"
],
"dst": [
"10.33.0.0/16:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{
netip.MustParsePrefix("10.33.0.0/16"),
},
},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.1/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::1/128",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
{
SrcIPs: []string{
"100.64.0.1/32",
"100.64.0.2/32",
"fd7a:115c:a1e0::1/128",
"fd7a:115c:a1e0::2/128",
},
DstPorts: []tailcfg.NetPortRange{
{
IP: "10.33.0.0/16",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
},
},
{
name: "1786-reducing-breaks-exit-nodes-the-client",
pol: `
{
"groups": {
"group:team": [
"user3@",
"user2@",
"user1@"
]
},
"hosts": {
"internal": "100.64.0.100/32"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"internal:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"autogroup:internet:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[2],
},
// "internal" exit node
&types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tsaddr.ExitRoutes(),
},
},
},
want: []tailcfg.FilterRule{},
},
{
name: "1786-reducing-breaks-exit-nodes-the-exit",
pol: `
{
"groups": {
"group:team": [
"user3@",
"user2@",
"user1@"
]
},
"hosts": {
"internal": "100.64.0.100/32"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"internal:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"autogroup:internet:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tsaddr.ExitRoutes(),
},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[2],
},
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::100/128",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: hsExitNodeDestForTest,
IPProto: []int{6, 17},
},
},
},
{
name: "1786-reducing-breaks-exit-nodes-the-example-from-issue",
pol: `
{
"groups": {
"group:team": [
"user3@",
"user2@",
"user1@"
]
},
"hosts": {
"internal": "100.64.0.100/32"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"internal:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"0.0.0.0/5:*",
"8.0.0.0/7:*",
"11.0.0.0/8:*",
"12.0.0.0/6:*",
"16.0.0.0/4:*",
"32.0.0.0/3:*",
"64.0.0.0/2:*",
"128.0.0.0/3:*",
"160.0.0.0/5:*",
"168.0.0.0/6:*",
"172.0.0.0/12:*",
"172.32.0.0/11:*",
"172.64.0.0/10:*",
"172.128.0.0/9:*",
"173.0.0.0/8:*",
"174.0.0.0/7:*",
"176.0.0.0/4:*",
"192.0.0.0/9:*",
"192.128.0.0/11:*",
"192.160.0.0/13:*",
"192.169.0.0/16:*",
"192.170.0.0/15:*",
"192.172.0.0/14:*",
"192.176.0.0/12:*",
"192.192.0.0/10:*",
"193.0.0.0/8:*",
"194.0.0.0/7:*",
"196.0.0.0/6:*",
"200.0.0.0/5:*",
"208.0.0.0/4:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: tsaddr.ExitRoutes(),
},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[2],
},
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::100/128",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{IP: "0.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "8.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "11.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "12.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "16.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "32.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "64.0.0.0/2", Ports: tailcfg.PortRangeAny},
{IP: "128.0.0.0/3", Ports: tailcfg.PortRangeAny},
{IP: "160.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "168.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "172.0.0.0/12", Ports: tailcfg.PortRangeAny},
{IP: "172.32.0.0/11", Ports: tailcfg.PortRangeAny},
{IP: "172.64.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "172.128.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "173.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "174.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "176.0.0.0/4", Ports: tailcfg.PortRangeAny},
{IP: "192.0.0.0/9", Ports: tailcfg.PortRangeAny},
{IP: "192.128.0.0/11", Ports: tailcfg.PortRangeAny},
{IP: "192.160.0.0/13", Ports: tailcfg.PortRangeAny},
{IP: "192.169.0.0/16", Ports: tailcfg.PortRangeAny},
{IP: "192.170.0.0/15", Ports: tailcfg.PortRangeAny},
{IP: "192.172.0.0/14", Ports: tailcfg.PortRangeAny},
{IP: "192.176.0.0/12", Ports: tailcfg.PortRangeAny},
{IP: "192.192.0.0/10", Ports: tailcfg.PortRangeAny},
{IP: "193.0.0.0/8", Ports: tailcfg.PortRangeAny},
{IP: "194.0.0.0/7", Ports: tailcfg.PortRangeAny},
{IP: "196.0.0.0/6", Ports: tailcfg.PortRangeAny},
{IP: "200.0.0.0/5", Ports: tailcfg.PortRangeAny},
{IP: "208.0.0.0/4", Ports: tailcfg.PortRangeAny},
},
IPProto: []int{6, 17},
},
},
},
{
name: "1786-reducing-breaks-exit-nodes-app-connector-like",
pol: `
{
"groups": {
"group:team": [
"user3@",
"user2@",
"user1@"
]
},
"hosts": {
"internal": "100.64.0.100/32"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"internal:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"8.0.0.0/8:*",
"16.0.0.0/8:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/16"), netip.MustParsePrefix("16.0.0.0/16")},
},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[2],
},
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::100/128",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "8.0.0.0/8",
Ports: tailcfg.PortRangeAny,
},
{
IP: "16.0.0.0/8",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
},
},
{
name: "1786-reducing-breaks-exit-nodes-app-connector-like2",
pol: `
{
"groups": {
"group:team": [
"user3@",
"user2@",
"user1@"
]
},
"hosts": {
"internal": "100.64.0.100/32"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"internal:*"
]
},
{
"action": "accept",
"proto": "",
"src": [
"group:team"
],
"dst": [
"8.0.0.0/16:*",
"16.0.0.0/16:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("8.0.0.0/8"), netip.MustParsePrefix("16.0.0.0/8")},
},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[2],
},
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::100/128",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
{
SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "8.0.0.0/16",
Ports: tailcfg.PortRangeAny,
},
{
IP: "16.0.0.0/16",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
},
},
{
name: "1817-reduce-breaks-32-mask",
pol: `
{
"tagOwners": {
"tag:access-servers": ["user100@"],
},
"groups": {
"group:access": [
"user1@"
]
},
"hosts": {
"dns1": "172.16.0.21/32",
"vlan1": "172.16.0.0/24"
},
"acls": [
{
"action": "accept",
"proto": "",
"src": [
"group:access"
],
"dst": [
"tag:access-servers:*",
"dns1:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.100"),
IPv6: ap("fd7a:115c:a1e0::100"),
User: users[3],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("172.16.0.0/24")},
},
ForcedTags: []string{"tag:access-servers"},
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
},
},
want: []tailcfg.FilterRule{
{
SrcIPs: []string{"100.64.0.1/32", "fd7a:115c:a1e0::1/128"},
DstPorts: []tailcfg.NetPortRange{
{
IP: "100.64.0.100/32",
Ports: tailcfg.PortRangeAny,
},
{
IP: "fd7a:115c:a1e0::100/128",
Ports: tailcfg.PortRangeAny,
},
{
IP: "172.16.0.21/32",
Ports: tailcfg.PortRangeAny,
},
},
IPProto: []int{6, 17},
},
},
},
{
name: "2365-only-route-policy",
pol: `
{
"hosts": {
"router": "100.64.0.1/32",
"node": "100.64.0.2/32"
},
"acls": [
{
"action": "accept",
"src": [
"*"
],
"dst": [
"router:8000"
]
},
{
"action": "accept",
"src": [
"node"
],
"dst": [
"172.26.0.0/16:*"
]
}
],
}
`,
node: &types.Node{
IPv4: ap("100.64.0.2"),
IPv6: ap("fd7a:115c:a1e0::2"),
User: users[3],
},
peers: types.Nodes{
&types.Node{
IPv4: ap("100.64.0.1"),
IPv6: ap("fd7a:115c:a1e0::1"),
User: users[1],
Hostinfo: &tailcfg.Hostinfo{
RoutableIPs: []netip.Prefix{p("172.16.0.0/24"), p("10.10.11.0/24"), p("10.10.12.0/24")},
},
ApprovedRoutes: []netip.Prefix{p("172.16.0.0/24"), p("10.10.11.0/24"), p("10.10.12.0/24")},
},
},
want: []tailcfg.FilterRule{},
},
}
for _, tt := range tests {
for idx, pmf := range policy.PolicyManagerFuncsForTest([]byte(tt.pol)) {
t.Run(fmt.Sprintf("%s-index%d", tt.name, idx), func(t *testing.T) {
var pm policy.PolicyManager
var err error
pm, err = pmf(users, append(tt.peers, tt.node).ViewSlice())
require.NoError(t, err)
got, _ := pm.Filter()
t.Logf("full filter:\n%s", must.Get(json.MarshalIndent(got, "", " ")))
got = policyutil.ReduceFilterRules(tt.node.View(), got)
if diff := cmp.Diff(tt.want, got); diff != "" {
log.Trace().Interface("got", got).Msg("result")
t.Errorf("TestReduceFilterRules() unexpected result (-want +got):\n%s", diff)
}
})
}
}
}

View File

@@ -771,29 +771,6 @@ func TestNodeCanApproveRoute(t *testing.T) {
policy: `{"acls":[{"action":"accept","src":["*"],"dst":["*:*"]}]}`,
canApprove: false,
},
{
name: "policy-without-autoApprovers-section",
node: normalNode,
route: p("10.33.0.0/16"),
policy: `{
"groups": {
"group:admin": ["user1@"]
},
"acls": [
{
"action": "accept",
"src": ["group:admin"],
"dst": ["group:admin:*"]
},
{
"action": "accept",
"src": ["group:admin"],
"dst": ["10.33.0.0/16:*"]
}
]
}`,
canApprove: false,
},
}
for _, tt := range tests {

View File

@@ -21,37 +21,42 @@ func (pol *Policy) compileFilterRules(
users types.Users,
nodes views.Slice[types.NodeView],
) ([]tailcfg.FilterRule, error) {
if pol == nil || pol.ACLs == nil {
if pol == nil {
return tailcfg.FilterAllowAll, nil
}
var rules []tailcfg.FilterRule
for _, acl := range pol.ACLs {
if acl.Action != ActionAccept {
if acl.Action != "accept" {
return nil, ErrInvalidAction
}
srcIPs, err := acl.Sources.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Caller().Err(err).Msgf("resolving source ips")
log.Trace().Err(err).Msgf("resolving source ips")
}
if srcIPs == nil || len(srcIPs.Prefixes()) == 0 {
continue
}
protocols, _ := acl.Protocol.parseProtocol()
// TODO(kradalby): integrate type into schema
// TODO(kradalby): figure out the _ is wildcard stuff
protocols, _, err := parseProtocol(acl.Protocol)
if err != nil {
return nil, fmt.Errorf("parsing policy, protocol err: %w ", err)
}
var destPorts []tailcfg.NetPortRange
for _, dest := range acl.Destinations {
ips, err := dest.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Caller().Err(err).Msgf("resolving destination ips")
log.Trace().Err(err).Msgf("resolving destination ips")
}
if ips == nil {
log.Debug().Caller().Msgf("destination resolved to nil ips: %v", dest)
log.Debug().Msgf("destination resolved to nil ips: %v", dest)
continue
}
@@ -82,203 +87,13 @@ func (pol *Policy) compileFilterRules(
return rules, nil
}
// compileFilterRulesForNode compiles filter rules for a specific node.
func (pol *Policy) compileFilterRulesForNode(
users types.Users,
node types.NodeView,
nodes views.Slice[types.NodeView],
) ([]tailcfg.FilterRule, error) {
if pol == nil {
return tailcfg.FilterAllowAll, nil
}
var rules []tailcfg.FilterRule
for _, acl := range pol.ACLs {
if acl.Action != ActionAccept {
return nil, ErrInvalidAction
}
aclRules, err := pol.compileACLWithAutogroupSelf(acl, users, node, nodes)
if err != nil {
log.Trace().Err(err).Msgf("compiling ACL")
continue
}
for _, rule := range aclRules {
if rule != nil {
rules = append(rules, *rule)
}
}
}
return rules, nil
}
// compileACLWithAutogroupSelf compiles a single ACL rule, handling
// autogroup:self per-node while supporting all other alias types normally.
// It returns a slice of filter rules because when an ACL has both autogroup:self
// and other destinations, they need to be split into separate rules with different
// source filtering logic.
func (pol *Policy) compileACLWithAutogroupSelf(
acl ACL,
users types.Users,
node types.NodeView,
nodes views.Slice[types.NodeView],
) ([]*tailcfg.FilterRule, error) {
var autogroupSelfDests []AliasWithPorts
var otherDests []AliasWithPorts
for _, dest := range acl.Destinations {
if ag, ok := dest.Alias.(*AutoGroup); ok && ag.Is(AutoGroupSelf) {
autogroupSelfDests = append(autogroupSelfDests, dest)
} else {
otherDests = append(otherDests, dest)
}
}
protocols, _ := acl.Protocol.parseProtocol()
var rules []*tailcfg.FilterRule
var resolvedSrcIPs []*netipx.IPSet
for _, src := range acl.Sources {
if ag, ok := src.(*AutoGroup); ok && ag.Is(AutoGroupSelf) {
return nil, fmt.Errorf("autogroup:self cannot be used in sources")
}
ips, err := src.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Err(err).Msgf("resolving source ips")
continue
}
if ips != nil {
resolvedSrcIPs = append(resolvedSrcIPs, ips)
}
}
if len(resolvedSrcIPs) == 0 {
return rules, nil
}
// Handle autogroup:self destinations (if any)
if len(autogroupSelfDests) > 0 {
// Pre-filter to same-user untagged devices once - reuse for both sources and destinations
sameUserNodes := make([]types.NodeView, 0)
for _, n := range nodes.All() {
if n.User().ID == node.User().ID && !n.IsTagged() {
sameUserNodes = append(sameUserNodes, n)
}
}
if len(sameUserNodes) > 0 {
// Filter sources to only same-user untagged devices
var srcIPs netipx.IPSetBuilder
for _, ips := range resolvedSrcIPs {
for _, n := range sameUserNodes {
// Check if any of this node's IPs are in the source set
for _, nodeIP := range n.IPs() {
if ips.Contains(nodeIP) {
n.AppendToIPSet(&srcIPs)
break
}
}
}
}
srcSet, err := srcIPs.IPSet()
if err != nil {
return nil, err
}
if srcSet != nil && len(srcSet.Prefixes()) > 0 {
var destPorts []tailcfg.NetPortRange
for _, dest := range autogroupSelfDests {
for _, n := range sameUserNodes {
for _, port := range dest.Ports {
for _, ip := range n.IPs() {
destPorts = append(destPorts, tailcfg.NetPortRange{
IP: ip.String(),
Ports: port,
})
}
}
}
}
if len(destPorts) > 0 {
rules = append(rules, &tailcfg.FilterRule{
SrcIPs: ipSetToPrefixStringList(srcSet),
DstPorts: destPorts,
IPProto: protocols,
})
}
}
}
}
if len(otherDests) > 0 {
var srcIPs netipx.IPSetBuilder
for _, ips := range resolvedSrcIPs {
srcIPs.AddSet(ips)
}
srcSet, err := srcIPs.IPSet()
if err != nil {
return nil, err
}
if srcSet != nil && len(srcSet.Prefixes()) > 0 {
var destPorts []tailcfg.NetPortRange
for _, dest := range otherDests {
ips, err := dest.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Err(err).Msgf("resolving destination ips")
continue
}
if ips == nil {
log.Debug().Msgf("destination resolved to nil ips: %v", dest)
continue
}
prefixes := ips.Prefixes()
for _, pref := range prefixes {
for _, port := range dest.Ports {
pr := tailcfg.NetPortRange{
IP: pref.String(),
Ports: port,
}
destPorts = append(destPorts, pr)
}
}
}
if len(destPorts) > 0 {
rules = append(rules, &tailcfg.FilterRule{
SrcIPs: ipSetToPrefixStringList(srcSet),
DstPorts: destPorts,
IPProto: protocols,
})
}
}
}
return rules, nil
}
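
A hedged illustration of why the destination split above exists (the group and user names are hypothetical): an ACL that mixes an autogroup:self destination with a plain subnet destination has to become two filter rules, because only the autogroup:self half restricts sources to untagged devices belonging to the target node's user, while the subnet half keeps every resolved source.

```go
package main

import "fmt"

// Hypothetical policy: one ACL mixes an autogroup:self destination with a
// plain subnet destination. Per the compilation above it is split into two
// filter rules with different source handling.
const pol = `
{
  "groups": {"group:eng": ["user1@", "user2@"]},
  "acls": [
    {
      "action": "accept",
      "src": ["group:eng"],
      "dst": ["autogroup:self:*", "10.0.0.0/8:*"]
    }
  ]
}
`

func main() { fmt.Println(pol) }
```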
func sshAction(accept bool, duration time.Duration) tailcfg.SSHAction {
return tailcfg.SSHAction{
Reject: !accept,
Accept: accept,
SessionDuration: duration,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
AllowRemotePortForwarding: true,
Reject: !accept,
Accept: accept,
SessionDuration: duration,
AllowAgentForwarding: true,
AllowLocalPortForwarding: true,
}
}
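
Roughly what the helper above produces for an "accept" rule and for a "check" rule with a 24-hour period, sketched directly with tailcfg.SSHAction; one side of this diff also sets AllowRemotePortForwarding, and Reject is simply the negation of Accept.

```go
package main

import (
	"fmt"
	"time"

	"tailscale.com/tailcfg"
)

func main() {
	// "accept": no session duration, re-verification never forced.
	accept := tailcfg.SSHAction{
		Accept:                   true,
		AllowAgentForwarding:     true,
		AllowLocalPortForwarding: true,
	}
	// "check": same permissions, but the session must be re-verified
	// after the rule's CheckPeriod (24h here).
	check := tailcfg.SSHAction{
		Accept:                   true,
		SessionDuration:          24 * time.Hour,
		AllowAgentForwarding:     true,
		AllowLocalPortForwarding: true,
	}
	fmt.Printf("accept: %+v\ncheck: %+v\n", accept, check)
}
```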
@@ -291,161 +106,61 @@ func (pol *Policy) compileSSHPolicy(
return nil, nil
}
log.Trace().Caller().Msgf("compiling SSH policy for node %q", node.Hostname())
log.Trace().Msgf("compiling SSH policy for node %q", node.Hostname())
var rules []*tailcfg.SSHRule
for index, rule := range pol.SSHs {
// Separate destinations into autogroup:self and others
// This is needed because autogroup:self requires filtering sources to same-user only,
// while other destinations should use all resolved sources
var autogroupSelfDests []Alias
var otherDests []Alias
for _, dst := range rule.Destinations {
if ag, ok := dst.(*AutoGroup); ok && ag.Is(AutoGroupSelf) {
autogroupSelfDests = append(autogroupSelfDests, dst)
} else {
otherDests = append(otherDests, dst)
var dest netipx.IPSetBuilder
for _, src := range rule.Destinations {
ips, err := src.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Err(err).Msgf("resolving destination ips")
}
dest.AddSet(ips)
}
// Note: Tagged nodes can't match autogroup:self destinations, but can still match other destinations
// Resolve sources once - we'll use them differently for each destination type
srcIPs, err := rule.Sources.Resolve(pol, users, nodes)
destSet, err := dest.IPSet()
if err != nil {
log.Trace().Caller().Err(err).Msgf("SSH policy compilation failed resolving source ips for rule %+v", rule)
return nil, err
}
if srcIPs == nil || len(srcIPs.Prefixes()) == 0 {
if !node.InIPSet(destSet) {
continue
}
var action tailcfg.SSHAction
switch rule.Action {
case SSHActionAccept:
case "accept":
action = sshAction(true, 0)
case SSHActionCheck:
case "check":
action = sshAction(true, time.Duration(rule.CheckPeriod))
default:
return nil, fmt.Errorf("parsing SSH policy, unknown action %q, index: %d: %w", rule.Action, index, err)
}
var principals []*tailcfg.SSHPrincipal
srcIPs, err := rule.Sources.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Err(err).Msgf("SSH policy compilation failed resolving source ips for rule %+v", rule)
continue // Skip this rule if we can't resolve sources
}
for addr := range util.IPSetAddrIter(srcIPs) {
principals = append(principals, &tailcfg.SSHPrincipal{
NodeIP: addr.String(),
})
}
userMap := make(map[string]string, len(rule.Users))
if rule.Users.ContainsNonRoot() {
userMap["*"] = "="
// by default, we do not allow root unless explicitly stated
userMap["root"] = ""
}
if rule.Users.ContainsRoot() {
userMap["root"] = "root"
}
for _, u := range rule.Users.NormalUsers() {
userMap[u.String()] = u.String()
}
// Handle autogroup:self destinations (if any)
// Note: Tagged nodes can't match autogroup:self, so skip this block for tagged nodes
if len(autogroupSelfDests) > 0 && !node.IsTagged() {
// Build destination set for autogroup:self (same-user untagged devices only)
var dest netipx.IPSetBuilder
for _, n := range nodes.All() {
if n.User().ID == node.User().ID && !n.IsTagged() {
n.AppendToIPSet(&dest)
}
}
destSet, err := dest.IPSet()
if err != nil {
return nil, err
}
// Only create rule if this node is in the destination set
if node.InIPSet(destSet) {
// Filter sources to only same-user untagged devices
// Pre-filter to same-user untagged devices for efficiency
sameUserNodes := make([]types.NodeView, 0)
for _, n := range nodes.All() {
if n.User().ID == node.User().ID && !n.IsTagged() {
sameUserNodes = append(sameUserNodes, n)
}
}
var filteredSrcIPs netipx.IPSetBuilder
for _, n := range sameUserNodes {
// Check if any of this node's IPs are in the source set
for _, nodeIP := range n.IPs() {
if srcIPs.Contains(nodeIP) {
n.AppendToIPSet(&filteredSrcIPs)
break // Found this node, move to next
}
}
}
filteredSrcSet, err := filteredSrcIPs.IPSet()
if err != nil {
return nil, err
}
if filteredSrcSet != nil && len(filteredSrcSet.Prefixes()) > 0 {
var principals []*tailcfg.SSHPrincipal
for addr := range util.IPSetAddrIter(filteredSrcSet) {
principals = append(principals, &tailcfg.SSHPrincipal{
NodeIP: addr.String(),
})
}
if len(principals) > 0 {
rules = append(rules, &tailcfg.SSHRule{
Principals: principals,
SSHUsers: userMap,
Action: &action,
})
}
}
}
}
// Handle other destinations (if any)
if len(otherDests) > 0 {
// Build destination set for other destinations
var dest netipx.IPSetBuilder
for _, dst := range otherDests {
ips, err := dst.Resolve(pol, users, nodes)
if err != nil {
log.Trace().Caller().Err(err).Msgf("resolving destination ips")
continue
}
if ips != nil {
dest.AddSet(ips)
}
}
destSet, err := dest.IPSet()
if err != nil {
return nil, err
}
// Only create rule if this node is in the destination set
if node.InIPSet(destSet) {
// For non-autogroup:self destinations, use all resolved sources (no filtering)
var principals []*tailcfg.SSHPrincipal
for addr := range util.IPSetAddrIter(srcIPs) {
principals = append(principals, &tailcfg.SSHPrincipal{
NodeIP: addr.String(),
})
}
if len(principals) > 0 {
rules = append(rules, &tailcfg.SSHRule{
Principals: principals,
SSHUsers: userMap,
Action: &action,
})
}
}
for _, user := range rule.Users {
userMap[user.String()] = "="
}
rules = append(rules, &tailcfg.SSHRule{
Principals: principals,
SSHUsers: userMap,
Action: &action,
})
}
return &tailcfg.SSHPolicy{

File diff suppressed because it is too large.

View File

@@ -9,9 +9,7 @@ import (
"sync"
"github.com/juanfont/headscale/hscontrol/policy/matcher"
"github.com/juanfont/headscale/hscontrol/policy/policyutil"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog/log"
"go4.org/netipx"
"tailscale.com/net/tsaddr"
"tailscale.com/tailcfg"
@@ -39,20 +37,6 @@ type PolicyManager struct {
// Lazy map of SSH policies
sshPolicyMap map[types.NodeID]*tailcfg.SSHPolicy
// Lazy map of per-node compiled filter rules (unreduced, for autogroup:self)
compiledFilterRulesMap map[types.NodeID][]tailcfg.FilterRule
// Lazy map of per-node filter rules (reduced, for packet filters)
filterRulesMap map[types.NodeID][]tailcfg.FilterRule
usesAutogroupSelf bool
}
// filterAndPolicy combines the compiled filter rules with policy content for hashing.
// This ensures filterHash changes when policy changes, even for autogroup:self where
// the compiled filter is always empty.
type filterAndPolicy struct {
Filter []tailcfg.FilterRule
Policy *Policy
}
// NewPolicyManager creates a new PolicyManager from a policy file and a list of users and nodes.
@@ -65,13 +49,10 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node
}
pm := PolicyManager{
pol: policy,
users: users,
nodes: nodes,
sshPolicyMap: make(map[types.NodeID]*tailcfg.SSHPolicy, nodes.Len()),
compiledFilterRulesMap: make(map[types.NodeID][]tailcfg.FilterRule, nodes.Len()),
filterRulesMap: make(map[types.NodeID][]tailcfg.FilterRule, nodes.Len()),
usesAutogroupSelf: policy.usesAutogroupSelf(),
pol: policy,
users: users,
nodes: nodes,
sshPolicyMap: make(map[types.NodeID]*tailcfg.SSHPolicy, nodes.Len()),
}
_, err = pm.updateLocked()
@@ -85,36 +66,19 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node
// updateLocked updates the filter rules based on the current policy and nodes.
// It must be called with the lock held.
func (pm *PolicyManager) updateLocked() (bool, error) {
// Check if policy uses autogroup:self
pm.usesAutogroupSelf = pm.pol.usesAutogroupSelf()
// Clear the SSH policy map to ensure it's recalculated with the new policy.
// TODO(kradalby): This could potentially be optimized by only clearing the
// policies for nodes that have changed. Particularly if the only difference is
// that nodes has been added or removed.
clear(pm.sshPolicyMap)
var filter []tailcfg.FilterRule
var err error
// Standard compilation for all policies
filter, err = pm.pol.compileFilterRules(pm.users, pm.nodes)
filter, err := pm.pol.compileFilterRules(pm.users, pm.nodes)
if err != nil {
return false, fmt.Errorf("compiling filter rules: %w", err)
}
// Hash both the compiled filter AND the policy content together.
// This ensures filterHash changes when policy changes, even for autogroup:self
// where the compiled filter is always empty. This eliminates the need for
// a separate policyHash field.
filterHash := deephash.Hash(&filterAndPolicy{
Filter: filter,
Policy: pm.pol,
})
filterHash := deephash.Hash(&filter)
filterChanged := filterHash != pm.filterHash
if filterChanged {
log.Debug().
Str("filter.hash.old", pm.filterHash.String()[:8]).
Str("filter.hash.new", filterHash.String()[:8]).
Int("filter.rules", len(pm.filter)).
Int("filter.rules.new", len(filter)).
Msg("Policy filter hash changed")
}
pm.filter = filter
pm.filterHash = filterHash
if filterChanged {
@@ -131,14 +95,6 @@ func (pm *PolicyManager) updateLocked() (bool, error) {
tagOwnerMapHash := deephash.Hash(&tagMap)
tagOwnerChanged := tagOwnerMapHash != pm.tagOwnerMapHash
if tagOwnerChanged {
log.Debug().
Str("tagOwner.hash.old", pm.tagOwnerMapHash.String()[:8]).
Str("tagOwner.hash.new", tagOwnerMapHash.String()[:8]).
Int("tagOwners.old", len(pm.tagOwnerMap)).
Int("tagOwners.new", len(tagMap)).
Msg("Tag owner hash changed")
}
pm.tagOwnerMap = tagMap
pm.tagOwnerMapHash = tagOwnerMapHash
@@ -149,61 +105,19 @@ func (pm *PolicyManager) updateLocked() (bool, error) {
autoApproveMapHash := deephash.Hash(&autoMap)
autoApproveChanged := autoApproveMapHash != pm.autoApproveMapHash
if autoApproveChanged {
log.Debug().
Str("autoApprove.hash.old", pm.autoApproveMapHash.String()[:8]).
Str("autoApprove.hash.new", autoApproveMapHash.String()[:8]).
Int("autoApprovers.old", len(pm.autoApproveMap)).
Int("autoApprovers.new", len(autoMap)).
Msg("Auto-approvers hash changed")
}
pm.autoApproveMap = autoMap
pm.autoApproveMapHash = autoApproveMapHash
exitSetHash := deephash.Hash(&exitSet)
exitSetHash := deephash.Hash(&autoMap)
exitSetChanged := exitSetHash != pm.exitSetHash
if exitSetChanged {
log.Debug().
Str("exitSet.hash.old", pm.exitSetHash.String()[:8]).
Str("exitSet.hash.new", exitSetHash.String()[:8]).
Msg("Exit node set hash changed")
}
pm.exitSet = exitSet
pm.exitSetHash = exitSetHash
// Determine if we need to send updates to nodes
// filterChanged now includes policy content changes (via combined hash),
// so it will detect changes even for autogroup:self where compiled filter is empty
needsUpdate := filterChanged || tagOwnerChanged || autoApproveChanged || exitSetChanged
// Only clear caches if we're actually going to send updates
// This prevents clearing caches when nothing changed, which would leave nodes
// with stale filters until they reconnect. This is critical for autogroup:self
// where even reloading the same policy would clear caches but not send updates.
if needsUpdate {
// Clear the SSH policy map to ensure it's recalculated with the new policy.
// TODO(kradalby): This could potentially be optimized by only clearing the
// policies for nodes that have changed. Particularly if the only difference is
// that nodes has been added or removed.
clear(pm.sshPolicyMap)
clear(pm.compiledFilterRulesMap)
clear(pm.filterRulesMap)
}
// If nothing changed, no need to update nodes
if !needsUpdate {
log.Trace().
Msg("Policy evaluation detected no changes - all hashes match")
// If neither of the calculated values changed, no need to update nodes
if !filterChanged && !tagOwnerChanged && !autoApproveChanged && !exitSetChanged {
return false, nil
}
log.Debug().
Bool("filter.changed", filterChanged).
Bool("tagOwners.changed", tagOwnerChanged).
Bool("autoApprovers.changed", autoApproveChanged).
Bool("exitNodes.changed", exitSetChanged).
Msg("Policy changes require node updates")
return true, nil
}
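
The change-detection pattern in updateLocked can be sketched in isolation, assuming only the tailscale deephash package: hash the newly compiled artifact, compare it against the previously stored hash, and only report a change when they differ. The same pattern is repeated for the tag-owner map, auto-approver map, and exit-node set.

```go
package main

import (
	"fmt"

	"tailscale.com/tailcfg"
	"tailscale.com/util/deephash"
)

func main() {
	oldRules := []tailcfg.FilterRule{{SrcIPs: []string{"100.64.0.1/32"}}}
	newRules := []tailcfg.FilterRule{{SrcIPs: []string{"100.64.0.1/32", "100.64.0.2/32"}}}

	oldHash := deephash.Hash(&oldRules)
	newHash := deephash.Hash(&newRules)

	// Same pattern as filterChanged / tagOwnerChanged / autoApproveChanged above.
	filterChanged := oldHash != newHash
	fmt.Println("filter changed:", filterChanged) // true
}
```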
@@ -237,16 +151,6 @@ func (pm *PolicyManager) SetPolicy(polB []byte) (bool, error) {
pm.mu.Lock()
defer pm.mu.Unlock()
// Log policy metadata for debugging
log.Debug().
Int("policy.bytes", len(polB)).
Int("acls.count", len(pol.ACLs)).
Int("groups.count", len(pol.Groups)).
Int("hosts.count", len(pol.Hosts)).
Int("tagOwners.count", len(pol.TagOwners)).
Int("autoApprovers.routes.count", len(pol.AutoApprovers.Routes)).
Msg("Policy parsed successfully")
pm.pol = pol
return pm.updateLocked()
@@ -264,197 +168,6 @@ func (pm *PolicyManager) Filter() ([]tailcfg.FilterRule, []matcher.Match) {
return pm.filter, pm.matchers
}
// BuildPeerMap constructs peer relationship maps for the given nodes.
// For global filters, it uses the global filter matchers for all nodes.
// For autogroup:self policies (empty global filter), it builds per-node
// peer maps using each node's specific filter rules.
func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[types.NodeID][]types.NodeView {
if pm == nil {
return nil
}
pm.mu.Lock()
defer pm.mu.Unlock()
// If we have a global filter, use it for all nodes (normal case)
if !pm.usesAutogroupSelf {
ret := make(map[types.NodeID][]types.NodeView, nodes.Len())
// Build the map of all peers according to the matchers.
// Compared to ReduceNodes, which builds the list per node, we end up with doing
// the full work for every node O(n^2), while this will reduce the list as we see
// relationships while building the map, making it O(n^2/2) in the end, but with less work per node.
for i := range nodes.Len() {
for j := i + 1; j < nodes.Len(); j++ {
if nodes.At(i).ID() == nodes.At(j).ID() {
continue
}
if nodes.At(i).CanAccess(pm.matchers, nodes.At(j)) || nodes.At(j).CanAccess(pm.matchers, nodes.At(i)) {
ret[nodes.At(i).ID()] = append(ret[nodes.At(i).ID()], nodes.At(j))
ret[nodes.At(j).ID()] = append(ret[nodes.At(j).ID()], nodes.At(i))
}
}
}
return ret
}
// For autogroup:self (empty global filter), build per-node peer relationships
ret := make(map[types.NodeID][]types.NodeView, nodes.Len())
// Pre-compute per-node matchers using unreduced compiled rules
// We need unreduced rules to determine peer relationships correctly.
// Reduced rules only show destinations where the node is the target,
// but peer relationships require the full bidirectional access rules.
nodeMatchers := make(map[types.NodeID][]matcher.Match, nodes.Len())
for _, node := range nodes.All() {
filter, err := pm.compileFilterRulesForNodeLocked(node)
if err != nil || len(filter) == 0 {
continue
}
nodeMatchers[node.ID()] = matcher.MatchesFromFilterRules(filter)
}
// Check each node pair for peer relationships.
// Start j at i+1 to avoid checking the same pair twice and creating duplicates.
// We check both directions (i->j and j->i) since ACLs can be asymmetric.
for i := range nodes.Len() {
nodeI := nodes.At(i)
matchersI, hasFilterI := nodeMatchers[nodeI.ID()]
for j := i + 1; j < nodes.Len(); j++ {
nodeJ := nodes.At(j)
matchersJ, hasFilterJ := nodeMatchers[nodeJ.ID()]
// Check if nodeI can access nodeJ
if hasFilterI && nodeI.CanAccess(matchersI, nodeJ) {
ret[nodeI.ID()] = append(ret[nodeI.ID()], nodeJ)
}
// Check if nodeJ can access nodeI
if hasFilterJ && nodeJ.CanAccess(matchersJ, nodeI) {
ret[nodeJ.ID()] = append(ret[nodeJ.ID()], nodeI)
}
}
}
return ret
}
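
A self-contained sketch of the pairwise loop above, with a stand-in canAccess instead of node.CanAccess(matchers, other): each unordered pair is visited exactly once, and both directions are checked because ACLs can be asymmetric.

```go
package main

import "fmt"

type nodeID int

// buildPeerMap mirrors the O(n^2/2) loop above; canAccess stands in for
// node.CanAccess(matchers, other).
func buildPeerMap(ids []nodeID, canAccess func(a, b nodeID) bool) map[nodeID][]nodeID {
	peers := make(map[nodeID][]nodeID, len(ids))
	for i := 0; i < len(ids); i++ {
		for j := i + 1; j < len(ids); j++ {
			// A pair becomes peers if either side can reach the other.
			if canAccess(ids[i], ids[j]) || canAccess(ids[j], ids[i]) {
				peers[ids[i]] = append(peers[ids[i]], ids[j])
				peers[ids[j]] = append(peers[ids[j]], ids[i])
			}
		}
	}
	return peers
}

func main() {
	// Toy rule: only node 1 may initiate connections.
	allow := func(a, b nodeID) bool { return a == 1 }
	fmt.Println(buildPeerMap([]nodeID{1, 2, 3}, allow)) // 1<->2 and 1<->3, never 2<->3
}
```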
// compileFilterRulesForNodeLocked returns the unreduced compiled filter rules for a node
// when using autogroup:self. This is used by BuildPeerMap to determine peer relationships.
// For packet filters sent to nodes, use filterForNodeLocked which returns reduced rules.
func (pm *PolicyManager) compileFilterRulesForNodeLocked(node types.NodeView) ([]tailcfg.FilterRule, error) {
if pm == nil {
return nil, nil
}
// Check if we have cached compiled rules
if rules, ok := pm.compiledFilterRulesMap[node.ID()]; ok {
return rules, nil
}
// Compile per-node rules with autogroup:self expanded
rules, err := pm.pol.compileFilterRulesForNode(pm.users, node, pm.nodes)
if err != nil {
return nil, fmt.Errorf("compiling filter rules for node: %w", err)
}
// Cache the unreduced compiled rules
pm.compiledFilterRulesMap[node.ID()] = rules
return rules, nil
}
// filterForNodeLocked returns the filter rules for a specific node, already reduced
// to only include rules relevant to that node.
// This is a lock-free version of FilterForNode for internal use when the lock is already held.
// BuildPeerMap already holds the lock, so we need a version that doesn't re-acquire it.
func (pm *PolicyManager) filterForNodeLocked(node types.NodeView) ([]tailcfg.FilterRule, error) {
if pm == nil {
return nil, nil
}
if !pm.usesAutogroupSelf {
// For global filters, reduce to only rules relevant to this node.
// Cache the reduced filter per node for efficiency.
if rules, ok := pm.filterRulesMap[node.ID()]; ok {
return rules, nil
}
// Use policyutil.ReduceFilterRules for global filter reduction.
reducedFilter := policyutil.ReduceFilterRules(node, pm.filter)
pm.filterRulesMap[node.ID()] = reducedFilter
return reducedFilter, nil
}
// For autogroup:self, compile per-node rules then reduce them.
// Check if we have cached reduced rules for this node.
if rules, ok := pm.filterRulesMap[node.ID()]; ok {
return rules, nil
}
// Get unreduced compiled rules
compiledRules, err := pm.compileFilterRulesForNodeLocked(node)
if err != nil {
return nil, err
}
// Reduce the compiled rules to only destinations relevant to this node
reducedFilter := policyutil.ReduceFilterRules(node, compiledRules)
// Cache the reduced filter
pm.filterRulesMap[node.ID()] = reducedFilter
return reducedFilter, nil
}
// FilterForNode returns the filter rules for a specific node, already reduced
// to only include rules relevant to that node.
// If the policy uses autogroup:self, this returns node-specific compiled rules.
// Otherwise, it returns the global filter reduced for this node.
func (pm *PolicyManager) FilterForNode(node types.NodeView) ([]tailcfg.FilterRule, error) {
if pm == nil {
return nil, nil
}
pm.mu.Lock()
defer pm.mu.Unlock()
return pm.filterForNodeLocked(node)
}
// MatchersForNode returns the matchers for peer relationship determination for a specific node.
// These are UNREDUCED matchers - they include all rules where the node could be either source or destination.
// This is different from FilterForNode which returns REDUCED rules for packet filtering.
//
// For global policies: returns the global matchers (same for all nodes)
// For autogroup:self: returns node-specific matchers from unreduced compiled rules
func (pm *PolicyManager) MatchersForNode(node types.NodeView) ([]matcher.Match, error) {
if pm == nil {
return nil, nil
}
pm.mu.Lock()
defer pm.mu.Unlock()
// For global policies, return the shared global matchers
if !pm.usesAutogroupSelf {
return pm.matchers, nil
}
// For autogroup:self, get unreduced compiled rules and create matchers
compiledRules, err := pm.compileFilterRulesForNodeLocked(node)
if err != nil {
return nil, err
}
// Create matchers from unreduced rules for peer relationship determination
return matcher.MatchesFromFilterRules(compiledRules), nil
}
// SetUsers updates the users in the policy manager and updates the filter rules.
func (pm *PolicyManager) SetUsers(users []types.User) (bool, error) {
if pm == nil {
@@ -465,23 +178,7 @@ func (pm *PolicyManager) SetUsers(users []types.User) (bool, error) {
defer pm.mu.Unlock()
pm.users = users
// Clear SSH policy map when users change to force SSH policy recomputation
// This ensures that if SSH policy compilation previously failed due to missing users,
// it will be retried with the new user list
clear(pm.sshPolicyMap)
changed, err := pm.updateLocked()
if err != nil {
return false, err
}
// If SSH policies exist, force a policy change when users are updated
// This ensures nodes get updated SSH policies even if other policy hashes didn't change
if pm.pol != nil && pm.pol.SSHs != nil && len(pm.pol.SSHs) > 0 {
return true, nil
}
return changed, nil
return pm.updateLocked()
}
// SetNodes updates the nodes in the policy manager and updates the filter rules.
@@ -492,47 +189,9 @@ func (pm *PolicyManager) SetNodes(nodes views.Slice[types.NodeView]) (bool, erro
pm.mu.Lock()
defer pm.mu.Unlock()
oldNodeCount := pm.nodes.Len()
newNodeCount := nodes.Len()
// Invalidate cache entries for nodes that changed.
// For autogroup:self: invalidate all nodes belonging to affected users (peer changes).
// For global policies: invalidate only nodes whose properties changed (IPs, routes).
pm.invalidateNodeCache(nodes)
pm.nodes = nodes
nodesChanged := oldNodeCount != newNodeCount
// When nodes are added/removed, we must recompile filters because:
// 1. User/group aliases (like "user1@") resolve to node IPs
// 2. Filter compilation needs nodes to generate rules
// 3. Without nodes, filters compile to empty (0 rules)
//
// For autogroup:self: return true when nodes change even if the global filter
// hash didn't change. The global filter is empty for autogroup:self (each node
// has its own filter), so the hash never changes. But peer relationships DO
// change when nodes are added/removed, so we must signal this to trigger updates.
// For global policies: the filter must be recompiled to include the new nodes.
if nodesChanged {
// Recompile filter with the new node list
needsUpdate, err := pm.updateLocked()
if err != nil {
return false, err
}
if !needsUpdate {
// This ensures fresh filter rules are generated for all nodes
clear(pm.sshPolicyMap)
clear(pm.compiledFilterRulesMap)
clear(pm.filterRulesMap)
}
// Always return true when nodes changed, even if filter hash didn't change
// (can happen with autogroup:self or when nodes are added but don't affect rules)
return true, nil
}
return false, nil
return pm.updateLocked()
}
func (pm *PolicyManager) NodeCanHaveTag(node types.NodeView, tag string) bool {
@@ -580,9 +239,8 @@ func (pm *PolicyManager) NodeCanApproveRoute(node types.NodeView, route netip.Pr
// The fast path is that a node requests to approve a prefix
// where there is an exact entry, e.g. 10.0.0.0/8, then
// check and return quickly
if approvers, ok := pm.autoApproveMap[route]; ok {
canApprove := slices.ContainsFunc(node.IPs(), approvers.Contains)
if canApprove {
if _, ok := pm.autoApproveMap[route]; ok {
if slices.ContainsFunc(node.IPs(), pm.autoApproveMap[route].Contains) {
return true
}
}
@@ -595,8 +253,7 @@ func (pm *PolicyManager) NodeCanApproveRoute(node types.NodeView, route netip.Pr
// Check if prefix is larger (so containing) and then overlaps
// the route to see if the node can approve a subset of an autoapprover
if prefix.Bits() <= route.Bits() && prefix.Overlaps(route) {
canApprove := slices.ContainsFunc(node.IPs(), approveAddrs.Contains)
if canApprove {
if slices.ContainsFunc(node.IPs(), approveAddrs.Contains) {
return true
}
}
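
The two-step check above, an exact auto-approver entry first and then any containing prefix that overlaps the requested route, can be sketched with plain netip types; the address slice stands in for the IPSet held in the real autoApproveMap.

```go
package main

import (
	"fmt"
	"net/netip"
)

// canApprove sketches the two-step check: an exact auto-approver entry for
// the route, then any wider prefix that contains (overlaps) the route.
func canApprove(approvers map[netip.Prefix][]netip.Addr, nodeIPs []netip.Addr, route netip.Prefix) bool {
	contains := func(addrs []netip.Addr) bool {
		for _, a := range addrs {
			for _, ip := range nodeIPs {
				if a == ip {
					return true
				}
			}
		}
		return false
	}
	if addrs, ok := approvers[route]; ok && contains(addrs) {
		return true // fast path: exact entry
	}
	for prefix, addrs := range approvers {
		if prefix.Bits() <= route.Bits() && prefix.Overlaps(route) && contains(addrs) {
			return true // a containing prefix covers the requested route
		}
	}
	return false
}

func main() {
	node := []netip.Addr{netip.MustParseAddr("100.64.0.1")}
	approvers := map[netip.Prefix][]netip.Addr{
		netip.MustParsePrefix("10.0.0.0/8"): {netip.MustParseAddr("100.64.0.1")},
	}
	fmt.Println(canApprove(approvers, node, netip.MustParsePrefix("10.33.0.0/16"))) // true
}
```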
@@ -674,162 +331,3 @@ func (pm *PolicyManager) DebugString() string {
return sb.String()
}
// invalidateAutogroupSelfCache intelligently clears only the cache entries that need to be
// invalidated when using autogroup:self policies. This is much more efficient than clearing
// the entire cache.
func (pm *PolicyManager) invalidateAutogroupSelfCache(oldNodes, newNodes views.Slice[types.NodeView]) {
// Build maps for efficient lookup
oldNodeMap := make(map[types.NodeID]types.NodeView)
for _, node := range oldNodes.All() {
oldNodeMap[node.ID()] = node
}
newNodeMap := make(map[types.NodeID]types.NodeView)
for _, node := range newNodes.All() {
newNodeMap[node.ID()] = node
}
// Track which users are affected by changes
affectedUsers := make(map[uint]struct{})
// Check for removed nodes
for nodeID, oldNode := range oldNodeMap {
if _, exists := newNodeMap[nodeID]; !exists {
affectedUsers[oldNode.User().ID] = struct{}{}
}
}
// Check for added nodes
for nodeID, newNode := range newNodeMap {
if _, exists := oldNodeMap[nodeID]; !exists {
affectedUsers[newNode.User().ID] = struct{}{}
}
}
// Check for modified nodes (user changes, tag changes, IP changes)
for nodeID, newNode := range newNodeMap {
if oldNode, exists := oldNodeMap[nodeID]; exists {
// Check if user changed
if oldNode.User().ID != newNode.User().ID {
affectedUsers[oldNode.User().ID] = struct{}{}
affectedUsers[newNode.User().ID] = struct{}{}
}
// Check if tag status changed
if oldNode.IsTagged() != newNode.IsTagged() {
affectedUsers[newNode.User().ID] = struct{}{}
}
// Check if IPs changed (simple check - could be more sophisticated)
oldIPs := oldNode.IPs()
newIPs := newNode.IPs()
if len(oldIPs) != len(newIPs) {
affectedUsers[newNode.User().ID] = struct{}{}
} else {
// Check if any IPs are different
for i, oldIP := range oldIPs {
if i >= len(newIPs) || oldIP != newIPs[i] {
affectedUsers[newNode.User().ID] = struct{}{}
break
}
}
}
}
}
// Clear cache entries for affected users only
// For autogroup:self, we need to clear all nodes belonging to affected users
// because autogroup:self rules depend on the entire user's device set
for nodeID := range pm.filterRulesMap {
// Find the user for this cached node
var nodeUserID uint
found := false
// Check in new nodes first
for _, node := range newNodes.All() {
if node.ID() == nodeID {
nodeUserID = node.User().ID
found = true
break
}
}
// If not found in new nodes, check old nodes
if !found {
for _, node := range oldNodes.All() {
if node.ID() == nodeID {
nodeUserID = node.User().ID
found = true
break
}
}
}
// If we found the user and they're affected, clear this cache entry
if found {
if _, affected := affectedUsers[nodeUserID]; affected {
delete(pm.compiledFilterRulesMap, nodeID)
delete(pm.filterRulesMap, nodeID)
}
} else {
// Node not found in either old or new list, clear it
delete(pm.compiledFilterRulesMap, nodeID)
delete(pm.filterRulesMap, nodeID)
}
}
if len(affectedUsers) > 0 {
log.Debug().
Int("affected_users", len(affectedUsers)).
Int("remaining_cache_entries", len(pm.filterRulesMap)).
Msg("Selectively cleared autogroup:self cache for affected users")
}
}
// invalidateNodeCache invalidates cache entries based on what changed.
func (pm *PolicyManager) invalidateNodeCache(newNodes views.Slice[types.NodeView]) {
if pm.usesAutogroupSelf {
// For autogroup:self, a node's filter depends on its peers (same user).
// When any node in a user changes, all nodes for that user need invalidation.
pm.invalidateAutogroupSelfCache(pm.nodes, newNodes)
} else {
// For global policies, a node's filter depends only on its own properties.
// Only invalidate nodes whose properties actually changed.
pm.invalidateGlobalPolicyCache(newNodes)
}
}
// invalidateGlobalPolicyCache invalidates only nodes whose properties affecting
// ReduceFilterRules changed. For global policies, each node's filter is independent.
func (pm *PolicyManager) invalidateGlobalPolicyCache(newNodes views.Slice[types.NodeView]) {
oldNodeMap := make(map[types.NodeID]types.NodeView)
for _, node := range pm.nodes.All() {
oldNodeMap[node.ID()] = node
}
newNodeMap := make(map[types.NodeID]types.NodeView)
for _, node := range newNodes.All() {
newNodeMap[node.ID()] = node
}
// Invalidate nodes whose properties changed
for nodeID, newNode := range newNodeMap {
oldNode, existed := oldNodeMap[nodeID]
if !existed {
// New node - no cache entry yet, will be lazily calculated
continue
}
if newNode.HasNetworkChanges(oldNode) {
delete(pm.filterRulesMap, nodeID)
}
}
// Remove deleted nodes from cache
for nodeID := range pm.filterRulesMap {
if _, exists := newNodeMap[nodeID]; !exists {
delete(pm.filterRulesMap, nodeID)
}
}
}
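
A hedged sketch of the user-level diff performed by invalidateAutogroupSelfCache: compare the old and new node sets and collect the user IDs whose device set changed, since any such change can alter autogroup:self filters for every device of that user. The real code also treats IP changes as affecting; this sketch only covers added, removed, retagged, and moved nodes.

```go
package main

import "fmt"

type nodeInfo struct {
	userID uint
	tagged bool
}

// affectedUsers collects user IDs whose device set changed between two
// snapshots: nodes added, removed, retagged, or moved between users.
func affectedUsers(oldM, newM map[uint64]nodeInfo) map[uint]struct{} {
	affected := make(map[uint]struct{})
	for id, o := range oldM {
		if _, ok := newM[id]; !ok {
			affected[o.userID] = struct{}{} // node removed
		}
	}
	for id, n := range newM {
		o, ok := oldM[id]
		switch {
		case !ok:
			affected[n.userID] = struct{}{} // node added
		case o.userID != n.userID:
			affected[o.userID] = struct{}{} // node moved between users
			affected[n.userID] = struct{}{}
		case o.tagged != n.tagged:
			affected[n.userID] = struct{}{} // tag status changed
		}
	}
	return affected
}

func main() {
	before := map[uint64]nodeInfo{1: {userID: 10}, 2: {userID: 20}}
	after := map[uint64]nodeInfo{1: {userID: 10, tagged: true}, 3: {userID: 20}}
	fmt.Println(affectedUsers(before, after)) // users 10 and 20 are affected
}
```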

Some files were not shown because too many files have changed in this diff.