Compare commits


11 Commits

Author SHA1 Message Date
Kristoffer Dalby
cb5c5c0621 Fix CLI timeout issues in integration tests
- Increase default CLI timeout from 5s to 30s in config.go
- Fix error messages to match test expectations ("user/node not found")
- Smart lookup gRPC calls need more time in containerized CI environment

The CLI smart lookup feature makes additional gRPC calls that require
longer timeouts than the original 5-second default, especially in
containerized test environments with network latency.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-16 11:31:51 +00:00
Kristoffer Dalby
fe4978764b Fix integration test timeouts by increasing CLI timeout for containerized environment
The integration tests were failing with timeout errors when running
route-related operations like ApproveRoutes. The issue was that the
CLI timeout was set to 5 seconds by default, but the containerized
test environment with network latency and startup delays required
more time for CLI operations to complete.

This fix increases the CLI timeout to 30 seconds specifically for
integration tests through the HEADSCALE_CLI_TIMEOUT environment
variable.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-15 18:10:42 +00:00
Kristoffer Dalby
91ff5ab34f Fix remaining CLI inconsistencies
- Update users destroy command usage string to reflect --user flag
- Fix documentation examples to use --node instead of --identifier
- Ensure complete CLI consistency across all commands and docs

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-15 16:24:54 +00:00
Kristoffer Dalby
04d2e553bf derp 2025-07-15 16:17:31 +00:00
Kristoffer Dalby
8253d588c6 derp 2025-07-15 14:51:23 +00:00
Kristoffer Dalby
024ed59ea9 more 2025-07-15 06:49:51 +00:00
Kristoffer Dalby
45baead257 clean 2025-07-14 20:56:01 +00:00
Kristoffer Dalby
67f2c20052 compli 2025-07-14 20:43:57 +00:00
Kristoffer Dalby
9d2cfb1e7e compli 2025-07-14 15:50:46 +00:00
Kristoffer Dalby
7d31735bac test 2025-07-14 15:12:32 +00:00
kradalby
60521283ab init 2025-07-14 07:48:32 +00:00
498 changed files with 24019 additions and 58866 deletions

View File

@@ -1,870 +0,0 @@
---
name: headscale-integration-tester
description: Use this agent when you need to execute, analyze, or troubleshoot Headscale integration tests. This includes running specific test scenarios, investigating test failures, interpreting test artifacts, validating end-to-end functionality, or ensuring integration test quality before releases. Examples: <example>Context: User has made changes to the route management code and wants to validate the changes work correctly. user: 'I've updated the route advertisement logic in poll.go. Can you run the relevant integration tests to make sure everything still works?' assistant: 'I'll use the headscale-integration-tester agent to run the subnet routing integration tests and analyze the results.' <commentary>Since the user wants to validate route-related changes with integration tests, use the headscale-integration-tester agent to execute the appropriate tests and analyze results.</commentary></example> <example>Context: A CI pipeline integration test is failing and the user needs help understanding why. user: 'The TestSubnetRouterMultiNetwork test is failing in CI. The logs show some timing issues but I can't figure out what's wrong.' assistant: 'Let me use the headscale-integration-tester agent to analyze the test failure and examine the artifacts.' <commentary>Since this involves analyzing integration test failures and interpreting test artifacts, use the headscale-integration-tester agent to investigate the issue.</commentary></example>
color: green
---
You are a specialist Quality Assurance Engineer with deep expertise in Headscale's integration testing system. You understand the Docker-based test infrastructure, real Tailscale client interactions, and the complex timing considerations involved in end-to-end network testing.
## Integration Test System Overview
The Headscale integration test system uses Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination. The system is built around the `hi` (Headscale Integration) test runner in `cmd/hi/`.
## Critical Test Execution Knowledge
### System Requirements and Setup
```bash
# ALWAYS run this first to verify system readiness
go run ./cmd/hi doctor
```
This command verifies:
- Docker installation and daemon status
- Go environment setup
- Required container images availability
- Sufficient disk space (critical - tests generate ~100MB logs per run)
- Network configuration
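As an illustration of one such check, a minimal disk-space probe might look like the sketch below; the threshold and the use of `syscall.Statfs` are assumptions for illustration (Linux/macOS only), not the actual `cmd/hi doctor` implementation.
```go
package main

import (
	"fmt"
	"syscall"
)

// checkDiskSpace reports an error if the filesystem containing path has less
// than minFreeGB gigabytes available. Tests generate roughly 100MB of logs per
// run, so a few GB of headroom is a sensible floor.
func checkDiskSpace(path string, minFreeGB uint64) error {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return fmt.Errorf("statfs %s: %w", path, err)
	}
	freeGB := st.Bavail * uint64(st.Bsize) / (1 << 30)
	if freeGB < minFreeGB {
		return fmt.Errorf("only %d GiB free on %s, need at least %d GiB", freeGB, path, minFreeGB)
	}
	return nil
}

func main() {
	if err := checkDiskSpace(".", 5); err != nil {
		fmt.Println("doctor-style check failed:", err)
	}
}
```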
### Test Execution Patterns
**CRITICAL TIMEOUT REQUIREMENTS**:
- **NEVER use bash `timeout` command** - this can cause test failures and incomplete cleanup
- **ALWAYS use the built-in `--timeout` flag** with generous timeouts (minimum 15 minutes)
- **Increase timeout if tests ever time out** - infrastructure issues require longer timeouts
```bash
# Single test execution (recommended for development)
# ALWAYS use --timeout flag with minimum 15 minutes (900s)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork" --timeout=900s
# Database-heavy tests require PostgreSQL backend and longer timeouts
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s
# Pattern matching for related tests - use longer timeout for multiple tests
go run ./cmd/hi run "TestSubnet*" --timeout=1800s
# Long-running individual tests need extended timeouts
go run ./cmd/hi run "TestNodeOnlineStatus" --timeout=2100s # Runs for 12+ minutes
# Full test suite (CI/validation only) - very long timeout required
go test ./integration -timeout 45m
```
**Timeout Guidelines by Test Type**:
- **Basic functionality tests**: `--timeout=900s` (15 minutes minimum)
- **Route/ACL tests**: `--timeout=1200s` (20 minutes)
- **HA/failover tests**: `--timeout=1800s` (30 minutes)
- **Long-running tests**: `--timeout=2100s` (35 minutes)
- **Full test suite**: `-timeout 45m` (45 minutes)
**NEVER do this**:
```bash
# ❌ FORBIDDEN: Never use bash timeout command
timeout 300 go run ./cmd/hi run "TestName"
# ❌ FORBIDDEN: Too short timeout will cause failures
go run ./cmd/hi run "TestName" --timeout=60s
```
### Test Categories and Timing Expectations
- **Fast tests** (<2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes
**CONCURRENT EXECUTION**: Multiple tests CAN run simultaneously. Each test run gets a unique Run ID for isolation. See "Concurrent Execution and Run ID Isolation" section below.
## Test Artifacts and Log Analysis
### Artifact Structure
All test runs save comprehensive artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server error logs
├── hs-testname-abc123.stdout.log # Headscale server output logs
├── hs-testname-abc123.db # Database snapshot for post-mortem
├── hs-testname-abc123_metrics.txt # Prometheus metrics dump
├── hs-testname-abc123-mapresponses/ # Protocol-level debug data
├── ts-client-xyz789.stderr.log # Tailscale client error logs
├── ts-client-xyz789.stdout.log # Tailscale client output logs
└── ts-client-xyz789_status.json # Client network status dump
```
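Because the run directory names start with a sortable timestamp, a small helper can locate the most recent run before digging into logs. This is a convenience sketch, not part of the `hi` tool:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"sort"
)

// newestRunDir returns the most recent run directory under control_logs/.
// Directory names begin with a YYYYMMDD-HHMMSS timestamp, so lexical order
// matches chronological order.
func newestRunDir(root string) (string, error) {
	entries, err := os.ReadDir(root)
	if err != nil {
		return "", err
	}
	var dirs []string
	for _, e := range entries {
		if e.IsDir() {
			dirs = append(dirs, e.Name())
		}
	}
	if len(dirs) == 0 {
		return "", fmt.Errorf("no run directories in %s", root)
	}
	sort.Strings(dirs)
	return filepath.Join(root, dirs[len(dirs)-1]), nil
}

func main() {
	dir, err := newestRunDir("control_logs")
	if err != nil {
		fmt.Println(err)
		return
	}
	// Start with the server stderr logs - that is where most failures show up.
	matches, _ := filepath.Glob(filepath.Join(dir, "hs-*.stderr.log"))
	fmt.Println("latest run:", dir)
	fmt.Println("server stderr logs:", matches)
}
```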
### Log Analysis Priority Order
When tests fail, examine artifacts in this specific order:
1. **Headscale server stderr logs** (`hs-*.stderr.log`): Look for errors, panics, database issues, policy evaluation failures
2. **Tailscale client stderr logs** (`ts-*.stderr.log`): Check for authentication failures, network connectivity issues
3. **MapResponse JSON files**: Protocol-level debugging for network map generation issues
4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information
5. **Database snapshots** (`.db` files): For data consistency and state persistence issues
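For step 4, the status dumps are the JSON emitted by `tailscale status --json`. A minimal reader like the sketch below can quickly surface offline peers; the struct mirrors only the fields assumed to be present in that output (`BackendState`, `Peer`, `HostName`, `TailscaleIPs`, `Online`), so verify against an actual dump before relying on it:
```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// statusDump models only the fields needed here from a ts-*_status.json dump
// (the output of `tailscale status --json`); field names are assumptions.
type statusDump struct {
	BackendState string
	Peer         map[string]struct {
		HostName     string
		TailscaleIPs []string
		Online       bool
	}
}

func main() {
	raw, err := os.ReadFile("control_logs/20250713-213106-iajsux/ts-client-xyz789_status.json")
	if err != nil {
		panic(err)
	}
	var st statusDump
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Println("backend state:", st.BackendState)
	for nodeKey, p := range st.Peer {
		if !p.Online {
			// Offline peers are the usual suspects in connectivity failures.
			fmt.Printf("offline peer %s (%s) %v\n", p.HostName, nodeKey, p.TailscaleIPs)
		}
	}
}
```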
## Concurrent Execution and Run ID Isolation
### Overview
The integration test system supports running multiple tests concurrently on the same Docker daemon. Each test run is isolated through a unique Run ID that ensures containers, networks, and cleanup operations don't interfere with each other.
### Run ID Format and Usage
Each test run generates a unique Run ID in the format: `YYYYMMDD-HHMMSS-{6-char-hash}`
- Example: `20260109-104215-mdjtzx`
The Run ID is used for:
- **Container naming**: `ts-{runIDShort}-{version}-{hash}` (e.g., `ts-mdjtzx-1-74-fgdyls`)
- **Docker labels**: All containers get `hi.run-id={runID}` label
- **Log directories**: `control_logs/{runID}/`
- **Cleanup isolation**: Only containers with matching run ID are cleaned up
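A rough sketch of how such an identifier and the names derived from it fit together is shown below; the helper and the hard-coded version/hash values are illustrative only, not the actual `cmd/hi` code:
```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
	"time"
)

const runIDLetters = "abcdefghijklmnopqrstuvwxyz"

// newRunID builds an identifier in the documented YYYYMMDD-HHMMSS-{6-char-hash} form.
func newRunID() string {
	suffix := make([]byte, 6)
	for i := range suffix {
		n, _ := rand.Int(rand.Reader, big.NewInt(int64(len(runIDLetters))))
		suffix[i] = runIDLetters[n.Int64()]
	}
	return fmt.Sprintf("%s-%s", time.Now().Format("20060102-150405"), suffix)
}

func main() {
	runID := newRunID()                // e.g. 20260109-104215-mdjtzx
	runIDShort := runID[len(runID)-6:] // short form used in container names

	// Everything a run creates carries the run ID, so cleanup can be scoped to it.
	// The "1-74" (tailscale version) and "fgdyls" (per-container hash) parts are placeholders.
	containerName := fmt.Sprintf("ts-%s-%s-%s", runIDShort, "1-74", "fgdyls")
	labels := map[string]string{"hi.run-id": runID, "hi.test-type": "tailscale"}
	logDir := "control_logs/" + runID

	fmt.Println(containerName, labels, logDir)
}
```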
### Container Isolation Mechanisms
1. **Unique Container Names**: Each container includes the run ID for identification
2. **Docker Labels**: `hi.run-id` and `hi.test-type` labels on all containers
3. **Dynamic Port Allocation**: All ports use `{HostPort: "0"}` to let kernel assign free ports
4. **Per-Run Networks**: Network names include scenario hash for isolation
5. **Isolated Cleanup**: `killTestContainersByRunID()` only removes containers matching the run ID
### ⚠️ CRITICAL: Never Interfere with Other Test Runs
**FORBIDDEN OPERATIONS** when other tests may be running:
```bash
# ❌ NEVER do global container cleanup while tests are running
docker rm -f $(docker ps -q --filter "name=hs-")
docker rm -f $(docker ps -q --filter "name=ts-")
# ❌ NEVER kill all test containers
# This will destroy other agents' test sessions!
# ❌ NEVER prune all Docker resources during active tests
docker system prune -f # Only safe when NO tests are running
```
**SAFE OPERATIONS**:
```bash
# ✅ Clean up only YOUR test run's containers (by run ID)
# The test runner does this automatically via cleanup functions
# ✅ Clean stale (stopped/exited) containers only
# Pre-test cleanup only removes stopped containers, not running ones
# ✅ Check what's running before cleanup
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```
### Running Concurrent Tests
```bash
# Start multiple tests in parallel - each gets unique run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
# Monitor running test suites
docker ps --filter "name=headscale-test-suite" --format "table {{.Names}}\t{{.Status}}"
```
### Agent Session Isolation Rules
When working as an agent:
1. **Your run ID is unique**: Each test you start gets its own run ID
2. **Never clean up globally**: Only use run ID-specific cleanup
3. **Check before cleanup**: Verify no other tests are running if you need to prune resources
4. **Respect other sessions**: Other agents may have tests running concurrently
5. **Log directories are isolated**: Your artifacts are in `control_logs/{your-run-id}/`
### Identifying Your Containers
Your test containers can be identified by:
- The run ID in the container name
- The `hi.run-id` Docker label
- The test suite container: `headscale-test-suite-{your-run-id}`
```bash
# List containers for a specific run ID
docker ps --filter "label=hi.run-id=20260109-104215-mdjtzx"
# Get your run ID from the test output
# Look for: "Run ID: 20260109-104215-mdjtzx"
```
## Common Failure Patterns and Root Cause Analysis
### CRITICAL MINDSET: Code Issues vs Infrastructure Issues
**⚠️ IMPORTANT**: When tests fail, it is ALMOST ALWAYS a code issue with Headscale, NOT infrastructure problems. Do not immediately blame disk space, Docker issues, or timing unless you have thoroughly investigated the actual error logs first.
### Systematic Debugging Process
1. **Read the actual error message**: Don't assume - read the stderr logs completely
2. **Check Headscale server logs first**: Most issues originate from server-side logic
3. **Verify client connectivity**: Only after ruling out server issues
4. **Check timing patterns**: Use proper `EventuallyWithT` patterns
5. **Infrastructure as last resort**: Only blame infrastructure after code analysis
### Real Failure Patterns
#### 1. Timing Issues (Common but fixable)
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // WILL FAIL

// ✅ Correct: Wait for async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Timeout Guidelines**:
- Route operations: 3-5 seconds
- Node state changes: 5-10 seconds
- Complex scenarios: 10-15 seconds
- Policy recalculation: 5-10 seconds
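One way to keep these windows consistent across tests is to centralize them as named durations; the fragment below is an illustrative sketch (constant names are invented) that reuses the same testify context as the examples above:
```go
// Illustrative wait windows matching the guidelines above.
const (
	routeWait   = 5 * time.Second  // route operations
	nodeWait    = 10 * time.Second // node state changes
	complexWait = 15 * time.Second // complex scenarios
	policyWait  = 10 * time.Second // policy recalculation
	pollEvery   = 100 * time.Millisecond
)

// Pick the window that matches the operation under test.
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, routeWait, pollEvery, "route should be advertised")
```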
#### 2. NodeStore Synchronization Issues
Route advertisements must propagate through poll requests (`poll.go:420`). NodeStore updates happen at specific synchronization points after Hostinfo changes.
#### 3. Test Data Management Issues
```go
// ❌ Wrong: Assuming array ordering
require.Len(t, nodes[0].GetAvailableRoutes(), 1)

// ✅ Correct: Identify nodes by properties
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
	nodeIDStr := fmt.Sprintf("%d", node.GetId())
	if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
		// Test the specific node that should have the route
		_ = route
	}
}
```
#### 4. Database Backend Differences
SQLite vs PostgreSQL have different timing characteristics:
- Use `--postgres` flag for database-intensive tests
- PostgreSQL generally has more consistent timing
- Some race conditions only appear with specific backends
## Resource Management and Cleanup
### Disk Space Management
Tests consume significant disk space (~100MB per run):
```bash
# Check available space before running tests
df -h
# Clean up test artifacts periodically
rm -rf control_logs/older-timestamp-dirs/
# Clean Docker resources
docker system prune -f
docker volume prune -f
```
### Container Cleanup
- Successful tests clean up automatically
- Failed tests may leave containers running
- Manually clean if needed: `docker ps -a` and `docker rm -f <containers>`
## Advanced Debugging Techniques
### Protocol-Level Debugging
MapResponse JSON files in `control_logs/*/hs-*-mapresponses/` contain:
- Network topology as sent to clients
- Peer relationships and visibility
- Route distribution and primary route selection
- Policy evaluation results
### Database State Analysis
Use the database snapshots for post-mortem analysis:
```bash
# SQLite examination
sqlite3 control_logs/TIMESTAMP/hs-*.db
.tables
.schema nodes
SELECT * FROM nodes WHERE name LIKE '%problematic%';
```
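The same post-mortem can be scripted from Go. The sketch below assumes the `github.com/mattn/go-sqlite3` driver and a `nodes` table with `id` and `name` columns; check `.schema nodes` first, since the schema can change between headscale versions:
```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // assumed driver; any sqlite driver works
)

func main() {
	db, err := sql.Open("sqlite3", "control_logs/20250713-213106-iajsux/hs-testname-abc123.db")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Dump node names to cross-check against what the test expected to register.
	rows, err := db.Query(`SELECT id, name FROM nodes`)
	if err != nil {
		panic(err)
	}
	defer rows.Close()
	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			panic(err)
		}
		fmt.Printf("node %d: %s\n", id, name)
	}
}
```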
### Performance Analysis
Prometheus metrics dumps show:
- Request latencies and error rates
- NodeStore operation timing
- Database query performance
- Memory usage patterns
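A quick way to pull relevant series out of a metrics dump is a plain text scan, as in the sketch below; the metric name fragments are placeholders, so check the dump for the exact names exported by the headscale version under test:
```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Scan the Prometheus text dump for metric families of interest.
	f, err := os.Open("control_logs/20250713-213106-iajsux/hs-testname-abc123_metrics.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") {
			continue // skip HELP/TYPE comments
		}
		if strings.Contains(line, "http_request") || strings.Contains(line, "nodestore") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}
```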
## Test Development and Quality Guidelines
### Proper Test Patterns
```go
// Always use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Test condition that may take time to become true
}, timeout, interval, "descriptive failure message")

// Handle node identification correctly
var targetNode *v1.Node
for _, node := range nodes {
	if node.GetName() == expectedNodeName {
		targetNode = node
		break
	}
}
require.NotNil(t, targetNode, "should find expected node")
```
### Quality Validation Checklist
- ✅ Tests use `EventuallyWithT` for asynchronous operations
- ✅ Tests don't rely on array ordering for node identification
- ✅ Proper cleanup and resource management
- ✅ Tests handle both success and failure scenarios
- ✅ Timing assumptions are realistic for operations being tested
- ✅ Error messages are descriptive and actionable
## Real-World Test Failure Patterns from HA Debugging
### Infrastructure vs Code Issues - Detailed Examples
**INFRASTRUCTURE FAILURES (Rare but Real)**:
1. **DNS Resolution in Auth Tests**: `failed to resolve "hs-pingallbyip-jax97k": no DNS fallback candidates remain`
- **Pattern**: Client containers can't resolve headscale server hostname during logout
- **Detection**: Error messages specifically mention DNS/hostname resolution
- **Solution**: Docker networking reset, not code changes
2. **Container Creation Timeouts**: Test gets stuck during client container setup
- **Pattern**: Tests hang indefinitely at container startup phase
- **Detection**: No progress in logs for >2 minutes during initialization
- **Solution**: `docker system prune -f` and retry
3. **Docker Resource Exhaustion**: Too many concurrent tests overwhelming system
- **Pattern**: Container creation timeouts, OOM kills, slow test execution
- **Detection**: System load high, Docker daemon slow to respond
- **Solution**: Reduce number of concurrent tests, wait for completion before starting more
**CODE ISSUES (99% of failures)**:
1. **Route Approval Process Failures**: Routes not getting approved when they should be
- **Pattern**: Tests expecting approved routes but finding none
- **Detection**: `SubnetRoutes()` returns empty when `AnnouncedRoutes()` shows routes
- **Root Cause**: Auto-approval logic bugs, policy evaluation issues
2. **NodeStore Synchronization Issues**: State updates not propagating correctly
- **Pattern**: Route changes not reflected in NodeStore or Primary Routes
- **Detection**: Logs show route announcements but no tracking updates
- **Root Cause**: Missing synchronization points in `poll.go:420` area
3. **HA Failover Architecture Issues**: Routes removed when nodes go offline
- **Pattern**: `TestHASubnetRouterFailover` fails because approved routes disappear
- **Detection**: Routes available on online nodes but lost when nodes disconnect
- **Root Cause**: Conflating route approval with node connectivity
### Critical Test Environment Setup
**Pre-Test Cleanup**:
The test runner automatically handles cleanup:
- **Before test**: Removes only stale (stopped/exited) containers - does NOT affect running tests
- **After test**: Removes only containers belonging to the specific run ID
```bash
# Only clean old log directories if disk space is low
rm -rf control_logs/202507*
df -h # Verify sufficient disk space
# SAFE: Clean only stale/stopped containers (does not affect running tests)
# The test runner does this automatically via cleanupStaleTestContainers()
# ⚠️ DANGEROUS: Only use when NO tests are running
docker system prune -f
```
**Environment Verification**:
```bash
# Verify system readiness
go run ./cmd/hi doctor
# Check what tests are currently running (ALWAYS check before global cleanup)
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```
### Specific Test Categories and Known Issues
#### Route-Related Tests (Primary Focus)
```bash
# Core route functionality - these should work first
# Note: Generous timeouts are required for reliable execution
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s
go run ./cmd/hi run "TestAutoApproveMultiNetwork" --timeout=1800s
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
```
**Common Route Test Patterns**:
- Tests validate route announcement, approval, and distribution workflows
- Route state changes are asynchronous - may need `EventuallyWithT` wrappers
- Route approval must respect ACL policies - test expectations encode security requirements
- HA tests verify route persistence during node connectivity changes
#### Authentication Tests (Infrastructure-Prone)
```bash
# These tests are more prone to infrastructure issues
# Require longer timeouts due to auth flow complexity
go run ./cmd/hi run "TestAuthKeyLogoutAndReloginSameUser" --timeout=1200s
go run ./cmd/hi run "TestAuthWebFlowLogoutAndRelogin" --timeout=1200s
go run ./cmd/hi run "TestOIDCExpireNodesBasedOnTokenExpiry" --timeout=1800s
```
**Common Auth Test Infrastructure Failures**:
- DNS resolution during logout operations
- Container creation timeouts
- HTTP/2 stream errors (often symptoms, not root cause)
### Security-Critical Debugging Rules
**❌ FORBIDDEN CHANGES (Security & Test Integrity)**:
1. **Never change expected test outputs** - Tests define correct behavior contracts
- Changing `require.Len(t, routes, 3)` to `require.Len(t, routes, 2)` because test fails
- Modifying expected status codes, node counts, or route counts
- Removing assertions that are "inconvenient"
- **Why forbidden**: Test expectations encode business requirements and security policies
2. **Never bypass security mechanisms** - Security must never be compromised for convenience
- Using `AnnouncedRoutes()` instead of `SubnetRoutes()` in production code
- Skipping authentication or authorization checks
- **Why forbidden**: Security bypasses create vulnerabilities in production
3. **Never reduce test coverage** - Tests prevent regressions
- Removing test cases or assertions
- Commenting out "problematic" test sections
- **Why forbidden**: Reduced coverage allows bugs to slip through
**✅ ALLOWED CHANGES (Timing & Observability)**:
1. **Fix timing issues with proper async patterns**
```go
// ✅ GOOD: Add EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes, expectedCount) // Keep original expectation
}, 10*time.Second, 100*time.Millisecond, "nodes should reach expected count")
```
- **Why allowed**: Fixes race conditions without changing business logic
2. **Add MORE observability and debugging**
- Additional logging statements
- More detailed error messages
- Extra assertions that verify intermediate states
- **Why allowed**: Better observability helps debug without changing behavior
3. **Improve test documentation**
- Add godoc comments explaining test purpose and business logic
- Document timing requirements and async behavior
- **Why encouraged**: Helps future maintainers understand intent
### Advanced Debugging Workflows
#### Route Tracking Debug Flow
```bash
# Run test with detailed logging and proper timeout
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s > test_output.log 2>&1
# Check route approval process
grep -E "(auto-approval|ApproveRoutesWithPolicy|PolicyManager)" test_output.log
# Check route tracking
tail -50 control_logs/*/hs-*.stderr.log | grep -E "(announced|tracking|SetNodeRoutes)"
# Check for security violations
grep -E "(AnnouncedRoutes.*SetNodeRoutes|bypass.*approval)" test_output.log
```
#### HA Failover Debug Flow
```bash
# Test HA failover specifically with adequate timeout
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
# Check route persistence during disconnect
grep -E "(Disconnect|NodeWentOffline|PrimaryRoutes)" control_logs/*/hs-*.stderr.log
# Verify routes don't disappear inappropriately
grep -E "(removing.*routes|SetNodeRoutes.*empty)" control_logs/*/hs-*.stderr.log
```
### Test Result Interpretation Guidelines
#### Success Patterns to Look For
- `"updating node routes for tracking"` in logs
- Routes appearing in `announcedRoutes` logs
- Proper `ApproveRoutesWithPolicy` calls for auto-approval
- Routes persisting through node connectivity changes (HA tests)
#### Failure Patterns to Investigate
- `SubnetRoutes()` returning empty when `AnnouncedRoutes()` has routes
- Routes disappearing when nodes go offline (HA architectural issue)
- Missing `EventuallyWithT` causing timing race conditions
- Security bypass attempts using wrong route methods
### Critical Testing Methodology
**Phase-Based Testing Approach**:
1. **Phase 1**: Core route tests (ACL, auto-approval, basic functionality)
2. **Phase 2**: HA and complex route scenarios
3. **Phase 3**: Auth tests (infrastructure-sensitive, test last)
**Per-Test Process**:
1. Clean environment before each test
2. Monitor logs for route tracking and approval messages
3. Check artifacts in `control_logs/` if test fails
4. Focus on actual error messages, not assumptions
5. Document results and patterns discovered
## Test Documentation and Code Quality Standards
### Adding Missing Test Documentation
When you understand a test's purpose through debugging, always add comprehensive godoc:
```go
// TestSubnetRoutes validates the complete subnet route lifecycle including
// advertisement from clients, policy-based approval, and distribution to peers.
// This test ensures that route security policies are properly enforced and that
// only approved routes are distributed to the network.
//
// The test verifies:
// - Route announcements are received and tracked
// - ACL policies control route approval correctly
// - Only approved routes appear in peer network maps
// - Route state persists correctly in the database
func TestSubnetRoutes(t *testing.T) {
// Test implementation...
}
```
**Why add documentation**: Future maintainers need to understand business logic and security requirements encoded in tests.
### Comment Guidelines - Focus on WHY, Not WHAT
```go
// ✅ GOOD: Explains reasoning and business logic
// Wait for route propagation because NodeStore updates are asynchronous
// and happen after poll requests complete processing
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Check that security policies are enforced...
}, timeout, interval, "route approval must respect ACL policies")

// ❌ BAD: Just describes what the code does
// Wait for routes
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Get routes and check length
}, timeout, interval, "checking routes")
```
**Why focus on WHY**: Helps maintainers understand architectural decisions and security requirements.
## EventuallyWithT Pattern for External Calls
### Overview
EventuallyWithT is a testing pattern used to handle eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous - clients advertise routes, the server processes them, updates propagate through the network. EventuallyWithT allows tests to wait for these operations to complete while making assertions.
### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in EventuallyWithT:
- `headscale.ListNodes()` - Queries server state
- `client.Status()` - Gets client network status
- `client.Curl()` - Makes HTTP requests through the network
- `client.Traceroute()` - Performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or tailscale client
### Five Key Rules for EventuallyWithT
1. **One External Call Per EventuallyWithT Block**
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
- Related assertions based on that single call can be grouped together
- Unrelated external calls must be in separate EventuallyWithT blocks
2. **Variable Scoping**
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
3. **No Nested EventuallyWithT**
- NEVER put an EventuallyWithT inside another EventuallyWithT
- This is a critical anti-pattern that must be avoided
4. **Use CollectT for Assertions**
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
5. **Descriptive Messages**
- Always provide a descriptive message as the last parameter
- Message should explain what condition is being waited for
### Correct Pattern Examples
```go
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
var err error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err = headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes, 2)
	// These assertions are all based on the ListNodes() call
	requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
	requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")

// CORRECT: Separate EventuallyWithT for different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	status, err := client.Status()
	assert.NoError(c, err)
	// All these assertions are based on the single Status() call
	for _, peerKey := range status.Peers() {
		peerStatus := status.Peer[peerKey]
		requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
	}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")

// CORRECT: Variable scoping for sharing between blocks
var routeNode *v1.Node
var nodeKey key.NodePublic

// First EventuallyWithT to get the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	for _, node := range nodes {
		if node.GetName() == "router" {
			routeNode = node
			nodeKey, _ = key.ParseNodePublicUntyped(mem.S(node.GetNodeKey()))
			break
		}
	}
	assert.NotNil(c, routeNode, "should find router node")
}, 10*time.Second, 100*time.Millisecond, "router node should exist")

// Second EventuallyWithT using the nodeKey from first block
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	status, err := client.Status()
	assert.NoError(c, err)
	peerStatus, ok := status.Peer[nodeKey]
	assert.True(c, ok, "peer should exist in status")
	requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}, 10*time.Second, 100*time.Millisecond, "routes should be visible to client")
```
### Incorrect Patterns to Avoid
```go
// INCORRECT: Multiple unrelated external calls in same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	// First external call
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes, 2)

	// Second unrelated external call - WRONG!
	status, err := client.Status()
	assert.NoError(c, err)
	assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")

// INCORRECT: Nested EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)

	// NEVER do this!
	assert.EventuallyWithT(t, func(c2 *assert.CollectT) {
		status, _ := client.Status()
		assert.NotNil(c2, status)
	}, 5*time.Second, 100*time.Millisecond, "nested")
}, 10*time.Second, 500*time.Millisecond, "outer")

// INCORRECT: Variable scoping error
var nodes []*v1.Node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes() // := shadows the outer 'nodes' variable
	assert.NoError(c, err)
	assert.Len(c, nodes, 2)
}, 10*time.Second, 500*time.Millisecond, "get nodes")
// This fails - the outer 'nodes' is still nil because := created a new variable inside the block
require.Len(t, nodes, 2)

// INCORRECT: Not wrapping external calls
nodes, err := headscale.ListNodes() // External call not wrapped!
require.NoError(t, err)
```
### Helper Functions for EventuallyWithT
When creating helper functions for use within EventuallyWithT:
```go
// Helper function that accepts CollectT
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, available, approved, primary int) {
	assert.Len(c, node.GetAvailableRoutes(), available, "available routes for node %s", node.GetName())
	assert.Len(c, node.GetApprovedRoutes(), approved, "approved routes for node %s", node.GetName())
	assert.Len(c, node.GetPrimaryRoutes(), primary, "primary routes for node %s", node.GetName())
}

// Usage within EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route counts should match expected")
```
### Operations That Must NOT Be Wrapped
**CRITICAL**: The following operations are **blocking/mutating operations** that change state and MUST NOT be wrapped in EventuallyWithT:
- `tailscale set` commands (e.g., `--advertise-routes`, `--accept-routes`)
- `headscale.ApproveRoute()` - Approves routes on server
- `headscale.CreateUser()` - Creates users
- `headscale.CreatePreAuthKey()` - Creates authentication keys
- `headscale.RegisterNode()` - Registers new nodes
- Any `client.Execute()` that modifies configuration
- Any operation that creates, updates, or deletes resources
These operations:
1. Complete synchronously or fail immediately
2. Should not be retried automatically
3. Need explicit error handling with `require.NoError()`
### Correct Pattern for Blocking Operations
```go
// CORRECT: Blocking operation NOT wrapped
status := client.MustStatus()
command := []string{"tailscale", "set", "--advertise-routes=" + expectedRoutes[string(status.Self.ID)]}
_, _, err := client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)

// Then wait for the result with EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Contains(c, nodes[0].GetAvailableRoutes(), expectedRoutes[string(status.Self.ID)])
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")

// INCORRECT: Blocking operation wrapped (DON'T DO THIS)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	_, _, err := client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
	assert.NoError(c, err) // This might retry the command multiple times!
}, 10*time.Second, 100*time.Millisecond, "advertise routes")
```
### Assert vs Require Pattern
When working within EventuallyWithT blocks where you need to prevent panics:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)

	// For array bounds - use require with t to prevent panic
	assert.Len(c, nodes, 6) // Test expectation
	require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic")

	// For nil pointer access - use require with t before dereferencing
	assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation
	require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic")
	assert.Contains(c,
		srs1PeerStatus.PrimaryRoutes.AsSlice(),
		pref,
	)
}, 5*time.Second, 200*time.Millisecond, "checking route state")
```
**Key Principle**:
- Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried
- Use `require` with `t` (*testing.T) for MUST conditions that prevent panics
- Within EventuallyWithT, both are available - choose based on whether failure would cause a panic
### Common Scenarios
1. **Waiting for route advertisement**:
```go
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Contains(c, nodes[0].GetAvailableRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
2. **Checking client sees routes**:
```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	status, err := client.Status()
	assert.NoError(c, err)
	// Check all peers have expected routes
	for _, peerKey := range status.Peers() {
		peerStatus := status.Peer[peerKey]
		assert.Contains(c, peerStatus.AllowedIPs, expectedPrefix)
	}
}, 10*time.Second, 100*time.Millisecond, "all peers should see route")
```
3. **Sequential operations**:
```go
// First wait for node to appear
var nodeID uint64
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes, 1)
	nodeID = nodes[0].GetId()
}, 10*time.Second, 100*time.Millisecond, "node should register")

// Then perform operation
_, err := headscale.ApproveRoute(nodeID, "10.0.0.0/24")
require.NoError(t, err)

// Then wait for result
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Contains(c, nodes[0].GetApprovedRoutes(), "10.0.0.0/24")
}, 10*time.Second, 100*time.Millisecond, "route should be approved")
```
## Your Core Responsibilities
1. **Test Execution Strategy**: Execute integration tests with appropriate configurations, understanding when to use `--postgres` and timing requirements for different test categories. Follow phase-based testing approach prioritizing route tests.
- **Why this priority**: Route tests are less infrastructure-sensitive and validate core security logic
2. **Systematic Test Analysis**: When tests fail, systematically examine artifacts starting with Headscale server logs, then client logs, then protocol data. Focus on CODE ISSUES first (99% of cases), not infrastructure. Use real-world failure patterns to guide investigation.
- **Why this approach**: Most failures are logic bugs, not environment issues - efficient debugging saves time
3. **Timing & Synchronization Expertise**: Understand asynchronous Headscale operations, particularly route advertisements, NodeStore synchronization at `poll.go:420`, and policy propagation. Fix timing with `EventuallyWithT` while preserving original test expectations.
- **Why preserve expectations**: Test assertions encode business requirements and security policies
- **Key Pattern**: Apply the EventuallyWithT pattern correctly for all external calls as documented above
4. **Root Cause Analysis**: Distinguish between actual code regressions (route approval logic, HA failover architecture), timing issues requiring `EventuallyWithT` patterns, and genuine infrastructure problems (DNS, Docker, container issues).
- **Why this distinction matters**: Different problem types require completely different solution approaches
- **EventuallyWithT Issues**: Often manifest as flaky tests or immediate assertion failures after async operations
5. **Security-Aware Quality Validation**: Ensure tests properly validate end-to-end functionality with realistic timing expectations and proper error handling. Never suggest security bypasses or test expectation changes. Add comprehensive godoc when you understand test business logic.
- **Why security focus**: Integration tests are the last line of defense against security regressions
- **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions
6. **Concurrent Execution Awareness**: Respect run ID isolation and never interfere with other agents' test sessions. Each test run has a unique run ID - only clean up YOUR containers (by run ID label), never perform global cleanup while tests may be running.
- **Why this matters**: Multiple agents/users may run tests concurrently on the same Docker daemon
- **Key Rule**: NEVER use global container cleanup commands - the test runner handles cleanup automatically per run ID
**CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test, never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable.
**ISOLATION PRINCIPLE**: Each test run is isolated by its unique Run ID. Never interfere with other test sessions. The system handles cleanup automatically - manual global cleanup commands are forbidden when other tests may be running.
**EventuallyWithT PRINCIPLE**: Every external call to headscale server or tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages.
**Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related.

View File

@@ -21,3 +21,4 @@ LICENSE
node_modules/
package-lock.json
package.json

View File

@@ -1,16 +0,0 @@
root = true
[*]
charset = utf-8
end_of_line = lf
indent_size = 2
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
max_line_length = 120
[*.go]
indent_style = tab
[Makefile]
indent_style = tab

View File

@@ -52,15 +52,12 @@ body:
If you are using a container, always provide the headscale version and not only the Docker image version.
Please do not put "latest".
Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
examples:
- **OS**: Ubuntu 24.04
- **Headscale version**: 0.24.3
- **Tailscale version**: 1.80.0
- **Number of nodes**: 20
value: |
- OS:
- Headscale version:
@@ -80,10 +77,6 @@ body:
attributes:
label: Debug information
description: |
Please have a look at our [Debugging and troubleshooting
guide](https://headscale.net/development/ref/debug/) to learn about
common debugging techniques.
Links? References? Anything that will give us more context about the issue you are encountering.
If **any** of these are omitted we will likely close your issue, do **not** ignore them.
@@ -99,7 +92,7 @@ body:
`tailscale status --json > DESCRIPTIVE_NAME.json`
Get the logs of a Tailscale client that is not working as expected.
`tailscale debug daemon-logs`
`tailscale daemon-logs`
Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
**Ensure** you use formatting for files you attach.

View File

@@ -16,13 +16,15 @@ body:
- type: textarea
attributes:
label: Description
description: A clear and precise description of what new or changed feature you want.
description:
A clear and precise description of what new or changed feature you want.
validations:
required: true
- type: checkboxes
attributes:
label: Contribution
description: Are you willing to contribute to the implementation of this feature?
description:
Are you willing to contribute to the implementation of this feature?
options:
- label: I can write the design doc for this feature
required: false
@@ -31,6 +33,7 @@ body:
- type: textarea
attributes:
label: How can it be implemented?
description: Free text for your ideas on how this feature could be implemented.
description:
Free text for your ideas on how this feature could be implemented.
validations:
required: false

View File

@@ -5,6 +5,8 @@ on:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -15,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
permissions: write-all
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
@@ -29,12 +31,13 @@ jobs:
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
@@ -54,7 +57,7 @@ jobs:
exit $BUILD_STATUS
- name: Nix gosum diverging
uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
if: failure() && steps.build.outcome == 'failure'
with:
github-token: ${{secrets.GITHUB_TOKEN}}
@@ -66,7 +69,7 @@ jobs:
body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
})
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: steps.changed-files.outputs.files == 'true'
with:
name: headscale-linux
@@ -76,25 +79,29 @@ jobs:
strategy:
matrix:
env:
- "GOARCH=arm GOOS=linux GOARM=5"
- "GOARCH=arm GOOS=linux GOARM=6"
- "GOARCH=arm GOOS=linux GOARM=7"
- "GOARCH=arm64 GOOS=linux"
- "GOARCH=386 GOOS=linux"
- "GOARCH=amd64 GOOS=linux"
- "GOARCH=arm64 GOOS=darwin"
- "GOARCH=amd64 GOOS=darwin"
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run go cross compile
env:
CGO_ENABLED: 0
run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
run:
env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
./cmd/headscale
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: "headscale-${{ matrix.env }}"
path: "headscale"

View File

@@ -1,55 +0,0 @@
name: Check Generated Files
on:
push:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
check-generated:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- '**/*.proto'
- 'buf.gen.yaml'
- 'tools/**'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run make generate
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- make generate
- name: Check for uncommitted changes
if: steps.changed-files.outputs.files == 'true'
run: |
if ! git diff --exit-code; then
echo "❌ Generated files are not up to date!"
echo "Please run 'make generate' and commit the changes."
exit 1
else
echo "✅ All generated files are up to date."
fi

View File

@@ -10,7 +10,7 @@ jobs:
check-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
@@ -24,12 +24,13 @@ jobs:
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

View File

@@ -21,15 +21,15 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Install python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
key: ${{ github.ref }}
path: .cache

View File

@@ -11,13 +11,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install python
uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- name: Setup cache
uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
with:
key: ${{ github.ref }}
path: .cache

View File

@@ -10,55 +10,6 @@ import (
"strings"
)
// testsToSplit defines tests that should be split into multiple CI jobs.
// Key is the test function name, value is a list of subtest prefixes.
// Each prefix becomes a separate CI job as "TestName/prefix".
//
// Example: TestAutoApproveMultiNetwork has subtests like:
// - TestAutoApproveMultiNetwork/authkey-tag-advertiseduringup-false-pol-database
// - TestAutoApproveMultiNetwork/webauth-user-advertiseduringup-true-pol-file
//
// Splitting by approver type (tag, user, group) creates 6 CI jobs with 4 tests each:
// - TestAutoApproveMultiNetwork/authkey-tag.* (4 tests)
// - TestAutoApproveMultiNetwork/authkey-user.* (4 tests)
// - TestAutoApproveMultiNetwork/authkey-group.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-tag.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-user.* (4 tests)
// - TestAutoApproveMultiNetwork/webauth-group.* (4 tests)
//
// This reduces load per CI job (4 tests instead of 12) to avoid infrastructure
// flakiness when running many sequential Docker-based integration tests.
var testsToSplit = map[string][]string{
"TestAutoApproveMultiNetwork": {
"authkey-tag",
"authkey-user",
"authkey-group",
"webauth-tag",
"webauth-user",
"webauth-group",
},
}
// expandTests takes a list of test names and expands any that need splitting
// into multiple subtest patterns.
func expandTests(tests []string) []string {
var expanded []string
for _, test := range tests {
if prefixes, ok := testsToSplit[test]; ok {
// This test should be split into multiple jobs.
// We append ".*" to each prefix because the CI runner wraps patterns
// with ^...$ anchors. Without ".*", a pattern like "authkey$" wouldn't
// match "authkey-tag-advertiseduringup-false-pol-database".
for _, prefix := range prefixes {
expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
}
} else {
expanded = append(expanded, test)
}
}
return expanded
}
func findTests() []string {
rgBin, err := exec.LookPath("rg")
if err != nil {
@@ -115,11 +66,8 @@ func updateYAML(tests []string, jobName string, testPath string) {
func main() {
tests := findTests()
// Expand tests that should be split into multiple jobs
expandedTests := expandTests(tests)
quotedTests := make([]string, len(expandedTests))
for i, test := range expandedTests {
quotedTests := make([]string, len(tests))
for i, test := range tests {
quotedTests[i] = fmt.Sprintf("\"%s\"", test)
}

View File

@@ -11,13 +11,13 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}
- name: Run GitHub Actions Version Updater
uses: saadmk11/github-actions-version-updater@d8781caf11d11168579c8e5e94f62b068038f442 # v0.9.0
uses: saadmk11/github-actions-version-updater@64be81ba69383f81f2be476703ea6570c4c8686e # v0.8.1
with:
# [Required] Access token with `workflow` scope.
token: ${{ secrets.WORKFLOW_SECRET }}

View File

@@ -28,12 +28,23 @@ jobs:
# that triggered the build.
HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- name: Tailscale
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: tailscale/github-action@a392da0a182bba0e9613b6243ebd69529b1878aa # v4.1.0
uses: tailscale/github-action@6986d2c82a91fbac2949fe01f5bab95cf21b5102 # v3.2.2
with:
oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
@@ -41,72 +52,44 @@ jobs:
- name: Setup SSH server for Actor
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/setup-sshd-actor@master
- name: Download headscale image
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
name: headscale-image
path: /tmp/artifacts
- name: Download tailscale HEAD image
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: tailscale-head-image
path: /tmp/artifacts
- name: Download hi binary
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: hi-binary
path: /tmp/artifacts
- name: Download Go cache
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: go-cache
path: /tmp/artifacts
- name: Download postgres image
if: ${{ inputs.postgres_flag == '--postgres=1' }}
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
with:
name: postgres-image
path: /tmp/artifacts
- name: Load Docker images, Go cache, and prepare binary
run: |
gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load
gunzip -c /tmp/artifacts/tailscale-head-image.tar.gz | docker load
if [ -f /tmp/artifacts/postgres-image.tar.gz ]; then
gunzip -c /tmp/artifacts/postgres-image.tar.gz | docker load
fi
chmod +x /tmp/artifacts/hi
docker images
# Extract Go cache to host directories for bind mounting
mkdir -p /tmp/go-cache
tar -xzf /tmp/artifacts/go-cache.tar.gz -C /tmp/go-cache
ls -la /tmp/go-cache/ /tmp/go-cache/.cache/
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run Integration Test
env:
HEADSCALE_INTEGRATION_HEADSCALE_IMAGE: headscale:${{ github.sha }}
HEADSCALE_INTEGRATION_TAILSCALE_IMAGE: tailscale-head:${{ github.sha }}
HEADSCALE_INTEGRATION_POSTGRES_IMAGE: ${{ inputs.postgres_flag == '--postgres=1' && format('postgres:{0}', github.sha) || '' }}
HEADSCALE_INTEGRATION_GO_CACHE: /tmp/go-cache/go
HEADSCALE_INTEGRATION_GO_BUILD_CACHE: /tmp/go-cache/.cache/go-build
run: /tmp/artifacts/hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
# Sanitize test name for artifact upload (replace invalid characters: " : < > | * ? \ / with -)
- name: Sanitize test name for artifacts
if: always()
id: sanitize
run: echo "name=${TEST_NAME//[\":<>|*?\\\/]/-}" >> $GITHUB_OUTPUT
env:
TEST_NAME: ${{ inputs.test }}
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always()
uses: Wandalen/wretry.action@e68c23e6309f2871ca8ae4763e7629b9c258e1ea # v3.8.0
if: steps.changed-files.outputs.files == 'true'
with:
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-logs
# Our integration tests are started like a thundering herd, often
# hitting limits of the various external repositories we depend on
# like docker hub. This will retry jobs every 5 min, 10 times,
# hopefully letting us avoid manual intervention and restarting jobs.
# One could of course argue that we should invest in trying to avoid
# this, but currently it seems like a larger investment to be cleverer
# about this.
# Some of the jobs might still require manual restart as they are really
# slow and this will cause them to eventually be killed by GitHub Actions.
attempt_delay: 300000 # 5 min
attempt_limit: 2
command: |
nix develop --command -- hi run "^${{ inputs.test }}$" \
--timeout=120m \
${{ inputs.postgres_flag }}
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ inputs.database_name }}-${{ inputs.test }}-logs
path: "control_logs/*/*.log"
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: always()
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
if: always() && steps.changed-files.outputs.files == 'true'
with:
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-artifacts
path: control_logs/
name: ${{ inputs.database_name }}-${{ inputs.test }}-archives
path: "control_logs/*/*.tar"
- name: Setup a blocking tmux session
if: ${{ env.HAS_TAILSCALE_SECRET }}
uses: alexellis/block-with-tmux-action@master

View File

@@ -10,7 +10,7 @@ jobs:
golangci-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
@@ -24,12 +24,13 @@ jobs:
- '**/*.go'
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
@@ -37,15 +38,12 @@ jobs:
if: steps.changed-files.outputs.files == 'true'
run: nix develop --command -- golangci-lint run
--new-from-rev=${{github.event.pull_request.base.sha}}
--output.text.path=stdout
--output.text.print-linter-name
--output.text.print-issued-lines
--output.text.colors
--format=colored-line-number
prettier-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
- name: Get changed files
@@ -64,12 +62,13 @@ jobs:
- '**/*.css'
- '**/*.scss'
- '**/*.html'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
@@ -81,11 +80,12 @@ jobs:
proto-lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

View File

@@ -1,55 +0,0 @@
name: NixOS Module Tests
on:
push:
branches:
- main
pull_request:
branches:
- main
concurrency:
group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
nix-module-check:
runs-on: ubuntu-latest
permissions:
contents: read
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
nix:
- 'nix/**'
- 'flake.nix'
- 'flake.lock'
go:
- 'go.*'
- '**/*.go'
- 'cmd/**'
- 'hscontrol/**'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Run NixOS module tests
if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
run: |
echo "Running NixOS module integration test..."
nix build .#checks.x86_64-linux.headscale -L

View File

@@ -13,27 +13,28 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
- name: Login to DockerHub
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GHCR
uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

View File

@@ -12,14 +12,16 @@ jobs:
issues: write
pull-requests: write
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
days-before-issue-stale: 90
days-before-issue-close: 7
stale-issue-label: "stale"
stale-issue-message: "This issue is stale because it has been open for 90 days with no
stale-issue-message:
"This issue is stale because it has been open for 90 days with no
activity."
close-issue-message: "This issue was closed because it has been inactive for 14 days
close-issue-message:
"This issue was closed because it has been inactive for 14 days
since being marked as stale."
days-before-pr-stale: -1
days-before-pr-close: -1

View File

@@ -7,117 +7,7 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
# build: Builds binaries and Docker images once, uploads as artifacts for reuse.
# build-postgres: Pulls postgres image separately to avoid Docker Hub rate limits.
# sqlite: Runs all integration tests with SQLite backend.
# postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
build:
runs-on: ubuntu-latest
outputs:
files-changed: ${{ steps.changed-files.outputs.files }}
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
with:
fetch-depth: 2
- name: Get changed files
id: changed-files
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
with:
filters: |
files:
- '*.nix'
- 'go.*'
- '**/*.go'
- 'integration/**'
- 'config-example.yaml'
- '.github/workflows/test-integration.yaml'
- '.github/workflows/integration-test-template.yml'
- 'Dockerfile.*'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
- name: Build binaries and warm Go cache
if: steps.changed-files.outputs.files == 'true'
run: |
# Build all Go binaries in one nix shell to maximize cache reuse
nix develop --command -- bash -c '
go build -o hi ./cmd/hi
CGO_ENABLED=0 GOOS=linux go build -o headscale ./cmd/headscale
# Build integration test binary to warm the cache with all dependencies
go test -c ./integration -o /dev/null 2>/dev/null || true
'
- name: Upload hi binary
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: hi-binary
path: hi
retention-days: 10
- name: Package Go cache
if: steps.changed-files.outputs.files == 'true'
run: |
# Package Go module cache and build cache
tar -czf go-cache.tar.gz -C ~ go .cache/go-build
- name: Upload Go cache
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: go-cache
path: go-cache.tar.gz
retention-days: 10
- name: Build headscale image
if: steps.changed-files.outputs.files == 'true'
run: |
docker build \
--file Dockerfile.integration-ci \
--tag headscale:${{ github.sha }} \
.
docker save headscale:${{ github.sha }} | gzip > headscale-image.tar.gz
- name: Build tailscale HEAD image
if: steps.changed-files.outputs.files == 'true'
run: |
docker build \
--file Dockerfile.tailscale-HEAD \
--tag tailscale-head:${{ github.sha }} \
.
docker save tailscale-head:${{ github.sha }} | gzip > tailscale-head-image.tar.gz
- name: Upload headscale image
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: headscale-image
path: headscale-image.tar.gz
retention-days: 10
- name: Upload tailscale HEAD image
if: steps.changed-files.outputs.files == 'true'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: tailscale-head-image
path: tailscale-head-image.tar.gz
retention-days: 10
build-postgres:
runs-on: ubuntu-latest
needs: build
if: needs.build.outputs.files-changed == 'true'
steps:
- name: Pull and save postgres image
run: |
docker pull postgres:latest
docker tag postgres:latest postgres:${{ github.sha }}
docker save postgres:${{ github.sha }} | gzip > postgres-image.tar.gz
- name: Upload postgres image
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
with:
name: postgres-image
path: postgres-image.tar.gz
retention-days: 10
sqlite:
needs: build
if: needs.build.outputs.files-changed == 'true'
strategy:
fail-fast: false
matrix:
@@ -133,48 +23,28 @@ jobs:
- TestPolicyUpdateWhileRunningWithCLIInDatabase
- TestACLAutogroupMember
- TestACLAutogroupTagged
- TestACLAutogroupSelf
- TestACLPolicyPropagationOverTime
- TestACLTagPropagation
- TestACLTagPropagationPortSpecific
- TestACLGroupWithUnknownUser
- TestACLGroupAfterUserDeletion
- TestACLGroupDeletionExactReproduction
- TestACLDynamicUnknownUserAddition
- TestACLDynamicUnknownUserRemoval
- TestAPIAuthenticationBypass
- TestAPIAuthenticationBypassCurl
- TestGRPCAuthenticationBypass
- TestCLIWithConfigAuthenticationBypass
- TestAuthKeyLogoutAndReloginSameUser
- TestAuthKeyLogoutAndReloginNewUser
- TestAuthKeyLogoutAndReloginSameUserExpiredKey
- TestAuthKeyDeleteKey
- TestAuthKeyLogoutAndReloginRoutesPreserved
- TestOIDCAuthenticationPingAll
- TestOIDCExpireNodesBasedOnTokenExpiry
- TestOIDC024UserCreation
- TestOIDCAuthenticationWithPKCE
- TestOIDCReloginSameNodeNewUser
- TestOIDCFollowUpUrl
- TestOIDCMultipleOpenedLoginUrls
- TestOIDCReloginSameNodeSameUser
- TestOIDCExpiryAfterRestart
- TestOIDCACLPolicyOnJoin
- TestOIDCReloginSameUserRoutesPreserved
- TestAuthWebFlowAuthenticationPingAll
- TestAuthWebFlowLogoutAndReloginSameUser
- TestAuthWebFlowLogoutAndReloginNewUser
- TestAuthWebFlowLogoutAndRelogin
- TestUserCommand
- TestPreAuthKeyCommand
- TestPreAuthKeyCommandWithoutExpiry
- TestPreAuthKeyCommandReusableEphemeral
- TestPreAuthKeyCorrectUserLoggedInCommand
- TestTaggedNodesCLIOutput
- TestApiKeyCommand
- TestNodeTagCommand
- TestNodeAdvertiseTagCommand
- TestNodeCommand
- TestNodeExpireCommand
- TestNodeRenameCommand
- TestNodeMoveCommand
- TestPolicyCommand
- TestPolicyBrokenConfigCommand
- TestDERPVerifyEndpoint
@@ -191,7 +61,6 @@ jobs:
- TestTaildrop
- TestUpdateHostnameFromClient
- TestExpireNode
- TestSetNodeExpiryInFuture
- TestNodeOnlineStatus
- TestPingAllByIPManyUpDown
- Test2118DeletingOnlineNodePanics
@@ -201,12 +70,7 @@ jobs:
- TestEnablingExitRoutes
- TestSubnetRouterMultiNetwork
- TestSubnetRouterMultiNetworkExitNode
- TestAutoApproveMultiNetwork/authkey-tag.*
- TestAutoApproveMultiNetwork/authkey-user.*
- TestAutoApproveMultiNetwork/authkey-group.*
- TestAutoApproveMultiNetwork/webauth-tag.*
- TestAutoApproveMultiNetwork/webauth-user.*
- TestAutoApproveMultiNetwork/webauth-group.*
- TestAutoApproveMultiNetwork
- TestSubnetRouteACLFiltering
- TestHeadscale
- TestTailscaleNodesJoiningHeadcale
@@ -215,48 +79,12 @@ jobs:
- TestSSHNoSSHConfigured
- TestSSHIsBlockedInACL
- TestSSHUserOnlyIsolation
- TestSSHAutogroupSelf
- TestTagsAuthKeyWithTagRequestDifferentTag
- TestTagsAuthKeyWithTagNoAdvertiseFlag
- TestTagsAuthKeyWithTagCannotAddViaCLI
- TestTagsAuthKeyWithTagCannotChangeViaCLI
- TestTagsAuthKeyWithTagAdminOverrideReauthPreserves
- TestTagsAuthKeyWithTagCLICannotModifyAdminTags
- TestTagsAuthKeyWithoutTagCannotRequestTags
- TestTagsAuthKeyWithoutTagRegisterNoTags
- TestTagsAuthKeyWithoutTagCannotAddViaCLI
- TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset
- TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise
- TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag
- TestTagsUserLoginOwnedTagAtRegistration
- TestTagsUserLoginNonExistentTagAtRegistration
- TestTagsUserLoginUnownedTagAtRegistration
- TestTagsUserLoginAddTagViaCLIReauth
- TestTagsUserLoginRemoveTagViaCLIReauth
- TestTagsUserLoginCLINoOpAfterAdminAssignment
- TestTagsUserLoginCLICannotRemoveAdminTags
- TestTagsAuthKeyWithTagRequestNonExistentTag
- TestTagsAuthKeyWithTagRequestUnownedTag
- TestTagsAuthKeyWithoutTagRequestNonExistentTag
- TestTagsAuthKeyWithoutTagRequestUnownedTag
- TestTagsAdminAPICannotSetNonExistentTag
- TestTagsAdminAPICanSetUnownedTag
- TestTagsAdminAPICannotRemoveAllTags
- TestTagsIssue2978ReproTagReplacement
- TestTagsAdminAPICannotSetInvalidFormat
- TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags
- TestTagsAuthKeyWithoutUserInheritsTags
- TestTagsAuthKeyWithoutUserRejectsAdvertisedTags
- TestTagsAuthKeyConvertToUserViaCLIRegister
uses: ./.github/workflows/integration-test-template.yml
secrets: inherit
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=0"
database_name: "sqlite"
postgres:
needs: [build, build-postgres]
if: needs.build.outputs.files-changed == 'true'
strategy:
fail-fast: false
matrix:
@@ -267,7 +95,6 @@ jobs:
- TestPingAllByIPManyUpDown
- TestSubnetRouterMultiNetwork
uses: ./.github/workflows/integration-test-template.yml
secrets: inherit
with:
test: ${{ matrix.test }}
postgres_flag: "--postgres=1"

View File

@@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 2
@@ -27,12 +27,13 @@ jobs:
- 'integration_test/'
- 'config-example.yaml'
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
- uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
if: steps.changed-files.outputs.files == 'true'
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
if: steps.changed-files.outputs.files == 'true'
with:
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
primary-key:
nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
'**/flake.lock') }}
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

6
.gitignore vendored
View File

@@ -1,10 +1,6 @@
ignored/
tailscale/
.vscode/
.claude/
logs/
*.prof
# Binaries for programs and plugins
*.exe
@@ -51,6 +47,8 @@ integration_test/etc/config.dump.yaml
__debug_bin
node_modules/
package-lock.json
package.json

View File

@@ -7,7 +7,6 @@ linters:
- depguard
- dupl
- exhaustruct
- funcorder
- funlen
- gochecknoglobals
- gochecknoinits
@@ -29,15 +28,6 @@ linters:
- wrapcheck
- wsl
settings:
forbidigo:
forbid:
# Forbid time.Sleep everywhere with context-appropriate alternatives
- pattern: 'time\.Sleep'
msg: >-
time.Sleep is forbidden.
In tests: use assert.EventuallyWithT for polling/waiting patterns.
In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives.
analyze-types: true
gocritic:
disabled-checks:
- appendAssign

View File

@@ -2,39 +2,12 @@
version: 2
before:
hooks:
- go mod tidy -compat=1.25
- go mod tidy -compat=1.24
- go mod vendor
release:
prerelease: auto
draft: true
header: |
## Upgrade
Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
**It's best to update from one stable version to the next** (e.g., 0.24.0 → 0.25.1 → 0.26.1) in case you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the changelog above for version-specific upgrade instructions and breaking changes.
### Backup Your Database
**Always backup your database before upgrading.** Here's how to backup a SQLite database:
```bash
# Stop headscale
systemctl stop headscale
# Backup sqlite database
cp /var/lib/headscale/db.sqlite /var/lib/headscale/db.sqlite.backup
# Backup sqlite WAL/SHM files (if they exist)
cp /var/lib/headscale/db.sqlite-wal /var/lib/headscale/db.sqlite-wal.backup
cp /var/lib/headscale/db.sqlite-shm /var/lib/headscale/db.sqlite-shm.backup
# Start headscale (migration will run automatically)
systemctl start headscale
```
builds:
- id: headscale
@@ -46,10 +19,18 @@ builds:
- darwin_amd64
- darwin_arm64
- freebsd_amd64
- linux_386
- linux_amd64
- linux_arm64
- linux_arm_5
- linux_arm_6
- linux_arm_7
flags:
- -mod=readonly
ldflags:
- -s -w
- -X github.com/juanfont/headscale/hscontrol/types.Version={{ .Version }}
- -X github.com/juanfont/headscale/hscontrol/types.GitCommitHash={{ .Commit }}
tags:
- ts2019
@@ -125,14 +106,16 @@ kos:
# bare tells KO to only use the repository
# for tagging and naming the container.
bare: true
base_image: gcr.io/distroless/base-debian13
base_image: gcr.io/distroless/base-debian12
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
@@ -145,8 +128,6 @@ kos:
- "{{ .Tag }}"
- '{{ trimprefix .Tag "v" }}'
- "sha-{{ .ShortCommit }}"
creation_time: "{{.CommitTimestamp}}"
ko_data_creation_time: "{{.CommitTimestamp}}"
- id: ghcr-debug
repositories:
@@ -154,14 +135,16 @@ kos:
- headscale/headscale
bare: true
base_image: gcr.io/distroless/base-debian13:debug
base_image: gcr.io/distroless/base-debian12:debug
build: headscale
main: ./cmd/headscale
env:
- CGO_ENABLED=0
platforms:
- linux/amd64
- linux/386
- linux/arm64
- linux/arm/v7
tags:
- "{{ if not .Prerelease }}latest-debug{{ end }}"
- "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"

View File

@@ -1,34 +0,0 @@
{
"mcpServers": {
"claude-code-mcp": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@steipete/claude-code-mcp@latest"],
"env": {}
},
"sequential-thinking": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
"env": {}
},
"nixos": {
"type": "stdio",
"command": "uvx",
"args": ["mcp-nixos"],
"env": {}
},
"context7": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@upstash/context7-mcp"],
"env": {}
},
"git": {
"type": "stdio",
"command": "npx",
"args": ["-y", "@cyanheads/git-mcp-server"],
"env": {}
}
}
}

View File

@@ -1,68 +0,0 @@
# prek/pre-commit configuration for headscale
# See: https://prek.j178.dev/quickstart/
# See: https://prek.j178.dev/builtin/
# Global exclusions - ignore generated code
exclude: ^gen/
repos:
# Built-in hooks from pre-commit/pre-commit-hooks
# prek will use fast-path optimized versions automatically
# See: https://prek.j178.dev/builtin/
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
hooks:
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-json
- id: check-merge-conflict
- id: check-symlinks
- id: check-toml
- id: check-xml
- id: check-yaml
- id: detect-private-key
- id: end-of-file-fixer
- id: fix-byte-order-marker
- id: mixed-line-ending
- id: trailing-whitespace
# Local hooks for project-specific tooling
- repo: local
hooks:
# nixpkgs-fmt for Nix files
- id: nixpkgs-fmt
name: nixpkgs-fmt
entry: nixpkgs-fmt
language: system
files: \.nix$
# Prettier for formatting
- id: prettier
name: prettier
entry: prettier --write --list-different
language: system
exclude: ^docs/
types_or:
[
javascript,
jsx,
ts,
tsx,
yaml,
json,
toml,
html,
css,
scss,
sass,
markdown,
]
# golangci-lint for Go code quality
- id: golangci-lint
name: golangci-lint
entry: nix develop --command golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
language: system
types: [go]
pass_filenames: false

View File

@@ -1,5 +1,5 @@
.github/workflows/test-integration-v2*
docs/about/features.md
docs/ref/api.md
docs/ref/configuration.md
docs/ref/oidc.md
docs/ref/remote-cli.md

1051
AGENTS.md

File diff suppressed because it is too large

File diff suppressed because it is too large

396
CLAUDE.md
View File

@@ -1 +1,395 @@
@AGENTS.md
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing.
## Development Commands
### Quick Setup
```bash
# Recommended: Use Nix for dependency management
nix develop
# Full development workflow
make dev # runs fmt + lint + test + build
```
### Essential Commands
```bash
# Build headscale binary
make build
# Run tests
make test
go test ./... # All unit tests
go test -race ./... # With race detection
# Run specific integration test
go run ./cmd/hi run "TestName" --postgres
# Code formatting and linting
make fmt # Format all code (Go, docs, proto)
make lint # Lint all code (Go, proto)
make fmt-go # Format Go code only
make lint-go # Lint Go code only
# Protocol buffer generation (after modifying proto/)
make generate
# Clean build artifacts
make clean
```
### Integration Testing
```bash
# Use the hi (Headscale Integration) test runner
go run ./cmd/hi doctor # Check system requirements
go run ./cmd/hi run "TestPattern" # Run specific test
go run ./cmd/hi run "TestPattern" --postgres # With PostgreSQL backend
# Test artifacts are saved to control_logs/ with logs and debug data
```
## Project Structure & Architecture
### Top-Level Organization
```
headscale/
├── cmd/ # Command-line applications
│ ├── headscale/ # Main headscale server binary
│ └── hi/ # Headscale Integration test runner
├── hscontrol/ # Core control plane logic
├── integration/ # End-to-end Docker-based tests
├── proto/ # Protocol buffer definitions
├── gen/ # Generated code (protobuf)
├── docs/ # Documentation
└── packaging/ # Distribution packaging
```
### Core Packages (`hscontrol/`)
**Main Server (`hscontrol/`)**
- `app.go`: Application setup, dependency injection, server lifecycle
- `handlers.go`: HTTP/gRPC API endpoints for management operations
- `grpcv1.go`: gRPC service implementation for headscale API
- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol
- `noise.go`: Noise protocol implementation for secure client communication
- `auth.go`: Authentication flows (web, OIDC, command-line)
- `oidc.go`: OpenID Connect integration for user authentication
**State Management (`hscontrol/state/`)**
- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP)
- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics
- Thread-safe operations with deadlock detection
- Coordinates between database persistence and real-time operations
**Database Layer (`hscontrol/db/`)**
- `db.go`: Database abstraction, GORM setup, migration management
- `node.go`: Node lifecycle, registration, expiration, IP assignment
- `users.go`: User management, namespace isolation
- `api_key.go`: API authentication tokens
- `preauth_keys.go`: Pre-authentication keys for automated node registration
- `ip.go`: IP address allocation and management
- `policy.go`: Policy storage and retrieval
- Schema migrations in `schema.sql` with extensive test data coverage
**Policy Engine (`hscontrol/policy/`)**
- `policy.go`: Core ACL evaluation logic, HuJSON parsing
- `v2/`: Next-generation policy system with improved filtering
- `matcher/`: ACL rule matching and evaluation engine
- Determines peer visibility, route approval, and network access rules
- Supports both file-based and database-stored policies
**Network Management (`hscontrol/`)**
- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation
- NAT traversal when direct connections fail
- Fallback relay for firewall-restricted environments
- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format
- `tail.go`: Tailscale-specific data structure generation
- `routes/`: Subnet route management and primary route selection
- `dns/`: DNS record management and MagicDNS implementation
**Utilities & Support (`hscontrol/`)**
- `types/`: Core data structures, configuration, validation
- `util/`: Helper functions for networking, DNS, key management
- `templates/`: Client configuration templates (Apple, Windows, etc.)
- `notifier/`: Event notification system for real-time updates
- `metrics.go`: Prometheus metrics collection
- `capver/`: Tailscale capability version management
### Key Subsystem Interactions
**Node Registration Flow**
1. **Client Connection**: `noise.go` handles secure protocol handshake
2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth)
3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go`
4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory
5. **Network Setup**: `mapper/` generates initial Tailscale network map
**Ongoing Operations**
1. **Poll Requests**: `poll.go` receives periodic client updates
2. **State Updates**: `NodeStore` maintains real-time node information
3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships
4. **Map Distribution**: `mapper/` sends network topology to all affected clients
**Route Management**
1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates
2. **Storage**: `db/` persists routes, `NodeStore` caches for performance
3. **Approval**: `policy/` auto-approves routes based on ACL rules
4. **Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers (simplified sketch below)
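As a rough mental model for the "selects primary routes" step above: among all online nodes advertising the same prefix, one is chosen as primary, and another online advertiser takes over if it disappears. The following is a simplified, self-contained illustration of that idea only — the real selection logic lives in `hscontrol/routes/` and is more involved:

```go
package main

import "fmt"

// node is a simplified stand-in for a route-advertising node.
type node struct {
	name   string
	online bool
}

// pickPrimary returns the first online advertiser of a prefix, illustrating
// the failover idea: when the current primary goes offline, the next online
// advertiser takes over on the following selection pass.
func pickPrimary(advertisers []node) (node, bool) {
	for _, n := range advertisers {
		if n.online {
			return n, true
		}
	}
	return node{}, false
}

func main() {
	advertisers := []node{
		{name: "router-a", online: false}, // previous primary went offline
		{name: "router-b", online: true},  // standby takes over
	}
	if primary, ok := pickPrimary(advertisers); ok {
		fmt.Println("primary for 10.33.0.0/16:", primary.name) // router-b
	}
}
```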
### Command-Line Tools (`cmd/`)
**Main Server (`cmd/headscale/`)**
- `headscale.go`: CLI parsing, configuration loading, server startup
- Supports daemon mode, CLI operations (user/node management), database operations
**Integration Test Runner (`cmd/hi/`)**
- `main.go`: Test execution framework with Docker orchestration
- `run.go`: Individual test execution with artifact collection
- `doctor.go`: System requirements validation
- `docker.go`: Container lifecycle management
- Essential for validating changes against real Tailscale clients
### Generated & External Code
**Protocol Buffers (`proto/` → `gen/`)**
- Defines gRPC API for headscale management operations
- Client libraries can generate from these definitions
- Run `make generate` after modifying `.proto` files
**Integration Testing (`integration/`)**
- `scenario.go`: Docker test environment setup
- `tailscale.go`: Tailscale client container management
- Individual test files for specific functionality areas
- Real end-to-end validation with network isolation
### Critical Performance Paths
**High-Frequency Operations**
1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client
2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data
3. **Policy Evaluation** (`policy/`): On every peer relationship calculation
4. **Route Lookups** (`routes/`): During network map generation
**Database Write Patterns**
- **Frequent**: Node heartbeats, endpoint updates, route changes
- **Moderate**: User operations, policy updates, API key management
- **Rare**: Schema migrations, bulk operations
### Configuration & Deployment
**Configuration (`hscontrol/types/config.go`)**
- Database connection settings (SQLite/PostgreSQL)
- Network configuration (IP ranges, DNS settings)
- Policy mode (file vs database)
- DERP relay configuration
- OIDC provider settings
**Key Dependencies**
- **GORM**: Database ORM with migration support
- **Tailscale Libraries**: Core networking and protocol code
- **Zerolog**: Structured logging throughout the application
- **Buf**: Protocol buffer toolchain for code generation
### Development Workflow Integration
The architecture supports incremental development:
- **Unit Tests**: Focus on individual packages (`*_test.go` files)
- **Integration Tests**: Validate cross-component interactions
- **Database Tests**: Extensive migration and data integrity validation
- **Policy Tests**: ACL rule evaluation and edge cases
- **Performance Tests**: NodeStore and high-frequency operation validation
## Integration Test System
### Overview
Integration tests use Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination.
### Running Integration Tests
**System Requirements**
```bash
# Check if your system is ready
go run ./cmd/hi doctor
```
This verifies Docker, Go, required images, and disk space.
**Test Execution Patterns**
```bash
# Run a single test (recommended for development)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork"
# Run with PostgreSQL backend (for database-heavy tests)
go run ./cmd/hi run "TestExpireNode" --postgres
# Run multiple tests with pattern matching
go run ./cmd/hi run "TestSubnet*"
# Run all integration tests (CI/full validation)
go test ./integration -timeout 30m
```
**Test Categories & Timing**
- **Fast tests** (< 2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` (12 min duration)
### Test Infrastructure
**Docker Setup**
- Headscale server container with configurable database backend
- Multiple Tailscale client containers with different versions
- Isolated networks per test scenario
- Automatic cleanup after test completion
**Test Artifacts**
All test runs save artifacts to `control_logs/TIMESTAMP-ID/`:
```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log # Headscale server logs
├── hs-testname-abc123.stdout.log
├── hs-testname-abc123.db # Database snapshot
├── hs-testname-abc123_metrics.txt # Prometheus metrics
├── hs-testname-abc123-mapresponses/ # Protocol debug data
├── ts-client-xyz789.stderr.log # Tailscale client logs
├── ts-client-xyz789.stdout.log
└── ts-client-xyz789_status.json # Client status dump
```
### Test Development Guidelines
**Timing Considerations**
Integration tests involve real network operations and Docker container lifecycle:
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // May fail due to timing
// ✅ Correct: Wait for async operations to complete
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```
**Common Test Patterns**
- **Route Advertisement**: Use `EventuallyWithT` for route propagation
- **Node State Changes**: Wait for NodeStore synchronization
- **ACL Policy Changes**: Allow time for policy recalculation
- **Network Connectivity**: Use ping tests with retries (see the sketch below)
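For the ping-with-retries pattern in the last bullet, a minimal sketch in the same fragment style as the examples above. The exact `Execute` return values and the ping output string are assumptions; the point is to poll inside `EventuallyWithT` rather than sleep:

```go
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Assumed helper shape: stdout, stderr, err := client.Execute(cmd)
	stdout, _, err := client.Execute([]string{"ping", "-c", "1", "-W", "1", peerIP})
	assert.NoError(c, err)
	// BusyBox ping prints "1 packets received" on success (assumption for
	// the Alpine-based client images).
	assert.Contains(c, stdout, "1 packets received")
}, 30*time.Second, 500*time.Millisecond, "peer should become reachable")
```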
**Test Data Management**
```go
// Node identification: Don't assume array ordering
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
nodeIDStr := fmt.Sprintf("%d", node.GetId())
if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
// Test the node that should have the route
}
}
```
### Troubleshooting Integration Tests
**Common Failure Patterns**
1. **Timing Issues**: Test assertions run before async operations complete
- **Solution**: Use `EventuallyWithT` with appropriate timeouts
- **Timeout Guidelines**: 3-5s for route operations, 10s for complex scenarios
2. **Infrastructure Problems**: Disk space, Docker issues, network conflicts
- **Check**: `go run ./cmd/hi doctor` for system health
- **Clean**: Remove old test containers and networks
3. **NodeStore Synchronization**: Tests expecting immediate data availability
- **Key Points**: Route advertisements must propagate through poll requests
- **Fix**: Wait for NodeStore updates after Hostinfo changes
4. **Database Backend Differences**: SQLite vs PostgreSQL behavior differences
- **Use**: `--postgres` flag for database-intensive tests
- **Note**: Some timing characteristics differ between backends
**Debugging Failed Tests**
1. **Check test artifacts** in `control_logs/` for detailed logs
2. **Examine MapResponse JSON** files for protocol-level debugging
3. **Review Headscale stderr logs** for server-side error messages
4. **Check Tailscale client status** for network-level issues
**Resource Management**
- Tests require significant disk space (each run ~100MB of logs)
- Docker containers are cleaned up automatically on success
- Failed tests may leave containers running - clean manually if needed
- Use `docker system prune` periodically to reclaim space
### Best Practices for Test Modifications
1. **Always test locally** before committing integration test changes
2. **Use appropriate timeouts** - too short causes flaky tests, too long slows CI
3. **Clean up properly** - ensure tests don't leave persistent state
4. **Handle both success and failure paths** in test scenarios
5. **Document timing requirements** for complex test scenarios
## NodeStore Implementation Details
**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes:
1. **Timing Considerations**: Route advertisements need time to propagate from clients to server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions.
2. **Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic.
3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility - expired nodes should remain visible for debugging but marked as expired.
## Testing Guidelines
### Integration Test Patterns
```go
// Use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
nodes, err := headscale.ListNodes()
assert.NoError(c, err)
// Check expected state
}, 10*time.Second, 100*time.Millisecond, "description")
// Node route checking by actual node properties, not array position
var routeNode *v1.Node
for _, node := range nodes {
if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" {
routeNode = node
break
}
}
```
### Running Problematic Tests
- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes)
- Infrastructure issues like disk space can cause test failures unrelated to code changes
- Use `--postgres` flag when testing database-heavy scenarios
## Important Notes
- **Dependencies**: Use `nix develop` for consistent toolchain (Go, buf, protobuf tools, linting)
- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately
- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting
- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing)
- **Integration Tests**: Require Docker and can consume significant disk space
- **Performance**: NodeStore optimizations are critical for scale - be careful with changes to state management
## Debugging Integration Tests
Test artifacts are preserved in `control_logs/TIMESTAMP-ID/` including:
- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging
When tests fail, check these artifacts first before assuming code issues.

1821
CLI_IMPROVEMENT_PLAN.md Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,201 @@
# CLI Standardization Summary
## Changes Made
### 1. Command Naming Standardization
- **Fixed**: `backfillips` → `backfill-ips` (with backward compat alias)
- **Fixed**: `dumpConfig` → `dump-config` (with backward compat alias)
- **Result**: All commands now use kebab-case consistently
### 2. Flag Standardization
#### Node Commands
- **Added**: `--node` flag as primary way to specify nodes
- **Deprecated**: `--identifier` flag (hidden, marked deprecated)
- **Backward Compatible**: Both flags work, `--identifier` shows deprecation warning
- **Smart Lookup Ready**: `--node` accepts strings for future name/hostname/IP lookup
#### User Commands
- **Updated**: User identification flow prepared for `--user` flag
- **Maintained**: Existing `--name` and `--identifier` flags for backward compatibility
### 3. Description Consistency
- **Fixed**: "Api" → "API" throughout
- **Fixed**: Capitalization consistency in short descriptions
- **Fixed**: Removed unnecessary periods from short descriptions
- **Standardized**: "Handle/Manage the X of Headscale" pattern
### 4. Type Consistency
- **Standardized**: Node IDs use `uint64` consistently
- **Maintained**: Backward compatibility with existing flag types
## Current Status
### ✅ Completed
- Command naming (kebab-case)
- Flag deprecation and aliasing
- Description standardization
- Backward compatibility preservation
- Helper functions for flag processing
- **SMART LOOKUP IMPLEMENTATION**:
- Enhanced `ListNodesRequest` proto with ID, name, hostname, IP filters
- Implemented smart filtering in `ListNodes` gRPC method
- Added CLI smart lookup functions for nodes and users
- Single match validation with helpful error messages
- Automatic detection: ID (numeric) vs IP vs name/hostname/email
### ✅ Smart Lookup Features
- **Node Lookup**: By ID, hostname, or IP address
- **User Lookup**: By ID, username, or email address
- **Single Match Enforcement**: Errors if 0 or >1 matches found
- **Helpful Error Messages**: Shows all matches when ambiguous
- **Full Backward Compatibility**: All existing flags still work
- **Enhanced List Commands**: Both `nodes list` and `users list` support all filter types
## Breaking Changes
**None.** All changes maintain full backward compatibility through flag aliases and deprecation warnings.
## Implementation Details
### Smart Lookup Algorithm
1. **Input Detection** (see the Go sketch after this list):
```go
if numeric && > 0 -> treat as ID
else if contains "@" -> treat as email (users only)
else if valid IP address -> treat as IP (nodes only)
else -> treat as name/hostname
```
2. **gRPC Filtering**:
- Uses enhanced `ListNodes`/`ListUsers` with specific filters
- Server-side filtering for optimal performance
- Single transaction per lookup
3. **Match Validation**:
- Exactly 1 match: Return ID
- 0 matches: Error with "not found" message
- >1 matches: Error listing all matches for disambiguation
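A self-contained sketch of the detection order from step 1 above; this is illustrative only — the function and variable names are not the actual code in `cmd/headscale/cli`:

```go
package main

import (
	"fmt"
	"net/netip"
	"strconv"
	"strings"
)

// classifySpecifier mirrors the detection order described in step 1.
func classifySpecifier(s string, isUser bool) string {
	if id, err := strconv.ParseUint(s, 10, 64); err == nil && id > 0 {
		return "id"
	}
	if isUser && strings.Contains(s, "@") {
		return "email" // users only
	}
	if !isUser {
		if _, err := netip.ParseAddr(s); err == nil {
			return "ip" // nodes only
		}
	}
	return "name" // hostname or username
}

func main() {
	fmt.Println(classifySpecifier("123", false))              // id
	fmt.Println(classifySpecifier("alice@company.com", true)) // email
	fmt.Println(classifySpecifier("100.64.0.1", false))       // ip
	fmt.Println(classifySpecifier("my-laptop", false))        // name
}
```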
### Enhanced Proto Definitions
```protobuf
message ListNodesRequest {
string user = 1; // existing
uint64 id = 2; // new: filter by ID
string name = 3; // new: filter by hostname
string hostname = 4; // new: alias for name
repeated string ip_addresses = 5; // new: filter by IPs
}
```
### Future Enhancements
- **Fuzzy Matching**: Partial name matching with confirmation
- **Recently Used**: Cache recently accessed nodes/users
- **Tab Completion**: Shell completion for names/hostnames
- **Bulk Operations**: Multi-select with pattern matching
## Migration Path for Users
### Now Available (Current Release)
```bash
# Old way (still works, shows deprecation warning)
headscale nodes expire --identifier 123
# New way with smart lookup:
headscale nodes expire --node 123 # by ID
headscale nodes expire --node "my-laptop" # by hostname
headscale nodes expire --node "100.64.0.1" # by Tailscale IP
headscale nodes expire --node "192.168.1.100" # by real IP
# User operations:
headscale users destroy --user 123 # by ID
headscale users destroy --user "alice" # by username
headscale users destroy --user "alice@company.com" # by email
# Enhanced list commands with filtering:
headscale nodes list --node "laptop" # filter nodes by name
headscale nodes list --ip "100.64.0.1" # filter nodes by IP
headscale nodes list --user "alice" # filter nodes by user
headscale users list --user "alice" # smart lookup user
headscale users list --email "@company.com" # filter by email domain
headscale users list --name "alice" # filter by exact name
# Error handling examples:
headscale nodes expire --node "laptop"
# Error: multiple nodes found matching 'laptop': ID=1 name=laptop-alice, ID=2 name=laptop-bob
headscale nodes expire --node "nonexistent"
# Error: no node found matching 'nonexistent'
```
## Command Structure Overview
```
headscale [global-flags] <command> [command-flags] <subcommand> [subcommand-flags] [args]
Global Flags:
--config, -c config file path
--output, -o output format (json, yaml, json-line)
--force disable prompts
Commands:
├── serve
├── version
├── config-test
├── dump-config (alias: dumpConfig)
├── mockoidc
├── generate/
│ └── private-key
├── nodes/
│ ├── list (--user, --tags, --columns)
│ ├── register (--user, --key)
│ ├── list-routes (--node)
│ ├── expire (--node)
│ ├── rename (--node) <new-name>
│ ├── delete (--node)
│ ├── move (--node, --user)
│ ├── tag (--node, --tags)
│ ├── approve-routes (--node, --routes)
│ └── backfill-ips (alias: backfillips)
├── users/
│ ├── create <name> (--display-name, --email, --picture-url)
│ ├── list (--user, --name, --email, --columns)
│ ├── destroy (--user|--name|--identifier)
│ └── rename (--user|--name|--identifier, --new-name)
├── apikeys/
│ ├── list
│ ├── create (--expiration)
│ ├── expire (--prefix)
│ └── delete (--prefix)
├── preauthkeys/
│ ├── list (--user)
│ ├── create (--user, --reusable, --ephemeral, --expiration, --tags)
│ └── expire (--user) <key>
├── policy/
│ ├── get
│ ├── set (--file)
│ └── check (--file)
└── debug/
└── create-node (--name, --user, --key, --route)
```
## Deprecated Flags
All deprecated flags continue to work but show warnings:
- `--identifier` → use `--node` (for node commands) or `--user` (for user commands)
- `--namespace` → use `--user` (already implemented)
- `dumpConfig` → use `dump-config`
- `backfillips` → use `backfill-ips`
## Error Handling
Improved error messages provide clear guidance:
```
Error: node specifier must be a numeric ID (smart lookup by name/hostname/IP not yet implemented)
Error: --node flag is required
Error: --user flag is required
```

View File

@@ -12,7 +12,7 @@ WORKDIR /go/src/tailscale
ARG TARGETARCH
RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
FROM alpine:3.22
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/

View File

@@ -2,43 +2,25 @@
# and are in no way endorsed by Headscale's maintainers as an
# official nor supported release or distribution.
FROM docker.io/golang:1.25-trixie AS builder
FROM docker.io/golang:1.24-bookworm
ARG VERSION=dev
ENV GOPATH /go
WORKDIR /go/src/headscale
# Install delve debugger first - rarely changes, good cache candidate
RUN go install github.com/go-delve/delve/cmd/dlv@latest
RUN apt-get update \
&& apt-get install --no-install-recommends --yes less jq sqlite3 dnsutils \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
RUN mkdir -p /var/run/headscale
# Download dependencies - only invalidated when go.mod/go.sum change
COPY go.mod go.sum /go/src/headscale/
RUN go mod download
# Copy source and build - invalidated on any source change
COPY . .
# Build debug binary with debug symbols for delve
RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale
# Runtime stage
FROM debian:trixie-slim
RUN apt-get --update install --no-install-recommends --yes \
bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
&& apt-get dist-clean
RUN mkdir -p /var/run/headscale
# Copy binaries from builder
COPY --from=builder /go/bin/headscale /usr/local/bin/headscale
COPY --from=builder /go/bin/dlv /usr/local/bin/dlv
# Copy source code for delve source-level debugging
COPY --from=builder /go/src/headscale /go/src/headscale
WORKDIR /go/src/headscale
RUN CGO_ENABLED=0 GOOS=linux go install -a ./cmd/headscale && test -e /go/bin/headscale
# Need to reset the entrypoint or everything will run as a busybox script
ENTRYPOINT []
EXPOSE 8080/tcp 40000/tcp
CMD ["dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/usr/local/bin/headscale", "--"]
EXPOSE 8080/tcp
CMD ["headscale"]

View File

@@ -1,17 +0,0 @@
# Minimal CI image - expects pre-built headscale binary in build context
# For local development with delve debugging, use Dockerfile.integration instead
FROM debian:trixie-slim
RUN apt-get --update install --no-install-recommends --yes \
bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
&& apt-get dist-clean
RUN mkdir -p /var/run/headscale
# Copy pre-built headscale binary from build context
COPY headscale /usr/local/bin/headscale
ENTRYPOINT []
EXPOSE 8080/tcp
CMD ["/usr/local/bin/headscale"]

View File

@@ -4,7 +4,7 @@
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.
FROM golang:1.25-alpine AS build-env
FROM golang:1.24-alpine AS build-env
WORKDIR /go/src
@@ -36,10 +36,8 @@ RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
FROM alpine:3.22
# Upstream: ca-certificates ip6tables iptables iproute2
# Tests: curl python3 (traceroute via BusyBox)
RUN apk add --no-cache ca-certificates curl ip6tables iptables iproute2 python3
FROM alpine:3.18
RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
COPY --from=build-env /go/bin/* /usr/local/bin/
# For compat with the previous run.sh, although ideally you should be

View File

@@ -64,6 +64,7 @@ fmt-go: check-deps $(GO_SOURCES)
fmt-prettier: check-deps $(DOC_SOURCES)
@echo "Formatting documentation and config files..."
prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
prettier --write --print-width 80 --prose-wrap always CHANGELOG.md
.PHONY: fmt-proto
fmt-proto: check-deps $(PROTO_SOURCES)
@@ -86,9 +87,10 @@ lint-proto: check-deps $(PROTO_SOURCES)
# Code generation
.PHONY: generate
generate: check-deps
@echo "Generating code..."
go generate ./...
generate: check-deps $(PROTO_SOURCES)
@echo "Generating code from Protocol Buffers..."
rm -rf gen
buf generate proto
# Clean targets
.PHONY: clean
@@ -116,7 +118,7 @@ help:
@echo ""
@echo "Specific targets:"
@echo " fmt-go - Format Go code only"
@echo " fmt-prettier - Format documentation only"
@echo " fmt-prettier - Format documentation only"
@echo " fmt-proto - Format Protocol Buffer files only"
@echo " lint-go - Lint Go code only"
@echo " lint-proto - Lint Protocol Buffer files only"
@@ -125,4 +127,4 @@ help:
@echo " check-deps - Verify required tools are available"
@echo ""
@echo "Note: If not running in a nix shell, ensure dependencies are available:"
@echo " nix develop"
@echo " nix develop"

View File

@@ -1,4 +1,4 @@
![headscale logo](./docs/assets/logo/headscale3_header_stacked_left.png)
![headscale logo](./docs/logo/headscale3_header_stacked_left.png)
![ci](https://github.com/juanfont/headscale/actions/workflows/test.yml/badge.svg)
@@ -63,8 +63,6 @@ and container to run Headscale.**
Please have a look at the [`documentation`](https://headscale.net/stable/).
For NixOS users, a module is available in [`nix/`](./nix/).
## Talks
- Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
@@ -149,7 +147,6 @@ make build
We recommend using Nix for dependency management to ensure you have all required tools. If you prefer to manage dependencies yourself, you can use Make directly:
**With Nix (recommended):**
```shell
nix develop
make test
@@ -157,7 +154,6 @@ make build
```
**With your own dependencies:**
```shell
make test
make build

View File

@@ -1,6 +1,7 @@
package cli
import (
"context"
"fmt"
"strconv"
"time"
@@ -9,15 +10,11 @@ import (
"github.com/juanfont/headscale/hscontrol/util"
"github.com/prometheus/common/model"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
// 90 days.
DefaultAPIKeyExpiry = "90d"
)
func init() {
rootCmd.AddCommand(apiKeysCmd)
apiKeysCmd.AddCommand(listAPIKeys)
@@ -28,85 +25,94 @@ func init() {
apiKeysCmd.AddCommand(createAPIKeyCmd)
expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
expireAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(expireAPIKeyCmd)
deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
deleteAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
log.Fatal().Err(err).Msg("")
}
apiKeysCmd.AddCommand(deleteAPIKeyCmd)
}
var apiKeysCmd = &cobra.Command{
Use: "apikeys",
Short: "Handle the Api keys in Headscale",
Short: "Handle the API keys in Headscale",
Aliases: []string{"apikey", "api"},
}
var listAPIKeys = &cobra.Command{
Use: "list",
Short: "List the Api keys for headscale",
Short: "List the API keys for Headscale",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ListApiKeysRequest{}
request := &v1.ListApiKeysRequest{}
response, err := client.ListApiKeys(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
}
if output != "" {
SuccessOutput(response.GetApiKeys(), "", output)
}
tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"},
}
for _, key := range response.GetApiKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
response, err := client.ListApiKeys(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
return err
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10),
key.GetPrefix(),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
})
if output != "" {
SuccessOutput(response.GetApiKeys(), "", output)
return nil
}
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
tableData := pterm.TableData{
{"ID", "Prefix", "Expiration", "Created"},
}
for _, key := range response.GetApiKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), util.Base10),
key.GetPrefix(),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
})
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return err
}
return nil
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var createAPIKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new Api key",
Short: "Create a new API key",
Long: `
Creates a new Api key, the Api key is only visible on creation
and cannot be retrieved again.
If you lose a key, create a new one and revoke (expire) the old one.`,
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
request := &v1.CreateApiKeyRequest{}
@@ -119,123 +125,101 @@ If you loose a key, create a new one and revoke (expire) the old one.`,
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
response, err := client.CreateApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create Api Key: %s\n", err),
output,
)
return err
}
response, err := client.CreateApiKey(ctx, request)
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
return nil
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create Api Key: %s\n", err),
output,
)
return
}
SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
},
}
var expireAPIKeyCmd = &cobra.Command{
Use: "expire",
Short: "Expire an ApiKey",
Short: "Expire an API key",
Aliases: []string{"revoke", "exp", "e"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
id, _ := cmd.Flags().GetUint64("id")
prefix, _ := cmd.Flags().GetString("prefix")
switch {
case id == 0 && prefix == "":
ErrorOutput(
errMissingParameter,
"Either --id or --prefix must be provided",
output,
)
case id != 0 && prefix != "":
ErrorOutput(
errMissingParameter,
"Only one of --id or --prefix can be provided",
output,
)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.ExpireApiKeyRequest{}
if id != 0 {
request.Id = id
} else {
request.Prefix = prefix
}
response, err := client.ExpireApiKey(ctx, request)
output := GetOutputFlag(cmd)
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Api Key: %s\n", err),
output,
)
ErrorOutput(err, fmt.Sprintf("Error getting prefix from CLI flag: %s", err), output)
return
}
SuccessOutput(response, "Key expired", output)
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ExpireApiKeyRequest{
Prefix: prefix,
}
response, err := client.ExpireApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Api Key: %s\n", err),
output,
)
return err
}
SuccessOutput(response, "Key expired", output)
return nil
})
if err != nil {
return
}
},
}
var deleteAPIKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete an ApiKey",
Short: "Delete an API key",
Aliases: []string{"remove", "del"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
id, _ := cmd.Flags().GetUint64("id")
prefix, _ := cmd.Flags().GetString("prefix")
switch {
case id == 0 && prefix == "":
ErrorOutput(
errMissingParameter,
"Either --id or --prefix must be provided",
output,
)
case id != 0 && prefix != "":
ErrorOutput(
errMissingParameter,
"Only one of --id or --prefix can be provided",
output,
)
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeleteApiKeyRequest{}
if id != 0 {
request.Id = id
} else {
request.Prefix = prefix
}
response, err := client.DeleteApiKey(ctx, request)
output := GetOutputFlag(cmd)
prefix, err := cmd.Flags().GetString("prefix")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
ErrorOutput(err, fmt.Sprintf("Error getting prefix from CLI flag: %s", err), output)
return
}
SuccessOutput(response, "Key deleted", output)
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.DeleteApiKeyRequest{
Prefix: prefix,
}
response, err := client.DeleteApiKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Api Key: %s\n", err),
output,
)
return err
}
SuccessOutput(response, "Key deleted", output)
return nil
})
if err != nil {
return
}
},
}


@@ -0,0 +1,16 @@
package cli
import (
"context"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
)
// WithClient handles gRPC client setup and cleanup, calls fn with client and context
func WithClient(fn func(context.Context, v1.HeadscaleServiceClient) error) error {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
return fn(ctx, client)
}
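As a usage sketch (an editorial illustration, not part of the diff), a command's Run function written against this helper follows the same pattern as the refactored commands elsewhere in this change; the ListApiKeys call is just one example of a generated client method:

    Run: func(cmd *cobra.Command, args []string) {
        output := GetOutputFlag(cmd)
        err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
            // WithClient handles the gRPC connection setup and teardown.
            response, err := client.ListApiKeys(ctx, &v1.ListApiKeysRequest{})
            if err != nil {
                ErrorOutput(err, fmt.Sprintf("Error getting the list of keys: %s", err), output)
                return err
            }
            SuccessOutput(response.GetApiKeys(), "", output)
            return nil
        })
        if err != nil {
            return
        }
    },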


@@ -11,8 +11,8 @@ func init() {
var configTestCmd = &cobra.Command{
Use: "configtest",
Short: "Test the configuration.",
Long: "Run a test of the configuration and exit.",
Short: "Test the configuration",
Long: "Run a test of the configuration and exit",
Run: func(cmd *cobra.Command, args []string) {
_, err := newHeadscaleServerWithConfig()
if err != nil {


@@ -0,0 +1,46 @@
package cli
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestConfigTestCommand(t *testing.T) {
// Test that the configtest command exists and is properly configured
assert.NotNil(t, configTestCmd)
assert.Equal(t, "configtest", configTestCmd.Use)
assert.Equal(t, "Test the configuration.", configTestCmd.Short)
assert.Equal(t, "Run a test of the configuration and exit.", configTestCmd.Long)
assert.NotNil(t, configTestCmd.Run)
}
func TestConfigTestCommandInRootCommand(t *testing.T) {
// Test that configtest is available as a subcommand of root
cmd, _, err := rootCmd.Find([]string{"configtest"})
require.NoError(t, err)
assert.Equal(t, "configtest", cmd.Name())
assert.Equal(t, configTestCmd, cmd)
}
func TestConfigTestCommandHelp(t *testing.T) {
// Test that the command has proper help text
assert.NotEmpty(t, configTestCmd.Short)
assert.NotEmpty(t, configTestCmd.Long)
assert.Contains(t, configTestCmd.Short, "configuration")
assert.Contains(t, configTestCmd.Long, "test")
assert.Contains(t, configTestCmd.Long, "configuration")
}
// Note: We can't easily test the actual execution of configtest because:
// 1. It depends on configuration files being present
// 2. It calls log.Fatal() which would exit the test process
// 3. It tries to initialize a full Headscale server
//
// In a real refactor, we would:
// 1. Extract the configuration validation logic to a testable function
// 2. Return errors instead of calling log.Fatal()
// 3. Accept configuration as a parameter instead of loading from global state
//
// For now, we test the command structure and that it's properly wired up.
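A minimal sketch of the refactor described above could return an error instead of exiting; runConfigTest is a hypothetical name, reusing the newHeadscaleServerWithConfig helper that the command already calls:

    // runConfigTest is a hypothetical, testable wrapper: it reports failure
    // as an error instead of calling log.Fatal() from inside the command.
    func runConfigTest() error {
        if _, err := newHeadscaleServerWithConfig(); err != nil {
            return fmt.Errorf("configuration test failed: %w", err)
        }
        return nil
    }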


@@ -1,6 +1,7 @@
package cli
import (
"context"
"fmt"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
@@ -10,10 +11,9 @@ import (
"google.golang.org/grpc/status"
)
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
type Error string
func (e Error) Error() string { return string(e) }
const (
errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
)
func init() {
rootCmd.AddCommand(debugCmd)
@@ -25,11 +25,6 @@ func init() {
}
createNodeCmd.Flags().StringP("user", "u", "", "User")
createNodeCmd.Flags().StringP("namespace", "n", "", "User")
createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
createNodeNamespaceFlag.Hidden = true
err = createNodeCmd.MarkFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
@@ -55,17 +50,14 @@ var createNodeCmd = &cobra.Command{
Use: "create-node",
Short: "Create a node that can be registered with `nodes register <>` command",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
user, err := cmd.Flags().GetString("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
name, err := cmd.Flags().GetString("name")
if err != nil {
ErrorOutput(
@@ -73,6 +65,7 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting node from flag: %s", err),
output,
)
return
}
registrationID, err := cmd.Flags().GetString("key")
@@ -82,6 +75,7 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting key from flag: %s", err),
output,
)
return
}
_, err = types.RegistrationIDFromString(registrationID)
@@ -91,6 +85,7 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Failed to parse machine key from flag: %s", err),
output,
)
return
}
routes, err := cmd.Flags().GetStringSlice("route")
@@ -100,24 +95,32 @@ var createNodeCmd = &cobra.Command{
fmt.Sprintf("Error getting routes from flag: %s", err),
output,
)
return
}
request := &v1.DebugCreateNodeRequest{
Key: registrationID,
Name: name,
User: user,
Routes: routes,
}
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.DebugCreateNodeRequest{
Key: registrationID,
Name: name,
User: user,
Routes: routes,
}
response, err := client.DebugCreateNode(ctx, request)
response, err := client.DebugCreateNode(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot create node: "+status.Convert(err).Message(),
output,
)
return err
}
SuccessOutput(response.GetNode(), "Node created", output)
return nil
})
if err != nil {
ErrorOutput(
err,
"Cannot create node: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(response.GetNode(), "Node created", output)
},
}


@@ -0,0 +1,144 @@
package cli
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestDebugCommand(t *testing.T) {
// Test that the debug command exists and is properly configured
assert.NotNil(t, debugCmd)
assert.Equal(t, "debug", debugCmd.Use)
assert.Equal(t, "debug and testing commands", debugCmd.Short)
assert.Equal(t, "debug contains extra commands used for debugging and testing headscale", debugCmd.Long)
}
func TestDebugCommandInRootCommand(t *testing.T) {
// Test that debug is available as a subcommand of root
cmd, _, err := rootCmd.Find([]string{"debug"})
require.NoError(t, err)
assert.Equal(t, "debug", cmd.Name())
assert.Equal(t, debugCmd, cmd)
}
func TestCreateNodeCommand(t *testing.T) {
// Test that the create-node command exists and is properly configured
assert.NotNil(t, createNodeCmd)
assert.Equal(t, "create-node", createNodeCmd.Use)
assert.Equal(t, "Create a node that can be registered with `nodes register <>` command", createNodeCmd.Short)
assert.NotNil(t, createNodeCmd.Run)
}
func TestCreateNodeCommandInDebugCommand(t *testing.T) {
// Test that create-node is available as a subcommand of debug
cmd, _, err := rootCmd.Find([]string{"debug", "create-node"})
require.NoError(t, err)
assert.Equal(t, "create-node", cmd.Name())
assert.Equal(t, createNodeCmd, cmd)
}
func TestCreateNodeCommandFlags(t *testing.T) {
// Test that create-node has the required flags
// Test name flag
nameFlag := createNodeCmd.Flags().Lookup("name")
assert.NotNil(t, nameFlag)
assert.Equal(t, "", nameFlag.Shorthand) // No shorthand for name
assert.Equal(t, "", nameFlag.DefValue)
// Test user flag
userFlag := createNodeCmd.Flags().Lookup("user")
assert.NotNil(t, userFlag)
assert.Equal(t, "u", userFlag.Shorthand)
// Test key flag
keyFlag := createNodeCmd.Flags().Lookup("key")
assert.NotNil(t, keyFlag)
assert.Equal(t, "k", keyFlag.Shorthand)
// Test route flag
routeFlag := createNodeCmd.Flags().Lookup("route")
assert.NotNil(t, routeFlag)
assert.Equal(t, "r", routeFlag.Shorthand)
}
func TestCreateNodeCommandRequiredFlags(t *testing.T) {
// Test that required flags are marked as required
// We can't easily test the actual requirement enforcement without executing the command
// But we can test that the flags exist and have the expected properties
// These flags should be required based on the init() function
requiredFlags := []string{"name", "user", "key"}
for _, flagName := range requiredFlags {
flag := createNodeCmd.Flags().Lookup(flagName)
assert.NotNil(t, flag, "Required flag %s should exist", flagName)
}
}
func TestErrorType(t *testing.T) {
// Test the Error type implementation
err := errPreAuthKeyMalformed
assert.Equal(t, "key is malformed. expected 64 hex characters with `nodekey` prefix", err.Error())
assert.Equal(t, "key is malformed. expected 64 hex characters with `nodekey` prefix", string(err))
// Test that it implements the error interface
var genericErr error = err
assert.Equal(t, "key is malformed. expected 64 hex characters with `nodekey` prefix", genericErr.Error())
}
func TestErrorConstants(t *testing.T) {
// Test that error constants are defined properly
assert.Equal(t, Error("key is malformed. expected 64 hex characters with `nodekey` prefix"), errPreAuthKeyMalformed)
}
func TestDebugCommandStructure(t *testing.T) {
// Test that debug has create-node as a subcommand
found := false
for _, subcmd := range debugCmd.Commands() {
if subcmd.Name() == "create-node" {
found = true
break
}
}
assert.True(t, found, "create-node should be a subcommand of debug")
}
func TestCreateNodeCommandHelp(t *testing.T) {
// Test that the command has proper help text
assert.NotEmpty(t, createNodeCmd.Short)
assert.Contains(t, createNodeCmd.Short, "Create a node")
assert.Contains(t, createNodeCmd.Short, "nodes register")
}
func TestCreateNodeCommandFlagDescriptions(t *testing.T) {
// Test that flags have appropriate usage descriptions
nameFlag := createNodeCmd.Flags().Lookup("name")
assert.Equal(t, "Name", nameFlag.Usage)
userFlag := createNodeCmd.Flags().Lookup("user")
assert.Equal(t, "User", userFlag.Usage)
keyFlag := createNodeCmd.Flags().Lookup("key")
assert.Equal(t, "Key", keyFlag.Usage)
routeFlag := createNodeCmd.Flags().Lookup("route")
assert.Contains(t, routeFlag.Usage, "routes to advertise")
}
// Note: We can't easily test the actual execution of create-node because:
// 1. It depends on gRPC client configuration
// 2. It calls SuccessOutput/ErrorOutput which exit the process
// 3. It requires valid registration keys and user setup
//
// In a real refactor, we would:
// 1. Extract the business logic to testable functions
// 2. Use dependency injection for the gRPC client
// 3. Return errors instead of calling ErrorOutput/SuccessOutput
// 4. Add validation functions that can be tested independently
//
// For now, we test the command structure and flag configuration.
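As an illustration of the suggested refactor (the function name is hypothetical), the key parsing that create-node performs inline could be extracted so unit tests can exercise it directly:

    // validateRegistrationKey wraps the types.RegistrationIDFromString check
    // that the command already performs, returning an error instead of printing.
    func validateRegistrationKey(key string) error {
        if _, err := types.RegistrationIDFromString(key); err != nil {
            return fmt.Errorf("failed to parse machine key: %w", err)
        }
        return nil
    }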


@@ -12,9 +12,10 @@ func init() {
}
var dumpConfigCmd = &cobra.Command{
Use: "dumpConfig",
Short: "dump current config to /etc/headscale/config.dump.yaml, integration test only",
Hidden: true,
Use: "dump-config",
Short: "Dump current config to /etc/headscale/config.dump.yaml, integration test only",
Aliases: []string{"dumpConfig"},
Hidden: true,
Args: func(cmd *cobra.Command, args []string) error {
return nil
},


@@ -22,7 +22,7 @@ var generatePrivateKeyCmd = &cobra.Command{
Use: "private-key",
Short: "Generate a private key for the headscale server",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
machineKey := key.NewMachine()
machineKeyStr, err := machineKey.MarshalText()


@@ -0,0 +1,230 @@
package cli
import (
"bytes"
"encoding/json"
"strings"
"testing"
"github.com/spf13/cobra"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v3"
)
func TestGenerateCommand(t *testing.T) {
// Test that the generate command exists and shows help
cmd := &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",
}
cmd.AddCommand(generateCmd)
out := new(bytes.Buffer)
cmd.SetOut(out)
cmd.SetErr(out)
cmd.SetArgs([]string{"generate", "--help"})
err := cmd.Execute()
require.NoError(t, err)
outStr := out.String()
assert.Contains(t, outStr, "Generate commands")
assert.Contains(t, outStr, "private-key")
assert.Contains(t, outStr, "Aliases:")
assert.Contains(t, outStr, "gen")
}
func TestGenerateCommandAlias(t *testing.T) {
// Test that the "gen" alias works
cmd := &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",
}
cmd.AddCommand(generateCmd)
out := new(bytes.Buffer)
cmd.SetOut(out)
cmd.SetErr(out)
cmd.SetArgs([]string{"gen", "--help"})
err := cmd.Execute()
require.NoError(t, err)
outStr := out.String()
assert.Contains(t, outStr, "Generate commands")
}
func TestGeneratePrivateKeyCommand(t *testing.T) {
tests := []struct {
name string
args []string
expectJSON bool
expectYAML bool
}{
{
name: "default output",
args: []string{"generate", "private-key"},
expectJSON: false,
expectYAML: false,
},
{
name: "json output",
args: []string{"generate", "private-key", "--output", "json"},
expectJSON: true,
expectYAML: false,
},
{
name: "yaml output",
args: []string{"generate", "private-key", "--output", "yaml"},
expectJSON: false,
expectYAML: true,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Note: This command calls SuccessOutput which exits the process
// We can't test the actual execution easily without mocking
// Instead, we test the command structure and that it exists
cmd := &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",
}
cmd.AddCommand(generateCmd)
cmd.PersistentFlags().StringP("output", "o", "", "Output format")
// Test that the command exists and can be found
privateKeyCmd, _, err := cmd.Find([]string{"generate", "private-key"})
require.NoError(t, err)
assert.Equal(t, "private-key", privateKeyCmd.Name())
assert.Equal(t, "Generate a private key for the headscale server", privateKeyCmd.Short)
})
}
}
func TestGeneratePrivateKeyHelp(t *testing.T) {
cmd := &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",
}
cmd.AddCommand(generateCmd)
out := new(bytes.Buffer)
cmd.SetOut(out)
cmd.SetErr(out)
cmd.SetArgs([]string{"generate", "private-key", "--help"})
err := cmd.Execute()
require.NoError(t, err)
outStr := out.String()
assert.Contains(t, outStr, "Generate a private key for the headscale server")
assert.Contains(t, outStr, "Usage:")
}
// Test the key generation logic in isolation (without SuccessOutput/ErrorOutput)
func TestPrivateKeyGeneration(t *testing.T) {
// We can't easily test the full command because it calls SuccessOutput which exits
// But we can test that the key generation produces valid output format
// This is testing the core logic that would be in the command
// In a real refactor, we'd extract this to a testable function
// For now, we can test that the command structure is correct
assert.NotNil(t, generatePrivateKeyCmd)
assert.Equal(t, "private-key", generatePrivateKeyCmd.Use)
assert.Equal(t, "Generate a private key for the headscale server", generatePrivateKeyCmd.Short)
assert.NotNil(t, generatePrivateKeyCmd.Run)
}
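If the generation logic were extracted as the comment above suggests, a sketch (hypothetical function, built from the same key.NewMachine call the command makes) could feed validatePrivateKeyOutput below:

    // generatePrivateKey is a hypothetical extraction of the command's core logic,
    // returning the key string so tests can validate its format.
    func generatePrivateKey() (string, error) {
        machineKey := key.NewMachine()
        machineKeyStr, err := machineKey.MarshalText()
        if err != nil {
            return "", err
        }
        return string(machineKeyStr), nil
    }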
func TestGenerateCommandStructure(t *testing.T) {
// Test the command hierarchy
assert.Equal(t, "generate", generateCmd.Use)
assert.Equal(t, "Generate commands", generateCmd.Short)
assert.Contains(t, generateCmd.Aliases, "gen")
// Test that private-key is a subcommand
found := false
for _, subcmd := range generateCmd.Commands() {
if subcmd.Name() == "private-key" {
found = true
break
}
}
assert.True(t, found, "private-key should be a subcommand of generate")
}
// Helper function to test output formats (would be used if we refactored the command)
func validatePrivateKeyOutput(t *testing.T, output string, format string) {
switch format {
case "json":
var result map[string]interface{}
err := json.Unmarshal([]byte(output), &result)
require.NoError(t, err, "Output should be valid JSON")
privateKey, exists := result["private_key"]
require.True(t, exists, "JSON should contain private_key field")
keyStr, ok := privateKey.(string)
require.True(t, ok, "private_key should be a string")
require.NotEmpty(t, keyStr, "private_key should not be empty")
// Basic validation that it looks like a machine key
assert.True(t, strings.HasPrefix(keyStr, "mkey:"), "Machine key should start with mkey:")
case "yaml":
var result map[string]interface{}
err := yaml.Unmarshal([]byte(output), &result)
require.NoError(t, err, "Output should be valid YAML")
privateKey, exists := result["private_key"]
require.True(t, exists, "YAML should contain private_key field")
keyStr, ok := privateKey.(string)
require.True(t, ok, "private_key should be a string")
require.NotEmpty(t, keyStr, "private_key should not be empty")
assert.True(t, strings.HasPrefix(keyStr, "mkey:"), "Machine key should start with mkey:")
default:
// Default format should just be the key itself
assert.True(t, strings.HasPrefix(output, "mkey:"), "Default output should be the machine key")
assert.NotContains(t, output, "{", "Default output should not contain JSON")
assert.NotContains(t, output, "private_key:", "Default output should not contain YAML structure")
}
}
func TestPrivateKeyOutputFormats(t *testing.T) {
// Test cases for different output formats
// These test the validation logic we would use after refactoring
tests := []struct {
format string
sample string
}{
{
format: "json",
sample: `{"private_key": "mkey:abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234"}`,
},
{
format: "yaml",
sample: "private_key: mkey:abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234\n",
},
{
format: "",
sample: "mkey:abcd1234567890abcd1234567890abcd1234567890abcd1234567890abcd1234",
},
}
for _, tt := range tests {
t.Run("format_"+tt.format, func(t *testing.T) {
validatePrivateKeyOutput(t, tt.sample, tt.format)
})
}
}


@@ -1,29 +0,0 @@
package cli
import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/spf13/cobra"
)
func init() {
rootCmd.AddCommand(healthCmd)
}
var healthCmd = &cobra.Command{
Use: "health",
Short: "Check the health of the Headscale server",
Long: "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
response, err := client.Health(ctx, &v1.HealthRequest{})
if err != nil {
ErrorOutput(err, "Error checking health", output)
}
SuccessOutput(response, "", output)
},
}


@@ -15,6 +15,11 @@ import (
"github.com/spf13/cobra"
)
// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
type Error string
func (e Error) Error() string { return string(e) }
const (
errMockOidcClientIDNotDefined = Error("MOCKOIDC_CLIENT_ID not defined")
errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")

File diff suppressed because it is too large


@@ -1,35 +1,27 @@
package cli
import (
"context"
"fmt"
"io"
"os"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/db"
"github.com/juanfont/headscale/hscontrol/policy"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"tailscale.com/types/views"
)
const (
bypassFlag = "bypass-grpc-and-access-database-directly"
)
func init() {
rootCmd.AddCommand(policyCmd)
getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(getPolicy)
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
if err := setPolicy.MarkFlagRequired("file"); err != nil {
log.Fatal().Err(err).Msg("")
}
setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
policyCmd.AddCommand(setPolicy)
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
@@ -49,58 +41,26 @@ var getPolicy = &cobra.Command{
Short: "Print the current ACL Policy",
Aliases: []string{"show", "view", "fetch"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
var policy string
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
pol, err := d.GetPolicy()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading Policy from database: %s", err), output)
}
policy = pol.Data
} else {
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
output := GetOutputFlag(cmd)
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.GetPolicyRequest{}
response, err := client.GetPolicy(ctx, request)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
return err
}
policy = response.GetPolicy()
// TODO(pallabpain): Maybe print this better?
// This does not pass output as we dont support yaml, json or json-line
// output for this command. It is HuJSON already.
SuccessOutput("", response.GetPolicy(), "")
return nil
})
if err != nil {
return
}
// TODO(pallabpain): Maybe print this better?
// This does not pass output as we dont support yaml, json or json-line
// output for this command. It is HuJSON already.
SuccessOutput("", policy, "")
},
}
@@ -112,73 +72,36 @@ var setPolicy = &cobra.Command{
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
Aliases: []string{"put", "update"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
policyPath, _ := cmd.Flags().GetString("file")
f, err := os.Open(policyPath)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
return
}
defer f.Close()
policyBytes, err := io.ReadAll(f)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
return
}
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo("DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?")
}
if !confirm && !force {
ErrorOutput(nil, "Aborting command", output)
return
}
cfg, err := types.LoadServerConfig()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed loading config: %s", err), output)
}
d, err := db.NewHeadscaleDatabase(
cfg,
nil,
)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to open database: %s", err), output)
}
users, err := d.ListUsers()
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to load users for policy validation: %s", err), output)
}
_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
return
}
_, err = d.SetPolicy(string(policyBytes))
if err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
}
} else {
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
if _, err := client.SetPolicy(ctx, request); err != nil {
ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
return err
}
}
SuccessOutput(nil, "Policy updated.", "")
SuccessOutput(nil, "Policy updated.", "")
return nil
})
if err != nil {
return
}
},
}
@@ -186,23 +109,26 @@ var checkPolicy = &cobra.Command{
Use: "check",
Short: "Check the Policy file for errors",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
policyPath, _ := cmd.Flags().GetString("file")
f, err := os.Open(policyPath)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
return
}
defer f.Close()
policyBytes, err := io.ReadAll(f)
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
return
}
_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
return
}
SuccessOutput(nil, "Policy is valid", "")


@@ -1,6 +1,7 @@
package cli
import (
"context"
"fmt"
"strconv"
"strings"
@@ -9,20 +10,22 @@ import (
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/prometheus/common/model"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/protobuf/types/known/timestamppb"
)
const (
DefaultPreAuthKeyExpiry = "1h"
)
func init() {
rootCmd.AddCommand(preauthkeysCmd)
preauthkeysCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
err := preauthkeysCmd.MarkPersistentFlagRequired("user")
if err != nil {
log.Fatal().Err(err).Msg("")
}
preauthkeysCmd.AddCommand(listPreAuthKeys)
preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
createPreAuthKeyCmd.PersistentFlags().
Bool("reusable", false, "Make the preauthkey reusable")
createPreAuthKeyCmd.PersistentFlags().
@@ -31,9 +34,6 @@ func init() {
StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
createPreAuthKeyCmd.Flags().
StringSlice("tags", []string{}, "Tags to automatically assign to node")
createPreAuthKeyCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
expirePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
deletePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
}
var preauthkeysCmd = &cobra.Command{
@@ -44,88 +44,105 @@ var preauthkeysCmd = &cobra.Command{
var listPreAuthKeys = &cobra.Command{
Use: "list",
Short: "List all preauthkeys",
Short: "List the preauthkeys for this user",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{})
user, err := cmd.Flags().GetUint64("user")
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output)
}
tableData := pterm.TableData{
{
"ID",
"Key/Prefix",
"Reusable",
"Ephemeral",
"Used",
"Expiration",
"Created",
"Owner",
},
}
for _, key := range response.GetPreAuthKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ListPreAuthKeysRequest{
User: user,
}
var owner string
if len(key.GetAclTags()) > 0 {
owner = strings.Join(key.GetAclTags(), "\n")
} else if key.GetUser() != nil {
owner = key.GetUser().GetName()
} else {
owner = "-"
response, err := client.ListPreAuthKeys(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Error getting the list of keys: %s", err),
output,
)
return err
}
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), 10),
key.GetKey(),
strconv.FormatBool(key.GetReusable()),
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
owner,
})
if output != "" {
SuccessOutput(response.GetPreAuthKeys(), "", output)
return nil
}
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
tableData := pterm.TableData{
{
"ID",
"Key",
"Reusable",
"Ephemeral",
"Used",
"Expiration",
"Created",
"Tags",
},
}
for _, key := range response.GetPreAuthKeys() {
expiration := "-"
if key.GetExpiration() != nil {
expiration = ColourTime(key.GetExpiration().AsTime())
}
aclTags := ""
for _, tag := range key.GetAclTags() {
aclTags += "," + tag
}
aclTags = strings.TrimLeft(aclTags, ",")
tableData = append(tableData, []string{
strconv.FormatUint(key.GetId(), 10),
key.GetKey(),
strconv.FormatBool(key.GetReusable()),
strconv.FormatBool(key.GetEphemeral()),
strconv.FormatBool(key.GetUsed()),
expiration,
key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
aclTags,
})
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return err
}
return nil
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return
}
},
}
var createPreAuthKeyCmd = &cobra.Command{
Use: "create",
Short: "Creates a new preauthkey",
Short: "Creates a new preauthkey in the specified user",
Aliases: []string{"c", "new"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
user, err := cmd.Flags().GetUint64("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
user, _ := cmd.Flags().GetUint64("user")
reusable, _ := cmd.Flags().GetBool("reusable")
ephemeral, _ := cmd.Flags().GetBool("ephemeral")
tags, _ := cmd.Flags().GetStringSlice("tags")
@@ -146,103 +163,77 @@ var createPreAuthKeyCmd = &cobra.Command{
fmt.Sprintf("Could not parse duration: %s\n", err),
output,
)
return
}
expiration := time.Now().UTC().Add(time.Duration(duration))
log.Trace().
Dur("expiration", time.Duration(duration)).
Msg("expiration has been set")
request.Expiration = timestamppb.New(expiration)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
response, err := client.CreatePreAuthKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
output,
)
return err
}
response, err := client.CreatePreAuthKey(ctx, request)
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
return nil
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
output,
)
return
}
SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
},
}
var expirePreAuthKeyCmd = &cobra.Command{
Use: "expire",
Use: "expire KEY",
Short: "Expire a preauthkey",
Aliases: []string{"revoke", "exp", "e"},
Args: func(cmd *cobra.Command, args []string) error {
if len(args) < 1 {
return errMissingParameter
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
id, _ := cmd.Flags().GetUint64("id")
if id == 0 {
ErrorOutput(
errMissingParameter,
"Error: missing --id parameter",
output,
)
output := GetOutputFlag(cmd)
user, err := cmd.Flags().GetUint64("user")
if err != nil {
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ExpirePreAuthKeyRequest{
User: user,
Key: args[0],
}
request := &v1.ExpirePreAuthKeyRequest{
Id: id,
}
response, err := client.ExpirePreAuthKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
output,
)
return err
}
response, err := client.ExpirePreAuthKey(ctx, request)
SuccessOutput(response, "Key expired", output)
return nil
})
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
output,
)
}
SuccessOutput(response, "Key expired", output)
},
}
var deletePreAuthKeyCmd = &cobra.Command{
Use: "delete",
Short: "Delete a preauthkey",
Aliases: []string{"del", "rm", "d"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
id, _ := cmd.Flags().GetUint64("id")
if id == 0 {
ErrorOutput(
errMissingParameter,
"Error: missing --id parameter",
output,
)
return
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
request := &v1.DeletePreAuthKeyRequest{
Id: id,
}
response, err := client.DeletePreAuthKey(ctx, request)
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Cannot delete Pre Auth Key: %s\n", err),
output,
)
}
SuccessOutput(response, "Key deleted", output)
},
}


@@ -7,7 +7,7 @@ import (
)
func ColourTime(date time.Time) string {
dateStr := date.Format("2006-01-02 15:04:05")
dateStr := date.Format(HeadscaleDateTimeFormat)
if date.After(time.Now()) {
dateStr = pterm.LightGreen(dateStr)


@@ -5,7 +5,6 @@ import (
"os"
"runtime"
"slices"
"strings"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/rs/zerolog"
@@ -15,10 +14,6 @@ import (
"github.com/tcnksm/go-latest"
)
const (
deprecateNamespaceMessage = "use --user"
)
var cfgFile string = ""
func init() {
@@ -72,64 +67,25 @@ func initConfig() {
disableUpdateCheck := viper.GetBool("disable_check_updates")
if !disableUpdateCheck && !machineOutput {
versionInfo := types.GetVersionInfo()
if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
!versionInfo.Dirty {
types.Version != "dev" {
githubTag := &latest.GithubTag{
Owner: "juanfont",
Repository: "headscale",
TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
Owner: "juanfont",
Repository: "headscale",
}
res, err := latest.Check(githubTag, versionInfo.Version)
res, err := latest.Check(githubTag, types.Version)
if err == nil && res.Outdated {
//nolint
log.Warn().Msgf(
"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
res.Current,
versionInfo.Version,
types.Version,
)
}
}
}
}
var prereleases = []string{"alpha", "beta", "rc", "dev"}
func isPreReleaseVersion(version string) bool {
for _, unstable := range prereleases {
if strings.Contains(version, unstable) {
return true
}
}
return false
}
// filterPreReleasesIfStable returns a function that filters out
// pre-release tags if the current version is stable.
// If the current version is a pre-release, it does not filter anything.
// versionFunc is a function that returns the current version string, it is
// a func for testability.
func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
return func(tag string) bool {
version := versionFunc()
// If we are on a pre-release version, then we do not filter anything
// as we want to recommend the user the latest pre-release.
if isPreReleaseVersion(version) {
return false
}
// If we are on a stable release, filter out pre-releases.
for _, ignore := range prereleases {
if strings.Contains(tag, ignore) {
return true
}
}
return false
}
}
var rootCmd = &cobra.Command{
Use: "headscale",
Short: "headscale - a Tailscale control server",


@@ -1,293 +0,0 @@
package cli
import (
"testing"
)
func TestFilterPreReleasesIfStable(t *testing.T) {
tests := []struct {
name string
currentVersion string
tag string
expectedFilter bool
description string
}{
{
name: "stable version filters alpha tag",
currentVersion: "0.23.0",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "When on stable release, alpha tags should be filtered",
},
{
name: "stable version filters beta tag",
currentVersion: "0.23.0",
tag: "v0.24.0-beta.2",
expectedFilter: true,
description: "When on stable release, beta tags should be filtered",
},
{
name: "stable version filters rc tag",
currentVersion: "0.23.0",
tag: "v0.24.0-rc.1",
expectedFilter: true,
description: "When on stable release, rc tags should be filtered",
},
{
name: "stable version allows stable tag",
currentVersion: "0.23.0",
tag: "v0.24.0",
expectedFilter: false,
description: "When on stable release, stable tags should not be filtered",
},
{
name: "alpha version allows alpha tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-alpha.2",
expectedFilter: false,
description: "When on alpha release, alpha tags should not be filtered",
},
{
name: "alpha version allows beta tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on alpha release, beta tags should not be filtered",
},
{
name: "alpha version allows rc tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on alpha release, rc tags should not be filtered",
},
{
name: "alpha version allows stable tag",
currentVersion: "0.23.0-alpha.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on alpha release, stable tags should not be filtered",
},
{
name: "beta version allows alpha tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on beta release, alpha tags should not be filtered",
},
{
name: "beta version allows beta tag",
currentVersion: "0.23.0-beta.2",
tag: "v0.24.0-beta.3",
expectedFilter: false,
description: "When on beta release, beta tags should not be filtered",
},
{
name: "beta version allows rc tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0-rc.1",
expectedFilter: false,
description: "When on beta release, rc tags should not be filtered",
},
{
name: "beta version allows stable tag",
currentVersion: "0.23.0-beta.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on beta release, stable tags should not be filtered",
},
{
name: "rc version allows alpha tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "When on rc release, alpha tags should not be filtered",
},
{
name: "rc version allows beta tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0-beta.1",
expectedFilter: false,
description: "When on rc release, beta tags should not be filtered",
},
{
name: "rc version allows rc tag",
currentVersion: "0.23.0-rc.2",
tag: "v0.24.0-rc.3",
expectedFilter: false,
description: "When on rc release, rc tags should not be filtered",
},
{
name: "rc version allows stable tag",
currentVersion: "0.23.0-rc.1",
tag: "v0.24.0",
expectedFilter: false,
description: "When on rc release, stable tags should not be filtered",
},
{
name: "stable version with patch filters alpha",
currentVersion: "0.23.1",
tag: "v0.24.0-alpha.1",
expectedFilter: true,
description: "Stable version with patch number should filter alpha tags",
},
{
name: "stable version with patch allows stable",
currentVersion: "0.23.1",
tag: "v0.24.0",
expectedFilter: false,
description: "Stable version with patch number should allow stable tags",
},
{
name: "tag with alpha substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-alpha.1",
expectedFilter: true,
description: "Tags with alpha in version string should be filtered on stable",
},
{
name: "tag with beta substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-beta.1",
expectedFilter: true,
description: "Tags with beta in version string should be filtered on stable",
},
{
name: "tag with rc substring in version number",
currentVersion: "0.23.0",
tag: "v1.0.0-rc.1",
expectedFilter: true,
description: "Tags with rc in version string should be filtered on stable",
},
{
name: "empty tag on stable version",
currentVersion: "0.23.0",
tag: "",
expectedFilter: false,
description: "Empty tags should not be filtered",
},
{
name: "dev version allows all tags",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-alpha.1",
expectedFilter: false,
description: "Dev versions should not filter any tags (pre-release allows all)",
},
{
name: "stable version filters dev tag",
currentVersion: "0.23.0",
tag: "v0.24.0-dev",
expectedFilter: true,
description: "When on stable release, dev tags should be filtered",
},
{
name: "dev version allows dev tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0-dev.1",
expectedFilter: false,
description: "When on dev release, dev tags should not be filtered",
},
{
name: "dev version allows stable tag",
currentVersion: "0.23.0-dev",
tag: "v0.24.0",
expectedFilter: false,
description: "When on dev release, stable tags should not be filtered",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
if result != tt.expectedFilter {
t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
tt.name,
result,
tt.expectedFilter,
tt.description,
tt.currentVersion,
tt.tag,
)
}
})
}
}
func TestIsPreReleaseVersion(t *testing.T) {
tests := []struct {
name string
version string
expected bool
description string
}{
{
name: "stable version",
version: "0.23.0",
expected: false,
description: "Stable version should not be pre-release",
},
{
name: "alpha version",
version: "0.23.0-alpha.1",
expected: true,
description: "Alpha version should be pre-release",
},
{
name: "beta version",
version: "0.23.0-beta.1",
expected: true,
description: "Beta version should be pre-release",
},
{
name: "rc version",
version: "0.23.0-rc.1",
expected: true,
description: "RC version should be pre-release",
},
{
name: "version with alpha substring",
version: "0.23.0-alphabetical",
expected: true,
description: "Version containing 'alpha' should be pre-release",
},
{
name: "version with beta substring",
version: "0.23.0-betamax",
expected: true,
description: "Version containing 'beta' should be pre-release",
},
{
name: "dev version",
version: "0.23.0-dev",
expected: true,
description: "Dev version should be pre-release",
},
{
name: "empty version",
version: "",
expected: false,
description: "Empty version should not be pre-release",
},
{
name: "version with patch number",
version: "0.23.1",
expected: false,
description: "Stable version with patch should not be pre-release",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := isPreReleaseVersion(tt.version)
if result != tt.expected {
t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
tt.name,
result,
tt.expected,
tt.description,
tt.version,
)
}
})
}
}


@@ -0,0 +1,70 @@
package cli
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestServeCommand(t *testing.T) {
// Test that the serve command exists and is properly configured
assert.NotNil(t, serveCmd)
assert.Equal(t, "serve", serveCmd.Use)
assert.Equal(t, "Launches the headscale server", serveCmd.Short)
assert.NotNil(t, serveCmd.Run)
assert.NotNil(t, serveCmd.Args)
}
func TestServeCommandInRootCommand(t *testing.T) {
// Test that serve is available as a subcommand of root
cmd, _, err := rootCmd.Find([]string{"serve"})
require.NoError(t, err)
assert.Equal(t, "serve", cmd.Name())
assert.Equal(t, serveCmd, cmd)
}
func TestServeCommandArgs(t *testing.T) {
// Test that the Args function is defined and accepts any arguments
// The current implementation always returns nil (accepts any args)
assert.NotNil(t, serveCmd.Args)
// Test the args function directly
err := serveCmd.Args(serveCmd, []string{})
assert.NoError(t, err, "Args function should accept empty arguments")
err = serveCmd.Args(serveCmd, []string{"extra", "args"})
assert.NoError(t, err, "Args function should accept extra arguments")
}
func TestServeCommandHelp(t *testing.T) {
// Test that the command has proper help text
assert.NotEmpty(t, serveCmd.Short)
assert.Contains(t, serveCmd.Short, "server")
assert.Contains(t, serveCmd.Short, "headscale")
}
func TestServeCommandStructure(t *testing.T) {
// Test basic command structure
assert.Equal(t, "serve", serveCmd.Name())
assert.Equal(t, "Launches the headscale server", serveCmd.Short)
// Test that it has no subcommands (it's a leaf command)
subcommands := serveCmd.Commands()
assert.Empty(t, subcommands, "Serve command should not have subcommands")
}
// Note: We can't easily test the actual execution of serve because:
// 1. It depends on configuration files being present and valid
// 2. It calls log.Fatal() which would exit the test process
// 3. It tries to start an actual HTTP server which would block forever
// 4. It requires database connections and other infrastructure
//
// In a real refactor, we would:
// 1. Extract server initialization logic to a testable function
// 2. Use dependency injection for configuration and dependencies
// 3. Return errors instead of calling log.Fatal()
// 4. Add graceful shutdown capabilities for testing
// 5. Allow server startup to be cancelled via context
//
// For now, we test the command structure and basic properties.
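A context-aware sketch of that refactor might look like the following; runServe is hypothetical, and the Serve method is assumed to match what the existing command invokes on the server object:

    // runServe is a hypothetical, cancellable variant of the serve command's body,
    // returning errors so a test can drive and stop it.
    func runServe(ctx context.Context) error {
        app, err := newHeadscaleServerWithConfig()
        if err != nil {
            return fmt.Errorf("loading configuration: %w", err)
        }
        errCh := make(chan error, 1)
        go func() { errCh <- app.Serve() }()
        select {
        case <-ctx.Done():
            return ctx.Err()
        case err := <-errCh:
            return err
        }
    }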


@@ -0,0 +1,55 @@
package cli
import (
"strings"
"github.com/pterm/pterm"
"github.com/spf13/cobra"
)
const (
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
DefaultAPIKeyExpiry = "90d"
DefaultPreAuthKeyExpiry = "1h"
)
// FilterTableColumns filters table columns based on --columns flag
func FilterTableColumns(cmd *cobra.Command, tableData pterm.TableData) pterm.TableData {
columns, _ := cmd.Flags().GetString("columns")
if columns == "" || len(tableData) == 0 {
return tableData
}
headers := tableData[0]
wantedColumns := strings.Split(columns, ",")
// Find column indices
var indices []int
for _, wanted := range wantedColumns {
wanted = strings.TrimSpace(wanted)
for i, header := range headers {
if strings.EqualFold(header, wanted) {
indices = append(indices, i)
break
}
}
}
if len(indices) == 0 {
return tableData
}
// Filter all rows
filtered := make(pterm.TableData, len(tableData))
for i, row := range tableData {
newRow := make([]string, len(indices))
for j, idx := range indices {
if idx < len(row) {
newRow[j] = row[idx]
}
}
filtered[i] = newRow
}
return filtered
}
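As a quick illustration (editorial example, not part of the change), a command invoked with --columns "ID,Email" reduces a users table like this:

    // Hypothetical usage of FilterTableColumns with the users table headers.
    tableData := pterm.TableData{
        {"ID", "Name", "Username", "Email", "Created"},
        {"1", "Alice A", "alice", "alice@example.com", "2025-07-14 07:48:32"},
    }
    tableData = FilterTableColumns(cmd, tableData) // with --columns "ID,Email"
    // tableData now holds {{"ID", "Email"}, {"1", "alice@example.com"}}
    _ = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()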


@@ -1,13 +1,15 @@
package cli
import (
"context"
"errors"
"fmt"
"net/url"
"strconv"
"strings"
survey "github.com/AlecAivazis/survey/v2"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/pterm/pterm"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
@@ -15,25 +17,23 @@ import (
)
func usernameAndIDFlag(cmd *cobra.Command) {
cmd.Flags().Int64P("identifier", "i", -1, "User identifier (ID)")
cmd.Flags().StringP("user", "u", "", "User identifier (ID, name, or email)")
cmd.Flags().StringP("name", "n", "", "Username")
}
// usernameAndIDFromFlag returns the username and ID from the flags of the command.
// If both are empty, it will exit the program with an error.
func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string) {
username, _ := cmd.Flags().GetString("name")
identifier, _ := cmd.Flags().GetInt64("identifier")
if username == "" && identifier < 0 {
err := errors.New("--name or --identifier flag is required")
// userIDFromFlag returns the user ID using smart lookup.
// If no user is specified, it will exit the program with an error.
func userIDFromFlag(cmd *cobra.Command) uint64 {
userID, err := GetUserIdentifier(cmd)
if err != nil {
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
"",
"Cannot identify user: "+err.Error(),
GetOutputFlag(cmd),
)
}
return uint64(identifier), username
return userID
}
func init() {
@@ -43,14 +43,18 @@ func init() {
createUserCmd.Flags().StringP("email", "e", "", "Email")
createUserCmd.Flags().StringP("picture-url", "p", "", "Profile picture URL")
userCmd.AddCommand(listUsersCmd)
usernameAndIDFlag(listUsersCmd)
listUsersCmd.Flags().StringP("email", "e", "", "Email")
// Smart lookup filters - can be used individually or combined
listUsersCmd.Flags().StringP("user", "u", "", "Filter by user (ID, name, or email)")
listUsersCmd.Flags().Uint64P("id", "", 0, "Filter by user ID")
listUsersCmd.Flags().StringP("name", "n", "", "Filter by username")
listUsersCmd.Flags().StringP("email", "e", "", "Filter by email address")
listUsersCmd.Flags().String("columns", "", "Comma-separated list of columns to display (ID,Name,Username,Email,Created)")
userCmd.AddCommand(destroyUserCmd)
usernameAndIDFlag(destroyUserCmd)
userCmd.AddCommand(renameUserCmd)
usernameAndIDFlag(renameUserCmd)
renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
renameNodeCmd.MarkFlagRequired("new-name")
renameUserCmd.MarkFlagRequired("new-name")
}
var errMissingParameter = errors.New("missing parameters")
@@ -73,16 +77,9 @@ var createUserCmd = &cobra.Command{
return nil
},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
userName := args[0]
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
request := &v1.CreateUserRequest{Name: userName}
if displayName, _ := cmd.Flags().GetString("display-name"); displayName != "" {
@@ -103,82 +100,109 @@ var createUserCmd = &cobra.Command{
),
output,
)
return
}
request.PictureUrl = pictureURL
}
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
response, err := client.CreateUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot create user: "+status.Convert(err).Message(),
output,
)
}
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
log.Trace().Interface("client", client).Msg("Obtained gRPC client")
log.Trace().Interface("request", request).Msg("Sending CreateUser request")
SuccessOutput(response.GetUser(), "User created", output)
response, err := client.CreateUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot create user: "+status.Convert(err).Message(),
output,
)
return err
}
SuccessOutput(response.GetUser(), "User created", output)
return nil
})
if err != nil {
return
}
},
}
var destroyUserCmd = &cobra.Command{
Use: "destroy --identifier ID or --name NAME",
Use: "destroy --user USER",
Short: "Destroys a user",
Aliases: []string{"delete"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
id, username := usernameAndIDFromFlag(cmd)
id := userIDFromFlag(cmd)
request := &v1.ListUsersRequest{
Name: username,
Id: id,
Id: id,
}
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
var user *v1.User
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
users, err := client.ListUsers(ctx, request)
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return err
}
users, err := client.ListUsers(ctx, request)
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return err
}
user = users.GetUsers()[0]
return nil
})
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return
}
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
}
user := users.GetUsers()[0]
confirm := false
force, _ := cmd.Flags().GetBool("force")
if !force {
confirm = util.YesNo(fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
))
prompt := &survey.Confirm{
Message: fmt.Sprintf(
"Do you want to remove the user %q (%d) and any associated preauthkeys?",
user.GetName(), user.GetId(),
),
}
err := survey.AskOne(prompt, &confirm)
if err != nil {
return
}
}
if confirm || force {
request := &v1.DeleteUserRequest{Id: user.GetId()}
err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.DeleteUserRequest{Id: user.GetId()}
response, err := client.DeleteUser(ctx, request)
response, err := client.DeleteUser(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot destroy user: "+status.Convert(err).Message(),
output,
)
return err
}
SuccessOutput(response, "User destroyed", output)
return nil
})
if err != nil {
ErrorOutput(
err,
"Cannot destroy user: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(response, "User destroyed", output)
} else {
SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
}
@@ -190,61 +214,76 @@ var listUsersCmd = &cobra.Command{
Short: "List all the users",
Aliases: []string{"ls", "show"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
output := GetOutputFlag(cmd)
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ListUsersRequest{}
request := &v1.ListUsersRequest{}
// Check for smart lookup flag first
userFlag, _ := cmd.Flags().GetString("user")
if userFlag != "" {
// Use smart lookup to determine filter type
if id, err := strconv.ParseUint(userFlag, 10, 64); err == nil && id > 0 {
request.Id = id
} else if strings.Contains(userFlag, "@") {
request.Email = userFlag
} else {
request.Name = userFlag
}
} else {
// Check specific filter flags
if id, _ := cmd.Flags().GetUint64("id"); id > 0 {
request.Id = id
} else if name, _ := cmd.Flags().GetString("name"); name != "" {
request.Name = name
} else if email, _ := cmd.Flags().GetString("email"); email != "" {
request.Email = email
}
}
id, _ := cmd.Flags().GetInt64("identifier")
username, _ := cmd.Flags().GetString("name")
email, _ := cmd.Flags().GetString("email")
response, err := client.ListUsers(ctx, request)
if err != nil {
ErrorOutput(
err,
"Cannot get users: "+status.Convert(err).Message(),
output,
)
return err
}
// filter by one param at most
switch {
case id > 0:
request.Id = uint64(id)
case username != "":
request.Name = username
case email != "":
request.Email = email
}
if output != "" {
SuccessOutput(response.GetUsers(), "", output)
return nil
}
response, err := client.ListUsers(ctx, request)
tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
strconv.FormatUint(user.GetId(), 10),
user.GetDisplayName(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
},
)
}
tableData = FilterTableColumns(cmd, tableData)
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
return err
}
return nil
})
if err != nil {
ErrorOutput(
err,
"Cannot get users: "+status.Convert(err).Message(),
output,
)
}
if output != "" {
SuccessOutput(response.GetUsers(), "", output)
}
tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
for _, user := range response.GetUsers() {
tableData = append(
tableData,
[]string{
strconv.FormatUint(user.GetId(), 10),
user.GetDisplayName(),
user.GetName(),
user.GetEmail(),
user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
},
)
}
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
if err != nil {
ErrorOutput(
err,
fmt.Sprintf("Failed to render pterm table: %s", err),
output,
)
// Error already handled in closure
return
}
},
}
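With smart lookup in place, the same --user flag accepts an ID, an email, or a username, while the older flags keep working. Illustrative invocations (values are placeholders, and they assume the legacy --id/--name/--email flags remain registered on the command):

	headscale users list --user 1                   # numeric, filtered by ID
	headscale users list --user alice@example.com   # contains "@", filtered by email
	headscale users list --user alice               # anything else, filtered by name
	headscale users list --email alice@example.com  # legacy flag, still honoured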
@@ -254,52 +293,56 @@ var renameUserCmd = &cobra.Command{
Short: "Renames a user",
Aliases: []string{"mv"},
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
defer cancel()
defer conn.Close()
id, username := usernameAndIDFromFlag(cmd)
listReq := &v1.ListUsersRequest{
Name: username,
Id: id,
}
users, err := client.ListUsers(ctx, listReq)
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
}
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
}
output := GetOutputFlag(cmd)
id := userIDFromFlag(cmd)
newName, _ := cmd.Flags().GetString("new-name")
renameReq := &v1.RenameUserRequest{
OldId: id,
NewName: newName,
}
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
listReq := &v1.ListUsersRequest{
Id: id,
}
response, err := client.RenameUser(ctx, renameReq)
users, err := client.ListUsers(ctx, listReq)
if err != nil {
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return err
}
if len(users.GetUsers()) != 1 {
err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
ErrorOutput(
err,
"Error: "+status.Convert(err).Message(),
output,
)
return err
}
renameReq := &v1.RenameUserRequest{
OldId: id,
NewName: newName,
}
response, err := client.RenameUser(ctx, renameReq)
if err != nil {
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
output,
)
return err
}
SuccessOutput(response.GetUser(), "User renamed", output)
return nil
})
if err != nil {
ErrorOutput(
err,
"Cannot rename user: "+status.Convert(err).Message(),
output,
)
return
}
SuccessOutput(response.GetUser(), "User renamed", output)
},
}

View File

@@ -5,24 +5,23 @@ import (
"crypto/tls"
"encoding/json"
"fmt"
"net"
"os"
"strconv"
"strings"
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
"github.com/juanfont/headscale/hscontrol"
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/rs/zerolog/log"
"github.com/spf13/cobra"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"google.golang.org/grpc/credentials/insecure"
"gopkg.in/yaml.v3"
)
const (
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
SocketWritePermissions = 0o666
)
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
cfg, err := types.LoadServerConfig()
if err != nil {
@@ -72,7 +71,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
// Try to give the user better feedback if we cannot write to the headscale
// socket.
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, 0o666) // nolint
if err != nil {
if os.IsPermission(err) {
log.Fatal().
@@ -130,7 +129,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
return ctx, client, conn, cancel
}
func output(result any, override string, outputFormat string) string {
func output(result interface{}, override string, outputFormat string) string {
var jsonBytes []byte
var err error
switch outputFormat {
@@ -158,7 +157,7 @@ func output(result any, override string, outputFormat string) string {
}
// SuccessOutput prints the result to stdout and exits with status code 0.
func SuccessOutput(result any, override string, outputFormat string) {
func SuccessOutput(result interface{}, override string, outputFormat string) {
fmt.Println(output(result, override, outputFormat))
os.Exit(0)
}
@@ -169,14 +168,7 @@ func ErrorOutput(errResult error, override string, outputFormat string) {
Error string `json:"error"`
}
var errorMessage string
if errResult != nil {
errorMessage = errResult.Error()
} else {
errorMessage = override
}
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errorMessage}, override, outputFormat))
fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errResult.Error()}, override, outputFormat))
os.Exit(1)
}
@@ -207,3 +199,152 @@ func (t tokenAuth) GetRequestMetadata(
func (tokenAuth) RequireTransportSecurity() bool {
return true
}
// GetOutputFlag returns the output flag value (never fails)
func GetOutputFlag(cmd *cobra.Command) string {
output, _ := cmd.Flags().GetString("output")
return output
}
// GetNodeIdentifier returns the node ID using smart lookup via gRPC ListNodes call
func GetNodeIdentifier(cmd *cobra.Command) (uint64, error) {
nodeFlag, _ := cmd.Flags().GetString("node")
// Use --node flag
if nodeFlag == "" {
return 0, fmt.Errorf("--node flag is required")
}
// Use smart lookup via gRPC
return lookupNodeBySpecifier(nodeFlag)
}
// lookupNodeBySpecifier performs smart lookup of a node by ID, name, hostname, or IP
func lookupNodeBySpecifier(specifier string) (uint64, error) {
var nodeID uint64
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ListNodesRequest{}
// Detect what type of specifier this is and set appropriate filter
if id, err := strconv.ParseUint(specifier, 10, 64); err == nil && id > 0 {
// Looks like a numeric ID
request.Id = id
} else if isIPAddress(specifier) {
// Looks like an IP address
request.IpAddresses = []string{specifier}
} else {
// Treat as hostname/name
request.Name = specifier
}
response, err := client.ListNodes(ctx, request)
if err != nil {
return fmt.Errorf("failed to lookup node: %w", err)
}
nodes := response.GetNodes()
if len(nodes) == 0 {
return fmt.Errorf("node not found")
}
if len(nodes) > 1 {
var nodeInfo []string
for _, node := range nodes {
nodeInfo = append(nodeInfo, fmt.Sprintf("ID=%d name=%s", node.GetId(), node.GetName()))
}
return fmt.Errorf("multiple nodes found matching '%s': %s", specifier, strings.Join(nodeInfo, ", "))
}
// Exactly one match - this is what we want
nodeID = nodes[0].GetId()
return nil
})
if err != nil {
return 0, err
}
return nodeID, nil
}
// isIPAddress checks if a string looks like an IP address
func isIPAddress(s string) bool {
// Try parsing as IP address (both IPv4 and IPv6)
if net.ParseIP(s) != nil {
return true
}
// Try parsing as CIDR
if _, _, err := net.ParseCIDR(s); err == nil {
return true
}
return false
}
// GetUserIdentifier returns the user ID using smart lookup via gRPC ListUsers call
func GetUserIdentifier(cmd *cobra.Command) (uint64, error) {
userFlag, _ := cmd.Flags().GetString("user")
nameFlag, _ := cmd.Flags().GetString("name")
var specifier string
// Determine which flag was used (prefer --user, fall back to legacy flags)
if userFlag != "" {
specifier = userFlag
} else if nameFlag != "" {
specifier = nameFlag
} else {
return 0, fmt.Errorf("--user flag is required")
}
// Use smart lookup via gRPC
return lookupUserBySpecifier(specifier)
}
// lookupUserBySpecifier performs smart lookup of a user by ID, name, or email
func lookupUserBySpecifier(specifier string) (uint64, error) {
var userID uint64
err := WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
request := &v1.ListUsersRequest{}
// Detect what type of specifier this is and set appropriate filter
if id, err := strconv.ParseUint(specifier, 10, 64); err == nil && id > 0 {
// Looks like a numeric ID
request.Id = id
} else if strings.Contains(specifier, "@") {
// Looks like an email address
request.Email = specifier
} else {
// Treat as username
request.Name = specifier
}
response, err := client.ListUsers(ctx, request)
if err != nil {
return fmt.Errorf("failed to lookup user: %w", err)
}
users := response.GetUsers()
if len(users) == 0 {
return fmt.Errorf("user not found")
}
if len(users) > 1 {
var userInfo []string
for _, user := range users {
userInfo = append(userInfo, fmt.Sprintf("ID=%d name=%s email=%s", user.GetId(), user.GetName(), user.GetEmail()))
}
return fmt.Errorf("multiple users found matching '%s': %s", specifier, strings.Join(userInfo, ", "))
}
// Exactly one match - this is what we want
userID = users[0].GetId()
return nil
})
if err != nil {
return 0, err
}
return userID, nil
}
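Taken together, GetOutputFlag, GetNodeIdentifier and GetUserIdentifier let a subcommand resolve its target before issuing the real RPC. A minimal consumer sketch (the expire command and the ExpireNode call here are illustrative assumptions, not part of this diff):

	var expireNodeSketchCmd = &cobra.Command{
		Use:   "expire",
		Short: "Expire (logout) a node (sketch)",
		Run: func(cmd *cobra.Command, args []string) {
			output := GetOutputFlag(cmd)

			// Resolves --node whether it was given as an ID, name, hostname or IP.
			nodeID, err := GetNodeIdentifier(cmd)
			if err != nil {
				ErrorOutput(err, "Error: "+err.Error(), output)
				return
			}

			err = WithClient(func(ctx context.Context, client v1.HeadscaleServiceClient) error {
				response, err := client.ExpireNode(ctx, &v1.ExpireNodeRequest{NodeId: nodeID})
				if err != nil {
					return err
				}
				SuccessOutput(response.GetNode(), "Node expired", output)
				return nil
			})
			if err != nil {
				ErrorOutput(err, "Cannot expire node: "+status.Convert(err).Message(), output)
			}
		},
	}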

View File

@@ -0,0 +1,175 @@
package cli
import (
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestHasMachineOutputFlag(t *testing.T) {
tests := []struct {
name string
args []string
expected bool
}{
{
name: "no machine output flags",
args: []string{"headscale", "users", "list"},
expected: false,
},
{
name: "json flag present",
args: []string{"headscale", "users", "list", "json"},
expected: true,
},
{
name: "json-line flag present",
args: []string{"headscale", "nodes", "list", "json-line"},
expected: true,
},
{
name: "yaml flag present",
args: []string{"headscale", "apikeys", "list", "yaml"},
expected: true,
},
{
name: "mixed flags with json",
args: []string{"headscale", "--config", "/tmp/config.yaml", "users", "list", "json"},
expected: true,
},
{
name: "flag as part of longer argument",
args: []string{"headscale", "users", "create", "json-user@example.com"},
expected: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Save original os.Args
originalArgs := os.Args
defer func() { os.Args = originalArgs }()
// Set os.Args to test case
os.Args = tt.args
result := HasMachineOutputFlag()
assert.Equal(t, tt.expected, result)
})
}
}
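HasMachineOutputFlag itself is not visible in this hunk; a minimal sketch that satisfies these cases (an assumption about the helper, not the PR's actual code) would be:

	// HasMachineOutputFlag reports whether a bare "json", "json-line" or "yaml"
	// argument appears in os.Args. Exact matching keeps arguments such as
	// "json-user@example.com" from being treated as an output format.
	func HasMachineOutputFlag() bool {
		for _, arg := range os.Args {
			switch arg {
			case "json", "json-line", "yaml":
				return true
			}
		}
		return false
	}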
func TestOutput(t *testing.T) {
tests := []struct {
name string
result interface{}
override string
outputFormat string
expected string
}{
{
name: "default format returns override",
result: map[string]string{"test": "value"},
override: "Human readable output",
outputFormat: "",
expected: "Human readable output",
},
{
name: "default format with empty override",
result: map[string]string{"test": "value"},
override: "",
outputFormat: "",
expected: "",
},
{
name: "json format",
result: map[string]string{"name": "test", "id": "123"},
override: "Human readable",
outputFormat: "json",
expected: "{\n\t\"id\": \"123\",\n\t\"name\": \"test\"\n}",
},
{
name: "json-line format",
result: map[string]string{"name": "test", "id": "123"},
override: "Human readable",
outputFormat: "json-line",
expected: "{\"id\":\"123\",\"name\":\"test\"}",
},
{
name: "yaml format",
result: map[string]string{"name": "test", "id": "123"},
override: "Human readable",
outputFormat: "yaml",
expected: "id: \"123\"\nname: test\n",
},
{
name: "invalid format returns override",
result: map[string]string{"test": "value"},
override: "Human readable output",
outputFormat: "invalid",
expected: "Human readable output",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := output(tt.result, tt.override, tt.outputFormat)
assert.Equal(t, tt.expected, result)
})
}
}
func TestOutputWithComplexData(t *testing.T) {
// Test with more complex data structures
complexData := struct {
Users []struct {
Name string `json:"name" yaml:"name"`
ID int `json:"id" yaml:"id"`
} `json:"users" yaml:"users"`
}{
Users: []struct {
Name string `json:"name" yaml:"name"`
ID int `json:"id" yaml:"id"`
}{
{Name: "user1", ID: 1},
{Name: "user2", ID: 2},
},
}
// Test JSON output
jsonResult := output(complexData, "override", "json")
assert.Contains(t, jsonResult, "\"users\":")
assert.Contains(t, jsonResult, "\"name\": \"user1\"")
assert.Contains(t, jsonResult, "\"id\": 1")
// Test YAML output
yamlResult := output(complexData, "override", "yaml")
assert.Contains(t, yamlResult, "users:")
assert.Contains(t, yamlResult, "name: user1")
assert.Contains(t, yamlResult, "id: 1")
}
func TestOutputWithNilData(t *testing.T) {
// Test with nil data
result := output(nil, "fallback", "json")
assert.Equal(t, "null", result)
result = output(nil, "fallback", "yaml")
assert.Equal(t, "null\n", result)
result = output(nil, "fallback", "")
assert.Equal(t, "fallback", result)
}
func TestOutputWithEmptyData(t *testing.T) {
// Test with empty slice
emptySlice := []string{}
result := output(emptySlice, "fallback", "json")
assert.Equal(t, "[]", result)
// Test with empty map
emptyMap := map[string]string{}
result = output(emptyMap, "fallback", "json")
assert.Equal(t, "{}", result)
}

View File

@@ -7,18 +7,17 @@ import (
func init() {
rootCmd.AddCommand(versionCmd)
versionCmd.Flags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
}
var versionCmd = &cobra.Command{
Use: "version",
Short: "Print the version.",
Long: "The version of headscale.",
Short: "Print the version",
Long: "The version of headscale",
Run: func(cmd *cobra.Command, args []string) {
output, _ := cmd.Flags().GetString("output")
info := types.GetVersionInfo()
SuccessOutput(info, info.String(), output)
output := GetOutputFlag(cmd)
SuccessOutput(map[string]string{
"version": types.Version,
"commit": types.GitCommitHash,
}, types.Version, output)
},
}
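For a quick sanity check of the new payload, the machine-readable output should look roughly like this (placeholder values; the output() helper tab-indents JSON and Go sorts map keys alphabetically):

	$ headscale version --output json
	{
		"commit": "0123456789ab",
		"version": "v0.26.0"
	}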

View File

@@ -0,0 +1,45 @@
package cli
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestVersionCommand(t *testing.T) {
// Test that version command exists
assert.NotNil(t, versionCmd)
assert.Equal(t, "version", versionCmd.Use)
assert.Equal(t, "Print the version.", versionCmd.Short)
assert.Equal(t, "The version of headscale.", versionCmd.Long)
}
func TestVersionCommandStructure(t *testing.T) {
// Test command is properly added to root
found := false
for _, cmd := range rootCmd.Commands() {
if cmd.Use == "version" {
found = true
break
}
}
assert.True(t, found, "version command should be added to root command")
}
func TestVersionCommandFlags(t *testing.T) {
// Version command should inherit output flag from root as persistent flag
outputFlag := versionCmd.Flag("output")
if outputFlag == nil {
// Try persistent flags from root
outputFlag = rootCmd.PersistentFlags().Lookup("output")
}
assert.NotNil(t, outputFlag, "version command should have access to output flag")
}
func TestVersionCommandRun(t *testing.T) {
// Test that Run function is set
assert.NotNil(t, versionCmd.Run)
// We can't easily test the actual execution without mocking SuccessOutput
// but we can verify the function exists and has the right signature
}

View File

@@ -9,17 +9,34 @@ import (
"github.com/juanfont/headscale/hscontrol/types"
"github.com/juanfont/headscale/hscontrol/util"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/check.v1"
)
func TestConfigFileLoading(t *testing.T) {
func Test(t *testing.T) {
check.TestingT(t)
}
var _ = check.Suite(&Suite{})
type Suite struct{}
func (s *Suite) SetUpSuite(c *check.C) {
}
func (s *Suite) TearDownSuite(c *check.C) {
}
func (*Suite) TestConfigFileLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
defer os.RemoveAll(tmpDir)
path, err := os.Getwd()
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
cfgFile := filepath.Join(tmpDir, "config.yaml")
@@ -28,54 +45,70 @@ func TestConfigFileLoading(t *testing.T) {
filepath.Clean(path+"/../../config-example.yaml"),
cfgFile,
)
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
// Load example config, it should load without validation errors
err = types.LoadConfig(cfgFile, true)
require.NoError(t, err)
c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly
assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
assert.Equal(t, "sqlite", viper.GetString("database.type"))
assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
assert.False(t, viper.GetBool("logtail.enabled"))
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(
util.GetFileMode("unix_socket_permission"),
check.Equals,
fs.FileMode(0o770),
)
c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
}
func TestConfigLoading(t *testing.T) {
func (*Suite) TestConfigLoading(c *check.C) {
tmpDir, err := os.MkdirTemp("", "headscale")
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
defer os.RemoveAll(tmpDir)
path, err := os.Getwd()
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
// Symlink the example config file
err = os.Symlink(
filepath.Clean(path+"/../../config-example.yaml"),
filepath.Join(tmpDir, "config.yaml"),
)
require.NoError(t, err)
if err != nil {
c.Fatal(err)
}
// Load example config, it should load without validation errors
err = types.LoadConfig(tmpDir, false)
require.NoError(t, err)
c.Assert(err, check.IsNil)
// Test that config file was interpreted correctly
assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
assert.Equal(t, "sqlite", viper.GetString("database.type"))
assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
assert.False(t, viper.GetBool("logtail.enabled"))
assert.False(t, viper.GetBool("randomize_client_port"))
c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
c.Assert(
util.GetFileMode("unix_socket_permission"),
check.Equals,
fs.FileMode(0o770),
)
c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false)
}

View File

@@ -1,6 +0,0 @@
# hi
hi (headscale integration runner) is an entirely "vibe coded" wrapper around our
[integration test suite](../integration). It essentially runs the docker
commands for you with some added benefits of extracting resources like logs and
databases.

View File

@@ -3,13 +3,9 @@ package main
import (
"context"
"fmt"
"log"
"os"
"path/filepath"
"strings"
"time"
"github.com/cenkalti/backoff/v5"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
@@ -18,11 +14,9 @@ import (
)
// cleanupBeforeTest performs cleanup operations before running tests.
// Only removes stale (stopped/exited) test containers to avoid interfering with concurrent test runs.
func cleanupBeforeTest(ctx context.Context) error {
err := cleanupStaleTestContainers(ctx)
if err != nil {
return fmt.Errorf("failed to clean stale test containers: %w", err)
if err := killTestContainers(ctx); err != nil {
return fmt.Errorf("failed to kill test containers: %w", err)
}
if err := pruneDockerNetworks(ctx); err != nil {
@@ -32,25 +26,11 @@ func cleanupBeforeTest(ctx context.Context) error {
return nil
}
// cleanupAfterTest removes the test container and all associated integration test containers for the run.
func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runID string) error {
// Remove the main test container
err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
// cleanupAfterTest removes the test container after completion.
func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID string) error {
return cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
Force: true,
})
if err != nil {
return fmt.Errorf("failed to remove test container: %w", err)
}
// Clean up integration test containers for this run only
if runID != "" {
err := killTestContainersByRunID(ctx, runID)
if err != nil {
return fmt.Errorf("failed to clean up containers for run %s: %w", runID, err)
}
}
return nil
}
// killTestContainers terminates and removes all test containers.
@@ -103,122 +83,30 @@ func killTestContainers(ctx context.Context) error {
return nil
}
// killTestContainersByRunID terminates and removes all test containers for a specific run ID.
// This function filters containers by the hi.run-id label to only affect containers
// belonging to the specified test run, leaving other concurrent test runs untouched.
func killTestContainersByRunID(ctx context.Context, runID string) error {
cli, err := createDockerClient()
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
// Filter containers by hi.run-id label
containers, err := cli.ContainerList(ctx, container.ListOptions{
All: true,
Filters: filters.NewArgs(
filters.Arg("label", "hi.run-id="+runID),
),
})
if err != nil {
return fmt.Errorf("failed to list containers for run %s: %w", runID, err)
}
removed := 0
for _, cont := range containers {
// Kill the container if it's running
if cont.State == "running" {
_ = cli.ContainerKill(ctx, cont.ID, "KILL")
}
// Remove the container with retry logic
if removeContainerWithRetry(ctx, cli, cont.ID) {
removed++
}
}
if removed > 0 {
fmt.Printf("Removed %d containers for run ID %s\n", removed, runID)
}
return nil
}
// cleanupStaleTestContainers removes stopped/exited test containers without affecting running tests.
// This is useful for cleaning up leftover containers from previous crashed or interrupted test runs
// without interfering with currently running concurrent tests.
func cleanupStaleTestContainers(ctx context.Context) error {
cli, err := createDockerClient()
if err != nil {
return fmt.Errorf("failed to create Docker client: %w", err)
}
defer cli.Close()
// Only get stopped/exited containers
containers, err := cli.ContainerList(ctx, container.ListOptions{
All: true,
Filters: filters.NewArgs(
filters.Arg("status", "exited"),
filters.Arg("status", "dead"),
),
})
if err != nil {
return fmt.Errorf("failed to list stopped containers: %w", err)
}
removed := 0
for _, cont := range containers {
// Only remove containers that look like test containers
shouldRemove := false
for _, name := range cont.Names {
if strings.Contains(name, "headscale-test-suite") ||
strings.Contains(name, "hs-") ||
strings.Contains(name, "ts-") ||
strings.Contains(name, "derp-") {
shouldRemove = true
break
}
}
if shouldRemove {
if removeContainerWithRetry(ctx, cli, cont.ID) {
removed++
}
}
}
if removed > 0 {
fmt.Printf("Removed %d stale test containers\n", removed)
}
return nil
}
const (
containerRemoveInitialInterval = 100 * time.Millisecond
containerRemoveMaxElapsedTime = 2 * time.Second
)
// removeContainerWithRetry attempts to remove a container with exponential backoff retry logic.
func removeContainerWithRetry(ctx context.Context, cli *client.Client, containerID string) bool {
expBackoff := backoff.NewExponentialBackOff()
expBackoff.InitialInterval = containerRemoveInitialInterval
maxRetries := 3
baseDelay := 100 * time.Millisecond
_, err := backoff.Retry(ctx, func() (struct{}, error) {
for attempt := range maxRetries {
err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
Force: true,
})
if err != nil {
return struct{}{}, err
if err == nil {
return true
}
return struct{}{}, nil
}, backoff.WithBackOff(expBackoff), backoff.WithMaxElapsedTime(containerRemoveMaxElapsedTime))
// If this is the last attempt, don't wait
if attempt == maxRetries-1 {
break
}
return err == nil
// Wait with exponential backoff
delay := baseDelay * time.Duration(1<<attempt)
time.Sleep(delay)
}
return false
}
// pruneDockerNetworks removes unused Docker networks.
@@ -317,110 +205,3 @@ func cleanCacheVolume(ctx context.Context) error {
return nil
}
// cleanupSuccessfulTestArtifacts removes artifacts from successful test runs to save disk space.
// This function removes large artifacts that are mainly useful for debugging failures:
// - Database dumps (.db files)
// - Profile data (pprof directories)
// - MapResponse data (mapresponses directories)
// - Prometheus metrics files
//
// It preserves:
// - Log files (.log) which are small and useful for verification.
func cleanupSuccessfulTestArtifacts(logsDir string, verbose bool) error {
entries, err := os.ReadDir(logsDir)
if err != nil {
return fmt.Errorf("failed to read logs directory: %w", err)
}
var (
removedFiles, removedDirs int
totalSize int64
)
for _, entry := range entries {
name := entry.Name()
fullPath := filepath.Join(logsDir, name)
if entry.IsDir() {
// Remove pprof and mapresponses directories (typically large)
// These directories contain artifacts from all containers in the test run
if name == "pprof" || name == "mapresponses" {
size, sizeErr := getDirSize(fullPath)
if sizeErr == nil {
totalSize += size
}
err := os.RemoveAll(fullPath)
if err != nil {
if verbose {
log.Printf("Warning: failed to remove directory %s: %v", name, err)
}
} else {
removedDirs++
if verbose {
log.Printf("Removed directory: %s/", name)
}
}
}
} else {
// Only process test-related files (headscale and tailscale)
if !strings.HasPrefix(name, "hs-") && !strings.HasPrefix(name, "ts-") {
continue
}
// Remove database, metrics, and status files, but keep logs
shouldRemove := strings.HasSuffix(name, ".db") ||
strings.HasSuffix(name, "_metrics.txt") ||
strings.HasSuffix(name, "_status.json")
if shouldRemove {
info, infoErr := entry.Info()
if infoErr == nil {
totalSize += info.Size()
}
err := os.Remove(fullPath)
if err != nil {
if verbose {
log.Printf("Warning: failed to remove file %s: %v", name, err)
}
} else {
removedFiles++
if verbose {
log.Printf("Removed file: %s", name)
}
}
}
}
}
if removedFiles > 0 || removedDirs > 0 {
const bytesPerMB = 1024 * 1024
log.Printf("Cleaned up %d files and %d directories (freed ~%.2f MB)",
removedFiles, removedDirs, float64(totalSize)/bytesPerMB)
}
return nil
}
// getDirSize calculates the total size of a directory.
func getDirSize(path string) (int64, error) {
var size int64
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return err
}
if !info.IsDir() {
size += info.Size()
}
return nil
})
return size, err
}

View File

@@ -89,35 +89,6 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
}
log.Printf("Starting test: %s", config.TestPattern)
log.Printf("Run ID: %s", runID)
log.Printf("Monitor with: docker logs -f %s", containerName)
log.Printf("Logs directory: %s", logsDir)
// Start stats collection for container resource monitoring (if enabled)
var statsCollector *StatsCollector
if config.Stats {
var err error
statsCollector, err = NewStatsCollector()
if err != nil {
if config.Verbose {
log.Printf("Warning: failed to create stats collector: %v", err)
}
statsCollector = nil
}
if statsCollector != nil {
defer statsCollector.Close()
// Start stats collection immediately - no need for complex retry logic
// The new implementation monitors Docker events and will catch containers as they start
if err := statsCollector.StartCollection(ctx, runID, config.Verbose); err != nil {
if config.Verbose {
log.Printf("Warning: failed to start stats collection: %v", err)
}
}
defer statsCollector.StopCollection()
}
}
exitCode, err := streamAndWait(ctx, cli, resp.ID)
@@ -134,45 +105,14 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
// Always list control files regardless of test outcome
listControlFiles(logsDir)
// Print stats summary and check memory limits if enabled
if config.Stats && statsCollector != nil {
violations := statsCollector.PrintSummaryAndCheckLimits(config.HSMemoryLimit, config.TSMemoryLimit)
if len(violations) > 0 {
log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
log.Printf("=================================")
for _, violation := range violations {
log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
}
return fmt.Errorf("test failed: %d container(s) exceeded memory limits", len(violations))
}
}
shouldCleanup := config.CleanAfter && (!config.KeepOnFailure || exitCode == 0)
if shouldCleanup {
if config.Verbose {
log.Printf("Running post-test cleanup for run %s...", runID)
log.Printf("Running post-test cleanup...")
}
cleanErr := cleanupAfterTest(ctx, cli, resp.ID, runID)
if cleanErr != nil && config.Verbose {
if cleanErr := cleanupAfterTest(ctx, cli, resp.ID); cleanErr != nil && config.Verbose {
log.Printf("Warning: post-test cleanup failed: %v", cleanErr)
}
// Clean up artifacts from successful tests to save disk space in CI
if exitCode == 0 {
if config.Verbose {
log.Printf("Test succeeded, cleaning up artifacts to save disk space...")
}
cleanErr := cleanupSuccessfulTestArtifacts(logsDir, config.Verbose)
if cleanErr != nil && config.Verbose {
log.Printf("Warning: artifact cleanup failed: %v", cleanErr)
}
}
}
if err != nil {
@@ -221,28 +161,6 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
}
// Pass through CI environment variable for CI detection
if ci := os.Getenv("CI"); ci != "" {
env = append(env, "CI="+ci)
}
// Pass through all HEADSCALE_INTEGRATION_* environment variables
for _, e := range os.Environ() {
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
// Skip the ones we already set explicitly
if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
continue
}
env = append(env, e)
}
}
// Set GOCACHE to a known location (used by both bind mount and volume cases)
env = append(env, "GOCACHE=/cache/go-build")
containerConfig := &container.Config{
Image: "golang:" + config.GoVersion,
Cmd: goTestCmd,
@@ -262,43 +180,20 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
log.Printf("Using Docker socket: %s", dockerSocketPath)
}
binds := []string{
fmt.Sprintf("%s:%s", projectRoot, projectRoot),
dockerSocketPath + ":/var/run/docker.sock",
logsDir + ":/tmp/control",
}
// Use bind mounts for Go cache if provided via environment variables,
// otherwise fall back to Docker volumes for local development
var mounts []mount.Mount
goCache := os.Getenv("HEADSCALE_INTEGRATION_GO_CACHE")
goBuildCache := os.Getenv("HEADSCALE_INTEGRATION_GO_BUILD_CACHE")
if goCache != "" {
binds = append(binds, goCache+":/go")
} else {
mounts = append(mounts, mount.Mount{
Type: mount.TypeVolume,
Source: "hs-integration-go-cache",
Target: "/go",
})
}
if goBuildCache != "" {
binds = append(binds, goBuildCache+":/cache/go-build")
} else {
mounts = append(mounts, mount.Mount{
Type: mount.TypeVolume,
Source: "hs-integration-go-build-cache",
Target: "/cache/go-build",
})
}
hostConfig := &container.HostConfig{
AutoRemove: false, // We'll remove manually for better control
Binds: binds,
Mounts: mounts,
Binds: []string{
fmt.Sprintf("%s:%s", projectRoot, projectRoot),
dockerSocketPath + ":/var/run/docker.sock",
logsDir + ":/tmp/control",
},
Mounts: []mount.Mount{
{
Type: mount.TypeVolume,
Source: "hs-integration-go-cache",
Target: "/go",
},
},
}
return cli.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, containerName)
@@ -421,10 +316,10 @@ func boolToInt(b bool) int {
// DockerContext represents Docker context information.
type DockerContext struct {
Name string `json:"Name"`
Metadata map[string]any `json:"Metadata"`
Endpoints map[string]any `json:"Endpoints"`
Current bool `json:"Current"`
Name string `json:"Name"`
Metadata map[string]interface{} `json:"Metadata"`
Endpoints map[string]interface{} `json:"Endpoints"`
Current bool `json:"Current"`
}
// createDockerClient creates a Docker client with context detection.
@@ -439,7 +334,7 @@ func createDockerClient() (*client.Client, error) {
if contextInfo != nil {
if endpoints, ok := contextInfo.Endpoints["docker"]; ok {
if endpointMap, ok := endpoints.(map[string]any); ok {
if endpointMap, ok := endpoints.(map[string]interface{}); ok {
if host, ok := endpointMap["Host"].(string); ok {
if runConfig.Verbose {
log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host)
@@ -484,37 +379,10 @@ func getDockerSocketPath() string {
return "/var/run/docker.sock"
}
// checkImageAvailableLocally checks if the specified Docker image is available locally.
func checkImageAvailableLocally(ctx context.Context, cli *client.Client, imageName string) (bool, error) {
_, _, err := cli.ImageInspectWithRaw(ctx, imageName)
if err != nil {
if client.IsErrNotFound(err) {
return false, nil
}
return false, fmt.Errorf("failed to inspect image %s: %w", imageName, err)
}
return true, nil
}
// ensureImageAvailable checks if the image is available locally first, then pulls if needed.
// ensureImageAvailable pulls the specified Docker image to ensure it's available.
func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName string, verbose bool) error {
// First check if image is available locally
available, err := checkImageAvailableLocally(ctx, cli, imageName)
if err != nil {
return fmt.Errorf("failed to check local image availability: %w", err)
}
if available {
if verbose {
log.Printf("Image %s is available locally", imageName)
}
return nil
}
// Image not available locally, try to pull it
if verbose {
log.Printf("Image %s not found locally, pulling...", imageName)
log.Printf("Pulling image %s...", imageName)
}
reader, err := cli.ImagePull(ctx, imageName, image.PullOptions{})
@@ -765,3 +633,63 @@ func extractContainerFiles(ctx context.Context, cli *client.Client, containerID,
// This function is kept for potential future use or other file types
return nil
}
// logExtractionError logs extraction errors with appropriate level based on error type.
func logExtractionError(artifactType, containerName string, err error, verbose bool) {
if errors.Is(err, ErrFileNotFoundInTar) {
// File not found is expected and only logged in verbose mode
if verbose {
log.Printf("No %s found in container %s", artifactType, containerName)
}
} else {
// Other errors are actual failures and should be logged as warnings
log.Printf("Warning: failed to extract %s from %s: %v", artifactType, containerName, err)
}
}
// extractSingleFile copies a single file from a container.
func extractSingleFile(ctx context.Context, cli *client.Client, containerID, sourcePath, fileName, logsDir string, verbose bool) error {
tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath)
if err != nil {
return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err)
}
defer tarReader.Close()
// Extract the single file from the tar
filePath := filepath.Join(logsDir, fileName)
if err := extractFileFromTar(tarReader, filepath.Base(sourcePath), filePath); err != nil {
return fmt.Errorf("failed to extract file from tar: %w", err)
}
if verbose {
log.Printf("Extracted %s from %s", fileName, containerID[:12])
}
return nil
}
// extractDirectory copies a directory from a container and extracts its contents.
func extractDirectory(ctx context.Context, cli *client.Client, containerID, sourcePath, dirName, logsDir string, verbose bool) error {
tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath)
if err != nil {
return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err)
}
defer tarReader.Close()
// Create target directory
targetDir := filepath.Join(logsDir, dirName)
if err := os.MkdirAll(targetDir, 0o755); err != nil {
return fmt.Errorf("failed to create directory %s: %w", targetDir, err)
}
// Extract the directory from the tar
if err := extractDirectoryFromTar(tarReader, targetDir); err != nil {
return fmt.Errorf("failed to extract directory from tar: %w", err)
}
if verbose {
log.Printf("Extracted %s/ from %s", dirName, containerID[:12])
}
return nil
}

View File

@@ -190,7 +190,7 @@ func checkDockerSocket(ctx context.Context) DoctorResult {
}
}
// checkGolangImage verifies the golang Docker image is available locally or can be pulled.
// checkGolangImage verifies we can access the golang Docker image.
func checkGolangImage(ctx context.Context) DoctorResult {
cli, err := createDockerClient()
if err != nil {
@@ -205,40 +205,17 @@ func checkGolangImage(ctx context.Context) DoctorResult {
goVersion := detectGoVersion()
imageName := "golang:" + goVersion
// First check if image is available locally
available, err := checkImageAvailableLocally(ctx, cli, imageName)
if err != nil {
return DoctorResult{
Name: "Golang Image",
Status: "FAIL",
Message: fmt.Sprintf("Cannot check golang image %s: %v", imageName, err),
Suggestions: []string{
"Check Docker daemon status",
"Try: docker images | grep golang",
},
}
}
if available {
return DoctorResult{
Name: "Golang Image",
Status: "PASS",
Message: fmt.Sprintf("Golang image %s is available locally", imageName),
}
}
// Image not available locally, try to pull it
// Check if we can pull the image
err = ensureImageAvailable(ctx, cli, imageName, false)
if err != nil {
return DoctorResult{
Name: "Golang Image",
Status: "FAIL",
Message: fmt.Sprintf("Golang image %s not available locally and cannot pull: %v", imageName, err),
Message: fmt.Sprintf("Cannot pull golang image %s: %v", imageName, err),
Suggestions: []string{
"Check internet connectivity",
"Verify Docker Hub access",
"Try: docker pull " + imageName,
"Or run tests offline if image was pulled previously",
},
}
}
@@ -246,7 +223,7 @@ func checkGolangImage(ctx context.Context) DoctorResult {
return DoctorResult{
Name: "Golang Image",
Status: "PASS",
Message: fmt.Sprintf("Golang image %s is now available", imageName),
Message: fmt.Sprintf("Golang image %s is available", imageName),
}
}

View File

@@ -19,14 +19,11 @@ type RunConfig struct {
FailFast bool `flag:"failfast,default=true,Stop on first test failure"`
UsePostgres bool `flag:"postgres,default=false,Use PostgreSQL instead of SQLite"`
GoVersion string `flag:"go-version,Go version to use (auto-detected from go.mod)"`
CleanBefore bool `flag:"clean-before,default=true,Clean stale resources before test"`
CleanBefore bool `flag:"clean-before,default=true,Clean resources before test"`
CleanAfter bool `flag:"clean-after,default=true,Clean resources after test"`
KeepOnFailure bool `flag:"keep-on-failure,default=false,Keep containers on test failure"`
LogsDir string `flag:"logs-dir,default=control_logs,Control logs directory"`
Verbose bool `flag:"verbose,default=false,Verbose output"`
Stats bool `flag:"stats,default=false,Collect and display container resource usage statistics"`
HSMemoryLimit float64 `flag:"hs-memory-limit,default=0,Fail test if any Headscale container exceeds this memory limit in MB (0 = disabled)"`
TSMemoryLimit float64 `flag:"ts-memory-limit,default=0,Fail test if any Tailscale container exceeds this memory limit in MB (0 = disabled)"`
}
// runIntegrationTest executes the integration test workflow.
@@ -74,7 +71,7 @@ func detectGoVersion() string {
content, err := os.ReadFile(goModPath)
if err != nil {
return "1.25"
return "1.24"
}
lines := splitLines(string(content))
@@ -89,7 +86,7 @@ func detectGoVersion() string {
}
}
return "1.25"
return "1.24"
}
// splitLines splits a string into lines without using strings.Split.

View File

@@ -1,471 +0,0 @@
package main
import (
"context"
"encoding/json"
"errors"
"fmt"
"log"
"sort"
"strings"
"sync"
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/client"
)
// ContainerStats represents statistics for a single container.
type ContainerStats struct {
ContainerID string
ContainerName string
Stats []StatsSample
mutex sync.RWMutex
}
// StatsSample represents a single stats measurement.
type StatsSample struct {
Timestamp time.Time
CPUUsage float64 // CPU usage percentage
MemoryMB float64 // Memory usage in MB
}
// StatsCollector manages collection of container statistics.
type StatsCollector struct {
client *client.Client
containers map[string]*ContainerStats
stopChan chan struct{}
wg sync.WaitGroup
mutex sync.RWMutex
collectionStarted bool
}
// NewStatsCollector creates a new stats collector instance.
func NewStatsCollector() (*StatsCollector, error) {
cli, err := createDockerClient()
if err != nil {
return nil, fmt.Errorf("failed to create Docker client: %w", err)
}
return &StatsCollector{
client: cli,
containers: make(map[string]*ContainerStats),
stopChan: make(chan struct{}),
}, nil
}
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID.
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
sc.mutex.Lock()
defer sc.mutex.Unlock()
if sc.collectionStarted {
return errors.New("stats collection already started")
}
sc.collectionStarted = true
// Start monitoring existing containers
sc.wg.Add(1)
go sc.monitorExistingContainers(ctx, runID, verbose)
// Start Docker events monitoring for new containers
sc.wg.Add(1)
go sc.monitorDockerEvents(ctx, runID, verbose)
if verbose {
log.Printf("Started container monitoring for run ID %s", runID)
}
return nil
}
// StopCollection stops all stats collection.
func (sc *StatsCollector) StopCollection() {
// Check if already stopped without holding lock
sc.mutex.RLock()
if !sc.collectionStarted {
sc.mutex.RUnlock()
return
}
sc.mutex.RUnlock()
// Signal stop to all goroutines
close(sc.stopChan)
// Wait for all goroutines to finish
sc.wg.Wait()
// Mark as stopped
sc.mutex.Lock()
sc.collectionStarted = false
sc.mutex.Unlock()
}
// monitorExistingContainers checks for existing containers that match our criteria.
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
containers, err := sc.client.ContainerList(ctx, container.ListOptions{})
if err != nil {
if verbose {
log.Printf("Failed to list existing containers: %v", err)
}
return
}
for _, cont := range containers {
if sc.shouldMonitorContainer(cont, runID) {
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
}
}
}
// monitorDockerEvents listens for container start events and begins monitoring relevant containers.
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
defer sc.wg.Done()
filter := filters.NewArgs()
filter.Add("type", "container")
filter.Add("event", "start")
eventOptions := events.ListOptions{
Filters: filter,
}
events, errs := sc.client.Events(ctx, eventOptions)
for {
select {
case <-sc.stopChan:
return
case <-ctx.Done():
return
case event := <-events:
if event.Type == "container" && event.Action == "start" {
// Get container details
containerInfo, err := sc.client.ContainerInspect(ctx, event.ID)
if err != nil {
continue
}
// Convert to types.Container format for consistency
cont := types.Container{
ID: containerInfo.ID,
Names: []string{containerInfo.Name},
Labels: containerInfo.Config.Labels,
}
if sc.shouldMonitorContainer(cont, runID) {
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
}
}
case err := <-errs:
if verbose {
log.Printf("Error in Docker events stream: %v", err)
}
return
}
}
}
// shouldMonitorContainer determines if a container should be monitored.
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool {
// Check if it has the correct run ID label
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
return false
}
// Check if it's an hs- or ts- container
for _, name := range cont.Names {
containerName := strings.TrimPrefix(name, "/")
if strings.HasPrefix(containerName, "hs-") || strings.HasPrefix(containerName, "ts-") {
return true
}
}
return false
}
// startStatsForContainer begins stats collection for a specific container.
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
containerName = strings.TrimPrefix(containerName, "/")
sc.mutex.Lock()
// Check if we're already monitoring this container
if _, exists := sc.containers[containerID]; exists {
sc.mutex.Unlock()
return
}
sc.containers[containerID] = &ContainerStats{
ContainerID: containerID,
ContainerName: containerName,
Stats: make([]StatsSample, 0),
}
sc.mutex.Unlock()
if verbose {
log.Printf("Starting stats collection for container %s (%s)", containerName, containerID[:12])
}
sc.wg.Add(1)
go sc.collectStatsForContainer(ctx, containerID, verbose)
}
// collectStatsForContainer collects stats for a specific container using Docker API streaming.
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
defer sc.wg.Done()
// Use Docker API streaming stats - much more efficient than CLI
statsResponse, err := sc.client.ContainerStats(ctx, containerID, true)
if err != nil {
if verbose {
log.Printf("Failed to get stats stream for container %s: %v", containerID[:12], err)
}
return
}
defer statsResponse.Body.Close()
decoder := json.NewDecoder(statsResponse.Body)
var prevStats *container.Stats
for {
select {
case <-sc.stopChan:
return
case <-ctx.Done():
return
default:
var stats container.Stats
if err := decoder.Decode(&stats); err != nil {
// EOF is expected when container stops or stream ends
if err.Error() != "EOF" && verbose {
log.Printf("Failed to decode stats for container %s: %v", containerID[:12], err)
}
return
}
// Calculate CPU percentage (only if we have previous stats)
var cpuPercent float64
if prevStats != nil {
cpuPercent = calculateCPUPercent(prevStats, &stats)
}
// Calculate memory usage in MB
memoryMB := float64(stats.MemoryStats.Usage) / (1024 * 1024)
// Store the sample (skip first sample since CPU calculation needs previous stats)
if prevStats != nil {
// Get container stats reference without holding the main mutex
var containerStats *ContainerStats
var exists bool
sc.mutex.RLock()
containerStats, exists = sc.containers[containerID]
sc.mutex.RUnlock()
if exists && containerStats != nil {
containerStats.mutex.Lock()
containerStats.Stats = append(containerStats.Stats, StatsSample{
Timestamp: time.Now(),
CPUUsage: cpuPercent,
MemoryMB: memoryMB,
})
containerStats.mutex.Unlock()
}
}
// Save current stats for next iteration
prevStats = &stats
}
}
}
// calculateCPUPercent calculates CPU usage percentage from Docker stats.
func calculateCPUPercent(prevStats, stats *container.Stats) float64 {
// CPU calculation based on Docker's implementation
cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)
if systemDelta > 0 && cpuDelta >= 0 {
// Calculate CPU percentage: (container CPU delta / system CPU delta) * number of CPUs * 100
numCPUs := float64(len(stats.CPUStats.CPUUsage.PercpuUsage))
if numCPUs == 0 {
// Fallback: if PercpuUsage is not available, assume 1 CPU
numCPUs = 1.0
}
return (cpuDelta / systemDelta) * numCPUs * 100.0
}
return 0.0
}
// ContainerStatsSummary represents summary statistics for a container.
type ContainerStatsSummary struct {
ContainerName string
SampleCount int
CPU StatsSummary
Memory StatsSummary
}
// MemoryViolation represents a container that exceeded the memory limit.
type MemoryViolation struct {
ContainerName string
MaxMemoryMB float64
LimitMB float64
}
// StatsSummary represents min, max, and average for a metric.
type StatsSummary struct {
Min float64
Max float64
Average float64
}
// GetSummary returns a summary of collected statistics.
func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
// Take snapshot of container references without holding main lock long
sc.mutex.RLock()
containerRefs := make([]*ContainerStats, 0, len(sc.containers))
for _, containerStats := range sc.containers {
containerRefs = append(containerRefs, containerStats)
}
sc.mutex.RUnlock()
summaries := make([]ContainerStatsSummary, 0, len(containerRefs))
for _, containerStats := range containerRefs {
containerStats.mutex.RLock()
stats := make([]StatsSample, len(containerStats.Stats))
copy(stats, containerStats.Stats)
containerName := containerStats.ContainerName
containerStats.mutex.RUnlock()
if len(stats) == 0 {
continue
}
summary := ContainerStatsSummary{
ContainerName: containerName,
SampleCount: len(stats),
}
// Calculate CPU stats
cpuValues := make([]float64, len(stats))
memoryValues := make([]float64, len(stats))
for i, sample := range stats {
cpuValues[i] = sample.CPUUsage
memoryValues[i] = sample.MemoryMB
}
summary.CPU = calculateStatsSummary(cpuValues)
summary.Memory = calculateStatsSummary(memoryValues)
summaries = append(summaries, summary)
}
// Sort by container name for consistent output
sort.Slice(summaries, func(i, j int) bool {
return summaries[i].ContainerName < summaries[j].ContainerName
})
return summaries
}
// calculateStatsSummary calculates min, max, and average for a slice of values.
func calculateStatsSummary(values []float64) StatsSummary {
if len(values) == 0 {
return StatsSummary{}
}
min := values[0]
max := values[0]
sum := 0.0
for _, value := range values {
if value < min {
min = value
}
if value > max {
max = value
}
sum += value
}
return StatsSummary{
Min: min,
Max: max,
Average: sum / float64(len(values)),
}
}
// PrintSummary prints the statistics summary to the console.
func (sc *StatsCollector) PrintSummary() {
summaries := sc.GetSummary()
if len(summaries) == 0 {
log.Printf("No container statistics collected")
return
}
log.Printf("Container Resource Usage Summary:")
log.Printf("================================")
for _, summary := range summaries {
log.Printf("Container: %s (%d samples)", summary.ContainerName, summary.SampleCount)
log.Printf(" CPU Usage: Min: %6.2f%% Max: %6.2f%% Avg: %6.2f%%",
summary.CPU.Min, summary.CPU.Max, summary.CPU.Average)
log.Printf(" Memory Usage: Min: %6.1f MB Max: %6.1f MB Avg: %6.1f MB",
summary.Memory.Min, summary.Memory.Max, summary.Memory.Average)
log.Printf("")
}
}
// CheckMemoryLimits checks if any containers exceeded their memory limits.
func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
if hsLimitMB <= 0 && tsLimitMB <= 0 {
return nil
}
summaries := sc.GetSummary()
var violations []MemoryViolation
for _, summary := range summaries {
var limitMB float64
if strings.HasPrefix(summary.ContainerName, "hs-") {
limitMB = hsLimitMB
} else if strings.HasPrefix(summary.ContainerName, "ts-") {
limitMB = tsLimitMB
} else {
continue // Skip containers that don't match our patterns
}
if limitMB > 0 && summary.Memory.Max > limitMB {
violations = append(violations, MemoryViolation{
ContainerName: summary.ContainerName,
MaxMemoryMB: summary.Memory.Max,
LimitMB: limitMB,
})
}
}
return violations
}
// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations if any.
func (sc *StatsCollector) PrintSummaryAndCheckLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
sc.PrintSummary()
return sc.CheckMemoryLimits(hsLimitMB, tsLimitMB)
}
// Close closes the stats collector and cleans up resources.
func (sc *StatsCollector) Close() error {
sc.StopCollection()
return sc.client.Close()
}

cmd/hi/tar_utils.go
View File

@@ -0,0 +1,100 @@
package main
import (
"archive/tar"
"errors"
"fmt"
"io"
"os"
"path/filepath"
"strings"
)
// ErrFileNotFoundInTar indicates a file was not found in the tar archive.
var ErrFileNotFoundInTar = errors.New("file not found in tar")
// extractFileFromTar extracts a single file from a tar reader.
func extractFileFromTar(tarReader io.Reader, fileName, outputPath string) error {
tr := tar.NewReader(tarReader)
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("failed to read tar header: %w", err)
}
// Check if this is the file we're looking for
if filepath.Base(header.Name) == fileName {
if header.Typeflag == tar.TypeReg {
// Create the output file
outFile, err := os.Create(outputPath)
if err != nil {
return fmt.Errorf("failed to create output file: %w", err)
}
defer outFile.Close()
// Copy file contents
if _, err := io.Copy(outFile, tr); err != nil {
return fmt.Errorf("failed to copy file contents: %w", err)
}
return nil
}
}
}
return fmt.Errorf("%w: %s", ErrFileNotFoundInTar, fileName)
}
// extractDirectoryFromTar extracts all files from a tar reader to a target directory.
func extractDirectoryFromTar(tarReader io.Reader, targetDir string) error {
tr := tar.NewReader(tarReader)
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
return fmt.Errorf("failed to read tar header: %w", err)
}
// Clean the path to prevent directory traversal
cleanName := filepath.Clean(header.Name)
if strings.Contains(cleanName, "..") {
continue // Skip potentially dangerous paths
}
targetPath := filepath.Join(targetDir, filepath.Base(cleanName))
switch header.Typeflag {
case tar.TypeDir:
// Create directory
if err := os.MkdirAll(targetPath, os.FileMode(header.Mode)); err != nil {
return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
}
case tar.TypeReg:
// Create file
outFile, err := os.Create(targetPath)
if err != nil {
return fmt.Errorf("failed to create file %s: %w", targetPath, err)
}
if _, err := io.Copy(outFile, tr); err != nil {
outFile.Close()
return fmt.Errorf("failed to copy file contents: %w", err)
}
outFile.Close()
// Set file permissions
if err := os.Chmod(targetPath, os.FileMode(header.Mode)); err != nil {
return fmt.Errorf("failed to set file permissions: %w", err)
}
}
}
return nil
}
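A self-contained way to exercise these helpers is to build a small tar archive in memory and extract from it; the snippet below is an illustrative sketch, not part of the PR:

	// Assumes "archive/tar", "bytes" and "errors" are imported.
	func exampleExtractFromTar() error {
		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		content := []byte("hello from the container")
		_ = tw.WriteHeader(&tar.Header{
			Name:     "var/lib/headscale/db.sqlite3",
			Mode:     0o644,
			Size:     int64(len(content)),
			Typeflag: tar.TypeReg,
		})
		_, _ = tw.Write(content)
		_ = tw.Close()

		// Files are matched by base name, so the nested path above still matches.
		err := extractFileFromTar(&buf, "db.sqlite3", "/tmp/db.sqlite3")
		if errors.Is(err, ErrFileNotFoundInTar) {
			// Expected when the container never produced the file; callers
			// treat this as a soft failure (see logExtractionError).
			return nil
		}
		return err
	}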

View File

@@ -1,61 +0,0 @@
package main
import (
"encoding/json"
"fmt"
"os"
"github.com/creachadair/command"
"github.com/creachadair/flax"
"github.com/juanfont/headscale/hscontrol/mapper"
"github.com/juanfont/headscale/integration/integrationutil"
)
type MapConfig struct {
Directory string `flag:"directory,Directory to read map responses from"`
}
var mapConfig MapConfig
func main() {
root := command.C{
Name: "mapresponses",
Help: "MapResponses is a tool to map and compare map responses from a directory",
Commands: []*command.C{
{
Name: "online",
Help: "",
Usage: "run [test-pattern] [flags]",
SetFlags: command.Flags(flax.MustBind, &mapConfig),
Run: runOnline,
},
command.HelpCommand(nil),
},
}
env := root.NewEnv(nil).MergeFlags(true)
command.RunOrFail(env, os.Args[1:])
}
// runIntegrationTest executes the integration test workflow.
func runOnline(env *command.Env) error {
if mapConfig.Directory == "" {
return fmt.Errorf("directory is required")
}
resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
if err != nil {
return fmt.Errorf("reading map responses from directory: %w", err)
}
expected := integrationutil.BuildExpectedOnlineMap(resps)
out, err := json.MarshalIndent(expected, "", " ")
if err != nil {
return fmt.Errorf("marshaling expected online map: %w", err)
}
os.Stderr.Write(out)
os.Stderr.Write([]byte("\n"))
return nil
}

View File

@@ -20,7 +20,6 @@ listen_addr: 127.0.0.1:8080
# Address to listen to /metrics and /debug, you may want
# to keep this endpoint private to your internal network
# Use an empty value to disable the metrics listener.
metrics_listen_addr: 127.0.0.1:9090
# Address to listen for gRPC.
@@ -61,9 +60,7 @@ prefixes:
v6: fd7a:115c:a1e0::/48
# Strategy used for allocation of IPs to nodes, available options:
# - sequential (default): assigns the next free IP from the previous given
# IP. A best-effort approach is used and Headscale might leave holes in the
# IP range or fill up existing holes in the IP range.
# - sequential (default): assigns the next free IP from the previous given IP.
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
allocation: sequential
@@ -108,7 +105,7 @@ derp:
# For better connection stability (especially when using an Exit-Node and DNS is not working),
# it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
ipv4: 198.51.100.1
ipv4: 1.2.3.4
ipv6: 2001:db8::1
# List of externally available DERP maps encoded in JSON
@@ -131,7 +128,7 @@ derp:
auto_update_enabled: true
# How often should we check for DERP updates?
update_frequency: 3h
update_frequency: 24h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
@@ -228,11 +225,9 @@ tls_cert_path: ""
tls_key_path: ""
log:
# Valid log levels: panic, fatal, error, warn, info, debug, trace
level: info
# Output formatting for logs: text or json
format: text
level: info
## Policy
# headscale supports Tailscale's ACL policies.
@@ -278,9 +273,9 @@ dns:
# `hostname.base_domain` (e.g., _myhost.example.com_).
base_domain: example.com
# Whether to use the local DNS settings of a node or override the local DNS
# settings (default) and force the use of Headscale's DNS configuration.
override_local_dns: true
# Whether to use the local DNS settings of a node (default) or override the
# local DNS settings and force the use of Headscale's DNS configuration.
override_local_dns: false
# List of DNS servers to expose to clients.
nameservers:
@@ -296,7 +291,8 @@ dns:
# Split DNS (see https://tailscale.com/kb/1054/dns/),
# a map of domains and which DNS server to use for each.
split: {}
split:
{}
# foo.bar.com:
# - 1.1.1.1
# darp.headscale.net:
@@ -362,12 +358,6 @@ unix_socket_permission: "0770"
# # required "openid" scope.
# scope: ["openid", "profile", "email"]
#
# # Only verified email addresses are synchronized to the user profile by
# # default. Unverified emails may be allowed in case an identity provider
# # does not send the "email_verified: true" claim or email verification is
# # not required.
# email_verified_required: true
#
# # Provide custom key/value pairs which get sent to the identity provider's
# # authorization endpoint.
# extra_params:
@@ -400,13 +390,11 @@ unix_socket_permission: "0770"
# method: S256
# Logtail configuration
# Logtail is Tailscales logging and auditing infrastructure, it allows the
# control panel to instruct tailscale nodes to log their activity to a remote
# server. To disable logging on the client side, please refer to:
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
# Logtail is Tailscales logging and auditing infrastructure, it allows the control panel
# to instruct tailscale nodes to log their activity to a remote server.
logtail:
# Enable logtail for tailscale nodes of this Headscale instance.
# As there is currently no support for overriding the log server in Headscale, this is
# Enable logtail for this headscales clients.
# As there is currently no support for overriding the log server in headscale, this is
# disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
enabled: false
@@ -414,23 +402,3 @@ logtail:
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false
# Taildrop configuration
# Taildrop is the file sharing feature of Tailscale, allowing nodes to send files to each other.
# https://tailscale.com/kb/1106/taildrop/
taildrop:
# Enable or disable Taildrop for all nodes.
# When enabled, nodes can send files to other nodes owned by the same user.
# Tagged devices and cross-user transfers are not permitted by Tailscale clients.
enabled: true
# Advanced performance tuning parameters.
# The defaults are carefully chosen and should rarely need adjustment.
# Only modify these if you have identified a specific performance issue.
#
# tuning:
# # NodeStore write batching configuration.
# # The NodeStore batches write operations before rebuilding peer relationships,
# # which is computationally expensive. Batching reduces rebuild frequency.
# #
# # node_store_batch_size: 100
# # node_store_batch_timeout: 500ms

View File

@@ -1,6 +1,5 @@
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
1: null # Disable DERP region with ID 1
900:
regionid: 900
regioncode: custom
@@ -8,9 +7,9 @@ regions:
nodes:
- name: 900a
regionid: 900
hostname: myderp.example.com
ipv4: 198.51.100.1
ipv6: 2001:db8::1
hostname: myderp.mydomain.no
ipv4: 123.123.123.123
ipv6: "2604:a880:400:d1::828:b001"
stunport: 0
stunonly: false
derpport: 0

View File

@@ -44,15 +44,6 @@ For convenience, we also [build container images with headscale](../setup/instal
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
we have a "docker-issues" channel where you can ask for Docker-specific help to the community.
## What is the recommended update path? Can I skip multiple versions while updating?
Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
installation. It's best to update from one stable version to the next (e.g. 0.24.0 &rarr; 0.25.1 &rarr; 0.26.1) in case
you are multiple releases behind. You should always pick the latest available patch release.
Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version specific
upgrade instructions and breaking changes.
## Scaling / How many clients does Headscale support?
It depends. As often stated, Headscale is not enterprise software and our focus
@@ -60,11 +51,11 @@ is homelabbers and self-hosters. Of course, we do not prevent people from using
it in a commercial/professional setting and often get questions about scaling.
Please note that when Headscale is developed, performance is not part of the
consideration as the main audience is considered to be users with a modest
consideration as the main audience is considered to be users with a moddest
amount of devices. We focus on correctness and feature parity with Tailscale
SaaS over time.
To understand if you might be able to use Headscale for your use case, I will
To understand if you might be able to use Headscale for your usecase, I will
describe two scenarios in an effort to explain what is the central bottleneck
of Headscale:
@@ -85,7 +76,7 @@ new "world map" is created for every node in the network.
This means that under certain conditions, Headscale can likely handle 100s
of devices (maybe more), if there is _little to no change_ happening in the
network. For example, in Scenario 1, the process of computing the world map is
extremely demanding due to the size of the network, but when the map has been
extremly demanding due to the size of the network, but when the map has been
created and the nodes are not changing, the Headscale instance will likely
return to a very low resource usage until the next time there is an event
requiring the new map.
@@ -103,14 +94,14 @@ learn about the current state of the world.
We expect that the performance will improve over time as we improve the code
base, but it is not a focus. In general, we will never make the tradeoff to make
things faster on the cost of less maintainable or readable code. We are a small
team and have to optimise for maintainability.
team and have to optimise for maintainabillity.
## Which database should I use?
We recommend the use of SQLite as database for headscale:
- SQLite is simple to setup and easy to use
- It scales well for all of headscale's use cases
- It scales well for all of headscale's usecases
- Development and testing happens primarily on SQLite
- PostgreSQL is still supported, but is considered to be in "maintenance mode"
@@ -143,35 +134,3 @@ in their output of `tailscale status`. Traffic is still filtered according to th
ping` which is always allowed in either direction.
See also <https://tailscale.com/kb/1087/device-visibility>.
## My policy is stored in the database and Headscale refuses to start due to an invalid policy. How can I recover?
Headscale checks if the policy is valid during startup and refuses to start if it detects an error. The error message
indicates which part of the policy is invalid. Follow these steps to fix your policy:
- Dump the policy to a file: `headscale policy get --bypass-grpc-and-access-database-directly > policy.json`
- Edit and fixup `policy.json`. Use the command `headscale policy check --file policy.json` to validate the policy.
- Load the modified policy: `headscale policy set --bypass-grpc-and-access-database-directly --file policy.json`
- Start Headscale as usual.
!!! warning "Full server configuration required"
The above commands to get/set the policy require a complete server configuration file including database settings. A
minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use `headscale
-c /path/to/config.yaml` to specify the path to an alternative configuration file.
## How can I avoid to send logs to Tailscale Inc?
A Tailscale client [collects logs about its operation and connection attempts with other
clients](https://tailscale.com/kb/1011/log-mesh-traffic#client-logs) and sends them to a central log service operated by
Tailscale Inc.
Headscale, by default, instructs clients to disable log submission to the central log service. This configuration is
applied by a client once it successfully connected with Headscale. See the configuration option `logtail.enabled` in the
[configuration file](../ref/configuration.md) for details.
Alternatively, logging can also be disabled on the client side. This is independent of Headscale and opting out of
client logging disables log submission early during client startup. The configuration is operating system specific and
is usually achieved by setting the environment variable `TS_NO_LOGS_NO_SUPPORT=true` or by passing the flag
`--no-logs-no-support` to `tailscaled`. See
<https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging> for details.

View File

@@ -5,26 +5,25 @@ to provide self-hosters and hobbyists with an open-source server they can use fo
provides an overview of Headscale's features and compatibility with the Tailscale control server:
- [x] Full "base" support of Tailscale's features
- [x] [Node registration](../ref/registration.md)
- [x] [Web authentication](../ref/registration.md#web-authentication)
- [x] [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
- [x] Node registration
- [x] Interactive
- [x] Pre authenticated key
- [x] [DNS](../ref/dns.md)
- [x] [MagicDNS](https://tailscale.com/kb/1081/magicdns)
- [x] [Global and restricted nameservers (split DNS)](https://tailscale.com/kb/1054/dns#nameservers)
- [x] [search domains](https://tailscale.com/kb/1054/dns#search-domains)
- [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records)
- [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop)
- [x] [Tags](../ref/tags.md)
- [x] [Routes](../ref/routes.md)
- [x] [Subnet routers](../ref/routes.md#subnet-router)
- [x] [Exit nodes](../ref/routes.md#exit-node)
- [x] Dual stack (IPv4 and IPv6)
- [x] Ephemeral nodes
- [x] Embedded [DERP server](../ref/derp.md)
- [x] Embedded [DERP server](https://tailscale.com/kb/1232/derp-servers)
- [x] Access control lists ([GitHub label "policy"](https://github.com/juanfont/headscale/labels/policy%20%F0%9F%93%9D))
- [x] ACL management via API
- [x] Some [Autogroups](https://tailscale.com/kb/1396/targets#autogroups), currently: `autogroup:internet`,
`autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`, `autogroup:self`
`autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`
- [x] [Auto approvers](https://tailscale.com/kb/1337/acl-syntax#auto-approvers) for [subnet
routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)

Binary file not shown.

Before

Width:  |  Height:  |  Size: 22 KiB

View File

Before

Width:  |  Height:  |  Size: 56 KiB

After

Width:  |  Height:  |  Size: 56 KiB

View File

Before

Width:  |  Height:  |  Size: 34 KiB

After

Width:  |  Height:  |  Size: 34 KiB

View File

@@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>

Before

Width:  |  Height:  |  Size: 1.2 KiB

After

Width:  |  Height:  |  Size: 1.2 KiB

View File

Before

Width:  |  Height:  |  Size: 49 KiB

After

Width:  |  Height:  |  Size: 49 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 7.8 KiB

After

Width:  |  Height:  |  Size: 7.8 KiB

View File

@@ -9,38 +9,9 @@ When using ACLs, the user boundaries are no longer applied. All machines,
whichever user they belong to, have the ability to communicate with other hosts as
long as the ACLs permit this exchange.
## ACL Setup
## ACLs use case example
To enable and configure ACLs in Headscale, you need to specify the path to your ACL policy file in the `policy.path` key in `config.yaml`.
Your ACL policy file must be formatted using [huJSON](https://github.com/tailscale/hujson).
Info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).
Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
## Simple Examples
- [**Allow All**](https://tailscale.com/kb/1192/acl-samples#allow-all-default-acl): If you define an ACL file but completely omit the `"acls"` field from its content, Headscale will default to an "allow all" policy. This means all devices connected to your tailnet will be able to communicate freely with each other.
```json
{}
```
- [**Deny All**](https://tailscale.com/kb/1192/acl-samples#deny-all): To prevent all communication within your tailnet, you can include an empty array for the `"acls"` field in your policy file.
```json
{
"acls": []
}
```
## Complex Example
Let's build a more complex example use case for a small business (It may be the place where
Let's build an example use case for a small business (It may be the place where
ACL's are the most useful).
We have a small company with a boss, an admin, two developers and an intern.
@@ -65,7 +36,11 @@ servers.
- billing.internal
- router.internal
![ACL implementation example](../assets/images/headscale-acl-network.png)
![ACL implementation example](../images/headscale-acl-network.png)
## ACL setup
ACLs have to be written in [huJSON](https://github.com/tailscale/hujson).
When [registering the servers](../usage/getting-started.md#register-a-node) we
will need to add the flag `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user
@@ -74,6 +49,14 @@ tags to a server they can register, the check of the tags is done on headscale
server and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
To use ACLs in headscale, you must edit your `config.yaml` file. In there you will find a `policy.path` parameter. This
will need to point to your ACL file. More info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).
Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
Here are the ACL's to implement the same permissions as above:
```json title="acl.json"
@@ -194,94 +177,13 @@ Here are the ACL's to implement the same permissions as above:
"dst": ["tag:dev-app-servers:80,443"]
},
// Allow users to access their own devices using autogroup:self (see below for more details about performance impact)
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
// We still have to allow internal users communications since nothing guarantees that each user have
// their own users.
{ "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
{ "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
{ "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
{ "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
{ "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
]
}
```
## Autogroups
Headscale supports several autogroups that automatically include users, destinations, or devices with specific properties. Autogroups provide a convenient way to write ACL rules without manually listing individual users or devices.
### `autogroup:internet`
Allows access to the internet through [exit nodes](routes.md#exit-node). Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["group:users"],
"dst": ["autogroup:internet:*"]
}
```
### `autogroup:member`
Includes all [personal (untagged) devices](registration.md/#identity-model).
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["tag:prod-app-servers:80,443"]
}
```
### `autogroup:tagged`
Includes all devices that [have at least one tag](registration.md/#identity-model).
```json
{
"action": "accept",
"src": ["autogroup:tagged"],
"dst": ["tag:monitoring:9090"]
}
```
### `autogroup:self`
**(EXPERIMENTAL)**
!!! warning "The current implementation of `autogroup:self` is inefficient"
Includes devices where the same user is authenticated on both the source and destination. Does not include tagged devices. Can only be used in ACL destinations.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self:*"]
}
```
*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*
If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.
```json
{
  "acls": [
    // The following rules allow internal users to communicate with their
    // own nodes in case autogroup:self is causing performance issues.
    { "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
    { "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
    { "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
    { "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
    { "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
  ]
}
```
### `autogroup:nonroot`
Used in Tailscale SSH rules to allow access to any user except root. Can only be used in the `users` field of SSH rules.
```json
{
"action": "accept",
"src": ["autogroup:member"],
"dst": ["autogroup:self"],
"users": ["autogroup:nonroot"]
}
```

View File

@@ -1,129 +0,0 @@
# API
Headscale provides a [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which may be used to integrate a [web
interface](integration/web-ui.md), [remote control Headscale](#setup-remote-control) or provide a base for custom
integration and tooling.
Both interfaces require a valid API key before use. To create an API key, log into your Headscale server and generate
one with the default expiration of 90 days:
```shell
headscale apikeys create
```
Copy the output of the command and save it for later. Please note that you can not retrieve an API key again. If the API
key is lost, expire the old one, and create a new one.
To list the API keys currently associated with the server:
```shell
headscale apikeys list
```
and to expire an API key:
```shell
headscale apikeys expire --prefix <PREFIX>
```
## REST API
- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1`
- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger`
- Headscale Version: `/version`, e.g. `https://headscale.example.com/version`
- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer
<API_KEY>` header.
Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your
Headscale server at `/swagger` for details.
=== "Get details for all users"
```console
curl -H "Authorization: Bearer <API_KEY>" \
https://headscale.example.com/api/v1/user
```
=== "Get details for user 'bob'"
```console
curl -H "Authorization: Bearer <API_KEY>" \
https://headscale.example.com/api/v1/user?name=bob
```
=== "Register a node"
```console
curl -H "Authorization: Bearer <API_KEY>" \
-d user=<USER> -d key=<REGISTRATION_KEY> \
https://headscale.example.com/api/v1/node/register
```
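
The same REST calls can, of course, be made from any HTTP client. As a rough illustration, a small Go program that lists users with Bearer authentication might look like the sketch below; the server URL and API key are placeholders.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder values; use your own server URL and API key.
	const (
		server = "https://headscale.example.com"
		apiKey = "<API_KEY>"
	)

	req, err := http.NewRequest(http.MethodGet, server+"/api/v1/user", nil)
	if err != nil {
		log.Fatalf("building request: %v", err)
	}
	// The REST API uses HTTP Bearer authentication with the API key.
	req.Header.Set("Authorization", "Bearer "+apiKey)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatalf("calling headscale API: %v", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatalf("reading response: %v", err)
	}

	fmt.Printf("status: %s\n%s\n", resp.Status, body)
}
```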
## gRPC
The gRPC interface can be used to control a Headscale instance from a remote machine with the `headscale` binary.
### Prerequisite
- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A Headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An [API key](#api) to authenticate with the Headscale server.
### Setup remote control
1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
sure to use the same version as on the server.
1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale`
1. [Create an API key](#api) on the Headscale server.
1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or
via environment variables:
=== "Minimal YAML configuration file"
```yaml title="config.yaml"
cli:
address: <HEADSCALE_ADDRESS>:<PORT>
api_key: <API_KEY>
```
=== "Environment variables"
```shell
export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
export HEADSCALE_CLI_API_KEY="<API_KEY>"
```
This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
connecting to the local instance.
1. Test the connection by listing all nodes:
```shell
headscale nodes list
```
You should now be able to see a list of your nodes from your workstation, and you can
now control the Headscale server from your workstation.
### Behind a proxy
It's possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as Headscale.
While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).
### Troubleshooting
- Make sure you have the _same_ Headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
- Add your self-signed certificate to the trust store of your OS _or_
- Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
`HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend to disable certificate validation.

View File

@@ -1,118 +0,0 @@
# Debugging and troubleshooting
Headscale and Tailscale provide debug and introspection capabilities that can be helpful when things don't work as
expected. This page explains some debugging techniques to help pinpoint problems.
Please also have a look at [Tailscale's Troubleshooting guide](https://tailscale.com/kb/1023/troubleshooting). It offers
many tips and suggestions for troubleshooting common issues.
## Tailscale
The Tailscale client itself offers many commands to introspect its state as well as the state of the network:
- [Check local network conditions](https://tailscale.com/kb/1080/cli#netcheck): `tailscale netcheck`
- [Get the client status](https://tailscale.com/kb/1080/cli#status): `tailscale status --json`
- [Get DNS status](https://tailscale.com/kb/1080/cli#dns): `tailscale dns status --all`
- Client logs: `tailscale debug daemon-logs`
- Client netmap: `tailscale debug netmap`
- Test DERP connection: `tailscale debug derp headscale`
- And many more, see: `tailscale debug --help`
Many of the commands are helpful when trying to understand differences between Headscale and Tailscale SaaS.
## Headscale
### Application logging
The log levels `debug` and `trace` can be useful to get more information from Headscale.
```yaml hl_lines="3"
log:
# Valid log levels: panic, fatal, error, warn, info, debug, trace
level: debug
```
### Database logging
The database debug mode logs all database queries. Enable it to see how Headscale interacts with its database. This also
requires the application log level to be set to either `debug` or `trace`.
```yaml hl_lines="3 7"
database:
# Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
debug: false
log:
# Valid log levels: panic, fatal, error, warn, info, debug, trace
level: debug
```
### Metrics and debug endpoint
Headscale provides a metrics and debug endpoint. It allows to introspect different aspects such as:
- Information about the Go runtime, memory usage and statistics
- Connected nodes and pending registrations
- Active ACLs, filters and SSH policy
- Current DERPMap
- Prometheus metrics
!!! warning "Keep the metrics and debug endpoint private"
The listen address and port can be configured with the `metrics_listen_addr` variable in the [configuration
file](./configuration.md). By default it listens on localhost, port 9090.
Keep the metrics and debug endpoint private to your internal network and don't expose it to the Internet.
The metrics and debug interface can be disabled completely by setting `metrics_listen_addr: null` in the
[configuration file](./configuration.md).
Query metrics via <http://localhost:9090/metrics> and get an overview of available debug information via
<http://localhost:9090/debug/>. Metrics may be queried from outside localhost but the debug interface is subject to
additional protection despite listening on all interfaces.
=== "Direct access"
Access the debug interface directly on the server where Headscale is installed.
```console
curl http://localhost:9090/debug/
```
=== "SSH port forwarding"
Use SSH port forwarding to forward Headscale's metrics and debug port to your device.
```console
ssh <HEADSCALE_SERVER> -L 9090:localhost:9090
```
Access the debug interface on your device by opening <http://localhost:9090/debug/> in your web browser.
=== "Via debug key"
The access control of the debug interface supports the use of a debug key. Traffic is accepted if the path to a
debug key is set via the environment variable `TS_DEBUG_KEY_PATH` and the debug key sent as value for `debugkey`
parameter with each request.
```console
openssl rand -hex 32 | tee debugkey.txt
export TS_DEBUG_KEY_PATH=debugkey.txt
headscale serve
```
Access the debug interface on your device by opening `http://<IP_OF_HEADSCALE>:9090/debug/?debugkey=<DEBUG_KEY>` in
your web browser. The `debugkey` parameter must be sent with every request.
=== "Via debug IP address"
The debug endpoint expects traffic from localhost. A different debug IP address may be configured by setting the
`TS_ALLOW_DEBUG_IP` environment variable before starting Headscale. The debug IP address is ignored when the HTTP
header `X-Forwarded-For` is present.
```console
export TS_ALLOW_DEBUG_IP=192.168.0.10 # IP address of your device
headscale serve
```
Access the debug interface on your device by opening `http://<IP_OF_HEADSCALE>:9090/debug/` in your web browser.

View File

@@ -1,175 +0,0 @@
# DERP
A [DERP (Designated Encrypted Relay for Packets) server](https://tailscale.com/kb/1232/derp-servers) is mainly used to
relay traffic between two nodes in case a direct connection can't be established. Headscale provides an embedded DERP
server to ensure seamless connectivity between nodes.
## Configuration
DERP related settings are configured within the `derp` section of the [configuration file](./configuration.md). The
following sections only use a few of the available settings, check the [example configuration](./configuration.md) for
all available configuration options.
### Enable embedded DERP
Headscale ships with an embedded DERP server which allows to run your own self-hosted DERP server easily. The embedded
DERP server is disabled by default and needs to be enabled. In addition, you should configure the public IPv4 and public
IPv6 address of your Headscale server for improved connection stability:
```yaml title="config.yaml" hl_lines="3-5"
derp:
server:
enabled: true
ipv4: 198.51.100.1
ipv6: 2001:db8::1
```
Keep in mind that [additional ports are needed to run a DERP server](../setup/requirements.md#ports-in-use). Besides
relaying traffic, it also uses STUN (udp/3478) to help clients discover their public IP addresses and perform NAT
traversal. [Check DERP server connectivity](#check-derp-server-connectivity) to see if everything works.
### Remove Tailscale's DERP servers
Once enabled, Headscale's embedded DERP is added to the list of free-to-use [DERP
servers](https://tailscale.com/kb/1232/derp-servers) offered by Tailscale Inc. To only use Headscale's embedded DERP
server, disable the loading of the default DERP map:
```yaml title="config.yaml" hl_lines="6"
derp:
server:
enabled: true
ipv4: 198.51.100.1
ipv6: 2001:db8::1
urls: []
```
!!! warning "Single point of failure"
Removing Tailscale's DERP servers means that there is now just a single DERP server available for clients. This is a
single point of failure and could hamper connectivity.
[Check DERP server connectivity](#check-derp-server-connectivity) with your embedded DERP server before removing
Tailscale's DERP servers.
### Customize DERP map
The DERP map offered to clients can be customized with a [dedicated YAML-configuration
file](https://github.com/juanfont/headscale/blob/main/derp-example.yaml). This allows to modify previously loaded DERP
maps fetched via URL or to offer your own, custom DERP servers to nodes.
=== "Remove specific DERP regions"
The free-to-use [DERP servers](https://tailscale.com/kb/1232/derp-servers) are organized into regions via a region
ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
`derp.yaml` disables the New York DERP region (which has the region ID 1):
```yaml title="derp.yaml"
regions:
1: null
```
Use the following configuration to serve the default DERP map (excluding New York) to nodes:
```yaml title="config.yaml" hl_lines="6 7"
derp:
server:
enabled: false
urls:
- https://controlplane.tailscale.com/derpmap/default
paths:
- /etc/headscale/derp.yaml
```
=== "Provide custom DERP servers"
The following sample `derp.yaml` references two custom regions (`custom-east` with ID 900 and `custom-west` with ID 901)
with one custom DERP server in each region. Each DERP server offers DERP relay via HTTPS on tcp/443, support for captive
portal checks via HTTP on tcp/80 and STUN on udp/3478. See the definitions of
[DERPMap](https://pkg.go.dev/tailscale.com/tailcfg#DERPMap),
[DERPRegion](https://pkg.go.dev/tailscale.com/tailcfg#DERPRegion) and
[DERPNode](https://pkg.go.dev/tailscale.com/tailcfg#DERPNode) for all available options.
```yaml title="derp.yaml"
regions:
900:
regionid: 900
regioncode: custom-east
regionname: My region (east)
nodes:
- name: 900a
regionid: 900
hostname: derp900a.example.com
ipv4: 198.51.100.1
ipv6: 2001:db8::1
canport80: true
901:
regionid: 901
regioncode: custom-west
regionname: My Region (west)
nodes:
- name: 901a
regionid: 901
hostname: derp901a.example.com
ipv4: 198.51.100.2
ipv6: 2001:db8::2
canport80: true
```
Use the following configuration to only serve the two DERP servers from the above `derp.yaml`:
```yaml title="config.yaml" hl_lines="5 6"
derp:
server:
enabled: false
urls: []
paths:
- /etc/headscale/derp.yaml
```
Independent of the custom DERP map, you may choose to [enable the embedded DERP server and have it automatically added
to the custom DERP map](#enable-embedded-derp).
### Verify clients
Access to DERP servers can be restricted to nodes that are members of your Tailnet. Relay access is denied for unknown
clients.
=== "Embedded DERP"
Client verification is enabled by default.
```yaml title="config.yaml" hl_lines="3"
derp:
server:
verify_clients: true
```
=== "3rd-party DERP"
Tailscale's `derper` provides two parameters to configure client verification:
- Use the `-verify-client-url` parameter of the `derper` and point it towards the `/verify` endpoint of your
Headscale server (e.g `https://headscale.example.com/verify`). The DERP server will query your Headscale instance
as soon as a client connects with it to ask whether access should be allowed or denied. Access is allowed if
Headscale knows about the connecting client and denied otherwise.
- The parameter `-verify-client-url-fail-open` controls what should happen when the DERP server can't reach the
Headscale instance. By default, it will allow access if Headscale is unreachable.
## Check DERP server connectivity
Any Tailscale client may be used to introspect the DERP map and to check for connectivity issues with DERP servers.
- Display DERP map: `tailscale debug derp-map`
- Check connectivity with the embedded DERP[^1]:`tailscale debug derp headscale`
Additional DERP related metrics and information is available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).
[^1]:
This assumes that the default region code of the [configuration file](./configuration.md) is used.
## Limitations
- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the `/generate_204`
endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations, the main purpose is to assist in node connectivity.

View File

@@ -1,7 +1,7 @@
# DNS
Headscale supports [most DNS features](../about/features.md) from Tailscale. DNS related settings can be configured
within the `dns` section of the [configuration file](./configuration.md).
within `dns` section of the [configuration file](./configuration.md).
## Setting extra DNS records
@@ -23,7 +23,7 @@ hostname and port combination "http://hostname-in-magic-dns.myvpn.example.com:30
!!! warning "Limitations"
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).
Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.78.3/ipn/ipnlocal/local.go#L4461-L4479).
1. Configure extra DNS records using one of the available configuration options:

View File

@@ -13,7 +13,7 @@ Running headscale behind a reverse proxy is useful when running multiple applica
The reverse proxy MUST be configured to support WebSockets to communicate with Tailscale clients.
WebSockets support is also required when using the Headscale [embedded DERP server](../derp.md). In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
WebSockets support is also required when using the headscale embedded DERP server. In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
### Cloudflare

View File

@@ -7,16 +7,9 @@
This page collects third-party tools, client libraries, and scripts related to headscale.
- [headscale-operator](https://github.com/infradohq/headscale-operator) - Headscale Kubernetes Operator
- [tailscale-manager](https://github.com/singlestore-labs/tailscale-manager) - Dynamically manage Tailscale route
advertisements
- [headscalebacktosqlite](https://github.com/bigbozza/headscalebacktosqlite) - Migrate headscale from PostgreSQL back to
SQLite
- [headscale-pf](https://github.com/YouSysAdmin/headscale-pf) - Populates user groups based on user groups in Jumpcloud
or Authentik
- [headscale-client-go](https://github.com/hibare/headscale-client-go) - A Go client implementation for the Headscale
HTTP API.
- [headscale-zabbix](https://github.com/dblanque/headscale-zabbix) - A Zabbix Monitoring Template for the Headscale
Service.
- [tailscale-exporter](https://github.com/adinhodovic/tailscale-exporter) - A Prometheus exporter for Headscale that
provides network-level metrics using the Headscale API.
| Name | Repository Link | Description |
| --------------------- | --------------------------------------------------------------- | -------------------------------------------------------------------- |
| tailscale-manager | [Github](https://github.com/singlestore-labs/tailscale-manager) | Dynamically manage Tailscale route advertisements |
| headscalebacktosqlite | [Github](https://github.com/bigbozza/headscalebacktosqlite) | Migrate headscale from PostgreSQL back to SQLite |
| headscale-pf | [Github](https://github.com/YouSysAdmin/headscale-pf) | Populates user groups based on user groups in Jumpcloud or Authentik |
| headscale-client-go | [Github](https://github.com/hibare/headscale-client-go) | A Go client implementation for the Headscale HTTP API. |

View File

@@ -7,18 +7,14 @@
Headscale doesn't provide a built-in web interface but users may pick one from the available options.
- [headscale-ui](https://github.com/gurucomputing/headscale-ui) - A web frontend for the headscale Tailscale-compatible
coordination server
- [HeadscaleUi](https://github.com/simcu/headscale-ui) - A static headscale admin ui, no backend environment required
- [Headplane](https://github.com/tale/headplane) - An advanced Tailscale inspired frontend for headscale
- [headscale-admin](https://github.com/GoodiesHQ/headscale-admin) - Headscale-Admin is meant to be a simple, modern web
interface for headscale
- [ouroboros](https://github.com/yellowsink/ouroboros) - Ouroboros is designed for users to manage their own devices,
rather than for admins
- [unraid-headscale-admin](https://github.com/ich777/unraid-headscale-admin) - A simple headscale admin UI for Unraid,
it offers Local (`docker exec`) and API Mode
- [headscale-console](https://github.com/rickli-cloud/headscale-console) - WebAssembly-based client supporting SSH, VNC
and RDP with optional self-service capabilities
- [headscale-piying](https://github.com/wszgrcy/headscale-piying) - headscale web UI, supports visual ACL configuration
| Name | Repository Link | Description |
| ---------------------- | ----------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
| headscale-ui | [Github](https://github.com/gurucomputing/headscale-ui) | A web frontend for the headscale Tailscale-compatible coordination server |
| HeadscaleUi | [GitHub](https://github.com/simcu/headscale-ui) | A static headscale admin ui, no backend environment required |
| Headplane | [GitHub](https://github.com/tale/headplane) | An advanced Tailscale inspired frontend for headscale |
| headscale-admin | [Github](https://github.com/GoodiesHQ/headscale-admin) | Headscale-Admin is meant to be a simple, modern web interface for headscale |
| ouroboros | [Github](https://github.com/yellowsink/ouroboros) | Ouroboros is designed for users to manage their own devices, rather than for admins |
| unraid-headscale-admin | [Github](https://github.com/ich777/unraid-headscale-admin) | A simple headscale admin UI for Unraid, it offers Local (`docker exec`) and API Mode |
| headscale-console | [Github](https://github.com/rickli-cloud/headscale-console) | WebAssembly-based client supporting SSH, VNC and RDP with optional self-service capabilities |
You can ask for support on our [Discord server](https://discord.gg/c84AZQhmpx) in the "web-interfaces" channel.

View File

@@ -2,7 +2,7 @@
Headscale supports authentication via external identity providers using OpenID Connect (OIDC). It features:
- Auto configuration via OpenID Connect Discovery Protocol
- Autoconfiguration via OpenID Connect Discovery Protocol
- [Proof Key for Code Exchange (PKCE) code verification](#enable-pkce-recommended)
- [Authorization based on a user's domain, email address or group membership](#authorize-users-with-filters)
- Synchronization of [standard OIDC claims](#supported-oidc-claims)
@@ -77,7 +77,6 @@ are configured, a user needs to pass all of them.
* Check the email domain of each authenticating user against the list of allowed domains and only authorize users
whose email domain matches `example.com`.
* A verified email address is required [unless email verification is disabled](#control-email-verification).
* Access allowed: `alice@example.com`
* Access denied: `bob@example.net`
@@ -94,7 +93,6 @@ are configured, a user needs to pass all of them.
* Check the email address of each authenticating user against the list of allowed email addresses and only authorize
users whose email is part of the `allowed_users` list.
* A verified email address is required [unless email verification is disabled](#control-email-verification).
* Access allowed: `alice@example.com`, `bob@example.net`
* Access denied: `mallory@example.net`
@@ -125,23 +123,6 @@ are configured, a user needs to pass all of them.
- "headscale_users"
```
### Control email verification
Headscale uses the `email` claim from the identity provider to synchronize the email address to its user profile. By
default, a user's email address is only synchronized when the identity provider reports the email address as verified
via the `email_verified: true` claim.
Unverified emails may be allowed in case an identity provider does not send the `email_verified` claim or email
verification is not required. In that case, a user's email address is always synchronized to the user profile.
```yaml hl_lines="5"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
email_verified_required: false
```
### Customize node expiration
The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to
@@ -161,7 +142,7 @@ Access Token.
=== "Use expiration from Access Token"
Please keep in mind that the Access Token is typically a short-lived token that expires within a few minutes. You
will have to configure token expiration in your identity provider to avoid frequent re-authentication.
will have to configure token expiration in your identity provider to avoid frequent reauthentication.
```yaml hl_lines="5"
@@ -203,12 +184,12 @@ You may refer to users in the Headscale policy via:
## Supported OIDC claims
Headscale uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to
populate and update its local user profile on each login. OIDC claims are read from the ID Token and from the UserInfo
populate and update its local user profile on each login. OIDC claims are read from the ID Token or from the UserInfo
endpoint.
| Headscale profile | OIDC claim | Notes / examples |
| ------------------- | -------------------- | ------------------------------------------------------------------------------------------------- |
| email address | `email` | Only verified emails are synchronized, unless `email_verified_required: false` is configured |
| email address | `email` | Only used when `email_verified: true` |
| display name | `name` | eg: `Sam Smith` |
| username | `preferred_username` | Depends on identity provider, eg: `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` |
| profile picture | `picture` | URL to a profile picture or avatar |
@@ -224,6 +205,8 @@ endpoint.
- The username must be at least two characters long.
- It must only contain letters, digits, hyphens, dots, underscores, and up to a single `@`.
- The username must start with a letter.
- A user's email address is only synchronized to the local user profile when the identity provider marks the email
address as verified (`email_verified: true`).
Please see the [GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC) for OIDC related issues.
@@ -247,10 +230,23 @@ are known to work:
Authelia is fully supported by Headscale.
#### Additional configuration to authorize users based on filters
Authelia (4.39.0 or newer) no longer provides standard OIDC claims such as `email` or `groups` via the ID Token. The
OIDC `email` and `groups` claims are used to [authorize users with filters](#authorize-users-with-filters). This extra
configuration step is **only** needed if you need to authorize access based on one of the following user properties:
- domain
- email address
- group membership
Please follow the instructions from Authelia's documentation on how to [Restore Functionality Prior to Claims
Parameter](https://www.authelia.com/integration/openid-connect/openid-connect-1.0-claims/#restore-functionality-prior-to-claims-parameter).
### Authentik
- Authentik is fully supported by Headscale.
- [Headscale does not support JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
- [Headscale does not JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
`Encryption Key` in the providers section unset.
### Google OAuth
@@ -301,15 +297,13 @@ you need to [authorize access based on group membership](#authorize-users-with-f
- Create a new client scope `groups` for OpenID Connect:
- Configure a `Group Membership` mapper with name `groups` and the token claim name `groups`.
- Add the mapper to at least the UserInfo endpoint.
- Enable the mapper for the ID Token, Access Token and UserInfo endpoint.
- Configure the new client scope for your Headscale client:
- Edit the Headscale client.
- Search for the client scope `group`.
- Add it with assigned type `Default`.
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). How groups need to be specified depends on
Keycloak's `Full group path` option:
- `Full group path` is enabled: groups contain their full path, e.g. `/top/group1`
- `Full group path` is disabled: only the name of the group is used, e.g. `group1`
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). Keep in mind that groups in Keycloak start
with a leading `/`.
### Microsoft Entra ID
@@ -321,14 +315,3 @@ Entra ID is: `https://login.microsoftonline.com/<tenant-UUID>/v2.0`. The followi
- `domain_hint: example.com` to use your own domain
- `prompt: select_account` to force an account picker during login
When using Microsoft Entra ID together with the [allowed groups filter](#authorize-users-with-filters), configure the
Headscale OIDC scope without the `groups` claim, for example:
```yaml
oidc:
scope: ["openid", "profile", "email"]
```
Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID (UUID) instead
of the group name.

View File

@@ -1,141 +0,0 @@
# Registration methods
Headscale supports multiple ways to register a node. The preferred registration method depends on the identity of a node
and your use case.
## Identity model
Tailscale's identity model distinguishes between personal and tagged nodes:
- A personal node (or user-owned node) is owned by a human and typically refers to end-user devices such as laptops,
workstations or mobile phones. End-user devices are managed by a single user.
- A tagged node (or service-based node or non-human node) provides services to the network. Common examples include web-
and database servers. Those nodes are typically managed by a team of users. Some additional restrictions apply for
tagged nodes, e.g. a tagged node is not allowed to [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh) into a
personal node.
Headscale implements Tailscale's identity model and distinguishes between personal and tagged nodes where a personal
node is owned by a Headscale user and a tagged node is owned by a tag. Tagged devices are grouped under the special user
`tagged-devices`.
## Registration methods
There are two main ways to register new nodes, [web authentication](#web-authentication) and [registration with a pre
authenticated key](#pre-authenticated-key). Both methods can be used to register personal and tagged nodes.
### Web authentication
Web authentication is the default method to register a new node. It's interactive, where the client initiates the
registration and the Headscale administrator needs to approve the new node before it is allowed to join the network. A
node can be approved with:
- Headscale CLI (described in this documentation)
- [Headscale API](api.md)
- Or delegated to an identity provider via [OpenID Connect](oidc.md)
Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new user:
```console
headscale users create <USER>
```
=== "Personal devices"
Run `tailscale up` to login your personal device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the registration key required to approve the node:
```console
headscale nodes register --user <USER> --key <REGISTRATION_KEY>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Your Headscale user needs to be authorized to register tagged devices. This authorization is specified in the
[`tagOwners`](https://tailscale.com/kb/1337/policy-syntax#tag-owners) section of the [ACL](acls.md). A simple
example looks like this:
```json title="The user alice can register nodes tagged with tag:server"
{
"tagOwners": {
"tag:server": ["alice@"]
},
// more rules
}
```
Run `tailscale up` and provide at least one tag to login a tagged device:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:<TAG>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration
on your Headscale server and it also prints the registration key required to approve the node:
```console
headscale nodes register --user <USER> --key <REGISTRATION_KEY>
```
Headscale checks that `<USER>` is allowed to register a node with the specified tag(s) and then transfers ownership
of the new node to the special user `tagged-devices`. The registration of a tagged node is complete and it should be
listed as "online" in the output of `headscale nodes list`. The "User" column displays `tagged-devices` as the owner
of the node. See the "Tags" column for the list of assigned tags.
### Pre authenticated key
Registration with a pre authenticated key (or auth key) is a non-interactive way to register a new node. The Headscale
administrator creates a preauthkey upfront and this preauthkey can then be used to register a node non-interactively.
It's best suited for automation.
=== "Personal devices"
A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user:
```console
headscale users create <USER>
```
Use the `headscale user list` command to learn its `<USER_ID>` and create a new pre authenticated key for your user:
```console
headscale preauthkeys create --user <USER_ID>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
Congratulations, the registration of your personal node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.
=== "Tagged devices"
Create a new pre authenticated key and provide at least one tag:
```console
headscale preauthkeys create --tags tag:<TAG>
```
The above prints a pre authenticated key with the default settings (can be used once and is valid for one hour). Use
this auth key to register a node non-interactively. You don't need to provide the `--advertise-tags` parameter as
the tags are automatically read from the pre authenticated key:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
The registration of a tagged node is complete and it should be listed as "online" in the output of `headscale nodes
list`. The "User" column displays `tagged-devices` as the owner of the node. See the "Tags" column for the list of
assigned tags.

105
docs/ref/remote-cli.md Normal file
View File

@@ -0,0 +1,105 @@
# Controlling headscale with remote CLI
This documentation has the goal of showing a user how to control a headscale instance
from a remote machine with the `headscale` command line binary.
## Prerequisite
- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An API key to authenticate with the headscale server.
## Create an API key
We need to create an API key to authenticate with the remote headscale server when using it from our workstation.
To create an API key, log into your headscale server and generate a key:
```shell
headscale apikeys create --expiration 90d
```
Copy the output of the command and save it for later. Please note that you cannot retrieve a key again.
If the key is lost, expire the old one and create a new key.
To list the keys currently associated with the server:
```shell
headscale apikeys list
```
and to expire a key:
```shell
headscale apikeys expire --prefix "<PREFIX>"
```
## Download and configure headscale
1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
sure to use the same version as on the server.
1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`
1. Make `headscale` executable:
```shell
chmod +x /usr/local/bin/headscale
```
1. Provide the connection parameters for the remote headscale server either via a minimal YAML configuration file or via
environment variables:
=== "Minimal YAML configuration file"
```yaml title="config.yaml"
cli:
address: <HEADSCALE_ADDRESS>:<PORT>
api_key: <API_KEY_FROM_PREVIOUS_STEP>
```
=== "Environment variables"
```shell
export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
export HEADSCALE_CLI_API_KEY="<API_KEY_FROM_PREVIOUS_STEP>"
```
!!! bug
Headscale currently requires at least an empty configuration file when environment variables are used to
specify connection details. See [issue 2193](https://github.com/juanfont/headscale/issues/2193) for more
information.
This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
connecting to the local instance.
1. Test the connection
Let us run the headscale command to verify that we can connect by listing our nodes:
```shell
headscale nodes list
```
You should now be able to see a list of your nodes from your workstation and can control the headscale server
remotely.
## Behind a proxy
It is possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as headscale.
While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).
## Troubleshooting
- Make sure you have the _same_ headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
- Add your self-signed certificate to the trust store of your OS _or_
- Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
`HEADSCALE_CLI_INSECURE=1` via an environment variable (see the sketch below). We do **not** recommend disabling certificate validation.
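For reference, the two variants would look roughly as follows (a sketch only; prefer a trusted certificate):
```yaml title="config.yaml"
cli:
  address: <HEADSCALE_ADDRESS>:<PORT>
  api_key: <API_KEY>
  insecure: true
```
```shell
export HEADSCALE_CLI_INSECURE=1
```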

View File

@@ -42,15 +42,14 @@ can be used.
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | | 10.0.0.0/8 |
| | | 192.168.0.0/24 |
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | | 10.0.0.0/8, 192.168.0.0/24 |
```
Approve all desired routes of a subnet router by specifying them as a comma-separated list:
```console
$ headscale nodes approve-routes --identifier 1 --routes 10.0.0.0/8,192.168.0.0/24
$ headscale nodes approve-routes --node 1 --routes 10.0.0.0/8,192.168.0.0/24
Node updated
```
@@ -58,9 +57,8 @@ The node `myrouter` can now route the IPv4 networks `10.0.0.0/8` and `192.168.0.
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | 10.0.0.0/8 | 10.0.0.0/8 | 10.0.0.0/8
| | 192.168.0.0/24 | 192.168.0.0/24 | 192.168.0.0/24
ID | Hostname | Approved | Available | Serving (Primary)
1 | myrouter | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24
```
#### Use the subnet router
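The body of this section lies outside the diff context. On a client, accepting the routes advertised by a subnet
router is typically done with the standard Tailscale flag `--accept-routes` (a sketch, not taken from this diff):
```console
tailscale set --accept-routes
```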
@@ -111,9 +109,9 @@ approval of routes served with a subnet router.
The ACL snippet below defines the tag `tag:router` owned by the user `alice`. This tag is used for `routes` in the
`autoApprovers` section. The IPv4 route `192.168.0.0/24` is automatically approved once announced by a subnet router
that advertises the tag `tag:router`.
owned by the user `alice` and that also advertises the tag `tag:router`.
```json title="Subnet routers tagged with tag:router are automatically approved"
```json title="Subnet routers owned by alice and tagged with tag:router are automatically approved"
{
"tagOwners": {
"tag:router": ["alice@"]
@@ -170,15 +168,14 @@ available, but needs to be approved:
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | | 0.0.0.0/0 |
| | | ::/0 |
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | | 0.0.0.0/0, ::/0 |
```
For exit nodes, it is sufficient to approve either the IPv4 or IPv6 route. The other will be approved automatically.
```console
$ headscale nodes approve-routes --identifier 1 --routes 0.0.0.0/0
$ headscale nodes approve-routes --node 1 --routes 0.0.0.0/0
Node updated
```
@@ -186,9 +183,8 @@ The node `myexit` is now approved as exit node for the tailnet:
```console
$ headscale nodes list-routes
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | 0.0.0.0/0 | 0.0.0.0/0 | 0.0.0.0/0
| | ::/0 | ::/0 | ::/0
ID | Hostname | Approved | Available | Serving (Primary)
1 | myexit | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0
```
#### Use the exit node
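As with the subnet router above, the section body is outside the diff context. Selecting an approved exit node on a
client is typically done with the `--exit-node` flag (a sketch; substitute the exit node's Tailscale IP or name):
```console
tailscale set --exit-node <EXIT_NODE_IP_OR_NAME>
```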
@@ -220,39 +216,6 @@ nodes.
}
```
### Restrict access to exit nodes per user or group
A user can use _any_ of the available exit nodes with `autogroup:internet`. Alternatively, the ACL snippet below assigns
each user a specific exit node while hiding all other exit nodes. The user `alice` can only use exit node `exit1` while
user `bob` can only use exit node `exit2`.
```json title="Assign each user a dedicated exit node"
{
"hosts": {
"exit1": "100.64.0.1/32",
"exit2": "100.64.0.2/32"
},
"acls": [
{
"action": "accept",
"src": ["alice@"],
"dst": ["exit1:*"]
},
{
"action": "accept",
"src": ["bob@"],
"dst": ["exit2:*"]
}
]
}
```
!!! warning
- The above implementation is Headscale specific and will likely be removed once [support for
`via`](https://github.com/juanfont/headscale/issues/2409) is available.
- Beware that a user can also connect to any port of the exit node itself.
### Automatically approve an exit node with auto approvers
The initial setup of an exit node usually requires manual approval on the control server before it can be used by a node
@@ -260,9 +223,10 @@ in a tailnet. Headscale supports the `autoApprovers` section of an ACL to automa
soon as it joins the tailnet.
The ACL snippet below defines the tag `tag:exit` owned by the user `alice`. This tag is used for `exitNode` in the
`autoApprovers` section. A new exit node that advertises the tag `tag:exit` is automatically approved:
`autoApprovers` section. A new exit node which is owned by the user `alice` and that also advertises the tag `tag:exit`
is automatically approved:
```json title="Exit nodes tagged with tag:exit are automatically approved"
```json title="Exit nodes owned by alice and tagged with tag:exit are automatically approved"
{
"tagOwners": {
"tag:exit": ["alice@"]

View File

@@ -1,54 +0,0 @@
# Tags
Headscale supports Tailscale tags. Please read [Tailscale's tag documentation](https://tailscale.com/kb/1068/tags) to
learn how tags work and how to use them.
Tags can be applied during [node registration](registration.md):
- using the `--advertise-tags` flag, see [web authentication for tagged devices](registration.md#__tabbed_1_2)
- using a tagged pre authenticated key, see [how to create and use it](registration.md#__tabbed_2_2)
Administrators can manage tags with:
- Headscale CLI
- [Headscale API](api.md)
## Common operations
### Manage tags for a node
Run `headscale nodes list` to list the tags for a node.
Use the `headscale nodes tag` command to modify the tags for a node. At least one tag is required and multiple tags can
be provided as a comma-separated list. The following command sets the tags `tag:server` and `tag:prod` on node with ID 1:
```console
headscale nodes tag -i 1 -t tag:server,tag:prod
```
### Convert from personal to tagged node
Use the `headscale nodes tag` command to convert a personal (user-owned) node to a tagged node:
```console
headscale nodes tag -i <NODE_ID> -t <TAG>
```
The node is now owned by the special user `tagged-devices` and has the specified tags assigned to it.
### Convert from tagged to personal node
Tagged nodes can return to personal (user-owned) nodes by re-authenticating with:
```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags= --force-reauth
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your Headscale server and it also prints the registration key required to approve the node:
```console
headscale nodes register --user <USER> --key <REGISTRATION_KEY>
```
All previously assigned tags get removed and the node is now owned by the user specified in the above command.

View File

@@ -18,10 +18,10 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
## Configure and run headscale
1. Create a directory on the container host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database:
1. Create a directory on the Docker host to store headscale's [configuration](../../ref/configuration.md) and the [SQLite](https://www.sqlite.org/) database:
```shell
mkdir -p ./headscale/{config,lib}
mkdir -p ./headscale/{config,lib,run}
cd ./headscale
```
@@ -34,13 +34,11 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
docker run \
--name headscale \
--detach \
--read-only \
--tmpfs /var/run/headscale \
--volume "$(pwd)/config:/etc/headscale:ro" \
--volume "$(pwd)/config:/etc/headscale" \
--volume "$(pwd)/lib:/var/lib/headscale" \
--volume "$(pwd)/run:/var/run/headscale" \
--publish 127.0.0.1:8080:8080 \
--publish 127.0.0.1:9090:9090 \
--health-cmd "CMD headscale health" \
docker.io/headscale/headscale:<VERSION> \
serve
```
@@ -58,20 +56,16 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
image: docker.io/headscale/headscale:<VERSION>
restart: unless-stopped
container_name: headscale
read_only: true
tmpfs:
- /var/run/headscale
ports:
- "127.0.0.1:8080:8080"
- "127.0.0.1:9090:9090"
volumes:
# Please set <HEADSCALE_PATH> to the absolute path
# of the previously created headscale directory.
- <HEADSCALE_PATH>/config:/etc/headscale:ro
- <HEADSCALE_PATH>/config:/etc/headscale
- <HEADSCALE_PATH>/lib:/var/lib/headscale
- <HEADSCALE_PATH>/run:/var/run/headscale
command: serve
healthcheck:
test: ["CMD", "headscale", "health"]
```
1. Verify headscale is running:
@@ -91,10 +85,45 @@ Registry](https://github.com/juanfont/headscale/pkgs/container/headscale). The c
Verify headscale is available:
```shell
curl http://127.0.0.1:8080/health
curl http://127.0.0.1:9090/metrics
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.
1. Create a headscale user:
```shell
docker exec -it headscale \
headscale users create myfirstuser
```
### Register a machine (normal login)
On a client machine, execute the `tailscale up` command to login:
```shell
tailscale up --login-server YOUR_HEADSCALE_URL
```
To register a machine when running headscale in a container, take the headscale command and pass it to the container:
```shell
docker exec -it headscale \
headscale nodes register --user myfirstuser --key <YOUR_MACHINE_KEY>
```
### Register a machine using a pre authenticated key
Generate a key using the command line:
```shell
docker exec -it headscale \
headscale preauthkeys create --user myfirstuser --reusable --expiration 24h
```
This will return a pre-authenticated key that can be used to connect a node to headscale with the `tailscale up` command:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```
## Debugging headscale running in Docker
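The rest of this section is not shown in the diff. As a starting point (a sketch; the container name matches the
examples above), the container logs are usually the first thing to check:
```shell
docker logs --follow headscale
```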

View File

@@ -7,7 +7,7 @@ Both are available on the [GitHub releases page](https://github.com/juanfont/hea
It is recommended to use our DEB packages to install headscale on a Debian based system as those packages configure a
local user to run headscale, provide a default configuration and ship with a systemd service file. Supported
distributions are Ubuntu 22.04 or newer, Debian 12 or newer.
distributions are Ubuntu 22.04 or newer, Debian 11 or newer.
1. Download the [latest headscale package](https://github.com/juanfont/headscale/releases/latest) for your platform (`.deb` for Ubuntu and Debian).
@@ -42,8 +42,6 @@ distributions are Ubuntu 22.04 or newer, Debian 12 or newer.
sudo systemctl status headscale
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.
## Using standalone binaries (advanced)
!!! warning "Advanced"
@@ -59,14 +57,14 @@ managed by systemd.
1. Download the latest [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases):
```shell
sudo wget --output-document=/usr/bin/headscale \
sudo wget --output-document=/usr/local/bin/headscale \
https://github.com/juanfont/headscale/releases/download/v<HEADSCALE VERSION>/headscale_<HEADSCALE VERSION>_linux_<ARCH>
```
1. Make `headscale` executable:
```shell
sudo chmod +x /usr/bin/headscale
sudo chmod +x /usr/local/bin/headscale
```
1. Add a dedicated local user to run headscale:
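The command for this step lies outside the diff context; a typical way to create such an account (a sketch, the exact
flags are assumptions) is:
```shell
sudo useradd --system --create-home \
  --home-dir /var/lib/headscale \
  --shell /usr/sbin/nologin \
  headscale
```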
@@ -117,5 +115,3 @@ managed by systemd.
```shell
systemctl status headscale
```
Continue on the [getting started page](../../usage/getting-started.md) to register your first machine.

View File

@@ -4,35 +4,11 @@ Headscale should just work as long as the following requirements are met:
- A server with a public IP address for headscale. A dual-stack setup with a public IPv4 and a public IPv6 address is
recommended.
- Headscale is served via HTTPS on port 443[^1] and [may use additional ports](#ports-in-use).
- Headscale is served via HTTPS on port 443[^1].
- A reasonably modern Linux or BSD based operating system.
- A dedicated local user account to run headscale.
- A little bit of command line knowledge to configure and operate headscale.
## Ports in use
The ports in use vary with the intended scenario and enabled features. Some of the listed ports may be changed via the
[configuration file](../ref/configuration.md) but we recommend sticking with the default values.
- tcp/80
- Expose publicly: yes
- HTTP, used by Let's Encrypt to verify ownership via the HTTP-01 challenge.
- Only required if the built-in Let's Encrypt client with the HTTP-01 challenge is used. See [TLS](../ref/tls.md) for
details.
- tcp/443
- Expose publicly: yes
- HTTPS, required to make Headscale available to Tailscale clients[^1]
- Required if the [embedded DERP server](../ref/derp.md) is enabled
- udp/3478
- Expose publicly: yes
- STUN, required if the [embedded DERP server](../ref/derp.md) is enabled
- tcp/50443
- Expose publicly: yes
- Only required if the gRPC interface is used to [remote-control Headscale](../ref/api.md#grpc).
- tcp/9090
- Expose publicly: no
- [Metrics and debug endpoint](../ref/debug.md#metrics-and-debug-endpoint)
## Assumptions
The headscale documentation and the provided examples are written with a few assumptions in mind:

View File

@@ -6,23 +6,9 @@ This documentation has the goal of showing how a user can use the official Andro
Install the official Tailscale Android client from the [Google Play Store](https://play.google.com/store/apps/details?id=com.tailscale.ipn) or [F-Droid](https://f-droid.org/packages/com.tailscale.ipn/).
## Connect via web authentication
## Configuring the headscale URL
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an alternate server`
- Enter your server URL (e.g. `https://headscale.example.com`) and follow the instructions
- The client connects automatically as soon as the node registration is complete on headscale. Until then, nothing is
visible in the server logs.
## Connect using a pre authenticated key
- Open the app and select the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an alternate server`
- Enter your server URL (e.g. `https://headscale.example.com`). If a login prompt opens, close it and continue
- Open the settings menu in the upper-right corner
- Tap on `Accounts`
- In the kebab menu icon (three dots) in the upper-right corner select `Use an auth key`
- Enter your [preauthkey generated from headscale](../../ref/registration.md#pre-authenticated-key)
- If needed, tap `Log in` on the main screen. You should now be connected to your headscale.

View File

@@ -9,8 +9,8 @@ This page helps you get started with headscale and provides a few usage examples
installation instructions.
* The configuration file exists and is adjusted to suit your environment, see
[Configuration](../ref/configuration.md) for details.
* Headscale is reachable from the Internet. Verify this by visiting the health endpoint:
https://headscale.example.com/health
* Headscale is reachable from the Internet. Verify this by opening client specific setup instructions in your
browser, e.g. https://headscale.example.com/windows
* The Tailscale client is installed, see [Client and operating system support](../about/clients.md) for more
information.
@@ -41,28 +41,12 @@ options, run:
headscale <COMMAND> --help
```
!!! note "Manage headscale from another local user"
By default only the user `headscale` or `root` will have the necessary permissions to access the unix socket
(`/var/run/headscale/headscale.sock`) that is used to communicate with the service. In order to be able to
communicate with the headscale service you have to make sure the unix socket is accessible by the user that runs
the commands. In general you can achieve this by any of the following methods:
* using `sudo`
* run the commands as user `headscale`
* add your user to the `headscale` group
To verify you can run the following command using your preferred method:
```shell
headscale users list
```
## Manage headscale users
In headscale, a node (also known as machine or device) is [typically assigned to a headscale
user](../ref/registration.md#identity-model). Such a headscale user may have many nodes assigned to them and can be
managed with the `headscale users` command. Invoke the built-in help for more information: `headscale users --help`.
In headscale, a node (also known as machine or device) is always assigned to a
headscale user. Such a headscale user may have many nodes assigned to them and
can be managed with the `headscale users` command. Invoke the built-in help for
more information: `headscale users --help`.
### Create a headscale user
@@ -96,12 +80,11 @@ managed with the `headscale users` command. Invoke the built-in help for more in
## Register a node
One has to [register a node](../ref/registration.md) first to use headscale as coordination server with Tailscale. The
following examples work for the Tailscale client on Linux/BSD operating systems. Alternatively, follow the instructions
to connect [Android](connect/android.md), [Apple](connect/apple.md) or [Windows](connect/windows.md) devices. Read
[registration methods](../ref/registration.md) for an overview of available registration methods.
One has to register a node first to use headscale as coordination server with Tailscale. The following examples work for the
Tailscale client on Linux/BSD operating systems. Alternatively, follow the instructions to connect
[Android](connect/android.md), [Apple](connect/apple.md) or [Windows](connect/windows.md) devices.
### [Web authentication](../ref/registration.md#web-authentication)
### Normal, interactive login
On a client machine, run the `tailscale up` command and provide the FQDN of your headscale instance as argument:
@@ -109,23 +92,23 @@ On a client machine, run the `tailscale up` command and provide the FQDN of your
tailscale up --login-server <YOUR_HEADSCALE_URL>
```
Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your headscale server and it also prints the registration key required to approve the node:
Usually, a browser window with further instructions is opened and contains the value for `<YOUR_MACHINE_KEY>`. Approve
and register the node on your headscale server:
=== "Native"
```shell
headscale nodes register --user <USER> --key <REGISTRATION_KEY>
headscale nodes register --user <USER> --key <YOUR_MACHINE_KEY>
```
=== "Container"
```shell
docker exec -it headscale \
headscale nodes register --user <USER> --key <REGISTRATION_KEY>
headscale nodes register --user <USER> --key <YOUR_MACHINE_KEY>
```
### [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
### Using a preauthkey
It is also possible to generate a preauthkey and register a node non-interactively. First, generate a preauthkey on the
headscale instance. By default, the key is valid for one hour and can only be used once (see `headscale preauthkeys
@@ -134,14 +117,14 @@ headscale instance. By default, the key is valid for one hour and can only be us
=== "Native"
```shell
headscale preauthkeys create --user <USER_ID>
headscale preauthkeys create --user <USER>
```
=== "Container"
```shell
docker exec -it headscale \
headscale preauthkeys create --user <USER_ID>
headscale preauthkeys create --user <USER>
```
The command returns the preauthkey on success which is used to connect a node to the headscale instance via the
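The sentence above is cut off by the hunk; the step it leads into is the same non-interactive login shown earlier in
this diff:
```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```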

6
flake.lock generated
View File

@@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1770141374,
"narHash": "sha256-yD4K/vRHPwXbJf5CK3JkptBA6nFWUKNX/jlFp2eKEQc=",
"lastModified": 1752012998,
"narHash": "sha256-Q82Ms+FQmgOBkdoSVm+FBpuFoeUAffNerR5yVV7SgT8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "41965737c1797c1d83cfb0b644ed0840a6220bd1",
"rev": "2a2130494ad647f953593c4e84ea4df839fbd68c",
"type": "github"
},
"original": {

Some files were not shown because too many files have changed in this diff.