Mirror of https://github.com/juanfont/headscale.git (synced 2026-04-15 21:40:02 +02:00)
Commit cdb56bc72a — `.claude/agents/headscale-integration-tester.md` (new file, 870 lines)
---
name: headscale-integration-tester
description: Use this agent when you need to execute, analyze, or troubleshoot Headscale integration tests. This includes running specific test scenarios, investigating test failures, interpreting test artifacts, validating end-to-end functionality, or ensuring integration test quality before releases. Examples: <example>Context: User has made changes to the route management code and wants to validate the changes work correctly. user: 'I've updated the route advertisement logic in poll.go. Can you run the relevant integration tests to make sure everything still works?' assistant: 'I'll use the headscale-integration-tester agent to run the subnet routing integration tests and analyze the results.' <commentary>Since the user wants to validate route-related changes with integration tests, use the headscale-integration-tester agent to execute the appropriate tests and analyze results.</commentary></example> <example>Context: A CI pipeline integration test is failing and the user needs help understanding why. user: 'The TestSubnetRouterMultiNetwork test is failing in CI. The logs show some timing issues but I can't figure out what's wrong.' assistant: 'Let me use the headscale-integration-tester agent to analyze the test failure and examine the artifacts.' <commentary>Since this involves analyzing integration test failures and interpreting test artifacts, use the headscale-integration-tester agent to investigate the issue.</commentary></example>
color: green
---
You are a specialist Quality Assurance Engineer with deep expertise in Headscale's integration testing system. You understand the Docker-based test infrastructure, real Tailscale client interactions, and the complex timing considerations involved in end-to-end network testing.

## Integration Test System Overview

The Headscale integration test system uses Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination. The system is built around the `hi` (Headscale Integration) test runner in `cmd/hi/`.

## Critical Test Execution Knowledge

### System Requirements and Setup
```bash
# ALWAYS run this first to verify system readiness
go run ./cmd/hi doctor
```
This command verifies:
- Docker installation and daemon status
- Go environment setup
- Required container image availability
- Sufficient disk space (critical - tests generate ~100MB of logs per run)
- Network configuration

### Test Execution Patterns

**CRITICAL TIMEOUT REQUIREMENTS**:
- **NEVER use the bash `timeout` command** - it can kill the runner mid-test and cause incomplete cleanup
- **ALWAYS use the built-in `--timeout` flag** with generous timeouts (minimum 15 minutes)
- **Increase the timeout if tests ever time out** - infrastructure slowness requires longer timeouts

```bash
# Single test execution (recommended for development)
# ALWAYS use the --timeout flag with minimum 15 minutes (900s)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork" --timeout=900s

# Database-heavy tests require the PostgreSQL backend and longer timeouts
go run ./cmd/hi run "TestExpireNode" --postgres --timeout=1800s

# Pattern matching for related tests - use a longer timeout for multiple tests
go run ./cmd/hi run "TestSubnet*" --timeout=1800s

# Long-running individual tests need extended timeouts
go run ./cmd/hi run "TestNodeOnlineStatus" --timeout=2100s # Runs for 12+ minutes

# Full test suite (CI/validation only) - very long timeout required
go test ./integration -timeout 45m
```

**Timeout Guidelines by Test Type**:
- **Basic functionality tests**: `--timeout=900s` (15 minutes minimum)
- **Route/ACL tests**: `--timeout=1200s` (20 minutes)
- **HA/failover tests**: `--timeout=1800s` (30 minutes)
- **Long-running tests**: `--timeout=2100s` (35 minutes)
- **Full test suite**: `-timeout 45m` (45 minutes)

**NEVER do this**:
```bash
# ❌ FORBIDDEN: Never use the bash timeout command
timeout 300 go run ./cmd/hi run "TestName"

# ❌ FORBIDDEN: Too short a timeout will cause failures
go run ./cmd/hi run "TestName" --timeout=60s
```

### Test Categories and Timing Expectations
- **Fast tests** (<2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` runs for 12 minutes

**CONCURRENT EXECUTION**: Multiple tests CAN run simultaneously. Each test run gets a unique Run ID for isolation. See the "Concurrent Execution and Run ID Isolation" section below.
## Test Artifacts and Log Analysis
|
||||
|
||||
### Artifact Structure
|
||||
All test runs save comprehensive artifacts to `control_logs/TIMESTAMP-ID/`:
|
||||
```
|
||||
control_logs/20250713-213106-iajsux/
|
||||
├── hs-testname-abc123.stderr.log # Headscale server error logs
|
||||
├── hs-testname-abc123.stdout.log # Headscale server output logs
|
||||
├── hs-testname-abc123.db # Database snapshot for post-mortem
|
||||
├── hs-testname-abc123_metrics.txt # Prometheus metrics dump
|
||||
├── hs-testname-abc123-mapresponses/ # Protocol-level debug data
|
||||
├── ts-client-xyz789.stderr.log # Tailscale client error logs
|
||||
├── ts-client-xyz789.stdout.log # Tailscale client output logs
|
||||
└── ts-client-xyz789_status.json # Client network status dump
|
||||
```
|
||||
|
||||
### Log Analysis Priority Order
|
||||
When tests fail, examine artifacts in this specific order:
|
||||
|
||||
1. **Headscale server stderr logs** (`hs-*.stderr.log`): Look for errors, panics, database issues, policy evaluation failures
|
||||
2. **Tailscale client stderr logs** (`ts-*.stderr.log`): Check for authentication failures, network connectivity issues
|
||||
3. **MapResponse JSON files**: Protocol-level debugging for network map generation issues
|
||||
4. **Client status dumps** (`*_status.json`): Network state and peer connectivity information
|
||||
5. **Database snapshots** (`.db` files): For data consistency and state persistence issues
|
||||
|
||||
## Concurrent Execution and Run ID Isolation
|
||||
|
||||
### Overview
|
||||
|
||||
The integration test system supports running multiple tests concurrently on the same Docker daemon. Each test run is isolated through a unique Run ID that ensures containers, networks, and cleanup operations don't interfere with each other.
|
||||
|
||||
### Run ID Format and Usage
|
||||
|
||||
Each test run generates a unique Run ID in the format: `YYYYMMDD-HHMMSS-{6-char-hash}`
|
||||
- Example: `20260109-104215-mdjtzx`
|
||||
|
||||
The Run ID is used for:
|
||||
- **Container naming**: `ts-{runIDShort}-{version}-{hash}` (e.g., `ts-mdjtzx-1-74-fgdyls`)
|
||||
- **Docker labels**: All containers get `hi.run-id={runID}` label
|
||||
- **Log directories**: `control_logs/{runID}/`
|
||||
- **Cleanup isolation**: Only containers with matching run ID are cleaned up
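
The documented format is simple to generate and validate. A minimal sketch of such a generator (illustrative only; `newRunID` is not the actual `cmd/hi` implementation):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"time"
)

// newRunID builds an identifier in the documented format
// YYYYMMDD-HHMMSS-{6-char-hash}, e.g. 20260109-104215-mdjtzx.
// Sketch only: the real generator in cmd/hi may differ.
func newRunID() string {
	const letters = "abcdefghijklmnopqrstuvwxyz"
	b := make([]byte, 6)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	for i := range b {
		// Map each random byte onto a lowercase letter.
		b[i] = letters[int(b[i])%len(letters)]
	}
	return fmt.Sprintf("%s-%s", time.Now().Format("20060102-150405"), b)
}

func main() {
	fmt.Println(newRunID())
}
```

Because the timestamp sorts lexicographically, `control_logs/` entries naturally appear in chronological order.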

### Container Isolation Mechanisms

1. **Unique Container Names**: Each container includes the run ID for identification
2. **Docker Labels**: `hi.run-id` and `hi.test-type` labels on all containers
3. **Dynamic Port Allocation**: All ports use `{HostPort: "0"}` to let the kernel assign free ports
4. **Per-Run Networks**: Network names include the scenario hash for isolation
5. **Isolated Cleanup**: `killTestContainersByRunID()` only removes containers matching the run ID

### ⚠️ CRITICAL: Never Interfere with Other Test Runs

**FORBIDDEN OPERATIONS** when other tests may be running:

```bash
# ❌ NEVER do global container cleanup while tests are running
docker rm -f $(docker ps -q --filter "name=hs-")
docker rm -f $(docker ps -q --filter "name=ts-")

# ❌ NEVER kill all test containers
# This will destroy other agents' test sessions!

# ❌ NEVER prune all Docker resources during active tests
docker system prune -f # Only safe when NO tests are running
```

**SAFE OPERATIONS**:

```bash
# ✅ Clean up only YOUR test run's containers (by run ID)
# The test runner does this automatically via cleanup functions

# ✅ Clean stale (stopped/exited) containers only
# Pre-test cleanup only removes stopped containers, not running ones

# ✅ Check what's running before cleanup
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```

### Running Concurrent Tests

```bash
# Start multiple tests in parallel - each gets a unique run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &

# Monitor running test suites
docker ps --filter "name=headscale-test-suite" --format "table {{.Names}}\t{{.Status}}"
```

### Agent Session Isolation Rules

When working as an agent:

1. **Your run ID is unique**: Each test you start gets its own run ID
2. **Never clean up globally**: Only use run ID-specific cleanup
3. **Check before cleanup**: Verify no other tests are running if you need to prune resources
4. **Respect other sessions**: Other agents may have tests running concurrently
5. **Log directories are isolated**: Your artifacts are in `control_logs/{your-run-id}/`

### Identifying Your Containers

Your test containers can be identified by:
- The run ID in the container name
- The `hi.run-id` Docker label
- The test suite container: `headscale-test-suite-{your-run-id}`

```bash
# List containers for a specific run ID
docker ps --filter "label=hi.run-id=20260109-104215-mdjtzx"

# Get your run ID from the test output
# Look for: "Run ID: 20260109-104215-mdjtzx"
```

## Common Failure Patterns and Root Cause Analysis

### CRITICAL MINDSET: Code Issues vs Infrastructure Issues

**⚠️ IMPORTANT**: When tests fail, it is ALMOST ALWAYS a code issue in Headscale, NOT an infrastructure problem. Do not blame disk space, Docker, or timing until you have thoroughly investigated the actual error logs.

### Systematic Debugging Process

1. **Read the actual error message**: Don't assume - read the stderr logs completely
2. **Check Headscale server logs first**: Most issues originate from server-side logic
3. **Verify client connectivity**: Only after ruling out server issues
4. **Check timing patterns**: Use proper `EventuallyWithT` patterns
5. **Infrastructure as a last resort**: Only blame infrastructure after code analysis

### Real Failure Patterns

#### 1. Timing Issues (Common but fixable)
```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // WILL FAIL

// ✅ Correct: Wait for async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```

**Timeout Guidelines**:
- Route operations: 3-5 seconds
- Node state changes: 5-10 seconds
- Complex scenarios: 10-15 seconds
- Policy recalculation: 5-10 seconds

#### 2. NodeStore Synchronization Issues
Route advertisements must propagate through poll requests (`poll.go:420`). NodeStore updates happen at specific synchronization points after Hostinfo changes.

#### 3. Test Data Management Issues
```go
// ❌ Wrong: Assuming array ordering
require.Len(t, nodes[0].GetAvailableRoutes(), 1)

// ✅ Correct: Identify nodes by properties
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
	nodeIDStr := fmt.Sprintf("%d", node.GetId())
	if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
		// Test the specific node that should have the route
		_ = route
	}
}
```

#### 4. Database Backend Differences
SQLite and PostgreSQL have different timing characteristics:
- Use the `--postgres` flag for database-intensive tests
- PostgreSQL generally has more consistent timing
- Some race conditions only appear with a specific backend

## Resource Management and Cleanup

### Disk Space Management
Tests consume significant disk space (~100MB per run):
```bash
# Check available space before running tests
df -h

# Clean up test artifacts periodically
rm -rf control_logs/older-timestamp-dirs/

# Clean Docker resources
docker system prune -f
docker volume prune -f
```

### Container Cleanup
- Successful tests clean up automatically
- Failed tests may leave containers running
- Manually clean if needed: `docker ps -a` and `docker rm -f <containers>`

## Advanced Debugging Techniques

### Protocol-Level Debugging
MapResponse JSON files in `control_logs/*/hs-*-mapresponses/` contain:
- Network topology as sent to clients
- Peer relationships and visibility
- Route distribution and primary route selection
- Policy evaluation results

### Database State Analysis
Use the database snapshots for post-mortem analysis:
```bash
# SQLite examination
sqlite3 control_logs/TIMESTAMP/hs-*.db
.tables
.schema nodes
SELECT * FROM nodes WHERE name LIKE '%problematic%';
```

### Performance Analysis
Prometheus metrics dumps show:
- Request latencies and error rates
- NodeStore operation timing
- Database query performance
- Memory usage patterns
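
The `*_metrics.txt` dump is plain Prometheus text format, so quick checks need no tooling beyond splitting each sample into name and value. A small sketch (the metric name below is made up for illustration; it does not handle label values containing spaces):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMetric splits one Prometheus text-format sample line into the
// metric name (including its label set) and its numeric value.
func parseMetric(line string) (string, float64, error) {
	fields := strings.Fields(line)
	if len(fields) < 2 {
		return "", 0, fmt.Errorf("malformed sample: %q", line)
	}
	v, err := strconv.ParseFloat(fields[len(fields)-1], 64)
	return fields[0], v, err
}

func main() {
	name, v, err := parseMetric(`example_requests_total{code="200"} 42`)
	fmt.Println(name, v, err)
}
```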

## Test Development and Quality Guidelines

### Proper Test Patterns
```go
// Always use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Test condition that may take time to become true
}, timeout, interval, "descriptive failure message")

// Handle node identification correctly
var targetNode *v1.Node
for _, node := range nodes {
	if node.GetName() == expectedNodeName {
		targetNode = node
		break
	}
}
require.NotNil(t, targetNode, "should find expected node")
```

### Quality Validation Checklist
- ✅ Tests use `EventuallyWithT` for asynchronous operations
- ✅ Tests don't rely on array ordering for node identification
- ✅ Proper cleanup and resource management
- ✅ Tests handle both success and failure scenarios
- ✅ Timing assumptions are realistic for the operations being tested
- ✅ Error messages are descriptive and actionable

## Real-World Test Failure Patterns from HA Debugging

### Infrastructure vs Code Issues - Detailed Examples

**INFRASTRUCTURE FAILURES (Rare but Real)**:
1. **DNS Resolution in Auth Tests**: `failed to resolve "hs-pingallbyip-jax97k": no DNS fallback candidates remain`
   - **Pattern**: Client containers can't resolve the headscale server hostname during logout
   - **Detection**: Error messages specifically mention DNS/hostname resolution
   - **Solution**: Docker networking reset, not code changes

2. **Container Creation Timeouts**: Test gets stuck during client container setup
   - **Pattern**: Tests hang indefinitely at the container startup phase
   - **Detection**: No progress in logs for >2 minutes during initialization
   - **Solution**: `docker system prune -f` and retry

3. **Docker Resource Exhaustion**: Too many concurrent tests overwhelming the system
   - **Pattern**: Container creation timeouts, OOM kills, slow test execution
   - **Detection**: High system load, Docker daemon slow to respond
   - **Solution**: Reduce the number of concurrent tests; wait for completion before starting more

**CODE ISSUES (99% of failures)**:
1. **Route Approval Process Failures**: Routes not getting approved when they should be
   - **Pattern**: Tests expecting approved routes but finding none
   - **Detection**: `SubnetRoutes()` returns empty when `AnnouncedRoutes()` shows routes
   - **Root Cause**: Auto-approval logic bugs, policy evaluation issues

2. **NodeStore Synchronization Issues**: State updates not propagating correctly
   - **Pattern**: Route changes not reflected in NodeStore or Primary Routes
   - **Detection**: Logs show route announcements but no tracking updates
   - **Root Cause**: Missing synchronization points in the `poll.go:420` area

3. **HA Failover Architecture Issues**: Routes removed when nodes go offline
   - **Pattern**: `TestHASubnetRouterFailover` fails because approved routes disappear
   - **Detection**: Routes available on online nodes but lost when nodes disconnect
   - **Root Cause**: Conflating route approval with node connectivity

### Critical Test Environment Setup

**Pre-Test Cleanup**:

The test runner automatically handles cleanup:
- **Before a test**: Removes only stale (stopped/exited) containers - does NOT affect running tests
- **After a test**: Removes only containers belonging to the specific run ID

```bash
# Only clean old log directories if disk space is low
rm -rf control_logs/202507*
df -h # Verify sufficient disk space

# SAFE: Clean only stale/stopped containers (does not affect running tests)
# The test runner does this automatically via cleanupStaleTestContainers()

# ⚠️ DANGEROUS: Only use when NO tests are running
docker system prune -f
```

**Environment Verification**:
```bash
# Verify system readiness
go run ./cmd/hi doctor

# Check what tests are currently running (ALWAYS check before global cleanup)
docker ps --filter "name=headscale-test-suite" --format "{{.Names}}"
```

### Specific Test Categories and Known Issues

#### Route-Related Tests (Primary Focus)
```bash
# Core route functionality - these should work first
# Note: Generous timeouts are required for reliable execution
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s
go run ./cmd/hi run "TestAutoApproveMultiNetwork" --timeout=1800s
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s
```

**Common Route Test Patterns**:
- Tests validate route announcement, approval, and distribution workflows
- Route state changes are asynchronous - may need `EventuallyWithT` wrappers
- Route approval must respect ACL policies - test expectations encode security requirements
- HA tests verify route persistence during node connectivity changes

#### Authentication Tests (Infrastructure-Prone)
```bash
# These tests are more prone to infrastructure issues
# Require longer timeouts due to auth flow complexity
go run ./cmd/hi run "TestAuthKeyLogoutAndReloginSameUser" --timeout=1200s
go run ./cmd/hi run "TestAuthWebFlowLogoutAndRelogin" --timeout=1200s
go run ./cmd/hi run "TestOIDCExpireNodesBasedOnTokenExpiry" --timeout=1800s
```

**Common Auth Test Infrastructure Failures**:
- DNS resolution during logout operations
- Container creation timeouts
- HTTP/2 stream errors (often symptoms, not the root cause)

### Security-Critical Debugging Rules

**❌ FORBIDDEN CHANGES (Security & Test Integrity)**:
1. **Never change expected test outputs** - tests define correct behavior contracts
   - Changing `require.Len(t, routes, 3)` to `require.Len(t, routes, 2)` because the test fails
   - Modifying expected status codes, node counts, or route counts
   - Removing assertions that are "inconvenient"
   - **Why forbidden**: Test expectations encode business requirements and security policies

2. **Never bypass security mechanisms** - security must never be compromised for convenience
   - Using `AnnouncedRoutes()` instead of `SubnetRoutes()` in production code
   - Skipping authentication or authorization checks
   - **Why forbidden**: Security bypasses create vulnerabilities in production

3. **Never reduce test coverage** - tests prevent regressions
   - Removing test cases or assertions
   - Commenting out "problematic" test sections
   - **Why forbidden**: Reduced coverage allows bugs to slip through

**✅ ALLOWED CHANGES (Timing & Observability)**:
1. **Fix timing issues with proper async patterns**
   ```go
   // ✅ GOOD: Add EventuallyWithT for async operations
   require.EventuallyWithT(t, func(c *assert.CollectT) {
       nodes, err := headscale.ListNodes()
       assert.NoError(c, err)
       assert.Len(c, nodes, expectedCount) // Keep original expectation
   }, 10*time.Second, 100*time.Millisecond, "nodes should reach expected count")
   ```
   - **Why allowed**: Fixes race conditions without changing business logic

2. **Add MORE observability and debugging**
   - Additional logging statements
   - More detailed error messages
   - Extra assertions that verify intermediate states
   - **Why allowed**: Better observability helps debug without changing behavior

3. **Improve test documentation**
   - Add godoc comments explaining test purpose and business logic
   - Document timing requirements and async behavior
   - **Why encouraged**: Helps future maintainers understand intent

### Advanced Debugging Workflows

#### Route Tracking Debug Flow
```bash
# Run the test with detailed logging and a proper timeout
go run ./cmd/hi run "TestSubnetRouteACL" --timeout=1200s > test_output.log 2>&1

# Check the route approval process
grep -E "(auto-approval|ApproveRoutesWithPolicy|PolicyManager)" test_output.log

# Check route tracking
tail -50 control_logs/*/hs-*.stderr.log | grep -E "(announced|tracking|SetNodeRoutes)"

# Check for security violations
grep -E "(AnnouncedRoutes.*SetNodeRoutes|bypass.*approval)" test_output.log
```

#### HA Failover Debug Flow
```bash
# Test HA failover specifically with an adequate timeout
go run ./cmd/hi run "TestHASubnetRouterFailover" --timeout=1800s

# Check route persistence during disconnect
grep -E "(Disconnect|NodeWentOffline|PrimaryRoutes)" control_logs/*/hs-*.stderr.log

# Verify routes don't disappear inappropriately
grep -E "(removing.*routes|SetNodeRoutes.*empty)" control_logs/*/hs-*.stderr.log
```

### Test Result Interpretation Guidelines

#### Success Patterns to Look For
- `"updating node routes for tracking"` in logs
- Routes appearing in `announcedRoutes` logs
- Proper `ApproveRoutesWithPolicy` calls for auto-approval
- Routes persisting through node connectivity changes (HA tests)

#### Failure Patterns to Investigate
- `SubnetRoutes()` returning empty when `AnnouncedRoutes()` has routes
- Routes disappearing when nodes go offline (HA architectural issue)
- Missing `EventuallyWithT` causing timing race conditions
- Security bypass attempts using the wrong route methods

### Critical Testing Methodology

**Phase-Based Testing Approach**:
1. **Phase 1**: Core route tests (ACL, auto-approval, basic functionality)
2. **Phase 2**: HA and complex route scenarios
3. **Phase 3**: Auth tests (infrastructure-sensitive, test last)

**Per-Test Process**:
1. Clean the environment before each test
2. Monitor logs for route tracking and approval messages
3. Check artifacts in `control_logs/` if the test fails
4. Focus on actual error messages, not assumptions
5. Document results and patterns discovered

## Test Documentation and Code Quality Standards

### Adding Missing Test Documentation
When you understand a test's purpose through debugging, always add comprehensive godoc:

```go
// TestSubnetRoutes validates the complete subnet route lifecycle including
// advertisement from clients, policy-based approval, and distribution to peers.
// This test ensures that route security policies are properly enforced and that
// only approved routes are distributed to the network.
//
// The test verifies:
// - Route announcements are received and tracked
// - ACL policies control route approval correctly
// - Only approved routes appear in peer network maps
// - Route state persists correctly in the database
func TestSubnetRoutes(t *testing.T) {
	// Test implementation...
}
```

**Why add documentation**: Future maintainers need to understand the business logic and security requirements encoded in tests.

### Comment Guidelines - Focus on WHY, Not WHAT

```go
// ✅ GOOD: Explains reasoning and business logic
// Wait for route propagation because NodeStore updates are asynchronous
// and happen after poll requests complete processing
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Check that security policies are enforced...
}, timeout, interval, "route approval must respect ACL policies")

// ❌ BAD: Just describes what the code does
// Wait for routes
require.EventuallyWithT(t, func(c *assert.CollectT) {
	// Get routes and check length
}, timeout, interval, "checking routes")
```

**Why focus on WHY**: Helps maintainers understand architectural decisions and security requirements.

## EventuallyWithT Pattern for External Calls

### Overview
`EventuallyWithT` is a testing pattern for handling eventual consistency in distributed systems. In Headscale integration tests, many operations are asynchronous: clients advertise routes, the server processes them, and updates propagate through the network. `EventuallyWithT` lets tests wait for these operations to complete while making assertions.

### External Calls That Must Be Wrapped
The following operations are **external calls** that interact with the headscale server or tailscale clients and MUST be wrapped in `EventuallyWithT`:
- `headscale.ListNodes()` - queries server state
- `client.Status()` - gets client network status
- `client.Curl()` - makes HTTP requests through the network
- `client.Traceroute()` - performs network diagnostics
- `client.Execute()` when running commands that query state
- Any operation that reads from the headscale server or a tailscale client
### Five Key Rules for EventuallyWithT
|
||||
|
||||
1. **One External Call Per EventuallyWithT Block**
|
||||
- Each EventuallyWithT should make ONE external call (e.g., ListNodes OR Status)
|
||||
- Related assertions based on that single call can be grouped together
|
||||
- Unrelated external calls must be in separate EventuallyWithT blocks
|
||||
|
||||
2. **Variable Scoping**
|
||||
- Declare variables that need to be shared across EventuallyWithT blocks at function scope
|
||||
- Use `=` for assignment inside EventuallyWithT, not `:=` (unless the variable is only used within that block)
|
||||
- Variables declared with `:=` inside EventuallyWithT are not accessible outside
|
||||
|
||||
3. **No Nested EventuallyWithT**
|
||||
- NEVER put an EventuallyWithT inside another EventuallyWithT
|
||||
- This is a critical anti-pattern that must be avoided
|
||||
|
||||
4. **Use CollectT for Assertions**
|
||||
- Inside EventuallyWithT, use `assert` methods with the CollectT parameter
|
||||
- Helper functions called within EventuallyWithT must accept `*assert.CollectT`
|
||||
|
||||
5. **Descriptive Messages**
|
||||
- Always provide a descriptive message as the last parameter
|
||||
- Message should explain what condition is being waited for
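
Rule 2 is the one that most often bites in practice. The shadowing behavior can be demonstrated without any test infrastructure, using plain closures as stand-ins for the EventuallyWithT callback (`fetchNodes` is a made-up placeholder for an external call):

```go
package main

import "fmt"

func fetchNodes() ([]string, error) { return []string{"router", "client"}, nil }

func main() {
	var nodes []string
	var err error

	// Using = assigns to the function-scoped variables, so the result
	// is still visible after the closure returns - the correct pattern.
	func() {
		nodes, err = fetchNodes()
	}()
	fmt.Println(len(nodes), err) // the outer slice was populated

	var shadowed []string
	func() {
		// := declares NEW variables scoped to this closure,
		// silently shadowing the outer ones.
		shadowed, err := fetchNodes()
		_, _ = shadowed, err
	}()
	fmt.Println(len(shadowed)) // still 0: the outer slice was never assigned
}
```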

### Correct Pattern Examples

```go
// CORRECT: Single external call with related assertions
var nodes []*v1.Node
var err error

assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err = headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes, 2)
	// These assertions are all based on the ListNodes() call
	requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
	requireNodeRouteCountWithCollect(c, nodes[1], 1, 1, 1)
}, 10*time.Second, 500*time.Millisecond, "nodes should have expected route counts")

// CORRECT: Separate EventuallyWithT for a different external call
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	status, err := client.Status()
	assert.NoError(c, err)
	// All these assertions are based on the single Status() call
	for _, peerKey := range status.Peers() {
		peerStatus := status.Peer[peerKey]
		requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
	}
}, 10*time.Second, 500*time.Millisecond, "client should see expected routes")

// CORRECT: Variable scoping for sharing between blocks
var routeNode *v1.Node
var nodeKey key.NodePublic

// First EventuallyWithT to get the node
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)

	for _, node := range nodes {
		if node.GetName() == "router" {
			routeNode = node
			nodeKey, _ = key.ParseNodePublicUntyped(mem.S(node.GetNodeKey()))
			break
		}
	}
	assert.NotNil(c, routeNode, "should find router node")
}, 10*time.Second, 100*time.Millisecond, "router node should exist")

// Second EventuallyWithT using the nodeKey from the first block
assert.EventuallyWithT(t, func(c *assert.CollectT) {
	status, err := client.Status()
	assert.NoError(c, err)

	peerStatus, ok := status.Peer[nodeKey]
	assert.True(c, ok, "peer should exist in status")
	requirePeerSubnetRoutesWithCollect(c, peerStatus, expectedPrefixes)
}, 10*time.Second, 100*time.Millisecond, "routes should be visible to client")
```
|
||||
|
||||
### Incorrect Patterns to Avoid

```go
// INCORRECT: Multiple unrelated external calls in the same EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    // First external call
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Len(c, nodes, 2)

    // Second unrelated external call - WRONG!
    status, err := client.Status()
    assert.NoError(c, err)
    assert.NotNil(c, status)
}, 10*time.Second, 500*time.Millisecond, "mixed operations")

// INCORRECT: Nested EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)

    // NEVER do this!
    assert.EventuallyWithT(t, func(c2 *assert.CollectT) {
        status, _ := client.Status()
        assert.NotNil(c2, status)
    }, 5*time.Second, 100*time.Millisecond, "nested")
}, 10*time.Second, 500*time.Millisecond, "outer")

// INCORRECT: Variable scoping error
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes() // This shadows the outer 'nodes' variable
    assert.NoError(c, err)
}, 10*time.Second, 500*time.Millisecond, "get nodes")

// This will fail - nodes is nil because := created a new variable inside the block
require.Len(t, nodes, 2) // COMPILATION ERROR or nil pointer

// INCORRECT: Not wrapping external calls
nodes, err := headscale.ListNodes() // External call not wrapped!
require.NoError(t, err)
```

### Helper Functions for EventuallyWithT

When creating helper functions for use within EventuallyWithT:

```go
// Helper function that accepts CollectT
func requireNodeRouteCountWithCollect(c *assert.CollectT, node *v1.Node, available, approved, primary int) {
    assert.Len(c, node.GetAvailableRoutes(), available, "available routes for node %s", node.GetName())
    assert.Len(c, node.GetApprovedRoutes(), approved, "approved routes for node %s", node.GetName())
    assert.Len(c, node.GetPrimaryRoutes(), primary, "primary routes for node %s", node.GetName())
}

// Usage within EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    requireNodeRouteCountWithCollect(c, nodes[0], 2, 2, 2)
}, 10*time.Second, 500*time.Millisecond, "route counts should match expected")
```

### Operations That Must NOT Be Wrapped

**CRITICAL**: The following operations are **blocking/mutating operations** that change state and MUST NOT be wrapped in EventuallyWithT:

- `tailscale set` commands (e.g., `--advertise-routes`, `--accept-routes`)
- `headscale.ApproveRoute()` - Approves routes on the server
- `headscale.CreateUser()` - Creates users
- `headscale.CreatePreAuthKey()` - Creates authentication keys
- `headscale.RegisterNode()` - Registers new nodes
- Any `client.Execute()` that modifies configuration
- Any operation that creates, updates, or deletes resources

These operations:

1. Complete synchronously or fail immediately
2. Should not be retried automatically
3. Need explicit error handling with `require.NoError()`

### Correct Pattern for Blocking Operations

```go
// CORRECT: Blocking operation NOT wrapped
status := client.MustStatus()
command := []string{"tailscale", "set", "--advertise-routes=" + expectedRoutes[string(status.Self.ID)]}
_, _, err = client.Execute(command)
require.NoErrorf(t, err, "failed to advertise route: %s", err)

// Then wait for the result with EventuallyWithT
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)
    assert.Contains(c, nodes[0].GetAvailableRoutes(), expectedRoutes[string(status.Self.ID)])
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")

// INCORRECT: Blocking operation wrapped (DON'T DO THIS)
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    _, _, err = client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
    assert.NoError(c, err) // This might retry the command multiple times!
}, 10*time.Second, 100*time.Millisecond, "advertise routes")
```

### Assert vs Require Pattern

When working within EventuallyWithT blocks where you need to prevent panics:

```go
assert.EventuallyWithT(t, func(c *assert.CollectT) {
    nodes, err := headscale.ListNodes()
    assert.NoError(c, err)

    // For array bounds - use require with t to prevent a panic
    assert.Len(c, nodes, 6) // Test expectation
    require.GreaterOrEqual(t, len(nodes), 3, "need at least 3 nodes to avoid panic")

    // For nil pointer access - use require with t before dereferencing
    assert.NotNil(c, srs1PeerStatus.PrimaryRoutes) // Test expectation
    require.NotNil(t, srs1PeerStatus.PrimaryRoutes, "primary routes must be set to avoid panic")
    assert.Contains(c,
        srs1PeerStatus.PrimaryRoutes.AsSlice(),
        pref,
    )
}, 5*time.Second, 200*time.Millisecond, "checking route state")
```

**Key Principle**:

- Use `assert` with `c` (*assert.CollectT) for test expectations that can be retried
- Use `require` with `t` (*testing.T) for MUST conditions that prevent panics
- Within EventuallyWithT, both are available - choose based on whether failure would cause a panic

### Common Scenarios

1. **Waiting for route advertisement**:

   ```go
   client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})

   assert.EventuallyWithT(t, func(c *assert.CollectT) {
       nodes, err := headscale.ListNodes()
       assert.NoError(c, err)
       assert.Contains(c, nodes[0].GetAvailableRoutes(), "10.0.0.0/24")
   }, 10*time.Second, 100*time.Millisecond, "route should be advertised")
   ```

2. **Checking client sees routes**:

   ```go
   assert.EventuallyWithT(t, func(c *assert.CollectT) {
       status, err := client.Status()
       assert.NoError(c, err)

       // Check all peers have expected routes
       for _, peerKey := range status.Peers() {
           peerStatus := status.Peer[peerKey]
           assert.Contains(c, peerStatus.AllowedIPs, expectedPrefix)
       }
   }, 10*time.Second, 100*time.Millisecond, "all peers should see route")
   ```

3. **Sequential operations**:

   ```go
   // First wait for the node to appear
   var nodeID uint64
   assert.EventuallyWithT(t, func(c *assert.CollectT) {
       nodes, err := headscale.ListNodes()
       assert.NoError(c, err)
       assert.Len(c, nodes, 1)
       nodeID = nodes[0].GetId()
   }, 10*time.Second, 100*time.Millisecond, "node should register")

   // Then perform the operation
   _, err := headscale.ApproveRoute(nodeID, "10.0.0.0/24")
   require.NoError(t, err)

   // Then wait for the result
   assert.EventuallyWithT(t, func(c *assert.CollectT) {
       nodes, err := headscale.ListNodes()
       assert.NoError(c, err)
       assert.Contains(c, nodes[0].GetApprovedRoutes(), "10.0.0.0/24")
   }, 10*time.Second, 100*time.Millisecond, "route should be approved")
   ```

## Your Core Responsibilities

1. **Test Execution Strategy**: Execute integration tests with appropriate configurations, understanding when to use `--postgres` and the timing requirements of different test categories. Follow a phase-based testing approach that prioritizes route tests.
   - **Why this priority**: Route tests are less infrastructure-sensitive and validate core security logic

2. **Systematic Test Analysis**: When tests fail, systematically examine artifacts starting with the Headscale server logs, then client logs, then protocol data. Focus on CODE ISSUES first (99% of cases), not infrastructure. Use real-world failure patterns to guide the investigation.
   - **Why this approach**: Most failures are logic bugs, not environment issues - efficient debugging saves time

3. **Timing & Synchronization Expertise**: Understand asynchronous Headscale operations, particularly route advertisements, NodeStore synchronization at `poll.go:420`, and policy propagation. Fix timing with `EventuallyWithT` while preserving the original test expectations.
   - **Why preserve expectations**: Test assertions encode business requirements and security policies
   - **Key Pattern**: Apply the EventuallyWithT pattern correctly for all external calls as documented above

4. **Root Cause Analysis**: Distinguish between actual code regressions (route approval logic, HA failover architecture), timing issues requiring `EventuallyWithT` patterns, and genuine infrastructure problems (DNS, Docker, container issues).
   - **Why this distinction matters**: Different problem types require completely different solution approaches
   - **EventuallyWithT Issues**: Often manifest as flaky tests or immediate assertion failures after async operations

5. **Security-Aware Quality Validation**: Ensure tests properly validate end-to-end functionality with realistic timing expectations and proper error handling. Never suggest security bypasses or test expectation changes. Add comprehensive godoc when you understand the test's business logic.
   - **Why the security focus**: Integration tests are the last line of defense against security regressions
   - **EventuallyWithT Usage**: Proper use prevents race conditions without weakening security assertions

6. **Concurrent Execution Awareness**: Respect run ID isolation and never interfere with other agents' test sessions. Each test run has a unique run ID - only clean up YOUR containers (by run ID label), and never perform global cleanup while tests may be running.
   - **Why this matters**: Multiple agents/users may run tests concurrently on the same Docker daemon
   - **Key Rule**: NEVER use global container cleanup commands - the test runner handles cleanup automatically per run ID

**CRITICAL PRINCIPLE**: Test expectations are sacred contracts that define correct system behavior. When tests fail, fix the code to match the test; never change the test to match broken code. Only timing and observability improvements are allowed - business logic expectations are immutable.

**ISOLATION PRINCIPLE**: Each test run is isolated by its unique Run ID. Never interfere with other test sessions. The system handles cleanup automatically - manual global cleanup commands are forbidden when other tests may be running.

**EventuallyWithT PRINCIPLE**: Every external call to the headscale server or a tailscale client must be wrapped in EventuallyWithT. Follow the five key rules strictly: one external call per block, proper variable scoping, no nesting, use CollectT for assertions, and provide descriptive messages.

**Remember**: Test failures are usually code issues in Headscale that need to be fixed, not infrastructure problems to be ignored. Use the specific debugging workflows and failure patterns documented above to efficiently identify root causes. Infrastructure issues have very specific signatures - everything else is code-related.

.github/workflows/container-main.yml (vendored, 112 changed lines)
@@ -1,112 +0,0 @@
---
name: Build (main)

on:
  push:
    branches:
      - main
    paths:
      - "*.nix"
      - "go.*"
      - "**/*.go"
      - ".github/workflows/container-main.yml"
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.sha }}
  cancel-in-progress: true

jobs:
  container:
    if: github.repository == 'juanfont/headscale'
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read
    steps:
      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

      - name: Login to DockerHub
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Login to GHCR
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
        with:
          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
            '**/flake.lock') }}
          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

      - name: Set commit timestamp
        run: echo "SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)" >> $GITHUB_ENV

      - name: Build and push to GHCR
        env:
          KO_DOCKER_REPO: ghcr.io/juanfont/headscale
          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
          CGO_ENABLED: "0"
        run: |
          nix develop --command -- ko build \
            --bare \
            --platform=linux/amd64,linux/arm64 \
            --tags=main-${GITHUB_SHA::7} \
            ./cmd/headscale

      - name: Push to Docker Hub
        env:
          KO_DOCKER_REPO: headscale/headscale
          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
          CGO_ENABLED: "0"
        run: |
          nix develop --command -- ko build \
            --bare \
            --platform=linux/amd64,linux/arm64 \
            --tags=main-${GITHUB_SHA::7} \
            ./cmd/headscale

  binaries:
    if: github.repository == 'juanfont/headscale'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - goos: linux
            goarch: amd64
          - goos: linux
            goarch: arm64
          - goos: darwin
            goarch: amd64
          - goos: darwin
            goarch: arm64
    steps:
      - name: Checkout
        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1

      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
        with:
          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
            '**/flake.lock') }}
          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

      - name: Build binary
        env:
          CGO_ENABLED: "0"
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goarch }}
        run: nix develop --command -- go build -o headscale ./cmd/headscale

      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: headscale-${{ matrix.goos }}-${{ matrix.goarch }}
          path: headscale

@@ -66,7 +66,6 @@ func findTests() []string {
	}

	args := []string{
		"--type", "go",
		"--regexp", "func (Test.+)\\(.*",
		"../../integration/",
		"--replace", "$1",

@@ -16,7 +16,7 @@ on:

jobs:
  test:
    runs-on: ubuntu-24.04-arm
    runs-on: ubuntu-latest
    env:
      # Github does not allow us to access secrets in pull requests,
      # so this env var is used to check if we have the secret or not.

.github/workflows/test-integration.yaml (vendored, 7 changed lines)
@@ -12,7 +12,7 @@ jobs:
  # sqlite: Runs all integration tests with SQLite backend.
  # postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
  build:
    runs-on: ubuntu-24.04-arm
    runs-on: ubuntu-latest
    outputs:
      files-changed: ${{ steps.changed-files.outputs.files }}
    steps:

@@ -119,7 +119,7 @@ jobs:
          path: tailscale-head-image.tar.gz
          retention-days: 10
  build-postgres:
    runs-on: ubuntu-24.04-arm
    runs-on: ubuntu-latest
    needs: build
    if: needs.build.outputs.files-changed == 'true'
    steps:

@@ -233,8 +233,6 @@ jobs:
          - TestNodeOnlineStatus
          - TestPingAllByIPManyUpDown
          - Test2118DeletingOnlineNodePanics
          - TestGrantCapRelay
          - TestGrantCapDrive
          - TestEnablingRoutes
          - TestHASubnetRouterFailover
          - TestSubnetRouteACL

@@ -248,7 +246,6 @@ jobs:
          - TestAutoApproveMultiNetwork/webauth-user.*
          - TestAutoApproveMultiNetwork/webauth-group.*
          - TestSubnetRouteACLFiltering
          - TestGrantViaSubnetSteering
          - TestHeadscale
          - TestTailscaleNodesJoiningHeadcale
          - TestSSHOneUserToAll

.gitignore (vendored, 1 changed line)
@@ -29,7 +29,6 @@ config*.yaml
!config-example.yaml
derp.yaml
*.hujson
!hscontrol/policy/v2/testdata/*/*.hujson
*.key
/db.sqlite
*.sqlite3

@@ -27,6 +27,8 @@ builds:
      - linux_arm64
    flags:
      - -mod=readonly
    tags:
      - ts2019

archives:
  - id: golang-cross

CHANGELOG.md (48 changed lines)
@@ -15,8 +15,7 @@ overall our implementation was very close.

SSH rules with `"action": "check"` are now supported. When a client initiates a SSH connection to a node
with a `check` action policy, the user is prompted to authenticate via OIDC or CLI approval before access
is granted. OIDC approval requires the authenticated user to own the source node; tagged source nodes
cannot use SSH check-mode.
is granted.

A new `headscale auth` CLI command group supports the approval flow:

@@ -25,28 +24,12 @@ A new `headscale auth` CLI command group supports the approval flow:
- `headscale auth register --auth-id <id> --user <user>` registers a node (replaces deprecated `headscale nodes register`)

[#1850](https://github.com/juanfont/headscale/pull/1850)
[#3180](https://github.com/juanfont/headscale/pull/3180)

### Grants

We now support [Tailscale grants](https://tailscale.com/kb/1324/grants) alongside ACLs. Grants
extend what you can express in a policy beyond packet filtering: the `app` field controls
application-level features like Taildrive file sharing and peer relay, and the `via` field steers
traffic through specific tagged subnet routers or exit nodes. The `ip` field works like an ACL rule.
Grants can be mixed with ACLs in the same policy file.
[#2180](https://github.com/juanfont/headscale/pull/2180)

As part of this, we added `autogroup:danger-all`. It resolves to `0.0.0.0/0` and `::/0` — all IP
addresses, including those outside the tailnet. This replaces the old behaviour where `*` matched
all IPs (see BREAKING below). The name is intentionally scary: accepting traffic from the entire
internet is a security-sensitive choice. `autogroup:danger-all` can only be used as a source.

### BREAKING

- **ACL Policy**: Wildcard (`*`) in ACL sources and destinations now resolves to Tailscale's CGNAT range (`100.64.0.0/10`) and ULA range (`fd7a:115c:a1e0::/48`) instead of all IPs (`0.0.0.0/0` and `::/0`) [#3036](https://github.com/juanfont/headscale/pull/3036)
  - This better matches Tailscale's security model where `*` means "any node in the tailnet" rather than "any IP address"
  - Policies that need to match all IP addresses including non-Tailscale IPs should use `autogroup:danger-all` as a source, or explicit CIDR ranges as destinations [#2180](https://github.com/juanfont/headscale/pull/2180)
  - `autogroup:danger-all` can only be used as a source; it cannot be used as a destination
  - Policies relying on wildcard to match non-Tailscale IPs will need to use explicit CIDR ranges instead
  - **Note**: Users with non-standard IP ranges configured in `prefixes.ipv4` or `prefixes.ipv6` (which is unsupported and produces a warning) will need to explicitly specify their CIDR ranges in ACL rules instead of using `*`
- **ACL Policy**: Validate autogroup:self source restrictions matching Tailscale behavior - tags, hosts, and IPs are rejected as sources for autogroup:self destinations [#3036](https://github.com/juanfont/headscale/pull/3036)
  - Policies using tags, hosts, or IP addresses as sources for autogroup:self destinations will now fail validation

@@ -62,14 +45,6 @@ internet is a security-sensitive choice. `autogroup:danger-all` can only be used

### Changes

- **OIDC registration**: Add a confirmation page before completing node registration, showing the device hostname and machine key fingerprint [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Debug endpoints**: Omit secret fields (`Pass`, `ClientSecret`, `APIKey`) from `/debug/config` JSON output [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Debug endpoints**: Route `statsviz` through `tsweb.Protected` [#3180](https://github.com/juanfont/headscale/pull/3180)
- Remove gRPC reflection from the remote (TCP) server [#3180](https://github.com/juanfont/headscale/pull/3180)
- **Node Expiry**: Add `node.expiry` configuration option to set a default node key expiry for nodes registered via auth key [#3122](https://github.com/juanfont/headscale/pull/3122)
  - Tagged nodes (registered with tagged pre-auth keys) are exempt from default expiry
  - `oidc.expiry` has been removed; use `node.expiry` instead (applies to all registration methods including OIDC)
  - `ephemeral_node_inactivity_timeout` is deprecated in favour of `node.ephemeral.inactivity_timeout`
- **SSH Policy**: Add support for `localpart:*@<domain>` in SSH rule `users` field, mapping each matching user's email local-part as their OS username [#3091](https://github.com/juanfont/headscale/pull/3091)
- **ACL Policy**: Add ICMP and IPv6-ICMP protocols to default filter rules when no protocol is specified [#3036](https://github.com/juanfont/headscale/pull/3036)
- **ACL Policy**: Fix autogroup:self handling for tagged nodes - tagged nodes no longer incorrectly receive autogroup:self filter rules [#3036](https://github.com/juanfont/headscale/pull/3036)

@@ -83,25 +58,6 @@ internet is a security-sensitive choice. `autogroup:danger-all` can only be used

- Deprecate `headscale nodes register --key` in favour of `headscale auth register --auth-id` [#1850](https://github.com/juanfont/headscale/pull/1850)
- Generalise auth templates into reusable `AuthSuccess` and `AuthWeb` components [#1850](https://github.com/juanfont/headscale/pull/1850)
- Unify auth pipeline with `AuthVerdict` type, supporting registration, reauthentication, and SSH checks [#1850](https://github.com/juanfont/headscale/pull/1850)
- Add support for policy grants with `ip`, `app`, and `via` fields [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add `autogroup:danger-all` as a source-only autogroup resolving to all IP addresses [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add capability grants for Taildrive (`cap/drive`) and peer relay (`cap/relay`) with automatic companion capabilities [#2180](https://github.com/juanfont/headscale/pull/2180)
- Add per-viewer via route steering — grants with `via` tags control which subnet router or exit node handles traffic for each group of viewers [#2180](https://github.com/juanfont/headscale/pull/2180)
- Enable Taildrive node attributes on all nodes; actual access is controlled by `cap/drive` grants [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix exit nodes incorrectly receiving filter rules for destinations that only overlap via exit routes [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix address-based aliases (hosts, raw IPs) incorrectly expanding to include the matching node's other address family [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix identity-based aliases (tags, users, groups) resolving to IPv4 only; they now include both IPv4 and IPv6 matching Tailscale behavior [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix wildcard (`*`) source in ACLs now using actually-approved subnet routes instead of autoApprover policy prefixes [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix non-wildcard source IPs being dropped when combined with wildcard `*` in the same ACL rule [#2180](https://github.com/juanfont/headscale/pull/2180)
- Fix exit node approval not triggering filter rule recalculation for peers [#2180](https://github.com/juanfont/headscale/pull/2180)
- Policy validation error messages now include field context (e.g., `src=`, `dst=`) and are more descriptive [#2180](https://github.com/juanfont/headscale/pull/2180)
- Remove old migrations for the debian package [#3185](https://github.com/juanfont/headscale/pull/3185)

## 0.28.1 (202x-xx-xx)

### Changes

- **User deletion**: Fix `DestroyUser` deleting all pre-auth keys in the database instead of only the target user's keys [#3155](https://github.com/juanfont/headscale/pull/3155)

## 0.28.0 (2026-02-04)

@@ -1,6 +1,6 @@
# For testing purposes only

FROM golang:1.26.2-alpine AS build-env
FROM golang:1.26.1-alpine AS build-env

WORKDIR /go/src

@@ -4,7 +4,7 @@
# This Dockerfile is more or less lifted from tailscale/tailscale
# to ensure a similar build process when testing the HEAD of tailscale.

FROM golang:1.26.2-alpine AS build-env
FROM golang:1.26.1-alpine AS build-env

WORKDIR /go/src

@@ -65,12 +65,6 @@ Please have a look at the [`documentation`](https://headscale.net/stable/).

For NixOS users, a module is available in [`nix/`](./nix/).

## Builds from `main`

Development builds from the `main` branch are available as container images and
binaries. See the [development builds](https://headscale.net/stable/setup/install/main/)
documentation for details.

## Talks

- Fosdem 2026 (video): [Headscale & Tailscale: The complementary open source clone](https://fosdem.org/2026/schedule/event/KYQ3LL-headscale-the-complementary-open-source-clone/)

@@ -28,7 +28,7 @@ func bypassDatabase() (*db.HSDatabase, error) {
		return nil, fmt.Errorf("loading config: %w", err)
	}

	d, err := db.NewHeadscaleDatabase(cfg)
	d, err := db.NewHeadscaleDatabase(cfg, nil)
	if err != nil {
		return nil, fmt.Errorf("opening database: %w", err)
	}

cmd/hi/README.md (266 changed lines)
@@ -1,262 +1,6 @@
|
||||
# hi — Headscale Integration test runner
|
||||
# hi
|
||||
|
||||
`hi` wraps Docker container orchestration around the tests in
|
||||
[`../../integration`](../../integration) and extracts debugging artefacts
|
||||
(logs, database snapshots, MapResponse protocol captures) for post-mortem
|
||||
analysis.
|
||||
|
||||
**Read this file in full before running any `hi` command.** The test
|
||||
runner has sharp edges — wrong flags produce stale containers, lost
|
||||
artefacts, or hung CI.
|
||||
|
||||
For test-authoring patterns (scenario setup, `EventuallyWithT`,
|
||||
`IntegrationSkip`, helper variants), read
|
||||
[`../../integration/README.md`](../../integration/README.md).
|
||||
|
||||
## Quick Start
|
||||
|
||||
```bash
|
||||
# Verify system requirements (Docker, Go, disk space, images)
|
||||
go run ./cmd/hi doctor
|
||||
|
||||
# Run a single test (the default flags are tuned for development)
|
||||
go run ./cmd/hi run "TestPingAllByIP"
|
||||
|
||||
# Run a database-heavy test against PostgreSQL
|
||||
go run ./cmd/hi run "TestExpireNode" --postgres
|
||||
|
||||
# Pattern matching
|
||||
go run ./cmd/hi run "TestSubnet*"
|
||||
```
|
||||
|
||||
Run `doctor` before the first `run` in any new environment. Tests
|
||||
generate ~100 MB of logs per run in `control_logs/`; `doctor` verifies
|
||||
there is enough space and that the required Docker images are available.
|
||||
|
||||
## Commands
|
||||
|
||||
| Command | Purpose |
|
||||
| ------------------ | ---------------------------------------------------- |
|
||||
| `run [pattern]` | Execute the test(s) matching `pattern` |
|
||||
| `doctor` | Verify system requirements |
|
||||
| `clean networks` | Prune unused Docker networks |
|
||||
| `clean images` | Clean old test images |
|
||||
| `clean containers` | Kill **all** test containers (dangerous — see below) |
|
||||
| `clean cache` | Clean Go module cache volume |
|
||||
| `clean all` | Run all cleanup operations |
|
## Flags

Defaults are tuned for single-test development runs. Review before
changing.

| Flag                | Default        | Purpose                                                                     |
| ------------------- | -------------- | --------------------------------------------------------------------------- |
| `--timeout`         | `120m`         | Total test timeout. Use the built-in flag — never wrap with bash `timeout`. |
| `--postgres`        | `false`        | Use PostgreSQL instead of SQLite                                            |
| `--failfast`        | `true`         | Stop on first test failure                                                  |
| `--go-version`      | auto           | Detected from `go.mod` (currently 1.26.1)                                   |
| `--clean-before`    | `true`         | Clean stale (stopped/exited) containers before starting                     |
| `--clean-after`     | `true`         | Clean this run's containers after completion                                |
| `--keep-on-failure` | `false`        | Preserve containers for manual inspection on failure                        |
| `--logs-dir`        | `control_logs` | Where to save run artefacts                                                 |
| `--verbose`         | `false`        | Verbose output                                                              |
| `--stats`           | `false`        | Collect container resource-usage stats                                      |
| `--hs-memory-limit` | `0`            | Fail if any headscale container exceeds N MB (0 = disabled)                 |
| `--ts-memory-limit` | `0`            | Fail if any tailscale container exceeds N MB                                |

### Timeout guidance

The default `120m` is generous for a single test. If you must tune it,
these are realistic floors by category:

| Test type                 | Minimum     | Examples                              |
| ------------------------- | ----------- | ------------------------------------- |
| Basic functionality / CLI | 900s (15m)  | `TestPingAllByIP`, `TestCLI*`         |
| Route / ACL               | 1200s (20m) | `TestSubnet*`, `TestACL*`             |
| HA / failover             | 1800s (30m) | `TestHASubnetRouter*`                 |
| Long-running              | 2100s (35m) | `TestNodeOnlineStatus` (~12 min body) |
| Full suite                | 45m         | `go test ./integration -timeout 45m`  |

**Never** use the shell `timeout` command around `hi`. It kills the
process mid-cleanup and leaves stale containers:

```bash
timeout 300 go run ./cmd/hi run "TestName"     # WRONG — orphaned containers
go run ./cmd/hi run "TestName" --timeout=900s  # correct
```
## Concurrent Execution

Multiple `hi run` invocations can run simultaneously on the same Docker
daemon. Each invocation gets a unique **Run ID** (format
`YYYYMMDD-HHMMSS-6charhash`, e.g. `20260409-104215-mdjtzx`).

- **Container names** include the short run ID: `ts-mdjtzx-1-74-fgdyls`
- **Docker labels**: `hi.run-id={runID}` on every container
- **Port allocation**: dynamic — kernel assigns free ports, no conflicts
- **Cleanup isolation**: each run cleans only its own containers
- **Log directories**: `control_logs/{runID}/`

```bash
# Start three tests in parallel — each gets its own run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
```

### Safety rules for concurrent runs

- ✅ Your run cleans only containers labelled with its own `hi.run-id`
- ✅ `--clean-before` removes only stopped/exited containers
- ❌ **Never** run `docker rm -f $(docker ps -q --filter name=hs-)` —
  this destroys other agents' live test sessions
- ❌ **Never** run `docker system prune -f` while any tests are running
- ❌ **Never** run `hi clean containers` / `hi clean all` while other
  tests are running — both kill all test containers on the daemon

To identify your own containers:

```bash
docker ps --filter "label=hi.run-id=20260409-104215-mdjtzx"
```

The run ID appears at the top of the `hi run` output — copy it from
there rather than trying to reconstruct it.
## Artefacts

Every run saves debugging artefacts under `control_logs/{runID}/`:

```
control_logs/20260409-104215-mdjtzx/
├── hs-<test>-<hash>.stderr.log      # headscale server errors
├── hs-<test>-<hash>.stdout.log      # headscale server output
├── hs-<test>-<hash>.db              # database snapshot (SQLite)
├── hs-<test>-<hash>_metrics.txt     # Prometheus metrics dump
├── hs-<test>-<hash>-mapresponses/   # MapResponse protocol captures
├── ts-<client>-<hash>.stderr.log    # tailscale client errors
├── ts-<client>-<hash>.stdout.log    # tailscale client output
└── ts-<client>-<hash>_status.json   # client network-status dump
```

Artefacts persist after cleanup. Old runs accumulate fast — delete
unwanted directories to reclaim disk.
## Debugging workflow

When a test fails, read the artefacts **in this order**:

1. **`hs-*.stderr.log`** — headscale server errors, panics, policy
   evaluation failures. Most issues originate server-side.

   ```bash
   grep -E "ERROR|panic|FATAL" control_logs/*/hs-*.stderr.log
   ```

2. **`ts-*.stderr.log`** — authentication failures, connectivity issues,
   DNS resolution problems on the client side.

3. **MapResponse JSON** in `hs-*-mapresponses/` — protocol-level
   debugging for network map generation, peer visibility, route
   distribution, policy evaluation results.

   ```bash
   ls control_logs/*/hs-*-mapresponses/
   jq '.Peers[] | {Name, Tags, PrimaryRoutes}' \
     control_logs/*/hs-*-mapresponses/001.json
   ```

4. **`*_status.json`** — client peer-connectivity state.
5. **`hs-*.db`** — SQLite snapshot for post-mortem consistency checks.

   ```bash
   sqlite3 control_logs/<runID>/hs-*.db
   sqlite> .tables
   sqlite> .schema nodes
   sqlite> SELECT id, hostname, user_id, tags FROM nodes WHERE hostname LIKE '%problematic%';
   ```

6. **`*_metrics.txt`** — Prometheus dumps for latency, NodeStore
   operation timing, database query performance, memory usage.
## Heuristic: infrastructure vs code

**Before blaming Docker, disk, or network: read `hs-*.stderr.log` in
full.** In practice, well over 99% of failures are code bugs (policy
evaluation, NodeStore sync, route approval) rather than infrastructure.

Actual infrastructure failures have signature error messages:

| Signature                                                       | Cause                     | Fix                                                           |
| --------------------------------------------------------------- | ------------------------- | ------------------------------------------------------------- |
| `failed to resolve "hs-...": no DNS fallback candidates remain` | Docker DNS                | Reset Docker networking                                       |
| `container creation timeout`, no progress >2 min                | Resource exhaustion       | `docker system prune -f` (when no other tests running), retry |
| OOM kills, slow Docker daemon                                   | Too many concurrent tests | Reduce concurrency, wait for completion                       |
| `no space left on device`                                       | Disk full                 | Delete old `control_logs/`                                    |

If you don't see a signature error, **assume it's a code regression** —
do not retry hoping the flake goes away.
## Common failure patterns (code bugs)

### Route advertisement timing

The test asserts route state before the client has finished propagating
its Hostinfo update. Symptom: `nodes[0].GetAvailableRoutes()` is empty
when the test expects a route.

- **Wrong fix**: `time.Sleep(5 * time.Second)` — fragile and slow.
- **Right fix**: wrap the assertion in `EventuallyWithT`. See
  [`../../integration/README.md`](../../integration/README.md).
### NodeStore sync issues

Route changes are not reflected in the NodeStore snapshot. Symptom:
route advertisements appear in the logs, but subsequent reads show no
tracking updates.

The sync point is `State.UpdateNodeFromMapRequest()` in
`hscontrol/state/state.go`. If you added a new kind of client state
update, make sure it lands here.

### HA failover: routes disappearing on disconnect

`TestHASubnetRouterFailover` fails because approved routes vanish when
a subnet router goes offline. **This is a bug, not expected behaviour.**
Route approval must not be coupled to client connectivity — routes
stay approved; only the primary-route selection is affected by
connectivity.

### Policy evaluation race

Symptom: tests that change policy and immediately assert peer visibility
fail intermittently. Policy changes trigger async recomputation.

- See recent fixes in `git log -- hscontrol/state/` for examples (e.g.
  the `PolicyChange` trigger on every Connect/Disconnect).

### SQLite vs PostgreSQL timing differences

Some race conditions only surface on one backend. If a test is flaky,
try the other backend with `--postgres`:

```bash
go run ./cmd/hi run "TestName" --postgres --verbose
```

PostgreSQL generally has more consistent timing; SQLite can expose
races during rapid writes.
## Keeping containers for inspection

If you need to inspect a failed test's state manually:

```bash
go run ./cmd/hi run "TestName" --keep-on-failure
# containers survive — inspect them
docker exec -it ts-<runID>-<...> /bin/sh
docker logs hs-<runID>-<...>
# clean up manually when done
go run ./cmd/hi clean all  # only when no other tests are running
```

hi (headscale integration runner) is an entirely "vibe coded" wrapper around our
[integration test suite](../integration). It essentially runs the Docker
commands for you, with the added benefit of extracting resources such as logs
and databases.
@@ -50,21 +50,12 @@ noise:
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
#
# WARNING: These prefixes MUST be subsets of the standard Tailscale ranges:
# - IPv4: 100.64.0.0/10 (CGNAT range)
# - IPv6: fd7a:115c:a1e0::/48 (Tailscale ULA range)
#
# Using a SUBSET of these ranges is supported and useful if you want to
# limit IP allocation to a smaller block (e.g., 100.64.0.0/24).
#
# Using ranges OUTSIDE of CGNAT/ULA is NOT supported and will cause
# undefined behaviour. The Tailscale client has hard-coded assumptions
# about these ranges and will break in subtle, hard-to-debug ways.
#
# See:
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# It must be within IP ranges supported by the Tailscale
# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
# See below:
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# Any other range is NOT supported, and it will cause unexpected issues.
prefixes:
v4: 100.64.0.0/10
v6: fd7a:115c:a1e0::/48
@@ -145,25 +136,8 @@ derp:
# Disables the automatic check for headscale updates on startup
disable_check_updates: false

# Node lifecycle configuration.
node:
# Default key expiry for non-tagged nodes, regardless of registration method
# (auth key, CLI, web auth). Tagged nodes are exempt and never expire.
#
# This is the base default. OIDC can override this via oidc.expiry.
# If a client explicitly requests a specific expiry, the client value is used.
#
# Setting the value to "0" means no default expiry (nodes never expire unless
# explicitly expired via `headscale nodes expire`).
#
# Tailscale SaaS uses 180d; set to a positive duration to match that behaviour.
#
# Default: 0 (no default expiry)
expiry: 0

ephemeral:
# Time before an inactive ephemeral node is deleted.
inactivity_timeout: 30m
# Time before an inactive ephemeral node is deleted.
ephemeral_node_inactivity_timeout: 30m

database:
# Database type. Available options: sqlite, postgres
@@ -372,11 +346,15 @@ unix_socket_permission: "0770"
# # `LoadCredential` straightforward:
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
#
# # The amount of time a node is authenticated with OpenID until it expires
# # and needs to reauthenticate.
# # Setting the value to "0" will mean no expiry.
# expiry: 180d
#
# # Use the expiry from the token received from OpenID when the user logged
# # in. This will typically lead to frequent need to reauthenticate and should
# # only be enabled if you know what you are doing.
# # Note: enabling this will cause `node.expiry` to be ignored for
# # OIDC-authenticated nodes.
# # Note: enabling this will cause `oidc.expiry` to be ignored.
# use_expiry_from_token: false
#
# # The OIDC scopes to use, defaults to "openid", "profile" and "email".
@@ -450,11 +428,6 @@ taildrop:
# Only modify these if you have identified a specific performance issue.
#
# tuning:
# # Maximum number of pending registration entries in the auth cache.
# # Oldest entries are evicted when the cap is reached.
# #
# # register_cache_max_entries: 1024
#
# # NodeStore write batching configuration.
# # The NodeStore batches write operations before rebuilding peer relationships,
# # which is computationally expensive. Batching reduces rebuild frequency.
@@ -145,12 +145,16 @@ oidc:
### Customize node expiration

The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to
reauthenticate. The default node expiration can be configured via the top-level `node.expiry` setting.
reauthenticate. The default node expiration is 180 days. This can either be customized or set to the expiration from the
Access Token.

=== "Customize node expiration"

```yaml hl_lines="2"
node:
```yaml hl_lines="5"
oidc:
issuer: "https://sso.example.com"
client_id: "headscale"
client_secret: "generated-secret"
expiry: 30d # Use 0 to disable node expiration
```

@@ -187,10 +191,8 @@ You may refer to users in the Headscale policy via:
!!! note "A user identifier in the policy must contain a single `@`"

The Headscale policy requires a single `@` to reference a user. If the username or provider identifier doesn't
already contain a single `@`, it needs to be appended at the end. For example: the Headscale username `ssmith` has
to be written as `ssmith@` to be correctly identified as user within the policy.

Ensure that the Headscale username itself does not end with `@`.
already contain a single `@`, it needs to be appended at the end. For example: the username `ssmith` has to be
written as `ssmith@` to be correctly identified as user within the policy.

!!! warning "Email address or username might be updated by users"
@@ -33,8 +33,7 @@ node can be approved with:
- [Headscale API](api.md)
- Or delegated to an identity provider via [OpenID Connect](oidc.md)

Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new
user[^1]:
Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new user:

```console
headscale users create <USER>
@@ -99,7 +98,7 @@ Its best suited for automation.

=== "Personal devices"

A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user[^1]:
A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user:

```console
headscale users create <USER>
@@ -140,5 +139,3 @@ Its best suited for automation.
The registration of a tagged node is complete and it should be listed as "online" in the output of
`headscale nodes list`. The "User" column displays `tagged-devices` as the owner of the node. See the "Tags" column for the list of
assigned tags.

[^1]: [Ensure that the Headscale username does not end with `@`.](oidc.md#reference-a-user-in-the-policy)
@@ -1,6 +1,6 @@
mike~=2.1
mkdocs-include-markdown-plugin~=7.2
mkdocs-macros-plugin~=1.5
mkdocs-material[imaging]~=10.1
mkdocs-minify-plugin~=0.8
mkdocs-include-markdown-plugin~=7.1
mkdocs-macros-plugin~=1.3
mkdocs-material[imaging]~=9.5
mkdocs-minify-plugin~=0.7
mkdocs-redirects~=1.2
@@ -1,58 +0,0 @@
# Development builds

!!! warning

    Development builds are created automatically from the latest `main` branch
    and are **not versioned releases**. They may contain incomplete features,
    breaking changes, or bugs. Use them for testing only.

Each push to `main` produces container images and cross-compiled binaries.
Container images are multi-arch (amd64, arm64) and use the same distroless
base image as official releases.

## Container images

Images are available from both Docker Hub and GitHub Container Registry, tagged
with the short commit hash of the build (e.g. `main-abc1234`):

- Docker Hub: `docker.io/headscale/headscale:main-<sha>`
- GitHub Container Registry: `ghcr.io/juanfont/headscale:main-<sha>`

To find the latest available tag, check the
[GitHub Actions workflow](https://github.com/juanfont/headscale/actions/workflows/container-main.yml)
or the [GitHub Container Registry package page](https://github.com/juanfont/headscale/pkgs/container/headscale).

For example, to run a specific development build:

```shell
docker run \
  --name headscale \
  --detach \
  --read-only \
  --tmpfs /var/run/headscale \
  --volume "$(pwd)/config:/etc/headscale:ro" \
  --volume "$(pwd)/lib:/var/lib/headscale" \
  --publish 127.0.0.1:8080:8080 \
  --publish 127.0.0.1:9090:9090 \
  --health-cmd "headscale health" \
  docker.io/headscale/headscale:main-<sha> \
  serve
```

See [Running headscale in a container](./container.md) for full container setup instructions.

## Binaries

Pre-built binaries from the latest successful build on `main` are available
via [nightly.link](https://nightly.link/juanfont/headscale/workflows/container-main/main):

| OS    | Arch  | Download                                                                                                                   |
| ----- | ----- | -------------------------------------------------------------------------------------------------------------------------- |
| Linux | amd64 | [headscale-linux-amd64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-linux-amd64.zip)   |
| Linux | arm64 | [headscale-linux-arm64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-linux-arm64.zip)   |
| macOS | amd64 | [headscale-darwin-amd64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-darwin-amd64.zip) |
| macOS | arm64 | [headscale-darwin-arm64](https://nightly.link/juanfont/headscale/workflows/container-main/main/headscale-darwin-arm64.zip) |

After downloading and extracting the archive, make the binary executable and follow the
[standalone binary installation](./official.md#using-standalone-binaries-advanced)
instructions for setting up the service.
@@ -61,7 +61,7 @@ options, run:
## Manage headscale users

In headscale, a node (also known as machine or device) is [typically assigned to a headscale
user](../ref/registration.md#identity-model). Such a headscale user[^1] may have many nodes assigned to them and can be
user](../ref/registration.md#identity-model). Such a headscale user may have many nodes assigned to them and can be
managed with the `headscale users` command. Invoke the built-in help for more information: `headscale users --help`.

### Create a headscale user
@@ -149,5 +149,3 @@ The command returns the preauthkey on success which is used to connect a node to

```shell
tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
```

[^1]: [Ensure that the Headscale username does not end with `@`.](../ref/oidc.md#reference-a-user-in-the-policy)
6 flake.lock generated
@@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1775888245,
"narHash": "sha256-nwASzrRDD1JBEu/o8ekKYEXm/oJW6EMCzCRdrwcLe90=",
"lastModified": 1772956932,
"narHash": "sha256-M0yS4AafhKxPPmOHGqIV0iKxgNO8bHDWdl1kOwGBwRY=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "13043924aaa7375ce482ebe2494338e058282925",
"rev": "608d0cadfed240589a7eea422407a547ad626a14",
"type": "github"
},
"original": {
22 flake.nix
@@ -27,7 +27,7 @@
let
pkgs = nixpkgs.legacyPackages.${prev.stdenv.hostPlatform.system};
buildGo = pkgs.buildGo126Module;
vendorHash = "sha256-x0xXxa7sjyDwWLq8fO0Z/pbPefctzctK3TAdBea7FtY=";
vendorHash = "sha256-oUN53ELb3+xn4yA7lEfXyT2c7NxbQC6RtbkGVq6+RLU=";
in
{
headscale = buildGo {
@@ -62,16 +62,16 @@

protoc-gen-grpc-gateway = buildGo rec {
pname = "grpc-gateway";
version = "2.28.0";
version = "2.27.7";

src = pkgs.fetchFromGitHub {
owner = "grpc-ecosystem";
repo = "grpc-gateway";
rev = "v${version}";
sha256 = "sha256-93omvHb+b+S0w4D+FGEEwYYDjgumJFDAruc1P4elfvA=";
sha256 = "sha256-6R0EhNnOBEISJddjkbVTcBvUuU5U3r9Hu2UPfAZDep4=";
};

vendorHash = "sha256-jVP5zfFPfHeAEApKNJzZwuZLA+DjKgkL7m2DFG72UNs=";
vendorHash = "sha256-SOAbRrzMf2rbKaG9PGSnPSLY/qZVgbHcNjOLmVonycY=";

nativeBuildInputs = [ pkgs.installShellFiles ];

@@ -80,13 +80,13 @@

protobuf-language-server = buildGo rec {
pname = "protobuf-language-server";
version = "ab4c128";
version = "1cf777d";

src = pkgs.fetchFromGitHub {
owner = "lasorda";
repo = "protobuf-language-server";
rev = "ab4c128f00774d51bd6d1f4cfa735f4b7c8619e3";
sha256 = "sha256-yF6kG+qTRxVO/qp2V9HgTyFBeOm5RQzeqdZFrdidwxM=";
rev = "1cf777de4d35a6e493a689e3ca1a6183ce3206b6";
sha256 = "sha256-9MkBQPxr/TDr/sNz/Sk7eoZwZwzdVbE5u6RugXXk5iY=";
};

vendorHash = "sha256-4nTpKBe7ekJsfQf+P6edT/9Vp2SBYbKz1ITawD3bhkI=";
@@ -97,16 +97,16 @@
# Build golangci-lint with Go 1.26 (upstream uses hardcoded Go version)
golangci-lint = buildGo rec {
pname = "golangci-lint";
version = "2.11.4";
version = "2.9.0";

src = pkgs.fetchFromGitHub {
owner = "golangci";
repo = "golangci-lint";
rev = "v${version}";
hash = "sha256-B19aLvfNRY9TOYw/71f2vpNUuSIz8OI4dL0ijGezsas=";
hash = "sha256-8LEtm1v0slKwdLBtS41OilKJLXytSxcI9fUlZbj5Gfw=";
};

vendorHash = "sha256-xuoj4+U4tB5gpABKq4Dbp2cxnljxdYoBbO8A7DqPM5E=";
vendorHash = "sha256-w8JfF6n1ylrU652HEv/cYdsOdDZz9J2uRQDqxObyhkY=";

subPackages = [ "cmd/golangci-lint" ];

@@ -166,7 +166,7 @@
golangci-lint
golangci-lint-langserver
golines
prettier
nodePackages.prettier
nixpkgs-fmt
goreleaser
nfpm
@@ -1,6 +1,6 @@
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
// versions:
// - protoc-gen-go-grpc v1.6.1
// - protoc-gen-go-grpc v1.6.0
// - protoc (unknown)
// source: headscale/v1/headscale.proto
72 go.mod
@@ -7,8 +7,8 @@ require (
github.com/cenkalti/backoff/v5 v5.0.3
github.com/chasefleming/elem-go v0.31.0
github.com/coder/websocket v1.8.14
github.com/coreos/go-oidc/v3 v3.18.0
github.com/creachadair/command v0.2.2
github.com/coreos/go-oidc/v3 v3.17.0
github.com/creachadair/command v0.2.0
github.com/creachadair/flax v0.0.5
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
github.com/docker/docker v28.5.2+incompatible
@@ -17,12 +17,11 @@ require (
github.com/go-chi/chi/v5 v5.2.5
github.com/go-chi/metrics v0.1.1
github.com/go-gormigrate/gormigrate/v2 v2.1.5
github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e
github.com/gofrs/uuid/v5 v5.4.0
github.com/google/go-cmp v0.7.0
github.com/gorilla/mux v1.8.1
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0
github.com/hashicorp/golang-lru/v2 v2.0.7
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7
github.com/jagottsicher/termcolor v1.0.2
github.com/oauth2-proxy/mockoidc v0.0.0-20240214162133-caebfff84d25
github.com/ory/dockertest/v3 v3.12.0
@@ -30,31 +29,32 @@ require (
github.com/pkg/profile v1.7.0
github.com/prometheus/client_golang v1.23.2
github.com/prometheus/common v0.67.5
github.com/pterm/pterm v0.12.83
github.com/pterm/pterm v0.12.82
github.com/puzpuzpuz/xsync/v4 v4.4.0
github.com/rs/zerolog v1.35.0
github.com/samber/lo v1.53.0
github.com/sasha-s/go-deadlock v0.3.9
github.com/rs/zerolog v1.34.0
github.com/samber/lo v1.52.0
github.com/sasha-s/go-deadlock v0.3.6
github.com/spf13/cobra v1.10.2
github.com/spf13/viper v1.21.0
github.com/stretchr/testify v1.11.1
github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd
github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e
github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09
github.com/tcnksm/go-latest v0.0.0-20170313132115-e3007ae9052e
go4.org/netipx v0.0.0-20231129151722-fdeea329fbba
golang.org/x/crypto v0.49.0
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90
golang.org/x/net v0.52.0
golang.org/x/oauth2 v0.36.0
golang.org/x/sync v0.20.0
google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d
google.golang.org/grpc v1.80.0
golang.org/x/crypto v0.47.0
golang.org/x/exp v0.0.0-20260112195511-716be5621a96
golang.org/x/net v0.49.0
golang.org/x/oauth2 v0.34.0
golang.org/x/sync v0.19.0
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20
google.golang.org/grpc v1.78.0
google.golang.org/protobuf v1.36.11
gopkg.in/yaml.v3 v3.0.1
gorm.io/driver/postgres v1.6.0
gorm.io/gorm v1.31.1
tailscale.com v1.96.5
tailscale.com v1.94.1
zgo.at/zcache/v2 v2.4.1
zombiezen.com/go/postgrestest v1.0.1
)

@@ -76,10 +76,10 @@ require (
// together, e.g:
// go get modernc.org/libc@v1.55.3 modernc.org/sqlite@v1.33.1
require (
modernc.org/libc v1.70.0 // indirect
modernc.org/libc v1.67.6 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.48.2
modernc.org/sqlite v1.44.3
)

// NOTE: gvisor must be updated in lockstep with
@@ -88,14 +88,14 @@ require (
// To find the correct version, check tailscale.com's
// go.mod file for the gvisor.dev/gvisor version:
// https://github.com/tailscale/tailscale/blob/main/go.mod
require gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 // indirect
require gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 // indirect

require (
atomicgo.dev/cursor v0.2.0 // indirect
atomicgo.dev/keyboard v0.2.9 // indirect
atomicgo.dev/schedule v0.1.0 // indirect
dario.cat/mergo v1.0.2 // indirect
filippo.io/edwards25519 v1.2.0 // indirect
filippo.io/edwards25519 v1.1.0 // indirect
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c // indirect
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5 // indirect
@@ -119,12 +119,13 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/clipperhouse/uax29/v2 v2.7.0 // indirect
github.com/clipperhouse/stringish v0.1.1 // indirect
github.com/clipperhouse/uax29/v2 v2.5.0 // indirect
github.com/containerd/console v1.0.5 // indirect
github.com/containerd/continuity v0.4.5 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/creachadair/mds v0.26.2 // indirect
github.com/creachadair/mds v0.25.15 // indirect
github.com/creachadair/msync v0.8.2 // indirect
github.com/dblohm7/wingoes v0.0.0-20250822163801-6d8e6105c62d // indirect
github.com/dgryski/go-metro v0.0.0-20250106013310-edb8663e5e33 // indirect
@@ -139,7 +140,7 @@ require (
github.com/gaissmai/bart v0.26.1 // indirect
github.com/glebarez/go-sqlite v1.22.0 // indirect
github.com/go-jose/go-jose/v3 v3.0.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.4 // indirect
github.com/go-jose/go-jose/v4 v4.1.3 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-viper/mapstructure/v2 v2.5.0 // indirect
@@ -172,7 +173,7 @@ require (
github.com/lithammer/fuzzysearch v1.1.8 // indirect
github.com/mattn/go-colorable v0.1.14 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.20 // indirect
github.com/mattn/go-runewidth v0.0.19 // indirect
github.com/mdlayher/netlink v1.8.0 // indirect
github.com/mdlayher/socket v0.5.1 // indirect
github.com/mitchellh/go-ps v1.0.0 // indirect
@@ -224,19 +225,18 @@ require (
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
go4.org/mem v0.0.0-20240501181205-ae6ca9944745 // indirect
golang.org/x/mod v0.35.0 // indirect
golang.org/x/sys v0.43.0 // indirect
golang.org/x/term v0.42.0 // indirect
golang.org/x/text v0.36.0 // indirect
golang.org/x/time v0.15.0 // indirect
golang.org/x/tools v0.43.0 // indirect
golang.org/x/mod v0.32.0 // indirect
golang.org/x/sys v0.40.0 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/text v0.33.0 // indirect
golang.org/x/time v0.14.0 // indirect
golang.org/x/tools v0.41.0 // indirect
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 // indirect
golang.zx2c4.com/wireguard/windows v0.5.3 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 // indirect
)

tool (
golang.org/x/tools/cmd/stress
golang.org/x/tools/cmd/stringer
tailscale.com/cmd/viewer
)
173
go.sum
173
go.sum
@@ -10,8 +10,8 @@ atomicgo.dev/schedule v0.1.0 h1:nTthAbhZS5YZmgYbb2+DH8uQIZcTlIrd4eYr3UQxEjs=
|
||||
atomicgo.dev/schedule v0.1.0/go.mod h1:xeUa3oAkiuHYh8bKiQBRojqAMq3PXXbJujjb0hw8pEU=
|
||||
dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
|
||||
dario.cat/mergo v1.0.2/go.mod h1:E/hbnu0NxMFBjpMIE34DRGLWqDy0g5FuKDhCb31ngxA=
|
||||
filippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=
|
||||
filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
|
||||
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
|
||||
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
|
||||
filippo.io/mkcert v1.4.4 h1:8eVbbwfVlaqUM7OwuftKc2nuYOoTDQWqsoXmzoXZdbc=
|
||||
filippo.io/mkcert v1.4.4/go.mod h1:VyvOchVuAye3BoUsPUOOofKygVwLV2KQMVFJNRq+1dA=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
|
||||
@@ -103,8 +103,10 @@ github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMn
github.com/chzyer/test v1.0.0/go.mod h1:2JlltgoNkt4TW/z9V/IzDdFaMTM2JPIi26O1pF38GC8=
github.com/cilium/ebpf v0.17.3 h1:FnP4r16PWYSE4ux6zN+//jMcW4nMVRvuTLVTvCjyyjg=
github.com/cilium/ebpf v0.17.3/go.mod h1:G5EDHij8yiLzaqn0WjyfJHvRa+3aDlReIaLVRMvOyJk=
github.com/clipperhouse/uax29/v2 v2.7.0 h1:+gs4oBZ2gPfVrKPthwbMzWZDaAFPGYK72F0NJv2v7Vk=
github.com/clipperhouse/uax29/v2 v2.7.0/go.mod h1:EFJ2TJMRUaplDxHKj1qAEhCtQPW2tJSwu5BF98AuoVM=
github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
github.com/clipperhouse/stringish v0.1.1/go.mod h1:v/WhFtE1q0ovMta2+m+UbpZ+2/HEXNWYXQgCt4hdOzA=
github.com/clipperhouse/uax29/v2 v2.5.0 h1:x7T0T4eTHDONxFJsL94uKNKPHrclyFI0lm7+w94cO8U=
github.com/clipperhouse/uax29/v2 v2.5.0/go.mod h1:Wn1g7MK6OoeDT0vL+Q0SQLDz/KpfsVRgg6W7ihQeh4g=
github.com/coder/websocket v1.8.14 h1:9L0p0iKiNOibykf283eHkKUHHrpG7f65OE3BhhO7v9g=
github.com/coder/websocket v1.8.14/go.mod h1:NX3SzP+inril6yawo5CQXx8+fk145lPDC6pumgx0mVg=
github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
@@ -120,15 +122,16 @@ github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6 h1:8h5+bWd7R6AYUslN6c6iuZWTKsKxUFDlpnmilO6R2n0=
github.com/coreos/go-iptables v0.7.1-0.20240112124308-65c67c9f46e6/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-oidc/v3 v3.18.0 h1:V9orjXynvu5wiC9SemFTWnG4F45v403aIcjWo0d41+A=
github.com/coreos/go-oidc/v3 v3.18.0/go.mod h1:DYCf24+ncYi+XkIH97GY1+dqoRlbaSI26KVTCI9SrY4=
github.com/coreos/go-oidc/v3 v3.17.0 h1:hWBGaQfbi0iVviX4ibC7bk8OKT5qNr4klBaCHVNvehc=
github.com/coreos/go-oidc/v3 v3.17.0/go.mod h1:wqPbKFrVnE90vty060SB40FCJ8fTHTxSwyXJqZH+sI8=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
github.com/creachadair/command v0.2.2 h1:4RGsUhqFf1imFC+vMWOOCiQdncThCdcdMJp0JNCjxxc=
github.com/creachadair/command v0.2.2/go.mod h1:Z6Zp6CSJcnaWWR4wHgdqzODnFdxFJAaa/DrcVkeUu3E=
github.com/creachadair/command v0.2.0 h1:qTA9cMMhZePAxFoNdnk6F6nn94s1qPndIg9hJbqI9cA=
github.com/creachadair/command v0.2.0/go.mod h1:j+Ar+uYnFsHpkMeV9kGj6lJ45y9u2xqtg8FYy6cm+0o=
github.com/creachadair/flax v0.0.5 h1:zt+CRuXQASxwQ68e9GHAOnEgAU29nF0zYMHOCrL5wzE=
github.com/creachadair/flax v0.0.5/go.mod h1:F1PML0JZLXSNDMNiRGK2yjm5f+L9QCHchyHBldFymj8=
github.com/creachadair/mds v0.26.2 h1:rCtvEV/bCRY0hGfwvvMg0p3yzKgBE8l/9OV4fjF9QQ8=
github.com/creachadair/mds v0.26.2/go.mod h1:dMBTCSy3iS3dwh4Rb1zxeZz2d7K8+N24GCTsayWtQRI=
github.com/creachadair/mds v0.25.15 h1:i8CUqtfgbCqbvZ++L7lm8No3cOeic9YKF4vHEvEoj+Y=
github.com/creachadair/mds v0.25.15/go.mod h1:XtMfRW15sjd1iOi1Z1k+dq0pRsR5xPbulpoTrpyhk8w=
github.com/creachadair/msync v0.8.2 h1:ujvc/SVJPn+bFwmjUHucXNTTn3opVe2YbQ46mBCnP08=
github.com/creachadair/msync v0.8.2/go.mod h1:LzxqD9kfIl/O3DczkwOgJplLPqwrTbIhINlf9bHIsEY=
github.com/creachadair/taskgroup v0.13.2 h1:3KyqakBuFsm3KkXi/9XIb0QcA8tEzLHLgaoidf0MdVc=
@@ -186,10 +189,10 @@ github.com/go-gormigrate/gormigrate/v2 v2.1.5 h1:1OyorA5LtdQw12cyJDEHuTrEV3GiXiI
github.com/go-gormigrate/gormigrate/v2 v2.1.5/go.mod h1:mj9ekk/7CPF3VjopaFvWKN2v7fN3D9d3eEOAXRhi/+M=
github.com/go-jose/go-jose/v3 v3.0.4 h1:Wp5HA7bLQcKnf6YYao/4kpRpVMp/yf6+pJKV8WFSaNY=
github.com/go-jose/go-jose/v3 v3.0.4/go.mod h1:5b+7YgP7ZICgJDBdfjZaIt+H/9L9T/YQrVfLAMboGkQ=
github.com/go-jose/go-jose/v4 v4.1.4 h1:moDMcTHmvE6Groj34emNPLs/qtYXRVcd6S7NHbHz3kA=
github.com/go-jose/go-jose/v4 v4.1.4/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433 h1:vymEbVwYFP/L05h5TKQxvkXoKxNvTpjxYKdF1Nlwuao=
github.com/go-json-experiment/json v0.0.0-20260214004413-d219187c3433/go.mod h1:tphK2c80bpPhMOI4v6bIc2xWywPfbqi1Z06+RcrMkDg=
github.com/go-jose/go-jose/v4 v4.1.3 h1:CVLmWDhDVRa6Mi/IgCgaopNosCaHz7zrMeF9MlZRkrs=
github.com/go-jose/go-jose/v4 v4.1.3/go.mod h1:x4oUasVrzR7071A4TnHLGSPpNOm2a21K9Kf04k1rs08=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e h1:Lf/gRkoycfOBPa42vU2bbgPurFong6zXeFtPoxholzU=
github.com/go-json-experiment/json v0.0.0-20251027170946-4849db3c2f7e/go.mod h1:uNVvRXArCGbZ508SxYYTC5v1JWoz2voff5pm25jU1Ok=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -206,6 +209,7 @@ github.com/go4org/plan9netshell v0.0.0-20250324183649-788daa080737/go.mod h1:MIS
github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.2.2 h1:TUR3TgtSVDmjiXOgAAyaZbYmIeP3DPkld3jgKGV8mXQ=
github.com/godbus/dbus/v5 v5.2.2/go.mod h1:3AAv2+hPq5rdnr5txxxRwiGjPXamgoIHgz9FPBfOp3c=
github.com/gofrs/uuid/v5 v5.4.0 h1:EfbpCTjqMuGyq5ZJwxqzn3Cbr2d0rUZU7v5ycAk/e/0=
@@ -248,10 +252,11 @@ github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7 h1:X+2YciYSxvMQK0UZ7sg45ZVabVZBeBuvMkmuI2V3Fak=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.7/go.mod h1:lW34nIZuQ8UDPdkon5fmfp2l3+ZkQ2me/+oecHYLOII=
github.com/hashicorp/go-version v1.8.0 h1:KAkNb1HAiZd1ukkxDFGmokVZe1Xy9HG6NUp+bPle2i4=
github.com/hashicorp/go-version v1.8.0/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA=
github.com/hashicorp/golang-lru v0.6.0 h1:uL2shRDx7RTrOrTCUZEGP/wJUFiUI8QT6E7z5o8jga4=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hdevalence/ed25519consensus v0.2.0 h1:37ICyZqdyj0lAZ8P4D1d1id3HqbbG1N3iBb1Tb4rdcU=
@@ -316,13 +321,16 @@ github.com/lib/pq v1.11.1/go.mod h1:/p+8NSbOcwzAEI7wiMXFlgydTwcgTr3OSKMsD2BitpA=
github.com/lithammer/fuzzysearch v1.1.8 h1:/HIuJnjHuXS8bKaiTMeeDlW2/AyIWk2brx1V8LFgLN4=
github.com/lithammer/fuzzysearch v1.1.8/go.mod h1:IdqeyBClc3FFqSzYq/MXESsS4S0FsZ5ajtkr5xPLts4=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.13/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/mattn/go-runewidth v0.0.20 h1:WcT52H91ZUAwy8+HUkdM3THM6gXqXuLJi9O3rjcQQaQ=
github.com/mattn/go-runewidth v0.0.20/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
github.com/mattn/go-runewidth v0.0.19/go.mod h1:XBkDxAl56ILZc9knddidhrOlY5R/pDhgLpndooCuJAs=
github.com/mdlayher/genetlink v1.3.2 h1:KdrNKe+CTu+IbZnm/GVUMXSqBBLqcGpRDa0xkQy56gw=
github.com/mdlayher/genetlink v1.3.2/go.mod h1:tcC3pkCrPUGIKKsCsp0B3AdaaKuHtaxoJRz3cc+528o=
github.com/mdlayher/netlink v1.8.0 h1:e7XNIYJKD7hUct3Px04RuIGJbBxy1/c4nX7D5YyvvlM=
@@ -375,8 +383,8 @@ github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741 h1:KPpdlQLZcHfTMQ
github.com/petermattis/goid v0.0.0-20260113132338-7c7de50cc741/go.mod h1:pxMtw7cyUw6B2bRH0ZBANSPg+AoSud1I1iyJHI69jH4=
github.com/philip-bui/grpc-zerolog v1.0.1 h1:EMacvLRUd2O1K0eWod27ZP5CY1iTNkhBDLSN+Q4JEvA=
github.com/philip-bui/grpc-zerolog v1.0.1/go.mod h1:qXbiq/2X4ZUMMshsqlWyTHOcw7ns+GZmlqZZN05ZHcQ=
github.com/pierrec/lz4/v4 v4.1.25 h1:kocOqRffaIbU5djlIBr7Wh+cx82C0vtFb0fOurZHqD0=
github.com/pierrec/lz4/v4 v4.1.25/go.mod h1:EoQMVJgeeEOMsCqCzqFm2O0cJvljX2nGZjcRIPL34O4=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
github.com/pierrec/lz4/v4 v4.1.21/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
github.com/pires/go-proxyproto v0.9.2 h1:H1UdHn695zUVVmB0lQ354lOWHOy6TZSpzBl3tgN0s1U=
github.com/pires/go-proxyproto v0.9.2/go.mod h1:ZKAAyp3cgy5Y5Mo4n9AlScrkCZwUy0g3Jf+slqQVcuU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -405,8 +413,8 @@ github.com/pterm/pterm v0.12.31/go.mod h1:32ZAWZVXD7ZfG0s8qqHXePte42kdz8ECtRyEej
github.com/pterm/pterm v0.12.33/go.mod h1:x+h2uL+n7CP/rel9+bImHD5lF3nM9vJj80k9ybiiTTE=
github.com/pterm/pterm v0.12.36/go.mod h1:NjiL09hFhT/vWjQHSj1athJpx6H8cjpHXNAK5bUw8T8=
github.com/pterm/pterm v0.12.40/go.mod h1:ffwPLwlbXxP+rxT0GsgDTzS3y3rmpAO1NMjUkGTYf8s=
github.com/pterm/pterm v0.12.83 h1:ie+YmGmA727VuhxBlyGr74Ks+7McV6kT99IB8EU80aA=
github.com/pterm/pterm v0.12.83/go.mod h1:xlgc6bFWyJIMtmLJvGim+L7jhSReilOlOnodeIYe4Tk=
github.com/pterm/pterm v0.12.82 h1:+D9wYhCaeaK0FIQoZtqbNQuNpe2lB2tajKKsTd5paVQ=
github.com/pterm/pterm v0.12.82/go.mod h1:TyuyrPjnxfwP+ccJdBTeWHtd/e0ybQHkOS/TakajZCw=
github.com/puzpuzpuz/xsync/v4 v4.4.0 h1:vlSN6/CkEY0pY8KaB0yqo/pCLZvp9nhdbBdjipT4gWo=
github.com/puzpuzpuz/xsync/v4 v4.4.0/go.mod h1:VJDmTCJMBt8igNxnkQd86r+8KUeN1quSfNKu5bLYFQo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
@@ -414,17 +422,18 @@ github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qq
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/zerolog v1.35.0 h1:VD0ykx7HMiMJytqINBsKcbLS+BJ4WYjz+05us+LRTdI=
github.com/rs/zerolog v1.35.0/go.mod h1:EjML9kdfa/RMA7h/6z6pYmq1ykOuA8/mjWaEvGI+jcw=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/safchain/ethtool v0.7.0 h1:rlJzfDetsVvT61uz8x1YIcFn12akMfuPulHtZjtb7Is=
github.com/safchain/ethtool v0.7.0/go.mod h1:MenQKEjXdfkjD3mp2QdCk8B/hwvkrlOTm/FD4gTpFxQ=
github.com/sagikazarmark/locafero v0.12.0 h1:/NQhBAkUb4+fH1jivKHWusDYFjMOOKU88eegjfxfHb4=
github.com/sagikazarmark/locafero v0.12.0/go.mod h1:sZh36u/YSZ918v0Io+U9ogLYQJ9tLLBmM4eneO6WwsI=
github.com/samber/lo v1.53.0 h1:t975lj2py4kJPQ6haz1QMgtId2gtmfktACxIXArw3HM=
github.com/samber/lo v1.53.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sasha-s/go-deadlock v0.3.9 h1:fiaT9rB7g5sr5ddNZvlwheclN9IP86eFW9WgqlEQV+w=
github.com/sasha-s/go-deadlock v0.3.9/go.mod h1:KuZj51ZFmx42q/mPaYbRk0P1xcwe697zsJKE03vD4/Y=
github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
github.com/samber/lo v1.52.0/go.mod h1:4+MXEGsJzbKGaUEQFKBq2xtfuznW9oz/WrgyzMzRoM0=
github.com/sasha-s/go-deadlock v0.3.6 h1:TR7sfOnZ7x00tWPfD397Peodt57KzMDo+9Ae9rMiUmw=
github.com/sasha-s/go-deadlock v0.3.6/go.mod h1:CUqNyyvMxTyjFqDT7MRg9mb4Dv/btmGTqSR+rky/UXo=
github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
@@ -461,18 +470,18 @@ github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55 h1:Gzfnfk2TWrk8
github.com/tailscale/go-winio v0.0.0-20231025203758-c4f33415bf55/go.mod h1:4k4QO+dQ3R5FofL+SanAUZe+/QfeK0+OIuwDIRu2vSg=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869 h1:SRL6irQkKGQKKLzvQP/ke/2ZuB7Py5+XuqtOgSj+iMM=
github.com/tailscale/golang-x-crypto v0.0.0-20250404221719-a5573b049869/go.mod h1:ikbF+YT089eInTp9f2vmvy4+ZVnW5hzX1q2WknxSprQ=
github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd h1:Rf9uhF1+VJ7ZHqxrG8pJ6YacmHvVCmByDmGbAWCc/gA=
github.com/tailscale/hujson v0.0.0-20260302212456-ecc657c15afd/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo=
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a h1:a6TNDN9CgG+cYjaeN8l2mc4kSz2iMiCDQxPEyltUV/I=
github.com/tailscale/hujson v0.0.0-20250605163823-992244df8c5a/go.mod h1:EbW0wDK/qEUYI0A5bqq0C2kF8JTQwWONmGDBbzsxxHo=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7 h1:uFsXVBE9Qr4ZoF094vE6iYTLDl0qCiKzYXlL6UeWObU=
github.com/tailscale/netlink v1.1.1-0.20240822203006-4d49adab4de7/go.mod h1:NzVQi3Mleb+qzq8VmcWpSkcSYxXIg0DkI6XDzpVkhJ0=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc h1:24heQPtnFR+yfntqhI3oAu9i27nEojcQ4NuBQOo5ZFA=
github.com/tailscale/peercred v0.0.0-20250107143737-35a0c7bd7edc/go.mod h1:f93CXfllFsO9ZQVq+Zocb1Gp4G5Fz0b0rXHLOzt/Djc=
github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d h1:N+TtzIaGYREbLbKZB0WU0vVnMSfaqUkSf3qMEi03hwE=
github.com/tailscale/setec v0.0.0-20260115174028-19d190c5556d/go.mod h1:6NU8H/GLPVX2TnXAY1duyy9ylLaHwFpr0X93UPiYmNI=
github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e h1:4yfp5/YDr+TzbUME/PalYJVXAsp7zA2Gv2xQMZ9Qors=
github.com/tailscale/squibble v0.0.0-20260303070345-3ac5157f405e/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c h1:7lJQ/zycbk1E9e0nUiMuwIDYprFTLpWXUwiPdi+tRlI=
github.com/tailscale/tailsql v0.0.0-20260322172246-3ab0c1744d9c/go.mod h1:bpNmZdvZKmBstrZunT+NXL6hmrFw5AsuT7MGiYS8sRc=
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f h1:CL6gu95Y1o2ko4XiWPvWkJka0QmQWcUyPywWVWDPQbQ=
github.com/tailscale/squibble v0.0.0-20251104223530-a961feffb67f/go.mod h1:xJkMmR3t+thnUQhA3Q4m2VSlS5pcOq+CIjmU/xfKKx4=
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09 h1:Fc9lE2cDYJbBLpCqnVmoLdf7McPqoHZiDxDPPpkJM04=
github.com/tailscale/tailsql v0.0.0-20260105194658-001575c3ca09/go.mod h1:QMNhC4XGFiXKngHVLXE+ERDmQoH0s5fD7AUxupykocQ=
github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368 h1:0tpDdAj9sSfSZg4gMwNTdqMP592sBrq2Sm0w6ipnh7k=
github.com/tailscale/web-client-prebuilt v0.0.0-20251127225136-f19339b67368/go.mod h1:agQPE6y6ldqCOui2gkIh7ZMztTkIQKH049tv8siLuNQ=
github.com/tailscale/wf v0.0.0-20240214030419-6fbb0a674ee6 h1:l10Gi6w9jxvinoiq15g8OToDdASBni4CyJOdHY1Hr8M=
@@ -539,33 +548,33 @@ go4.org/netipx v0.0.0-20231129151722-fdeea329fbba/go.mod h1:PLyyIXexvUFg3Owu6p/W
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90 h1:jiDhWWeC7jfWqR9c/uplMOqJ0sbNlNWv0UkzE0vX1MA=
golang.org/x/exp v0.0.0-20260312153236-7ab1446f8b90/go.mod h1:xE1HEv6b+1SCZ5/uscMRjUBKtIxworgEcEi+/n9NQDQ=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96/go.mod h1:nzimsREAkjBCIEFtHiYkrJyT+2uy9YZJB7H1k68CXZU=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f h1:phY1HzDcf18Aq9A8KkmRtY9WvOFIxN8wgfvy6Zm1DV8=
golang.org/x/exp/typeparams v0.0.0-20240314144324-c7f7c6466f7f/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.27.0 h1:C8gA4oWU/tKkdCfYT6T2u4faJu3MeNS5O8UPWlPF61w=
golang.org/x/image v0.27.0/go.mod h1:xbdrClrAUway1MUTEZDq9mz/UpRwYAkFFNUslZtcB+g=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.35.0 h1:Ww1D637e6Pg+Zb2KrWfHQUnH2dQRLBQyAtpr/haaJeM=
golang.org/x/mod v0.35.0/go.mod h1:+GwiRhIInF8wPm+4AoT6L0FA1QWAad3OMdTRx4tFYlU=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/oauth2 v0.36.0 h1:peZ/1z27fi9hUOFCAZaHyrpWG5lwe0RJEEEeH0ThlIs=
golang.org/x/oauth2 v0.36.0/go.mod h1:YDBUJMTkDnJS+A4BP4eZBjCqtokkg1hODuPjwiGPO7Q=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -578,13 +587,15 @@ golang.org/x/sys v0.0.0-20220310020820-b874c991c1a5/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.43.0 h1:Rlag2XtaFTxp19wS8MXlJwTvoh8ArU6ezoyFsMyCTNI=
golang.org/x/sys v0.43.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210615171337-6886f2dfbf5b/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -592,37 +603,37 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.42.0 h1:UiKe+zDFmJobeJ5ggPwOshJIVt6/Ft0rcfrXZDLWAWY=
golang.org/x/term v0.42.0/go.mod h1:Dq/D+snpsbazcBG5+F9Q1n2rXV8Ma+71xEjTRufARgY=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.36.0 h1:JfKh3XmcRPqZPKevfXVpI1wXPTqbkE5f7JA92a55Yxg=
golang.org/x/text v0.36.0/go.mod h1:NIdBknypM8iqVmPiuco0Dh6P5Jcdk8lJL0CUebqK164=
golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=
golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.43.0 h1:12BdW9CeB3Z+J/I/wj34VMl8X+fEXBxVR90JeMX5E7s=
golang.org/x/tools v0.43.0/go.mod h1:uHkMso649BX2cZK6+RpuIPXS3ho2hZo4FVwfoy1vIk0=
golang.org/x/tools v0.41.0 h1:a9b8iMweWG+S0OBnlU36rzLp20z1Rp10w+IY2czHTQc=
golang.org/x/tools v0.41.0/go.mod h1:XSY6eDqxVNiYgezAVqqCeihT4j1U2CCsqvH3WhQpnlg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2 h1:B82qJJgjvYKsXS9jeunTOisW56dUokqW/FOteYJJ/yg=
golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
golang.zx2c4.com/wireguard/windows v0.5.3 h1:On6j2Rpn3OEMXqBq00QEDC7bWSZrPIHKIus8eIuExIE=
golang.zx2c4.com/wireguard/windows v0.5.3/go.mod h1:9TEe8TJmtwyQebdFwAkEWOPr3prrtqm+REGFifP60hI=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d h1:/aDRtSZJjyLQzm75d+a1wOJaqyKBMvIAfeQmoa3ORiI=
google.golang.org/genproto/googleapis/api v0.0.0-20260406210006-6f92a3bedf2d/go.mod h1:etfGUgejTiadZAUaEP14NP97xi1RGeawqkjDARA/UOs=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9 h1:m8qni9SQFH0tJc1X0vmnpw/0t+AImlSvp30sEupozUg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260401024825-9d38bb4040a9/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
google.golang.org/grpc v1.80.0 h1:Xr6m2WmWZLETvUNvIUmeD5OAagMw3FiKmMlTdViWsHM=
google.golang.org/grpc v1.80.0/go.mod h1:ho/dLnxwi3EDJA4Zghp7k2Ec1+c2jqup0bFkw07bwF4=
gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20 h1:7ei4lp52gK1uSejlA8AZl5AJjeLUOHBQscRQZUgAcu0=
google.golang.org/genproto/googleapis/api v0.0.0-20260203192932-546029d2fa20/go.mod h1:ZdbssH/1SOVnjnDlXzxDHK2MCidiqXtbYccJNzNYPEE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20 h1:Jr5R2J6F6qWyzINc+4AM8t5pfUz6beZpHp678GNrMbE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260203192932-546029d2fa20/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.78.0 h1:K1XZG/yGDJnzMdd/uZHAkVqJE+xIDOcmdSFZkBUicNc=
google.golang.org/grpc v1.78.0/go.mod h1:I47qjTo4OKbMkjA/aOOwxDIiPSBofUtQUI5EfpWvW7U=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -641,26 +652,26 @@ gorm.io/gorm v1.31.1 h1:7CA8FTFz/gRfgqgpeKIBcervUn3xSyPUmr6B2WXJ7kg=
gorm.io/gorm v1.31.1/go.mod h1:XyQVbO2k6YkOis7C2437jSit3SsDK72s7n7rsSHd+Gs=
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8 h1:Zy8IV/+FMLxy6j6p87vk/vQGKcdnbprwjTxc8UiUtsA=
gvisor.dev/gvisor v0.0.0-20260224225140-573d5e7127a8/go.mod h1:QkHjoMIBaYtpVufgwv3keYAbln78mBoCuShZrPrer1Q=
honnef.co/go/tools v0.7.0 h1:w6WUp1VbkqPEgLz4rkBzH/CSU6HkoqNLp6GstyTx3lU=
honnef.co/go/tools v0.7.0/go.mod h1:pm29oPxeP3P82ISxZDgIYeOaf9ta6Pi0EWvCFoLG2vc=
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633 h1:2gap+Kh/3F47cO6hAu3idFvsJ0ue6TRcEi2IUkv/F8k=
gvisor.dev/gvisor v0.0.0-20250205023644-9414b50a5633/go.mod h1:5DMfjtclAbTIjbXqO1qCe2K5GKKxWz2JHvCChuTcJEM=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0 h1:5SXjd4ET5dYijLaf0O3aOenC0Z4ZafIWSpjUzsQaNho=
honnef.co/go/tools v0.7.0-0.dev.0.20251022135355-8273271481d0/go.mod h1:EPDDhEZqVHhWuPI5zPAsjU0U7v9xNIWjoOVyZ5ZcniQ=
howett.net/plist v1.0.0 h1:7CrbWYbPPO/PyNy38b2EB/+gYbjCe2DXBxgtOOZbSQM=
howett.net/plist v1.0.0/go.mod h1:lqaXoTrLY4hg8tnEzNru53gicrbv7rrk+2xJA/7hw9g=
modernc.org/cc/v4 v4.27.1 h1:9W30zRlYrefrDV2JE2O8VDtJ1yPGownxciz5rrbQZis=
modernc.org/cc/v4 v4.27.1/go.mod h1:uVtb5OGqUKpoLWhqwNQo/8LwvoiEBLvZXIQ/SmO6mL0=
modernc.org/ccgo/v4 v4.32.0 h1:hjG66bI/kqIPX1b2yT6fr/jt+QedtP2fqojG2VrFuVw=
modernc.org/ccgo/v4 v4.32.0/go.mod h1:6F08EBCx5uQc38kMGl+0Nm0oWczoo1c7cgpzEry7Uc0=
modernc.org/fileutil v1.4.0 h1:j6ZzNTftVS054gi281TyLjHPp6CPHr2KCxEXjEbD6SM=
modernc.org/fileutil v1.4.0/go.mod h1:EqdKFDxiByqxLk8ozOxObDSfcVOv/54xDs/DUHdvCUU=
modernc.org/ccgo/v4 v4.30.1 h1:4r4U1J6Fhj98NKfSjnPUN7Ze2c6MnAdL0hWw6+LrJpc=
modernc.org/ccgo/v4 v4.30.1/go.mod h1:bIOeI1JL54Utlxn+LwrFyjCx2n2RDiYEaJVSrgdrRfM=
modernc.org/fileutil v1.3.40 h1:ZGMswMNc9JOCrcrakF1HrvmergNLAmxOPjizirpfqBA=
modernc.org/fileutil v1.3.40/go.mod h1:HxmghZSZVAz/LXcMNwZPA/DRrQZEVP9VX0V4LQGQFOc=
modernc.org/gc/v2 v2.6.5 h1:nyqdV8q46KvTpZlsw66kWqwXRHdjIlJOhG6kxiV/9xI=
modernc.org/gc/v2 v2.6.5/go.mod h1:YgIahr1ypgfe7chRuJi2gD7DBQiKSLMPgBQe9oIiito=
modernc.org/gc/v3 v3.1.2 h1:ZtDCnhonXSZexk/AYsegNRV1lJGgaNZJuKjJSWKyEqo=
modernc.org/gc/v3 v3.1.2/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/gc/v3 v3.1.1 h1:k8T3gkXWY9sEiytKhcgyiZ2L0DTyCQ/nvX+LoCljoRE=
modernc.org/gc/v3 v3.1.1/go.mod h1:HFK/6AGESC7Ex+EZJhJ2Gni6cTaYpSMmU/cT9RmlfYY=
modernc.org/goabi0 v0.2.0 h1:HvEowk7LxcPd0eq6mVOAEMai46V+i7Jrj13t4AzuNks=
modernc.org/goabi0 v0.2.0/go.mod h1:CEFRnnJhKvWT1c1JTI3Avm+tgOWbkOu5oPA8eH8LnMI=
modernc.org/libc v1.70.0 h1:U58NawXqXbgpZ/dcdS9kMshu08aiA6b7gusEusqzNkw=
modernc.org/libc v1.70.0/go.mod h1:OVmxFGP1CI/Z4L3E0Q3Mf1PDE0BucwMkcXjjLntvHJo=
modernc.org/libc v1.67.6 h1:eVOQvpModVLKOdT+LvBPjdQqfrZq+pC39BygcT+E7OI=
modernc.org/libc v1.67.6/go.mod h1:JAhxUVlolfYDErnwiqaLvUqc8nfb2r6S6slAgZOnaiE=
modernc.org/mathutil v1.7.1 h1:GCZVGXdaN8gTqB1Mf/usp1Y/hSqgI2vAGGP4jZMCxOU=
modernc.org/mathutil v1.7.1/go.mod h1:4p5IwJITfppl0G4sUEDtCr4DthTaT47/N3aT6MhfgJg=
modernc.org/memory v1.11.0 h1:o4QC8aMQzmcwCK3t3Ux/ZHmwFPzE6hf2Y5LbkRs+hbI=
@@ -669,8 +680,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.48.2 h1:5CnW4uP8joZtA0LedVqLbZV5GD7F/0x91AXeSyjoh5c=
modernc.org/sqlite v1.48.2/go.mod h1:hWjRO6Tj/5Ik8ieqxQybiEOUXy0NJFNp2tpvVpKlvig=
modernc.org/sqlite v1.44.3 h1:+39JvV/HWMcYslAwRxHb8067w+2zowvFOUrOWIy9PjY=
modernc.org/sqlite v1.44.3/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=
@@ -679,7 +690,9 @@ pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
software.sslmate.com/src/go-pkcs12 v0.4.0 h1:H2g08FrTvSFKUj+D309j1DPfk5APnIdAQAB8aEykJ5k=
software.sslmate.com/src/go-pkcs12 v0.4.0/go.mod h1:Qiz0EyvDRJjjxGyUQa2cCNZn/wMyzrRJ/qcDXOQazLI=
tailscale.com v1.96.5 h1:gNkfA/KSZAl6jCH9cj8urq00HRWItDDTtGsyATI89jA=
tailscale.com v1.96.5/go.mod h1:/3lnZBYb2UEwnN0MNu2SDXUtT06AGd5k0s+OWx3WmcY=
tailscale.com v1.94.1 h1:0dAst/ozTuFkgmxZULc3oNwR9+qPIt5ucvzH7kaM0Jw=
tailscale.com v1.94.1/go.mod h1:gLnVrEOP32GWvroaAHHGhjSGMPJ1i4DvqNwEg+Yuov4=
zgo.at/zcache/v2 v2.4.1 h1:Dfjoi8yI0Uq7NCc4lo2kaQJJmp9Mijo21gef+oJstbY=
zgo.at/zcache/v2 v2.4.1/go.mod h1:gyCeoLVo01QjDZynjime8xUGHHMbsLiPyUTBpDGd4Gk=
zombiezen.com/go/postgrestest v1.0.1 h1:aXoADQAJmZDU3+xilYVut0pHhgc0sF8ZspPW9gFNwP4=
zombiezen.com/go/postgrestest v1.0.1/go.mod h1:marlZezr+k2oSJrvXHnZUs1olHqpE9czlz8ZYkVxliQ=

@@ -16,7 +16,6 @@ import (
    "strings"
    "sync"
    "syscall"
    "testing"
    "time"

    "github.com/cenkalti/backoff/v5"
@@ -102,7 +101,7 @@ type Headscale struct {
    // Things that generate changes
    extraRecordMan *dns.ExtraRecordsMan
    authProvider   AuthProvider
    mapBatcher     *mapper.Batcher
    mapBatcher     mapper.Batcher

    clientStreamsOpen sync.WaitGroup
}
@@ -485,7 +484,6 @@ func (h *Headscale) createRouter(grpcMux *grpcRuntime.ServeMux) *chi.Mux {

    if provider, ok := h.authProvider.(*AuthProviderOIDC); ok {
        r.Get("/oidc/callback", provider.OIDCCallbackHandler)
        r.Post("/register/confirm/{auth_id}", provider.RegisterConfirmHandler)
    }

    r.Get("/apple", h.AppleConfigMessage)
@@ -584,7 +582,7 @@ func (h *Headscale) Serve() error {

    ephmNodes := h.state.ListEphemeralNodes()
    for _, node := range ephmNodes.All() {
        h.ephemeralGC.Schedule(node.ID(), h.cfg.Node.Ephemeral.InactivityTimeout)
        h.ephemeralGC.Schedule(node.ID(), h.cfg.EphemeralNodeInactivityTimeout)
    }

    if h.cfg.DNSConfig.ExtraRecordsPath != "" {
@@ -728,6 +726,7 @@
    grpcServer = grpc.NewServer(grpcOptions...)

    v1.RegisterHeadscaleServiceServer(grpcServer, newHeadscaleV1APIServer(h))
    reflection.Register(grpcServer)

    grpcListener, err = new(net.ListenConfig).Listen(context.Background(), "tcp", h.cfg.GRPCAddr)
    if err != nil {
@@ -1070,56 +1069,6 @@ func (h *Headscale) Change(cs ...change.Change) {
    h.mapBatcher.AddWork(cs...)
}

// HTTPHandler returns an http.Handler for the Headscale control server.
// The handler serves the Tailscale control protocol including the /key
// endpoint and /ts2021 Noise upgrade path.
func (h *Headscale) HTTPHandler() http.Handler {
    return h.createRouter(grpcRuntime.NewServeMux())
}

// NoisePublicKey returns the server's Noise protocol public key.
func (h *Headscale) NoisePublicKey() key.MachinePublic {
    return h.noisePrivateKey.Public()
}

// GetState returns the server's state manager for programmatic access
// to users, nodes, policies, and other server state.
func (h *Headscale) GetState() *state.State {
    return h.state
}

// SetServerURLForTest updates the server URL in the configuration.
// This is needed for test servers where the URL is not known until
// the HTTP test server starts.
// It panics when called outside of tests.
func (h *Headscale) SetServerURLForTest(tb testing.TB, url string) {
    tb.Helper()

    h.cfg.ServerURL = url
}

// StartBatcherForTest initialises and starts the map response batcher.
// It registers a cleanup function on tb to stop the batcher.
// It panics when called outside of tests.
func (h *Headscale) StartBatcherForTest(tb testing.TB) {
    tb.Helper()

    h.mapBatcher = mapper.NewBatcherAndMapper(h.cfg, h.state)
    h.mapBatcher.Start()
    tb.Cleanup(func() { h.mapBatcher.Close() })
}

// StartEphemeralGCForTest starts the ephemeral node garbage collector.
// It registers a cleanup function on tb to stop the collector.
// It panics when called outside of tests.
func (h *Headscale) StartEphemeralGCForTest(tb testing.TB) {
    tb.Helper()

    go h.ephemeralGC.Start()

    tb.Cleanup(func() { h.ephemeralGC.Close() })
}

// Provide some middleware that can inspect the ACME/autocert https calls
// and log when things are failing.
type acmeLogger struct {

@@ -1,6 +1,7 @@
package hscontrol

import (
    "cmp"
    "context"
    "errors"
    "fmt"
@@ -70,20 +71,6 @@ func (h *Headscale) handleRegister(
    // We do not look up nodes by [key.MachinePublic] as it might belong to multiple
    // nodes, separated by users and this path is handling expiring/logout paths.
    if node, ok := h.state.GetNodeByNodeKey(req.NodeKey); ok {
        // Refuse to act on a node looked up purely by NodeKey unless
        // the Noise session's machine key matches the cached node.
        // Without this check anyone holding a target's NodeKey could
        // open a Noise session with a throwaway machine key and read
        // the owner's User/Login back through nodeToRegisterResponse.
        // handleLogout enforces the same check on its own path.
        if node.MachineKey() != machineKey {
            return nil, NewHTTPError(
                http.StatusUnauthorized,
                "node exists with a different machine key",
                nil,
            )
        }

        // When tailscaled restarts, it sends RegisterRequest with Auth=nil and Expiry=zero.
        // Return the current node state without modification.
        // See: https://github.com/juanfont/headscale/issues/2862
@@ -315,10 +302,31 @@ func (h *Headscale) reqToNewRegisterResponse(
        return nil, NewHTTPError(http.StatusInternalServerError, "failed to generate registration ID", err)
    }

    authRegReq := types.NewRegisterAuthRequest(
        registrationDataFromRequest(req, machineKey),
    // Ensure we have a valid hostname
    hostname := util.EnsureHostname(
        req.Hostinfo.View(),
        machineKey.String(),
        req.NodeKey.String(),
    )

    // Ensure we have valid hostinfo
    hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
    hostinfo.Hostname = hostname

    nodeToRegister := types.Node{
        Hostname:   hostname,
        MachineKey: machineKey,
        NodeKey:    req.NodeKey,
        Hostinfo:   hostinfo,
        LastSeen:   new(time.Now()),
    }

    if !req.Expiry.IsZero() {
        nodeToRegister.Expiry = &req.Expiry
    }

    authRegReq := types.NewRegisterAuthRequest(nodeToRegister)

    log.Info().Msgf("new followup node registration using auth id: %s", newAuthID)
    h.state.SetAuthCacheEntry(newAuthID, authRegReq)

@@ -327,36 +335,6 @@ func (h *Headscale) reqToNewRegisterResponse(
    }, nil
}

// registrationDataFromRequest builds the RegistrationData payload stored
// in the auth cache for a pending registration. The original Hostinfo is
// retained so that consumers (auth callback, observability) see the
// fields the client originally announced; the bounded-LRU cap on the
// cache is what bounds the unauthenticated cache-fill DoS surface.
func registrationDataFromRequest(
    req tailcfg.RegisterRequest,
    machineKey key.MachinePublic,
) *types.RegistrationData {
    hostname := util.EnsureHostname(
        req.Hostinfo.View(),
        machineKey.String(),
        req.NodeKey.String(),
    )

    regData := &types.RegistrationData{
        MachineKey: machineKey,
        NodeKey:    req.NodeKey,
        Hostname:   hostname,
        Hostinfo:   req.Hostinfo,
    }

    if !req.Expiry.IsZero() {
        expiry := req.Expiry
        regData.Expiry = &expiry
    }

    return regData
}

func (h *Headscale) handleRegisterWithAuthKey(
    req tailcfg.RegisterRequest,
    machineKey key.MachinePublic,
@@ -430,23 +408,49 @@ func (h *Headscale) handleRegisterInteractive(
        return nil, fmt.Errorf("generating registration ID: %w", err)
    }

    // Ensure we have a valid hostname
    hostname := util.EnsureHostname(
        req.Hostinfo.View(),
        machineKey.String(),
        req.NodeKey.String(),
    )

    // Ensure we have valid hostinfo
    hostinfo := cmp.Or(req.Hostinfo, &tailcfg.Hostinfo{})
    if req.Hostinfo == nil {
        log.Warn().
            Str("machine.key", machineKey.ShortString()).
            Str("node.key", req.NodeKey.ShortString()).
            Str("generated.hostname", hostname).
            Msg("Received registration request with nil hostinfo, generated default hostname")
    } else if req.Hostinfo.Hostname == "" {
        log.Warn().
            Str("machine.key", machineKey.ShortString()).
            Str("node.key", req.NodeKey.ShortString()).
            Str("generated.hostname", hostname).
            Msg("Received registration request with empty hostname, generated default")
    }

    authRegReq := types.NewRegisterAuthRequest(
        registrationDataFromRequest(req, machineKey),
    )
    hostinfo.Hostname = hostname

    h.state.SetAuthCacheEntry(authID, authRegReq)
    nodeToRegister := types.Node{
        Hostname:   hostname,
        MachineKey: machineKey,
        NodeKey:    req.NodeKey,
        Hostinfo:   hostinfo,
        LastSeen:   new(time.Now()),
    }

    if !req.Expiry.IsZero() {
        nodeToRegister.Expiry = &req.Expiry
    }

    authRegReq := types.NewRegisterAuthRequest(nodeToRegister)

    h.state.SetAuthCacheEntry(
        authID,
        authRegReq,
    )

    log.Info().Msgf("starting node registration using auth id: %s", authID)

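The machine-key check in the handleRegister hunk above can be sketched in isolation. This is a simplified model, not headscale's actual types: `Node`, `nodesByNodeKey`, and this standalone `handleRegister` are illustrative stand-ins for the state package.

```go
// Sketch of the NodeKey/MachineKey ownership check described in the diff
// above: a node looked up by NodeKey alone must not be acted on unless the
// Noise session's machine key matches the stored one, otherwise anyone
// holding a leaked NodeKey could read the owner's identity back.
package main

import "fmt"

type Node struct {
	MachineKey string
	NodeKey    string
	User       string
}

// Stand-in for state.GetNodeByNodeKey.
var nodesByNodeKey = map[string]Node{
	"nodekey-abc": {MachineKey: "mkey-owner", NodeKey: "nodekey-abc", User: "alice"},
}

func handleRegister(sessionMachineKey, nodeKey string) (Node, error) {
	node, ok := nodesByNodeKey[nodeKey]
	if !ok {
		return Node{}, fmt.Errorf("unknown node key")
	}
	// Refuse a NodeKey-only match when the session's machine key differs.
	if node.MachineKey != sessionMachineKey {
		return Node{}, fmt.Errorf("node exists with a different machine key")
	}
	return node, nil
}

func main() {
	if _, err := handleRegister("mkey-attacker", "nodekey-abc"); err != nil {
		fmt.Println("rejected:", err) // rejected: node exists with a different machine key
	}
	n, _ := handleRegister("mkey-owner", "nodekey-abc")
	fmt.Println("accepted user:", n.User) // accepted user: alice
}
```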
@@ -4,7 +4,6 @@ import (
    "testing"
    "time"

    "github.com/juanfont/headscale/hscontrol/mapper"
    "github.com/juanfont/headscale/hscontrol/types"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
@@ -12,49 +11,6 @@ import (
    "tailscale.com/types/key"
)

// createTestAppWithNodeExpiry creates a test app with a specific node.expiry config.
func createTestAppWithNodeExpiry(t *testing.T, nodeExpiry time.Duration) *Headscale {
    t.Helper()

    tmpDir := t.TempDir()

    cfg := types.Config{
        ServerURL:           "http://localhost:8080",
        NoisePrivateKeyPath: tmpDir + "/noise_private.key",
        Node: types.NodeConfig{
            Expiry: nodeExpiry,
        },
        Database: types.DatabaseConfig{
            Type: "sqlite3",
            Sqlite: types.SqliteConfig{
                Path: tmpDir + "/headscale_test.db",
            },
        },
        OIDC: types.OIDCConfig{},
        Policy: types.PolicyConfig{
            Mode: types.PolicyModeDB,
        },
        Tuning: types.Tuning{
            BatchChangeDelay: 100 * time.Millisecond,
            BatcherWorkers:   1,
        },
    }

    app, err := NewHeadscale(&cfg)
    require.NoError(t, err)

    app.mapBatcher = mapper.NewBatcherAndMapper(&cfg, app.state)
    app.mapBatcher.Start()

    t.Cleanup(func() {
        if app.mapBatcher != nil {
            app.mapBatcher.Close()
        }
    })

    return app
}

// TestTaggedPreAuthKeyCreatesTaggedNode tests that a PreAuthKey with tags creates
// a tagged node with:
// - Tags from the PreAuthKey
@@ -696,7 +652,7 @@ func TestExpiryDuringPersonalToTaggedConversion(t *testing.T) {
    // Step 1: Create user-owned node WITH expiry set
    clientExpiry := time.Now().Add(24 * time.Hour)
    registrationID1 := types.MustAuthID()
    regEntry1 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry1 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey1.Public(),
        Hostname:   "personal-to-tagged",
@@ -718,7 +674,7 @@ func TestExpiryDuringPersonalToTaggedConversion(t *testing.T) {
    // Step 2: Re-auth with tags (Personal → Tagged conversion)
    nodeKey2 := key.NewNode()
    registrationID2 := types.MustAuthID()
    regEntry2 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry2 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey2.Public(),
        Hostname:   "personal-to-tagged",
@@ -768,7 +724,7 @@ func TestExpiryDuringTaggedToPersonalConversion(t *testing.T) {

    // Step 1: Create tagged node (expiry should be nil)
    registrationID1 := types.MustAuthID()
    regEntry1 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry1 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey1.Public(),
        Hostname:   "tagged-to-personal",
@@ -790,7 +746,7 @@ func TestExpiryDuringTaggedToPersonalConversion(t *testing.T) {
    nodeKey2 := key.NewNode()
    clientExpiry := time.Now().Add(48 * time.Hour)
    registrationID2 := types.MustAuthID()
    regEntry2 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry2 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey2.Public(),
        Hostname:   "tagged-to-personal",
@@ -877,245 +833,3 @@ func TestReAuthWithDifferentMachineKey(t *testing.T) {
    assert.True(t, node2.IsTagged())
    assert.ElementsMatch(t, tags, node2.Tags().AsSlice())
}

// TestUntaggedAuthKeyZeroExpiryGetsDefault tests that when node.expiry is configured
// and a client registers with an untagged auth key without requesting a specific expiry,
// the node gets the configured default expiry.
// This is the core fix for https://github.com/juanfont/headscale/issues/1711
func TestUntaggedAuthKeyZeroExpiryGetsDefault(t *testing.T) {
    t.Parallel()

    nodeExpiry := 180 * 24 * time.Hour // 180 days
    app := createTestAppWithNodeExpiry(t, nodeExpiry)

    user := app.state.CreateUserForTest("node-owner")

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    // Client sends zero expiry (the default behaviour of tailscale up --authkey).
    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "default-expiry-test",
        },
        Expiry: time.Time{}, // zero — no client-requested expiry
    }

    resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp.MachineAuthorized)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    assert.False(t, node.IsTagged())
    assert.True(t, node.Expiry().Valid(), "node should have expiry set from config default")
    assert.False(t, node.IsExpired(), "node should not be expired yet")

    expectedExpiry := time.Now().Add(nodeExpiry)
    assert.WithinDuration(t, expectedExpiry, node.Expiry().Get(), 10*time.Second,
        "node expiry should be ~180 days from now")
}

// TestTaggedAuthKeyIgnoresNodeExpiry tests that tagged nodes still get nil
// expiry even when node.expiry is configured.
func TestTaggedAuthKeyIgnoresNodeExpiry(t *testing.T) {
    t.Parallel()

    nodeExpiry := 180 * 24 * time.Hour
    app := createTestAppWithNodeExpiry(t, nodeExpiry)

    user := app.state.CreateUserForTest("tag-creator")
    tags := []string{"tag:server"}

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, tags)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "tagged-no-expiry",
        },
        Expiry: time.Time{},
    }

    resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp.MachineAuthorized)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    assert.True(t, node.IsTagged())
    assert.False(t, node.Expiry().Valid(),
        "tagged node should have expiry disabled (nil) even with node.expiry configured")
}

// TestNodeExpiryZeroDisablesDefault tests that setting node.expiry to 0
// preserves the old behaviour where nodes registered without a client-requested
// expiry get no expiry (never expire).
func TestNodeExpiryZeroDisablesDefault(t *testing.T) {
    t.Parallel()

    // node.expiry = 0 means "no default expiry"
    app := createTestAppWithNodeExpiry(t, 0)

    user := app.state.CreateUserForTest("node-owner")

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "no-default-expiry",
        },
        Expiry: time.Time{}, // zero
    }

    resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp.MachineAuthorized)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    assert.False(t, node.IsTagged())
    assert.False(t, node.IsExpired(), "node should not be expired")

    // With node.expiry=0 and zero client expiry, the node gets a zero expiry
    // which IsExpired() treats as "never expires" — backwards compatible.
    if node.Expiry().Valid() {
        assert.True(t, node.Expiry().Get().IsZero(),
            "with node.expiry=0 and zero client expiry, expiry should be zero time")
    }
}

// TestClientNonZeroExpiryTakesPrecedence tests that when a client explicitly
// requests an expiry, that value is used instead of the configured default.
func TestClientNonZeroExpiryTakesPrecedence(t *testing.T) {
    t.Parallel()

    nodeExpiry := 180 * 24 * time.Hour // 180 days
    app := createTestAppWithNodeExpiry(t, nodeExpiry)

    user := app.state.CreateUserForTest("node-owner")

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    // Client explicitly requests 24h expiry
    clientExpiry := time.Now().Add(24 * time.Hour)

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "client-expiry-test",
        },
        Expiry: clientExpiry,
    }

    resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp.MachineAuthorized)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    assert.True(t, node.Expiry().Valid(), "node should have expiry set")
    assert.WithinDuration(t, clientExpiry, node.Expiry().Get(), 5*time.Second,
        "client-requested expiry should take precedence over node.expiry default")
}

// TestReregistrationAppliesDefaultExpiry tests that when a node re-registers
// with an untagged auth key and the client sends zero expiry, the configured
// default is applied.
func TestReregistrationAppliesDefaultExpiry(t *testing.T) {
    t.Parallel()

    nodeExpiry := 90 * 24 * time.Hour // 90 days
    app := createTestAppWithNodeExpiry(t, nodeExpiry)

    user := app.state.CreateUserForTest("node-owner")

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), true, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    // Initial registration with zero expiry
    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "reregister-test",
        },
        Expiry: time.Time{},
    }

    resp, err := app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp.MachineAuthorized)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)
    assert.True(t, node.Expiry().Valid(), "initial registration should get default expiry")

    firstExpiry := node.Expiry().Get()

    // Re-register with a new node key but same machine key
    nodeKey2 := key.NewNode()
    regReq2 := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey2.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "reregister-test",
        },
        Expiry: time.Time{}, // still zero
    }

    resp2, err := app.handleRegisterWithAuthKey(regReq2, machineKey.Public())
    require.NoError(t, err)
    require.True(t, resp2.MachineAuthorized)

    node2, found := app.state.GetNodeByNodeKey(nodeKey2.Public())
    require.True(t, found)
    assert.True(t, node2.Expiry().Valid(), "re-registration should also get default expiry")

    // The expiry should be refreshed (new 90d from now), not the old one
    expectedExpiry := time.Now().Add(nodeExpiry)
    assert.WithinDuration(t, expectedExpiry, node2.Expiry().Get(), 10*time.Second,
        "re-registration should refresh the default expiry")
    assert.True(t, node2.Expiry().Get().After(firstExpiry),
        "re-registration expiry should be later than initial registration expiry")
}

@@ -681,7 +681,7 @@ func TestAuthenticationFlows(t *testing.T) {
        return "", err
    }

    nodeToRegister := types.NewRegisterAuthRequest(&types.RegistrationData{
    nodeToRegister := types.NewRegisterAuthRequest(types.Node{
        Hostname: "followup-success-node",
    })
    app.state.SetAuthCacheEntry(regID, nodeToRegister)
@@ -723,7 +723,7 @@ func TestAuthenticationFlows(t *testing.T) {
        return "", err
    }

    nodeToRegister := types.NewRegisterAuthRequest(&types.RegistrationData{
    nodeToRegister := types.NewRegisterAuthRequest(types.Node{
        Hostname: "followup-timeout-node",
    })
    app.state.SetAuthCacheEntry(regID, nodeToRegister)
@@ -1341,7 +1341,7 @@ func TestAuthenticationFlows(t *testing.T) {
        return "", err
    }

    nodeToRegister := types.NewRegisterAuthRequest(&types.RegistrationData{
    nodeToRegister := types.NewRegisterAuthRequest(types.Node{
        Hostname: "nil-response-node",
    })
    app.state.SetAuthCacheEntry(regID, nodeToRegister)
@@ -2507,7 +2507,7 @@ func TestAuthenticationFlows(t *testing.T) {
    if req.Followup != "" {
        var cancel context.CancelFunc

        ctx, cancel = context.WithTimeout(context.Background(), 5*time.Second)
        ctx, cancel = context.WithTimeout(context.Background(), 100*time.Millisecond)
        defer cancel()
    }

@@ -2618,7 +2618,7 @@ func runInteractiveWorkflowTest(t *testing.T, tt struct {
    cacheEntry, found := app.state.GetAuthCacheEntry(registrationID)
    require.True(t, found, "registration cache entry should exist")
    require.NotNil(t, cacheEntry, "cache entry should not be nil")
    require.Equal(t, req.NodeKey, cacheEntry.RegistrationData().NodeKey, "cache entry should have correct node key")
    require.Equal(t, req.NodeKey, cacheEntry.Node().NodeKey(), "cache entry should have correct node key")
}

case stepTypeAuthCompletion:
@@ -3570,7 +3570,7 @@ func TestWebAuthRejectsUnauthorizedRequestTags(t *testing.T) {

    // Simulate a registration cache entry (as would be created during web auth)
    registrationID := types.MustAuthID()
    regEntry := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey.Public(),
        Hostname:   "webauth-tags-node",
@@ -3633,7 +3633,7 @@ func TestWebAuthReauthWithEmptyTagsRemovesAllTags(t *testing.T) {

    // Step 1: Initial registration with tags
    registrationID1 := types.MustAuthID()
    regEntry1 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry1 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(),
        NodeKey:    nodeKey1.Public(),
        Hostname:   "reauth-untag-node",
@@ -3660,7 +3660,7 @@ func TestWebAuthReauthWithEmptyTagsRemovesAllTags(t *testing.T) {
    // Step 2: Reauth with EMPTY tags to untag
    nodeKey2 := key.NewNode() // New node key for reauth
    registrationID2 := types.MustAuthID()
    regEntry2 := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry2 := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(), // Same machine key
        NodeKey:    nodeKey2.Public(),   // Different node key (rotation)
        Hostname:   "reauth-untag-node",
@@ -3746,7 +3746,7 @@ func TestAuthKeyTaggedToUserOwnedViaReauth(t *testing.T) {
    // Step 2: Reauth via web auth with EMPTY tags to transition to user-owned
    nodeKey2 := key.NewNode() // New node key for reauth
    registrationID := types.MustAuthID()
    regEntry := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(), // Same machine key
        NodeKey:    nodeKey2.Public(),   // Different node key (rotation)
        Hostname:   "authkey-tagged-node",
@@ -3945,7 +3945,7 @@ func TestTaggedNodeWithoutUserToDifferentUser(t *testing.T) {
    // This is what happens when running: headscale auth register --auth-id <id> --user alice
    nodeKey2 := key.NewNode()
    registrationID := types.MustAuthID()
    regEntry := types.NewRegisterAuthRequest(&types.RegistrationData{
    regEntry := types.NewRegisterAuthRequest(types.Node{
        MachineKey: machineKey.Public(), // Same machine key as the tagged node
        NodeKey:    nodeKey2.Public(),
        Hostname:   "tagged-orphan-node",

@@ -41,7 +41,6 @@ var tailscaleToCapVer = map[string]tailcfg.CapabilityVersion{
    "v1.90": 130,
    "v1.92": 131,
    "v1.94": 131,
    "v1.96": 133,
}

var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
@@ -76,7 +75,6 @@ var capVerToTailscaleVer = map[tailcfg.CapabilityVersion]string{
    125: "v1.88",
    130: "v1.90",
    131: "v1.92",
    133: "v1.96",
}

// SupportedMajorMinorVersions is the number of major.minor Tailscale versions supported.
@@ -84,4 +82,4 @@ const SupportedMajorMinorVersions = 10

// MinSupportedCapabilityVersion represents the minimum capability version
// supported by this Headscale instance (latest 10 minor versions)
const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 109
const MinSupportedCapabilityVersion tailcfg.CapabilityVersion = 106
@@ -9,9 +9,10 @@ var tailscaleLatestMajorMinorTests = []struct {
    stripV   bool
    expected []string
}{
    {3, false, []string{"v1.92", "v1.94", "v1.96"}},
    {2, true, []string{"1.94", "1.96"}},
    {3, false, []string{"v1.90", "v1.92", "v1.94"}},
    {2, true, []string{"1.92", "1.94"}},
    {10, true, []string{
        "1.76",
        "1.78",
        "1.80",
        "1.82",
@@ -21,7 +22,6 @@ var tailscaleLatestMajorMinorTests = []struct {
        "1.90",
        "1.92",
        "1.94",
        "1.96",
    }},
    {0, false, nil},
}
@@ -30,7 +30,7 @@ var capVerMinimumTailscaleVersionTests = []struct {
    input    tailcfg.CapabilityVersion
    expected string
}{
    {109, "v1.78"},
    {106, "v1.74"},
    {32, "v1.24"},
    {41, "v1.30"},
    {46, "v1.32"},

@@ -24,6 +24,7 @@ import (
    "gorm.io/gorm"
    "gorm.io/gorm/logger"
    "gorm.io/gorm/schema"
    "zgo.at/zcache/v2"
)

//go:embed schema.sql
@@ -44,15 +45,19 @@ const (
)

type HSDatabase struct {
    DB  *gorm.DB
    cfg *types.Config
    DB       *gorm.DB
    cfg      *types.Config
    regCache *zcache.Cache[types.AuthID, types.AuthRequest]
}

// NewHeadscaleDatabase creates a new database connection and runs migrations.
// It accepts the full configuration to allow migrations access to policy settings.
//
//nolint:gocyclo // complex database initialization with many migrations
func NewHeadscaleDatabase(cfg *types.Config) (*HSDatabase, error) {
func NewHeadscaleDatabase(
    cfg *types.Config,
    regCache *zcache.Cache[types.AuthID, types.AuthRequest],
) (*HSDatabase, error) {
    dbConn, err := openDB(cfg.Database)
    if err != nil {
        return nil, err
@@ -672,7 +677,7 @@ AND auth_key_id NOT IN (
        continue
    }

    mergedTags := append(slices.Clone(existingTags), validatedTags...)
    mergedTags := append(existingTags, validatedTags...)
    slices.Sort(mergedTags)
    mergedTags = slices.Compact(mergedTags)

@@ -833,8 +838,9 @@ WHERE tags IS NOT NULL AND tags != '[]' AND tags != '';
}

db := HSDatabase{
    DB:  dbConn,
    cfg: cfg,
    DB:       dbConn,
    cfg:      cfg,
    regCache: regCache,
}

return &db, err

@@ -8,11 +8,13 @@ import (
    "path/filepath"
    "strings"
    "testing"
    "time"

    "github.com/juanfont/headscale/hscontrol/types"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "gorm.io/gorm"
    "zgo.at/zcache/v2"
)

// TestSQLiteMigrationAndDataValidation tests specific SQLite migration scenarios
@@ -160,6 +162,10 @@ func TestSQLiteMigrationAndDataValidation(t *testing.T) {
    }
}

func emptyCache() *zcache.Cache[types.AuthID, types.AuthRequest] {
    return zcache.New[types.AuthID, types.AuthRequest](time.Minute, time.Hour)
}

func createSQLiteFromSQLFile(sqlFilePath, dbPath string) error {
    db, err := sql.Open("sqlite", dbPath)
    if err != nil {
@@ -373,6 +379,7 @@ func dbForTestWithPath(t *testing.T, sqlFilePath string) *HSDatabase {
                Mode: types.PolicyModeDB,
            },
        },
        emptyCache(),
    )
    if err != nil {
        t.Fatalf("setting up database: %s", err)
@@ -432,6 +439,7 @@ func TestSQLiteAllTestdataMigrations(t *testing.T) {
                Mode: types.PolicyModeDB,
            },
        },
        emptyCache(),
    )
    require.NoError(t, err)
})

@@ -1,25 +0,0 @@
package db

import (
    "os"
    "path/filepath"
    "runtime"
    "testing"
)

// TestMain ensures the working directory is set to the package source directory
// so that relative testdata/ paths resolve correctly when the test binary is
// executed from an arbitrary location (e.g., via "go tool stress").
func TestMain(m *testing.M) {
    _, filename, _, ok := runtime.Caller(0)
    if !ok {
        panic("could not determine test source directory")
    }

    err := os.Chdir(filepath.Dir(filename))
    if err != nil {
        panic("could not chdir to test source directory: " + err.Error())
    }

    os.Exit(m.Run())
}

@@ -170,18 +170,6 @@ func ListPreAuthKeys(tx *gorm.DB) ([]types.PreAuthKey, error) {
    return keys, nil
}

// ListPreAuthKeysByUser returns all PreAuthKeys belonging to a specific user.
func ListPreAuthKeysByUser(tx *gorm.DB, uid types.UserID) ([]types.PreAuthKey, error) {
    var keys []types.PreAuthKey

    err := tx.Preload("User").Where("user_id = ?", uint(uid)).Find(&keys).Error
    if err != nil {
        return nil, err
    }

    return keys, nil
}

var (
    ErrPreAuthKeyFailedToParse    = errors.New("failed to parse auth-key")
    ErrPreAuthKeyNotTaggedOrOwned = errors.New("auth-key must be either tagged or owned by user")
@@ -331,22 +319,11 @@ func (hsdb *HSDatabase) DeletePreAuthKey(id uint64) error {
    })
}

// UsePreAuthKey atomically marks a PreAuthKey as used. The UPDATE is
// guarded by `used = false` so two concurrent registrations racing for
// the same single-use key cannot both succeed: the first commits and
// the second returns PAKError("authkey already used"). Without the
// guard the previous code (Update("used", true) with no WHERE) would
// silently let both transactions claim the key.
// UsePreAuthKey marks a PreAuthKey as used.
func UsePreAuthKey(tx *gorm.DB, k *types.PreAuthKey) error {
    res := tx.Model(&types.PreAuthKey{}).
        Where("id = ? AND used = ?", k.ID, false).
        Update("used", true)
    if res.Error != nil {
        return fmt.Errorf("updating key used status in database: %w", res.Error)
    }

    if res.RowsAffected == 0 {
        return types.PAKError("authkey already used")
    err := tx.Model(k).Update("used", true).Error
    if err != nil {
        return fmt.Errorf("updating key used status in database: %w", err)
    }

    k.Used = true

@@ -11,7 +11,6 @@ import (
    "github.com/juanfont/headscale/hscontrol/util"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "gorm.io/gorm"
)

func TestCreatePreAuthKey(t *testing.T) {
@@ -445,45 +444,3 @@ func TestMultipleLegacyKeysAllowed(t *testing.T) {
    require.Error(t, err, "duplicate non-empty prefix should be rejected")
    assert.Contains(t, err.Error(), "UNIQUE constraint failed", "should fail with UNIQUE constraint error")
}

// TestUsePreAuthKeyAtomicCAS verifies that UsePreAuthKey is an atomic
// compare-and-set: a second call against an already-used key reports
// PAKError("authkey already used") rather than silently succeeding.
func TestUsePreAuthKeyAtomicCAS(t *testing.T) {
    db, err := newSQLiteTestDB()
    require.NoError(t, err)

    user, err := db.CreateUser(types.User{Name: "atomic-cas"})
    require.NoError(t, err)

    pakNew, err := db.CreatePreAuthKey(user.TypedID(), false /* reusable */, false, nil, nil)
    require.NoError(t, err)

    pak, err := db.GetPreAuthKey(pakNew.Key)
    require.NoError(t, err)
    require.False(t, pak.Reusable, "test sanity: key must be single-use")

    // First Use should commit cleanly.
    err = db.Write(func(tx *gorm.DB) error {
        return UsePreAuthKey(tx, pak)
    })
    require.NoError(t, err, "first UsePreAuthKey should succeed")

    // Reload from disk to drop the in-memory Used=true the first call
    // set on the struct, simulating a second concurrent transaction
    // that loaded the same row before the first one committed.
    stale, err := db.GetPreAuthKey(pakNew.Key)
    require.NoError(t, err)

    stale.Used = false

    err = db.Write(func(tx *gorm.DB) error {
        return UsePreAuthKey(tx, stale)
    })
    require.Error(t, err, "second UsePreAuthKey on the same single-use key must fail")

    var pakErr types.PAKError
    require.ErrorAs(t, err, &pakErr,
        "second UsePreAuthKey error must be a PAKError, got: %v", err)
    assert.Equal(t, "authkey already used", pakErr.Error())
}

@@ -34,6 +34,7 @@ func newSQLiteTestDB() (*HSDatabase, error) {
                Mode: types.PolicyModeDB,
            },
        },
        emptyCache(),
    )
    if err != nil {
        return nil, err
@@ -55,7 +56,7 @@ func newPostgresDBForTest(t *testing.T) *url.URL {

    srv, err := postgrestest.Start(ctx)
    if err != nil {
        t.Skipf("start postgres: %s", err)
        t.Fatal(err)
    }

    t.Cleanup(srv.Cleanup)
@@ -94,6 +95,7 @@ func newHeadscaleDBFromPostgresURL(t *testing.T, pu *url.URL) *HSDatabase {
                Mode: types.PolicyModeDB,
            },
        },
        emptyCache(),
    )
    if err != nil {
        t.Fatal(err)

@@ -65,7 +65,7 @@ func DestroyUser(tx *gorm.DB, uid types.UserID) error {
        return ErrUserStillHasNodes
    }

    keys, err := ListPreAuthKeysByUser(tx, uid)
    keys, err := ListPreAuthKeys(tx)
    if err != nil {
        return err
    }

@@ -160,48 +160,6 @@ func TestDestroyUserErrors(t *testing.T) {
            require.ErrorIs(t, err, ErrUserStillHasNodes)
        },
    },
    {
        // Regression test for https://github.com/juanfont/headscale/issues/3154
        // DestroyUser must only delete the target user's pre-auth keys,
        // not all pre-auth keys in the database.
        name: "success_only_deletes_own_preauthkeys",
        test: func(t *testing.T, db *HSDatabase) {
            t.Helper()

            userA := db.CreateUserForTest("usera")
            userB := db.CreateUserForTest("userb")

            // Create 2 keys for userA, 1 key for userB.
            _, err := db.CreatePreAuthKey(userA.TypedID(), false, false, nil, nil)
            require.NoError(t, err)
            _, err = db.CreatePreAuthKey(userA.TypedID(), false, false, nil, nil)
            require.NoError(t, err)
            _, err = db.CreatePreAuthKey(userB.TypedID(), false, false, nil, nil)
            require.NoError(t, err)

            // Sanity check: 3 keys exist.
            allKeys, err := db.ListPreAuthKeys()
            require.NoError(t, err)
            require.Len(t, allKeys, 3)

            // Delete userB.
            err = db.DestroyUser(types.UserID(userB.ID))
            require.NoError(t, err)

            // Only userA's 2 keys should remain.
            remaining, err := db.ListPreAuthKeys()
            require.NoError(t, err)
            assert.Len(t, remaining, 2,
                "expected 2 keys for userA, got %d — DestroyUser deleted keys from other users",
                len(remaining))

            for _, key := range remaining {
                assert.NotNil(t, key.UserID)
                assert.Equal(t, userA.ID, *key.UserID,
                    "remaining key should belong to userA")
            }
        },
    },
}

for _, tt := range tests {

@@ -3,46 +3,16 @@ package hscontrol
import (
    "encoding/json"
    "fmt"
    "net"
    "net/http"
    "net/netip"
    "strings"

    "github.com/arl/statsviz"
    "github.com/juanfont/headscale/hscontrol/mapper"
    "github.com/juanfont/headscale/hscontrol/types"
    "github.com/prometheus/client_golang/prometheus/promhttp"
    "tailscale.com/tsweb"
)

// protectedDebugHandler wraps an http.Handler with an access check that
// allows requests from loopback, Tailscale CGNAT IPs, and private
// (RFC 1918 / RFC 4193) addresses. This extends tsweb.Protected which
// only allows loopback and Tailscale IPs.
func protectedDebugHandler(h http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if tsweb.AllowDebugAccess(r) {
            h.ServeHTTP(w, r)

            return
        }

        // tsweb.AllowDebugAccess rejects X-Forwarded-For and non-TS IPs.
        // Additionally allow private/LAN addresses so operators can reach
        // debug endpoints from their local network without tailscaled.
        ipStr, _, err := net.SplitHostPort(r.RemoteAddr)
        if err == nil {
            ip, parseErr := netip.ParseAddr(ipStr)
            if parseErr == nil && ip.IsPrivate() {
                h.ServeHTTP(w, r)

                return
            }
        }

        http.Error(w, "debug access denied", http.StatusForbidden)
    })
}

func (h *Headscale) debugHTTPServer() *http.Server {
    debugMux := http.NewServeMux()
    debug := tsweb.Debugger(debugMux)
@@ -324,13 +294,8 @@ func (h *Headscale) debugHTTPServer() *http.Server {
        }
    }))

    // statsviz.Register would mount handlers directly on the raw mux,
    // bypassing the access gate. Build the server by hand and wrap
    // each handler with protectedDebugHandler.
    statsvizSrv, err := statsviz.NewServer()
    err := statsviz.Register(debugMux)
    if err == nil {
        debugMux.Handle("/debug/statsviz/", protectedDebugHandler(statsvizSrv.Index()))
        debugMux.Handle("/debug/statsviz/ws", protectedDebugHandler(statsvizSrv.Ws()))
        debug.URL("/debug/statsviz", "Statsviz (visualise go metrics)")
    }

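The `protectedDebugHandler` shown in the hunk above gates debug endpoints by the request's source address. A self-contained sketch of the address check it performs (hypothetical `allowDebugAddr` helper; the real handler first defers to `tsweb.AllowDebugAccess`, which additionally admits Tailscale CGNAT IPs and rejects X-Forwarded-For spoofing):

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// allowDebugAddr reports whether a request's RemoteAddr ("host:port")
// should reach the debug mux: loopback and RFC 1918 / RFC 4193 private
// addresses pass, everything else is rejected.
func allowDebugAddr(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false
	}

	ip, err := netip.ParseAddr(host)
	if err != nil {
		return false
	}

	return ip.IsLoopback() || ip.IsPrivate()
}

func main() {
	for _, addr := range []string{
		"127.0.0.1:54321", // loopback: allowed
		"10.1.2.3:80",     // RFC 1918: allowed
		"8.8.8.8:443",     // public: denied
	} {
		fmt.Println(addr, allowDebugAddr(addr))
	}
}
```

Note that `netip.Addr.IsPrivate` covers 10/8, 172.16/12, 192.168/16 and the IPv6 ULA range fc00::/7, which matches the RFC 1918 / RFC 4193 wording in the handler's comment.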
@@ -364,18 +329,38 @@ func (h *Headscale) debugBatcher() string {

    var nodes []nodeStatus

    debugInfo := h.mapBatcher.Debug()
    for nodeID, info := range debugInfo {
        nodes = append(nodes, nodeStatus{
            id:                nodeID,
            connected:         info.Connected,
            activeConnections: info.ActiveConnections,
        })
        totalNodes++
    // Try to get detailed debug info if we have a LockFreeBatcher
    if batcher, ok := h.mapBatcher.(*mapper.LockFreeBatcher); ok {
        debugInfo := batcher.Debug()
        for nodeID, info := range debugInfo {
            nodes = append(nodes, nodeStatus{
                id:                nodeID,
                connected:         info.Connected,
                activeConnections: info.ActiveConnections,
            })
            totalNodes++

        if info.Connected {
            connectedCount++
            if info.Connected {
                connectedCount++
            }
        }
    } else {
        // Fallback to basic connection info
        connectedMap := h.mapBatcher.ConnectedMap()
        connectedMap.Range(func(nodeID types.NodeID, connected bool) bool {
            nodes = append(nodes, nodeStatus{
                id:                nodeID,
                connected:         connected,
                activeConnections: 0,
            })
            totalNodes++

            if connected {
                connectedCount++
            }

            return true
        })
    }

    // Sort by node ID
@@ -425,13 +410,28 @@ func (h *Headscale) debugBatcherJSON() DebugBatcherInfo {
        TotalNodes: 0,
    }

    debugInfo := h.mapBatcher.Debug()
    for nodeID, debugData := range debugInfo {
        info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
            Connected:         debugData.Connected,
            ActiveConnections: debugData.ActiveConnections,
    // Try to get detailed debug info if we have a LockFreeBatcher
    if batcher, ok := h.mapBatcher.(*mapper.LockFreeBatcher); ok {
        debugInfo := batcher.Debug()
        for nodeID, debugData := range debugInfo {
            info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
                Connected:         debugData.Connected,
                ActiveConnections: debugData.ActiveConnections,
            }
            info.TotalNodes++
        }
        info.TotalNodes++
    } else {
        // Fallback to basic connection info
        connectedMap := h.mapBatcher.ConnectedMap()
        connectedMap.Range(func(nodeID types.NodeID, connected bool) bool {
            info.ConnectedNodes[fmt.Sprintf("%d", nodeID)] = DebugBatcherNodeInfo{
                Connected:         connected,
                ActiveConnections: 0,
            }
            info.TotalNodes++

            return true
        })
    }

    return info

@@ -802,16 +802,27 @@ func (api headscaleV1APIServer) DebugCreateNode(
        Interface("route-str", request.GetRoutes()).
        Msg("Creating routes for node")

    hostinfo := tailcfg.Hostinfo{
        RoutableIPs: routes,
        OS:          "TestOS",
        Hostname:    request.GetName(),
    }

    registrationId, err := types.AuthIDFromString(request.GetKey())
    if err != nil {
        return nil, err
    }

    regData := &types.RegistrationData{
    newNode := types.Node{
        NodeKey:    key.NewNode().Public(),
        MachineKey: key.NewMachine().Public(),
        Hostname:   request.GetName(),
        Expiry:     &time.Time{}, // zero time, not nil — preserves proto JSON round-trip semantics
        User:       user,

        Expiry:   &time.Time{},
        LastSeen: &time.Time{},

        Hostinfo: &hostinfo,
    }

    log.Debug().
@@ -819,27 +830,10 @@ func (api headscaleV1APIServer) DebugCreateNode(
        Str("registration_id", registrationId.String()).
        Msg("adding debug machine via CLI, appending to registration cache")

    authRegReq := types.NewRegisterAuthRequest(regData)
    authRegReq := types.NewRegisterAuthRequest(newNode)
    api.h.state.SetAuthCacheEntry(registrationId, authRegReq)

    // Echo back a synthetic Node so the debug response surface stays
    // stable. The actual node is created later by AuthApprove via
    // HandleNodeFromAuthPath using the cached RegistrationData.
    echoNode := types.Node{
        NodeKey:    regData.NodeKey,
        MachineKey: regData.MachineKey,
        Hostname:   regData.Hostname,
        User:       user,
        Expiry:     &time.Time{},
        LastSeen:   &time.Time{},
        Hostinfo: &tailcfg.Hostinfo{
            Hostname:    request.GetName(),
            OS:          "TestOS",
            RoutableIPs: routes,
        },
    }

    return &v1.DebugCreateNodeResponse{Node: echoNode.Proto()}, nil
    return &v1.DebugCreateNodeResponse{Node: newNode.Proto()}, nil
}

func (api headscaleV1APIServer) Health(

@@ -264,284 +264,6 @@ func TestSetTags_CannotRemoveAllTags(t *testing.T) {
    assert.Nil(t, resp.GetNode())
}

// TestSetTags_ClearsUserIDInDatabase tests that converting a user-owned node
// to a tagged node via SetTags correctly persists user_id = NULL in the
// database, not just in-memory.
// https://github.com/juanfont/headscale/issues/3161
func TestSetTags_ClearsUserIDInDatabase(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)

    user := app.state.CreateUserForTest("tag-owner")
    err := app.state.UpdatePolicyManagerUsersForTest()
    require.NoError(t, err)

    _, err = app.state.SetPolicy([]byte(`{
        "tagOwners": {"tag:server": ["tag-owner@"]},
        "acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
    }`))
    require.NoError(t, err)

    // Register a user-owned node (untagged PreAuthKey).
    pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "user-owned-node",
        },
    }
    _, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)
    require.False(t, node.IsTagged(), "node should start as user-owned")
    require.True(t, node.UserID().Valid(), "user-owned node must have UserID")

    nodeID := node.ID()

    // Convert to tagged via SetTags API.
    apiServer := newHeadscaleV1APIServer(app)
    _, err = apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
        NodeId: uint64(nodeID),
        Tags:   []string{"tag:server"},
    })
    require.NoError(t, err)

    // Verify in-memory state is correct.
    nsNode, found := app.state.GetNodeByID(nodeID)
    require.True(t, found)
    assert.True(t, nsNode.IsTagged(), "NodeStore: node should be tagged")
    assert.False(t, nsNode.UserID().Valid(),
        "NodeStore: UserID should be nil for tagged node")

    // THE CRITICAL CHECK: verify database has user_id = NULL.
    dbNode, err := app.state.DB().GetNodeByID(nodeID)
    require.NoError(t, err)
    assert.Nil(t, dbNode.UserID,
        "Database: user_id must be NULL after converting to tagged node")
    assert.True(t, dbNode.IsTagged(),
        "Database: tags must be set")
}

// TestSetTags_NodeDisappearsFromUserListing tests issue #3161:
// after converting a user-owned node to tagged, it must no longer appear
// when listing nodes filtered by the original user.
// https://github.com/juanfont/headscale/issues/3161
func TestSetTags_NodeDisappearsFromUserListing(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)

    user := app.state.CreateUserForTest("list-user")
    err := app.state.UpdatePolicyManagerUsersForTest()
    require.NoError(t, err)

    _, err = app.state.SetPolicy([]byte(`{
        "tagOwners": {"tag:web": ["list-user@"]},
        "acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
    }`))
    require.NoError(t, err)

    // Register a user-owned node.
    pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "web-server",
        },
    }
    _, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    // Verify node appears under user before tagging.
    apiServer := newHeadscaleV1APIServer(app)
    resp, err := apiServer.ListNodes(context.Background(), &v1.ListNodesRequest{
        User: "list-user",
    })
    require.NoError(t, err)
    assert.Len(t, resp.GetNodes(), 1, "user-owned node should appear under user")

    // Convert to tagged.
    _, err = apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
        NodeId: uint64(node.ID()),
        Tags:   []string{"tag:web"},
    })
    require.NoError(t, err)

    // Node must NOT appear when listing by original user.
    resp, err = apiServer.ListNodes(context.Background(), &v1.ListNodesRequest{
        User: "list-user",
    })
    require.NoError(t, err)
    assert.Empty(t, resp.GetNodes(),
        "tagged node must not appear when listing nodes for original user")

    // Node must still appear in unfiltered listing.
    allResp, err := apiServer.ListNodes(context.Background(), &v1.ListNodesRequest{})
    require.NoError(t, err)
    require.Len(t, allResp.GetNodes(), 1)
    assert.Contains(t, allResp.GetNodes()[0].GetTags(), "tag:web")
}

// TestSetTags_NodeStoreAndDBConsistency verifies that after SetTags, the
// in-memory NodeStore and the database agree on the node's ownership state.
// https://github.com/juanfont/headscale/issues/3161
func TestSetTags_NodeStoreAndDBConsistency(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)

    user := app.state.CreateUserForTest("consistency-user")
    err := app.state.UpdatePolicyManagerUsersForTest()
    require.NoError(t, err)

    _, err = app.state.SetPolicy([]byte(`{
        "tagOwners": {"tag:db": ["consistency-user@"]},
        "acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
    }`))
    require.NoError(t, err)

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "db-node",
        },
    }
    _, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    nodeID := node.ID()

    // Convert to tagged.
    apiServer := newHeadscaleV1APIServer(app)
    _, err = apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
        NodeId: uint64(nodeID),
        Tags:   []string{"tag:db"},
    })
    require.NoError(t, err)

    // In-memory state.
    nsNode, found := app.state.GetNodeByID(nodeID)
    require.True(t, found)

    // Database state.
    dbNode, err := app.state.DB().GetNodeByID(nodeID)
    require.NoError(t, err)

    // Both must agree: tagged, no UserID.
    assert.True(t, nsNode.IsTagged(), "NodeStore: should be tagged")
    assert.True(t, dbNode.IsTagged(), "Database: should be tagged")

    assert.False(t, nsNode.UserID().Valid(),
        "NodeStore: UserID should be nil")
    assert.Nil(t, dbNode.UserID,
        "Database: user_id should be NULL")

    assert.Equal(t,
        nsNode.UserID().Valid(),
        dbNode.UserID != nil,
        "NodeStore and database must agree on UserID state")
}

// TestSetTags_UserDeletionDoesNotCascadeToTaggedNode tests that deleting the
// original user does not cascade-delete a node that was converted to tagged
// via SetTags. This catches the real-world consequence of stale user_id:
// ON DELETE CASCADE would destroy the tagged node.
// https://github.com/juanfont/headscale/issues/3161
func TestSetTags_UserDeletionDoesNotCascadeToTaggedNode(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)

    user := app.state.CreateUserForTest("doomed-user")
    err := app.state.UpdatePolicyManagerUsersForTest()
    require.NoError(t, err)

    _, err = app.state.SetPolicy([]byte(`{
        "tagOwners": {"tag:survivor": ["doomed-user@"]},
        "acls": [{"action": "accept", "src": ["*"], "dst": ["*:*"]}]
    }`))
    require.NoError(t, err)

    pak, err := app.state.CreatePreAuthKey(user.TypedID(), false, false, nil, nil)
    require.NoError(t, err)

    machineKey := key.NewMachine()
    nodeKey := key.NewNode()

    regReq := tailcfg.RegisterRequest{
        Auth: &tailcfg.RegisterResponseAuth{
            AuthKey: pak.Key,
        },
        NodeKey: nodeKey.Public(),
        Hostinfo: &tailcfg.Hostinfo{
            Hostname: "survivor-node",
        },
    }
    _, err = app.handleRegisterWithAuthKey(regReq, machineKey.Public())
    require.NoError(t, err)

    node, found := app.state.GetNodeByNodeKey(nodeKey.Public())
    require.True(t, found)

    nodeID := node.ID()

    // Convert to tagged.
    apiServer := newHeadscaleV1APIServer(app)
    _, err = apiServer.SetTags(context.Background(), &v1.SetTagsRequest{
        NodeId: uint64(nodeID),
        Tags:   []string{"tag:survivor"},
    })
    require.NoError(t, err)

    // Delete the original user.
    _, err = app.state.DeleteUser(*user.TypedID())
    require.NoError(t, err)

    // The tagged node must survive in both NodeStore and database.
    nsNode, found := app.state.GetNodeByID(nodeID)
    require.True(t, found, "tagged node must survive user deletion in NodeStore")
    assert.True(t, nsNode.IsTagged())

    dbNode, err := app.state.DB().GetNodeByID(nodeID)
    require.NoError(t, err, "tagged node must survive user deletion in database")
    assert.True(t, dbNode.IsTagged())
    assert.Nil(t, dbNode.UserID)
}

// TestDeleteUser_ReturnsProperChangeSignal tests issue #2967 fix:
// When a user is deleted, the state should return a non-empty change signal
// to ensure policy manager is updated and clients are notified immediately.

@@ -80,19 +80,13 @@ func parseCapabilityVersion(req *http.Request) (tailcfg.CapabilityVersion, error
    return tailcfg.CapabilityVersion(clientCapabilityVersion), nil
}

// verifyBodyLimit caps the request body for /verify. The DERP verify
// protocol payload (tailcfg.DERPAdmitClientRequest) is a few hundred
// bytes; 4 KiB is generous and prevents an unauthenticated client from
// OOMing the public router with arbitrarily large POSTs.
const verifyBodyLimit int64 = 4 * 1024

func (h *Headscale) handleVerifyRequest(
    req *http.Request,
    writer io.Writer,
) error {
    body, err := io.ReadAll(req.Body)
    if err != nil {
        return NewHTTPError(http.StatusRequestEntityTooLarge, "request body too large", fmt.Errorf("reading request body: %w", err))
        return fmt.Errorf("reading request body: %w", err)
    }

    var derpAdmitClientRequest tailcfg.DERPAdmitClientRequest
@@ -130,8 +124,6 @@ func (h *Headscale) VerifyHandler(
        return
    }

    req.Body = http.MaxBytesReader(writer, req.Body, verifyBodyLimit)

    err := h.handleVerifyRequest(req, writer)
    if err != nil {
        httpError(writer, err)

@@ -1,57 +0,0 @@
package hscontrol

import (
    "bytes"
    "context"
    "errors"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"

    "github.com/stretchr/testify/assert"
)

// TestHandleVerifyRequest_OversizedBodyRejected verifies that the
// /verify handler refuses POST bodies larger than verifyBodyLimit.
// The MaxBytesReader is applied in VerifyHandler, so we simulate
// the same wrapping here.
func TestHandleVerifyRequest_OversizedBodyRejected(t *testing.T) {
    t.Parallel()

    body := strings.Repeat("x", int(verifyBodyLimit)+128)
    rec := httptest.NewRecorder()
    req := httptest.NewRequestWithContext(
        context.Background(),
        http.MethodPost,
        "/verify",
        bytes.NewReader([]byte(body)),
    )
    req.Body = http.MaxBytesReader(rec, req.Body, verifyBodyLimit)

    h := &Headscale{}

    err := h.handleVerifyRequest(req, &bytes.Buffer{})
    if err == nil {
        t.Fatal("oversized verify body must be rejected")
    }

    httpErr, ok := errorAsHTTPError(err)
    if !ok {
        t.Fatalf("error must be an HTTPError, got: %T (%v)", err, err)
    }

    assert.Equal(t, http.StatusRequestEntityTooLarge, httpErr.Code,
        "oversized body must surface 413")
}

// errorAsHTTPError is a small local helper that unwraps an HTTPError
// from an error chain.
func errorAsHTTPError(err error) (HTTPError, bool) {
    var h HTTPError
    if errors.As(err, &h) {
        return h, true
    }

    return HTTPError{}, false
}

@@ -3,8 +3,6 @@ package mapper
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"sync"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
|
||||
"github.com/juanfont/headscale/hscontrol/state"
|
||||
@@ -26,31 +24,43 @@ var (
|
||||
ErrNodeNotFoundMapper = errors.New("node not found")
|
||||
)
|
||||
|
||||
// offlineNodeCleanupThreshold is how long a node must be disconnected
|
||||
// before cleanupOfflineNodes removes its in-memory state.
|
||||
const offlineNodeCleanupThreshold = 15 * time.Minute
|
||||
|
||||
var mapResponseGenerated = promauto.NewCounterVec(prometheus.CounterOpts{
|
||||
Namespace: "headscale",
|
||||
Name: "mapresponse_generated_total",
|
||||
Help: "total count of mapresponses generated by response type",
|
||||
}, []string{"response_type"})
|
||||
|
||||
func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *Batcher {
|
||||
return &Batcher{
|
||||
type batcherFunc func(cfg *types.Config, state *state.State) Batcher
|
||||
|
||||
// Batcher defines the common interface for all batcher implementations.
|
||||
type Batcher interface {
|
||||
Start()
|
||||
Close()
|
||||
AddNode(id types.NodeID, c chan<- *tailcfg.MapResponse, version tailcfg.CapabilityVersion, stop func()) error
|
||||
RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool
|
||||
IsConnected(id types.NodeID) bool
|
||||
ConnectedMap() *xsync.Map[types.NodeID, bool]
|
||||
AddWork(r ...change.Change)
|
||||
MapResponseFromChange(id types.NodeID, r change.Change) (*tailcfg.MapResponse, error)
|
||||
DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error)
|
||||
}
|
||||
|
||||
func NewBatcher(batchTime time.Duration, workers int, mapper *mapper) *LockFreeBatcher {
	return &LockFreeBatcher{
		mapper:  mapper,
		workers: workers,
		tick:    time.NewTicker(batchTime),

		// The size of this channel is arbitrarily chosen; the sizing should be revisited.
		workCh:         make(chan work, workers*200),
		done:           make(chan struct{}),
		nodes:          xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
		connected:      xsync.NewMap[types.NodeID, *time.Time](),
		pendingChanges: xsync.NewMap[types.NodeID, []change.Change](),
	}
}

// NewBatcherAndMapper creates a Batcher implementation.
func NewBatcherAndMapper(cfg *types.Config, state *state.State) Batcher {
	m := newMapper(cfg, state)
	b := NewBatcher(cfg.Tuning.BatchChangeDelay, cfg.Tuning.BatcherWorkers, m)
	m.batcher = b
@@ -128,30 +138,6 @@ func generateMapResponse(nc nodeConnection, mapper *mapper, r change.Change) (*t
		return nil, fmt.Errorf("generating map response for nodeID %d: %w", nodeID, err)
	}

	// When a full update (SendAllPeers=true) produces zero visible peers
	// (e.g., a restrictive policy isolates this node), the resulting
	// MapResponse has Peers: []*tailcfg.Node{} (empty non-nil slice).
	//
	// The Tailscale client only treats Peers as a full authoritative
	// replacement when len(Peers) > 0 (controlclient/map.go:462).
	// An empty Peers slice is indistinguishable from a delta response,
	// so the client silently preserves its existing peer state.
	//
	// This matters when a FullUpdate() replaces a pending PolicyChange()
	// in the batcher (addToBatch short-circuits on HasFull). The
	// PolicyChange would have computed PeersRemoved via computePeerDiff,
	// but the FullUpdate path uses WithPeers which sets Peers: [].
	//
	// Fix: when a full update results in zero peers, compute the diff
	// against lastSentPeers and add explicit PeersRemoved entries so
	// the client correctly clears its stale peer state.
	if mapResp != nil && r.SendAllPeers && len(mapResp.Peers) == 0 {
		removedPeers := nc.computePeerDiff(nil)
		if len(removedPeers) > 0 {
			mapResp.PeersRemoved = removedPeers
		}
	}

	return mapResp, nil
}

@@ -178,20 +164,10 @@ func handleNodeChange(nc nodeConnection, mapper *mapper, r change.Change) error
	// Send the map response
	err = nc.send(data)
	if err != nil {
		// If the node has no active connections, the data was not
		// delivered. Do not update lastSentPeers — recording phantom
		// peer state would corrupt future computePeerDiff calculations,
		// causing the node to miss peer additions or removals after
		// reconnection.
		if errors.Is(err, errNoActiveConnections) {
			return nil
		}

		return fmt.Errorf("sending map response to node %d: %w", nodeID, err)
	}

	// Update peer tracking only after confirmed delivery to at
	// least one active connection.
	nc.updateSentPeers(data)

	return nil
@@ -204,568 +180,8 @@ type workResult struct {
}

// work represents a unit of work to be processed by workers.
// All pending changes for a node are bundled into a single work item
// so that one worker processes them sequentially. This prevents
// out-of-order MapResponse delivery and races on lastSentPeers
// that occur when multiple workers process changes for the same node.
type work struct {
	changes  []change.Change
	nodeID   types.NodeID
	resultCh chan<- workResult // optional channel for synchronous operations
}

// Batcher errors.
var (
	errConnectionClosed      = errors.New("connection channel already closed")
	ErrInitialMapSendTimeout = errors.New("sending initial map: timeout")
	ErrBatcherShuttingDown   = errors.New("batcher shutting down")
	ErrConnectionSendTimeout = errors.New("timeout sending to channel (likely stale connection)")
)

// Batcher batches and distributes map responses to connected nodes.
// It uses concurrent maps, per-node mutexes, and a worker pool.
//
// Lifecycle: Call Start() to spawn workers, then Close() to shut down.
// Close() blocks until all workers have exited. A Batcher must not
// be reused after Close().
type Batcher struct {
	tick    *time.Ticker
	mapper  *mapper
	workers int

	nodes *xsync.Map[types.NodeID, *multiChannelNodeConn]

	// Work queue channel
	workCh   chan work
	done     chan struct{}
	doneOnce sync.Once // Ensures done is only closed once

	// wg tracks the doWork and all worker goroutines so that Close()
	// can block until they have fully exited.
	wg sync.WaitGroup

	started atomic.Bool // Ensures Start() is only called once

	// Metrics
	totalNodes      atomic.Int64
	workQueuedCount atomic.Int64
	workProcessed   atomic.Int64
	workErrors      atomic.Int64
}

// AddNode registers a new node connection with the batcher and sends an initial map response.
// It creates or updates the node's connection data, validates the initial map generation,
// and notifies other nodes that this node has come online.
// The stop function tears down the owning session if this connection is later declared stale.
func (b *Batcher) AddNode(
	id types.NodeID,
	c chan<- *tailcfg.MapResponse,
	version tailcfg.CapabilityVersion,
	stop func(),
) error {
	addNodeStart := time.Now()
	nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()

	// Generate connection ID
	connID := generateConnectionID()

	// Create new connection entry
	now := time.Now()
	newEntry := &connectionEntry{
		id:      connID,
		c:       c,
		version: version,
		created: now,
		stop:    stop,
	}
	// Initialize last used timestamp
	newEntry.lastUsed.Store(now.Unix())

	// Get or create multiChannelNodeConn - this reuses existing offline nodes for rapid reconnection
	nodeConn, loaded := b.nodes.LoadOrStore(id, newMultiChannelNodeConn(id, b.mapper))

	if !loaded {
		b.totalNodes.Add(1)
	}

	// Add connection to the list (lock-free)
	nodeConn.addConnection(newEntry)

	// Use the worker pool for controlled concurrency instead of direct generation
	initialMap, err := b.MapResponseFromChange(id, change.FullSelf(id))
	if err != nil {
		nlog.Error().Err(err).Msg("initial map generation failed")
		nodeConn.removeConnectionByChannel(c)

		if !nodeConn.hasActiveConnections() {
			nodeConn.markDisconnected()
		}

		return fmt.Errorf("generating initial map for node %d: %w", id, err)
	}

	// Use a blocking send with timeout for initial map since the channel should be ready
	// and we want to avoid the race condition where the receiver isn't ready yet
	select {
	case c <- initialMap:
		// Success
	case <-time.After(5 * time.Second): //nolint:mnd
		nlog.Error().Err(ErrInitialMapSendTimeout).Msg("initial map send timeout")
		nlog.Debug().Caller().Dur("timeout.duration", 5*time.Second). //nolint:mnd
			Msg("initial map send timed out because channel was blocked or receiver not ready")
		nodeConn.removeConnectionByChannel(c)

		if !nodeConn.hasActiveConnections() {
			nodeConn.markDisconnected()
		}

		return fmt.Errorf("%w for node %d", ErrInitialMapSendTimeout, id)
	}

	// Mark the node as connected now that the initial map was sent.
	nodeConn.markConnected()

	// Node will automatically receive updates through the normal flow
	// The initial full map already contains all current state

	nlog.Debug().Caller().Dur(zf.TotalDuration, time.Since(addNodeStart)).
		Int("active.connections", nodeConn.getActiveConnectionCount()).
		Msg("node connection established in batcher")

	return nil
}

// RemoveNode disconnects a node from the batcher, marking it as offline and cleaning up its state.
// It validates the connection channel matches one of the current connections, closes that specific connection,
// and keeps the node entry alive for rapid reconnections instead of aggressive deletion.
// Reports if the node still has active connections after removal.
func (b *Batcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool {
	nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()

	nodeConn, exists := b.nodes.Load(id)
	if !exists || nodeConn == nil {
		nlog.Debug().Caller().Msg("removeNode called for non-existent node")
		return false
	}

	// Remove specific connection
	removed := nodeConn.removeConnectionByChannel(c)
	if !removed {
		nlog.Debug().Caller().Msg("removeNode: channel not found, connection already removed or invalid")
	}

	// Check if node has any remaining active connections
	if nodeConn.hasActiveConnections() {
		nlog.Debug().Caller().
			Int("active.connections", nodeConn.getActiveConnectionCount()).
			Msg("node connection removed but keeping online, other connections remain")

		return true // Node still has active connections
	}

	// No active connections - keep the node entry alive for rapid reconnections
	// The node will get a fresh full map when it reconnects
	nlog.Debug().Caller().Msg("node disconnected from batcher, keeping entry for rapid reconnection")
	nodeConn.markDisconnected()

	return false
}

// AddWork queues a change to be processed by the batcher.
func (b *Batcher) AddWork(r ...change.Change) {
	b.addToBatch(r...)
}

func (b *Batcher) Start() {
	if !b.started.CompareAndSwap(false, true) {
		return
	}

	b.wg.Add(1)

	go b.doWork()
}

func (b *Batcher) Close() {
	// Signal shutdown to all goroutines, only once.
	// Workers and queueWork both select on done, so closing it
	// is sufficient for graceful shutdown. We intentionally do NOT
	// close workCh here because processBatchedChanges or
	// MapResponseFromChange may still be sending on it concurrently.
	b.doneOnce.Do(func() {
		close(b.done)
	})

	// Wait for all worker goroutines (and doWork) to exit before
	// tearing down node connections. This prevents workers from
	// sending on connections that are being closed concurrently.
	b.wg.Wait()

	// Stop the ticker to prevent resource leaks.
	b.tick.Stop()

	// Close the underlying channels supplying the data to the clients.
	b.nodes.Range(func(nodeID types.NodeID, conn *multiChannelNodeConn) bool {
		if conn == nil {
			return true
		}

		conn.close()

		return true
	})
}

func (b *Batcher) doWork() {
	defer b.wg.Done()

	for i := range b.workers {
		b.wg.Add(1)

		go b.worker(i + 1)
	}

	// Create a cleanup ticker for removing truly disconnected nodes
	cleanupTicker := time.NewTicker(5 * time.Minute)
	defer cleanupTicker.Stop()

	for {
		select {
		case <-b.tick.C:
			// Process batched changes
			b.processBatchedChanges()
		case <-cleanupTicker.C:
			// Clean up nodes that have been offline for too long
			b.cleanupOfflineNodes()
		case <-b.done:
			log.Info().Msg("batcher done channel closed, stopping to feed workers")
			return
		}
	}
}

func (b *Batcher) worker(workerID int) {
	defer b.wg.Done()

	wlog := log.With().Int(zf.WorkerID, workerID).Logger()

	for {
		select {
		case w, ok := <-b.workCh:
			if !ok {
				wlog.Debug().Msg("worker channel closing, shutting down")
				return
			}

			b.workProcessed.Add(1)

			// Synchronous path: a caller is blocking on resultCh
			// waiting for a generated MapResponse (used by AddNode
			// for the initial map). Always contains a single change.
			if w.resultCh != nil {
				var result workResult

				if nc, exists := b.nodes.Load(w.nodeID); exists && nc != nil {
					// Hold workMu so concurrent async work for this
					// node waits until the initial map is sent.
					nc.workMu.Lock()

					var err error

					result.mapResponse, err = generateMapResponse(nc, b.mapper, w.changes[0])

					result.err = err
					if result.err != nil {
						b.workErrors.Add(1)
						wlog.Error().Err(result.err).
							Uint64(zf.NodeID, w.nodeID.Uint64()).
							Str(zf.Reason, w.changes[0].Reason).
							Msg("failed to generate map response for synchronous work")
					} else if result.mapResponse != nil {
						nc.updateSentPeers(result.mapResponse)
					}

					nc.workMu.Unlock()
				} else {
					result.err = fmt.Errorf("%w: %d", ErrNodeNotFoundMapper, w.nodeID)

					b.workErrors.Add(1)
					wlog.Error().Err(result.err).
						Uint64(zf.NodeID, w.nodeID.Uint64()).
						Msg("node not found for synchronous work")
				}

				select {
				case w.resultCh <- result:
				case <-b.done:
					return
				}

				continue
			}

			// Async path: process all bundled changes sequentially.
			// workMu ensures that if another worker picks up the next
			// tick's bundle for the same node, it waits until we
			// finish — preventing out-of-order delivery and races
			// on lastSentPeers (Clear+Store vs Range).
			if nc, exists := b.nodes.Load(w.nodeID); exists && nc != nil {
				nc.workMu.Lock()
				for _, ch := range w.changes {
					err := nc.change(ch)
					if err != nil {
						b.workErrors.Add(1)
						wlog.Error().Err(err).
							Uint64(zf.NodeID, w.nodeID.Uint64()).
							Str(zf.Reason, ch.Reason).
							Msg("failed to apply change")
					}
				}
				nc.workMu.Unlock()
			}
		case <-b.done:
			wlog.Debug().Msg("batcher shutting down, exiting worker")
			return
		}
	}
}

// queueWork safely queues work.
func (b *Batcher) queueWork(w work) {
	b.workQueuedCount.Add(1)

	select {
	case b.workCh <- w:
		// Successfully queued
	case <-b.done:
		// Batcher is shutting down
		return
	}
}

// addToBatch adds changes to the pending batch.
func (b *Batcher) addToBatch(changes ...change.Change) {
	// Clean up any nodes being permanently removed from the system.
	//
	// This handles the case where a node is deleted from state but the batcher
	// still has it registered. By cleaning up here, we prevent "node not found"
	// errors when workers try to generate map responses for deleted nodes.
	//
	// Safety: change.Change.PeersRemoved is ONLY populated when nodes are actually
	// deleted from the system (via change.NodeRemoved in state.DeleteNode). Policy
	// changes that affect peer visibility do NOT use this field - they set
	// RequiresRuntimePeerComputation=true and compute removed peers at runtime,
	// putting them in tailcfg.MapResponse.PeersRemoved (a different struct).
	// Therefore, this cleanup only removes nodes that are truly being deleted,
	// not nodes that are still connected but have lost visibility of certain peers.
	//
	// See: https://github.com/juanfont/headscale/issues/2924
	for _, ch := range changes {
		for _, removedID := range ch.PeersRemoved {
			if _, existed := b.nodes.LoadAndDelete(removedID); existed {
				b.totalNodes.Add(-1)
				log.Debug().
					Uint64(zf.NodeID, removedID.Uint64()).
					Msg("removed deleted node from batcher")
			}
		}
	}

	// Short circuit if any of the changes is a full update, which
	// means we can skip sending individual changes.
	if change.HasFull(changes) {
		b.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
			if nc == nil {
				return true
			}

			nc.pendingMu.Lock()
			nc.pending = []change.Change{change.FullUpdate()}
			nc.pendingMu.Unlock()

			return true
		})

		return
	}

	broadcast, targeted := change.SplitTargetedAndBroadcast(changes)

	// Handle targeted changes - send only to the specific node
	for _, ch := range targeted {
		if nc, ok := b.nodes.Load(ch.TargetNode); ok && nc != nil {
			nc.appendPending(ch)
		}
	}

	// Handle broadcast changes - send to all nodes, filtering as needed
	if len(broadcast) > 0 {
		b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
			if nc == nil {
				return true
			}

			filtered := change.FilterForNode(nodeID, broadcast)

			if len(filtered) > 0 {
				nc.appendPending(filtered...)
			}

			return true
		})
	}
}

// processBatchedChanges processes all pending batched changes.
func (b *Batcher) processBatchedChanges() {
	b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
		if nc == nil {
			return true
		}

		pending := nc.drainPending()
		if len(pending) == 0 {
			return true
		}

		// Queue a single work item containing all pending changes.
		// One item per node ensures a single worker processes them
		// sequentially, preventing out-of-order delivery.
		b.queueWork(work{changes: pending, nodeID: nodeID, resultCh: nil})

		return true
	})
}

// cleanupOfflineNodes removes nodes that have been offline for too long to prevent memory leaks.
// Uses Compute() for atomic check-and-delete to prevent TOCTOU races where a node
// reconnects between the hasActiveConnections() check and the Delete() call.
func (b *Batcher) cleanupOfflineNodes() {
	var nodesToCleanup []types.NodeID

	// Find nodes that have been offline for too long by scanning b.nodes
	// and checking each node's disconnectedAt timestamp.
	b.nodes.Range(func(nodeID types.NodeID, nc *multiChannelNodeConn) bool {
		if nc != nil && !nc.hasActiveConnections() && nc.offlineDuration() > offlineNodeCleanupThreshold {
			nodesToCleanup = append(nodesToCleanup, nodeID)
		}

		return true
	})

	// Clean up the identified nodes using Compute() for atomic check-and-delete.
	// This prevents a TOCTOU race where a node reconnects (adding an active
	// connection) between the hasActiveConnections() check and the Delete() call.
	cleaned := 0

	for _, nodeID := range nodesToCleanup {
		b.nodes.Compute(
			nodeID,
			func(conn *multiChannelNodeConn, loaded bool) (*multiChannelNodeConn, xsync.ComputeOp) {
				if !loaded || conn == nil || conn.hasActiveConnections() {
					return conn, xsync.CancelOp
				}

				// Perform all bookkeeping inside the Compute callback so
				// that a concurrent AddNode (which calls LoadOrStore on
				// b.nodes) cannot slip in between the delete and the
				// counter update.
				b.totalNodes.Add(-1)

				cleaned++

				log.Info().Uint64(zf.NodeID, nodeID.Uint64()).
					Dur("offline_duration", offlineNodeCleanupThreshold).
					Msg("cleaning up node that has been offline for too long")

				return conn, xsync.DeleteOp
			},
		)
	}

	if cleaned > 0 {
		log.Info().Int(zf.CleanedNodes, cleaned).
			Msg("completed cleanup of long-offline nodes")
	}
}

// IsConnected is a lock-free read that checks if a node is connected.
// A node is considered connected if it has active connections or has
// not been marked as disconnected.
func (b *Batcher) IsConnected(id types.NodeID) bool {
	nodeConn, exists := b.nodes.Load(id)
	if !exists || nodeConn == nil {
		return false
	}

	return nodeConn.isConnected()
}

// ConnectedMap returns a lock-free map of all known nodes and their
// connection status (true = connected, false = disconnected).
func (b *Batcher) ConnectedMap() *xsync.Map[types.NodeID, bool] {
	ret := xsync.NewMap[types.NodeID, bool]()

	b.nodes.Range(func(id types.NodeID, nc *multiChannelNodeConn) bool {
		if nc != nil {
			ret.Store(id, nc.isConnected())
		}

		return true
	})

	return ret
}

// MapResponseFromChange queues work to generate a map response and waits for the result.
// This allows synchronous map generation using the same worker pool.
func (b *Batcher) MapResponseFromChange(id types.NodeID, ch change.Change) (*tailcfg.MapResponse, error) {
	resultCh := make(chan workResult, 1)

	// Queue the work with a result channel using the safe queueing method
	b.queueWork(work{changes: []change.Change{ch}, nodeID: id, resultCh: resultCh})

	// Wait for the result
	select {
	case result := <-resultCh:
		return result.mapResponse, result.err
	case <-b.done:
		return nil, fmt.Errorf("%w while generating map response for node %d", ErrBatcherShuttingDown, id)
	}
}

// DebugNodeInfo contains debug information about a node's connections.
type DebugNodeInfo struct {
	Connected         bool `json:"connected"`
	ActiveConnections int  `json:"active_connections"`
}

// Debug returns a pre-baked map of node debug information for the debug interface.
func (b *Batcher) Debug() map[types.NodeID]DebugNodeInfo {
	result := make(map[types.NodeID]DebugNodeInfo)

	b.nodes.Range(func(id types.NodeID, nc *multiChannelNodeConn) bool {
		if nc == nil {
			return true
		}

		result[id] = DebugNodeInfo{
			Connected:         nc.isConnected(),
			ActiveConnections: nc.getActiveConnectionCount(),
		}

		return true
	})

	return result
}

func (b *Batcher) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
	return b.mapper.debugMapResponses()
}

// WorkErrors returns the count of work errors encountered.
// This is primarily useful for testing and debugging.
func (b *Batcher) WorkErrors() int64 {
	return b.workErrors.Load()
}

@@ -1,785 +0,0 @@
package mapper

// Benchmarks for batcher components and full pipeline.
//
// Organized into three tiers:
// - Component benchmarks: individual functions (connectionEntry.send, computePeerDiff, etc.)
// - System benchmarks: batching mechanics (addToBatch, processBatchedChanges, broadcast)
// - Full pipeline benchmarks: end-to-end with real DB (gated behind !testing.Short())
//
// All benchmarks use sub-benchmarks with 10/100/1000 node counts for scaling analysis.

import (
	"fmt"
	"sync"
	"testing"
	"time"

	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/types/change"
	"github.com/puzpuzpuz/xsync/v4"
	"github.com/rs/zerolog"
	"tailscale.com/tailcfg"
)

// ============================================================================
// Component Benchmarks
// ============================================================================

// BenchmarkConnectionEntry_Send measures the throughput of sending a single
// MapResponse through a connectionEntry with a buffered channel.
func BenchmarkConnectionEntry_Send(b *testing.B) {
	ch := make(chan *tailcfg.MapResponse, b.N+1)
	entry := makeConnectionEntry("bench-conn", ch)
	data := testMapResponse()

	b.ResetTimer()

	for range b.N {
		_ = entry.send(data)
	}
}

// BenchmarkMultiChannelSend measures broadcast throughput to multiple connections.
func BenchmarkMultiChannelSend(b *testing.B) {
	for _, connCount := range []int{1, 3, 10} {
		b.Run(fmt.Sprintf("%dconn", connCount), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			channels := make([]chan *tailcfg.MapResponse, connCount)
			for i := range channels {
				channels[i] = make(chan *tailcfg.MapResponse, b.N+1)
				mc.addConnection(makeConnectionEntry(fmt.Sprintf("conn-%d", i), channels[i]))
			}

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				_ = mc.send(data)
			}
		})
	}
}

// BenchmarkComputePeerDiff measures the cost of computing peer diffs at scale.
func BenchmarkComputePeerDiff(b *testing.B) {
	for _, peerCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dpeers", peerCount), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			// Populate tracked peers: 1..peerCount
			for i := 1; i <= peerCount; i++ {
				mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
			}

			// Current peers: remove ~10% (every 10th peer is missing)
			current := make([]tailcfg.NodeID, 0, peerCount)
			for i := 1; i <= peerCount; i++ {
				if i%10 != 0 {
					current = append(current, tailcfg.NodeID(i))
				}
			}

			b.ResetTimer()

			for range b.N {
				_ = mc.computePeerDiff(current)
			}
		})
	}
}

// BenchmarkUpdateSentPeers measures the cost of updating peer tracking state.
func BenchmarkUpdateSentPeers(b *testing.B) {
	for _, peerCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dpeers_full", peerCount), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			// Pre-build response with full peer list
			peerIDs := make([]tailcfg.NodeID, peerCount)
			for i := range peerIDs {
				peerIDs[i] = tailcfg.NodeID(i + 1)
			}

			resp := testMapResponseWithPeers(peerIDs...)

			b.ResetTimer()

			for range b.N {
				mc.updateSentPeers(resp)
			}
		})

		b.Run(fmt.Sprintf("%dpeers_incremental", peerCount), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			// Pre-populate with existing peers
			for i := 1; i <= peerCount; i++ {
				mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
			}

			// Build incremental response: add 10% new peers
			addCount := peerCount / 10
			if addCount == 0 {
				addCount = 1
			}

			resp := testMapResponse()

			resp.PeersChanged = make([]*tailcfg.Node, addCount)
			for i := range addCount {
				resp.PeersChanged[i] = &tailcfg.Node{ID: tailcfg.NodeID(peerCount + i + 1)}
			}

			b.ResetTimer()

			for range b.N {
				mc.updateSentPeers(resp)
			}
		})
	}
}

// ============================================================================
// System Benchmarks (no DB, batcher mechanics only)
// ============================================================================

// benchBatcher creates a lightweight batcher for benchmarks. Unlike the test
// helper, it doesn't register cleanup and suppresses logging.
func benchBatcher(nodeCount, bufferSize int) (*Batcher, map[types.NodeID]chan *tailcfg.MapResponse) {
	b := &Batcher{
		tick:    time.NewTicker(1 * time.Hour), // never fires during bench
		workers: 4,
		workCh:  make(chan work, 4*200),
		nodes:   xsync.NewMap[types.NodeID, *multiChannelNodeConn](),
		done:    make(chan struct{}),
	}

	channels := make(map[types.NodeID]chan *tailcfg.MapResponse, nodeCount)
	for i := 1; i <= nodeCount; i++ {
		id := types.NodeID(i) //nolint:gosec // benchmark with small controlled values
		mc := newMultiChannelNodeConn(id, nil)
		ch := make(chan *tailcfg.MapResponse, bufferSize)
		entry := &connectionEntry{
			id:      fmt.Sprintf("conn-%d", i),
			c:       ch,
			version: tailcfg.CapabilityVersion(100),
			created: time.Now(),
		}
		entry.lastUsed.Store(time.Now().Unix())
		mc.addConnection(entry)
		b.nodes.Store(id, mc)
		channels[id] = ch
	}

	b.totalNodes.Store(int64(nodeCount))

	return b, channels
}

// BenchmarkAddToBatch_Broadcast measures the cost of broadcasting a change
// to all nodes via addToBatch (no worker processing, just queuing).
func BenchmarkAddToBatch_Broadcast(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			ch := change.DERPMap()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(ch)
				// Clear pending to avoid unbounded growth
				batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
					nc.drainPending()
					return true
				})
			}
		})
	}
}

// BenchmarkAddToBatch_Targeted measures the cost of adding a targeted change
// to a single node.
func BenchmarkAddToBatch_Targeted(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				targetID := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
				ch := change.Change{
					Reason:     "bench-targeted",
					TargetNode: targetID,
					PeerPatches: []*tailcfg.PeerChange{
						{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec // benchmark
					},
				}
				batcher.addToBatch(ch)
				// Clear pending periodically to avoid growth
				if i%100 == 99 {
					batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
						nc.drainPending()
						return true
					})
				}
			}
		})
	}
}

// BenchmarkAddToBatch_FullUpdate measures the cost of a FullUpdate broadcast.
func BenchmarkAddToBatch_FullUpdate(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(change.FullUpdate())
			}
		})
	}
}

// BenchmarkProcessBatchedChanges measures the cost of moving pending changes
// to the work queue.
func BenchmarkProcessBatchedChanges(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dpending", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)
			// Use a very large work channel to avoid blocking
			batcher.workCh = make(chan work, nodeCount*b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for range b.N {
				b.StopTimer()
				// Seed pending changes
				for i := 1; i <= nodeCount; i++ {
					if nc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // benchmark
						nc.appendPending(change.DERPMap())
					}
				}

				b.StartTimer()

				batcher.processBatchedChanges()
			}
		})
	}
}

// BenchmarkBroadcastToN measures end-to-end broadcast: addToBatch + processBatchedChanges
// to N nodes. Does NOT include worker processing (MapResponse generation).
func BenchmarkBroadcastToN(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)
			batcher.workCh = make(chan work, nodeCount*b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			ch := change.DERPMap()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(ch)
				batcher.processBatchedChanges()
			}
		})
	}
}

// BenchmarkMultiChannelBroadcast measures the cost of sending a MapResponse
// to N nodes each with varying connection counts.
func BenchmarkMultiChannelBroadcast(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			// Add extra connections to every 3rd node
			for i := 1; i <= nodeCount; i++ {
				if i%3 == 0 {
					if mc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec // benchmark
						for j := range 2 {
							ch := make(chan *tailcfg.MapResponse, b.N+1)
							entry := &connectionEntry{
								id:      fmt.Sprintf("extra-%d-%d", i, j),
								c:       ch,
								version: tailcfg.CapabilityVersion(100),
								created: time.Now(),
							}
							entry.lastUsed.Store(time.Now().Unix())
							mc.addConnection(entry)
						}
					}
				}
			}

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
					_ = mc.send(data)
					return true
				})
			}
		})
	}
}

// BenchmarkConcurrentAddToBatch measures addToBatch throughput under
// concurrent access from multiple goroutines.
func BenchmarkConcurrentAddToBatch(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			// Background goroutine to drain pending periodically
			drainDone := make(chan struct{})

			go func() {
				defer close(drainDone)

				for {
					select {
					case <-batcher.done:
						return
					default:
						batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
							nc.drainPending()
							return true
						})
						time.Sleep(time.Millisecond) //nolint:forbidigo // benchmark drain loop
					}
				}
			}()

			ch := change.DERPMap()

			b.ResetTimer()
			b.RunParallel(func(pb *testing.PB) {
				for pb.Next() {
					batcher.addToBatch(ch)
				}
			})
			b.StopTimer()

			// Cleanup
			close(batcher.done)
			<-drainDone
			// Re-open done so the defer doesn't double-close
			batcher.done = make(chan struct{})
		})
	}
}

// BenchmarkIsConnected measures the read throughput of IsConnected checks.
func BenchmarkIsConnected(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, _ := benchBatcher(nodeCount, 1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
				_ = batcher.IsConnected(id)
			}
		})
	}
}

// BenchmarkConnectedMap measures the cost of building the full connected map.
func BenchmarkConnectedMap(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, channels := benchBatcher(nodeCount, 1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			// Disconnect 10% of nodes for a realistic mix
			for i := 1; i <= nodeCount; i++ {
				if i%10 == 0 {
					id := types.NodeID(i) //nolint:gosec // benchmark
					if mc, ok := batcher.nodes.Load(id); ok {
						mc.removeConnectionByChannel(channels[id])
						mc.markDisconnected()
					}
				}
			}

			b.ResetTimer()

			for range b.N {
				_ = batcher.ConnectedMap()
			}
		})
	}
}

// BenchmarkConnectionChurn measures the cost of add/remove connection cycling
// which simulates client reconnection patterns.
func BenchmarkConnectionChurn(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100, 1000} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, channels := benchBatcher(nodeCount, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark

				mc, ok := batcher.nodes.Load(id)
				if !ok {
					continue
				}

				// Remove old connection
				oldCh := channels[id]
				mc.removeConnectionByChannel(oldCh)

				// Add new connection
				newCh := make(chan *tailcfg.MapResponse, 10)
				entry := &connectionEntry{
					id:      fmt.Sprintf("churn-%d", i),
					c:       newCh,
					version: tailcfg.CapabilityVersion(100),
					created: time.Now(),
				}
				entry.lastUsed.Store(time.Now().Unix())
				mc.addConnection(entry)

				channels[id] = newCh
			}
		})
	}
}

// BenchmarkConcurrentSendAndChurn measures the combined cost of sends happening
// concurrently with connection churn - the hot path in production.
func BenchmarkConcurrentSendAndChurn(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			batcher, channels := benchBatcher(nodeCount, 100)

			var mu sync.Mutex // protect channels map

			stopChurn := make(chan struct{})
			defer close(stopChurn)

			// Background churn on 10% of nodes
			go func() {
				i := 0

				for {
					select {
					case <-stopChurn:
						return
					default:
						id := types.NodeID(1 + (i % nodeCount)) //nolint:gosec // benchmark
						if i%10 == 0 { // only churn 10%
							mc, ok := batcher.nodes.Load(id)
							if ok {
								mu.Lock()
								oldCh := channels[id]
								mu.Unlock()
								mc.removeConnectionByChannel(oldCh)

								newCh := make(chan *tailcfg.MapResponse, 100)
								entry := &connectionEntry{
									id:      fmt.Sprintf("churn-%d", i),
									c:       newCh,
									version: tailcfg.CapabilityVersion(100),
									created: time.Now(),
								}
								entry.lastUsed.Store(time.Now().Unix())
								mc.addConnection(entry)
								mu.Lock()
								channels[id] = newCh
								mu.Unlock()
							}
						}

						i++
					}
				}
			}()

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
					_ = mc.send(data)
					return true
				})
			}
		})
	}
}

// ============================================================================
// Full Pipeline Benchmarks (with DB)
// ============================================================================

// BenchmarkAddNode measures the cost of adding nodes to the batcher,
// including initial MapResponse generation from a real database.
func BenchmarkAddNode(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			// Start consumers
			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			b.ResetTimer()

			for range b.N {
				// Connect all nodes (measuring AddNode cost)
				for i := range allNodes {
					node := &allNodes[i]
					_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				}

				b.StopTimer()
				// Disconnect for next iteration
				for i := range allNodes {
					node := &allNodes[i]
					batcher.RemoveNode(node.n.ID, node.ch)
				}
				// Drain channels
				for i := range allNodes {
					for {
						select {
						case <-allNodes[i].ch:
						default:
							goto drained
						}
					}

				drained:
				}

				b.StartTimer()
			}
		})
	}
}

// BenchmarkFullPipeline measures the full pipeline cost: addToBatch → processBatchedChanges
// → worker → generateMapResponse → send, with real nodes from a database.
func BenchmarkFullPipeline(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			// Start consumers
			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			// Connect all nodes first
			for i := range allNodes {
				node := &allNodes[i]

				err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				if err != nil {
					b.Fatalf("failed to add node %d: %v", i, err)
				}
			}

			// Wait for initial maps to settle
			time.Sleep(200 * time.Millisecond) //nolint:forbidigo // benchmark coordination

			b.ResetTimer()

			for range b.N {
				batcher.AddWork(change.DERPMap())
				// Allow workers to process (the batcher tick is what normally
				// triggers processBatchedChanges, but for benchmarks we need
				// to give the system time to process)
				time.Sleep(20 * time.Millisecond) //nolint:forbidigo // benchmark coordination
			}
		})
	}
}

// BenchmarkMapResponseFromChange measures the cost of synchronous
// MapResponse generation for individual nodes.
func BenchmarkMapResponseFromChange(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 100} {
		b.Run(fmt.Sprintf("%dnodes", nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			// Start consumers
			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			// Connect all nodes
			for i := range allNodes {
				node := &allNodes[i]

				err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				if err != nil {
					b.Fatalf("failed to add node %d: %v", i, err)
				}
			}

			time.Sleep(200 * time.Millisecond) //nolint:forbidigo // benchmark coordination

			ch := change.DERPMap()

			b.ResetTimer()

			for i := range b.N {
				nodeIdx := i % len(allNodes)
				_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
			}
		})
	}
}
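The lock-free batcher added in the next file tracks connectivity with a nil-means-connected map: a nil `*time.Time` value marks a node as online, while a non-nil value records when it disconnected. A minimal standard-library sketch of that convention, using `sync.Map` in place of the batcher's `xsync` map (the `connTracker` type and its method names are illustrative, not from the source):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// connTracker mirrors the convention of LockFreeBatcher.connected:
// the value for a node is a *time.Time where nil means "connected"
// and a non-nil pointer records the disconnect time.
type connTracker struct {
	m sync.Map // uint64 node ID -> *time.Time
}

func (t *connTracker) markConnected(id uint64) {
	t.m.Store(id, (*time.Time)(nil)) // nil = connected
}

func (t *connTracker) markDisconnected(id uint64) {
	now := time.Now()
	t.m.Store(id, &now) // timestamp = disconnected
}

func (t *connTracker) isConnected(id uint64) bool {
	v, ok := t.m.Load(id)
	if !ok {
		return false // never seen
	}
	return v.(*time.Time) == nil
}

func main() {
	var tr connTracker
	tr.markConnected(1)
	tr.markDisconnected(2)
	fmt.Println(tr.isConnected(1), tr.isConnected(2), tr.isConnected(3))
}
```

Storing the typed nil (rather than an untyped one) keeps the `v.(*time.Time)` assertion valid for both states; the disconnect timestamp is what the batcher's offline-cleanup pass later compares against its threshold.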
File diff suppressed because it is too large

889 hscontrol/mapper/batcher_lockfree.go Normal file
@@ -0,0 +1,889 @@
package mapper

import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
	"time"

	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/types/change"
	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
	"github.com/puzpuzpuz/xsync/v4"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
	"tailscale.com/tailcfg"
)

// LockFreeBatcher errors.
var (
	errConnectionClosed      = errors.New("connection channel already closed")
	ErrInitialMapSendTimeout = errors.New("sending initial map: timeout")
	ErrBatcherShuttingDown   = errors.New("batcher shutting down")
	ErrConnectionSendTimeout = errors.New("timeout sending to channel (likely stale connection)")
)

// LockFreeBatcher uses atomic operations and concurrent maps to eliminate mutex contention.
type LockFreeBatcher struct {
	tick    *time.Ticker
	mapper  *mapper
	workers int

	nodes     *xsync.Map[types.NodeID, *multiChannelNodeConn]
	connected *xsync.Map[types.NodeID, *time.Time]

	// Work queue channel
	workCh     chan work
	workChOnce sync.Once // Ensures workCh is only closed once
	done       chan struct{}
	doneOnce   sync.Once // Ensures done is only closed once

	// Batching state
	pendingChanges *xsync.Map[types.NodeID, []change.Change]

	// Metrics
	totalNodes      atomic.Int64
	workQueuedCount atomic.Int64
	workProcessed   atomic.Int64
	workErrors      atomic.Int64
}

// AddNode registers a new node connection with the batcher and sends an initial map response.
// It creates or updates the node's connection data, validates the initial map generation,
// and notifies other nodes that this node has come online.
// The stop function tears down the owning session if this connection is later declared stale.
func (b *LockFreeBatcher) AddNode(
	id types.NodeID,
	c chan<- *tailcfg.MapResponse,
	version tailcfg.CapabilityVersion,
	stop func(),
) error {
	addNodeStart := time.Now()
	nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()

	// Generate connection ID
	connID := generateConnectionID()

	// Create new connection entry
	now := time.Now()
	newEntry := &connectionEntry{
		id:      connID,
		c:       c,
		version: version,
		created: now,
		stop:    stop,
	}
	// Initialize last used timestamp
	newEntry.lastUsed.Store(now.Unix())

	// Get or create multiChannelNodeConn - this reuses existing offline nodes for rapid reconnection
	nodeConn, loaded := b.nodes.LoadOrStore(id, newMultiChannelNodeConn(id, b.mapper))

	if !loaded {
		b.totalNodes.Add(1)
	}

	// Add connection to the list (lock-free)
	nodeConn.addConnection(newEntry)

	// Use the worker pool for controlled concurrency instead of direct generation
	initialMap, err := b.MapResponseFromChange(id, change.FullSelf(id))
	if err != nil {
		nlog.Error().Err(err).Msg("initial map generation failed")
		nodeConn.removeConnectionByChannel(c)

		return fmt.Errorf("generating initial map for node %d: %w", id, err)
	}

	// Use a blocking send with timeout for initial map since the channel should be ready
	// and we want to avoid the race condition where the receiver isn't ready yet
	select {
	case c <- initialMap:
		// Success
	case <-time.After(5 * time.Second): //nolint:mnd
		nlog.Error().Err(ErrInitialMapSendTimeout).Msg("initial map send timeout")
		nlog.Debug().Caller().Dur("timeout.duration", 5*time.Second). //nolint:mnd
			Msg("initial map send timed out because channel was blocked or receiver not ready")
		nodeConn.removeConnectionByChannel(c)

		return fmt.Errorf("%w for node %d", ErrInitialMapSendTimeout, id)
	}

	// Update connection status
	b.connected.Store(id, nil) // nil = connected

	// Node will automatically receive updates through the normal flow
	// The initial full map already contains all current state

	nlog.Debug().Caller().Dur(zf.TotalDuration, time.Since(addNodeStart)).
		Int("active.connections", nodeConn.getActiveConnectionCount()).
		Msg("node connection established in batcher")

	return nil
}

// RemoveNode disconnects a node from the batcher, marking it as offline and cleaning up its state.
// It validates the connection channel matches one of the current connections, closes that specific connection,
// and keeps the node entry alive for rapid reconnections instead of aggressive deletion.
// Reports if the node still has active connections after removal.
func (b *LockFreeBatcher) RemoveNode(id types.NodeID, c chan<- *tailcfg.MapResponse) bool {
	nlog := log.With().Uint64(zf.NodeID, id.Uint64()).Logger()

	nodeConn, exists := b.nodes.Load(id)
	if !exists {
		nlog.Debug().Caller().Msg("removeNode called for non-existent node")
		return false
	}

	// Remove specific connection
	removed := nodeConn.removeConnectionByChannel(c)
	if !removed {
		nlog.Debug().Caller().Msg("removeNode: channel not found, connection already removed or invalid")
	}

	// Check if node has any remaining active connections
	if nodeConn.hasActiveConnections() {
		nlog.Debug().Caller().
			Int("active.connections", nodeConn.getActiveConnectionCount()).
			Msg("node connection removed but keeping online, other connections remain")

		return true // Node still has active connections
	}

	// No active connections - keep the node entry alive for rapid reconnections
	// The node will get a fresh full map when it reconnects
	nlog.Debug().Caller().Msg("node disconnected from batcher, keeping entry for rapid reconnection")
	disconnectedAt := time.Now()
	b.connected.Store(id, &disconnectedAt) // non-nil timestamp = disconnected

	return false
}

// AddWork queues a change to be processed by the batcher.
func (b *LockFreeBatcher) AddWork(r ...change.Change) {
	b.addWork(r...)
}

func (b *LockFreeBatcher) Start() {
	b.done = make(chan struct{})
	go b.doWork()
}

func (b *LockFreeBatcher) Close() {
	// Signal shutdown to all goroutines, only once
	b.doneOnce.Do(func() {
		if b.done != nil {
			close(b.done)
		}
	})

	// Only close workCh once using sync.Once to prevent races
	b.workChOnce.Do(func() {
		close(b.workCh)
	})

	// Close the underlying channels supplying the data to the clients.
	b.nodes.Range(func(nodeID types.NodeID, conn *multiChannelNodeConn) bool {
		conn.close()
		return true
	})
}

func (b *LockFreeBatcher) doWork() {
	for i := range b.workers {
		go b.worker(i + 1)
	}

	// Create a cleanup ticker for removing truly disconnected nodes
	cleanupTicker := time.NewTicker(5 * time.Minute)
	defer cleanupTicker.Stop()

	for {
		select {
		case <-b.tick.C:
			// Process batched changes
			b.processBatchedChanges()
		case <-cleanupTicker.C:
			// Clean up nodes that have been offline for too long
			b.cleanupOfflineNodes()
		case <-b.done:
			log.Info().Msg("batcher done channel closed, stopping to feed workers")
			return
		}
	}
}

func (b *LockFreeBatcher) worker(workerID int) {
	wlog := log.With().Int(zf.WorkerID, workerID).Logger()

	for {
		select {
		case w, ok := <-b.workCh:
			if !ok {
				wlog.Debug().Msg("worker channel closing, shutting down")
				return
			}

			b.workProcessed.Add(1)

			// If the resultCh is set, it means that this is a work request
			// where there is a blocking function waiting for the map that
			// is being generated.
			// This is used for synchronous map generation.
			if w.resultCh != nil {
				var result workResult

				if nc, exists := b.nodes.Load(w.nodeID); exists {
					var err error

					result.mapResponse, err = generateMapResponse(nc, b.mapper, w.c)

					result.err = err
					if result.err != nil {
						b.workErrors.Add(1)
						wlog.Error().Err(result.err).
							Uint64(zf.NodeID, w.nodeID.Uint64()).
							Str(zf.Reason, w.c.Reason).
							Msg("failed to generate map response for synchronous work")
					} else if result.mapResponse != nil {
						// Update peer tracking for synchronous responses too
						nc.updateSentPeers(result.mapResponse)
					}
				} else {
					result.err = fmt.Errorf("%w: %d", ErrNodeNotFoundMapper, w.nodeID)

					b.workErrors.Add(1)
					wlog.Error().Err(result.err).
						Uint64(zf.NodeID, w.nodeID.Uint64()).
						Msg("node not found for synchronous work")
				}

				// Send result
				select {
				case w.resultCh <- result:
				case <-b.done:
					return
				}

				continue
			}

			// If resultCh is nil, this is an asynchronous work request
			// that should be processed and sent to the node instead of
			// returned to the caller.
			if nc, exists := b.nodes.Load(w.nodeID); exists {
				// Apply change to node - this will handle offline nodes gracefully
				// and queue work for when they reconnect
				err := nc.change(w.c)
				if err != nil {
					b.workErrors.Add(1)
					wlog.Error().Err(err).
						Uint64(zf.NodeID, w.nodeID.Uint64()).
						Str(zf.Reason, w.c.Reason).
						Msg("failed to apply change")
				}
			}
		case <-b.done:
			wlog.Debug().Msg("batcher shutting down, exiting worker")
			return
		}
	}
}

func (b *LockFreeBatcher) addWork(r ...change.Change) {
	b.addToBatch(r...)
}

// queueWork safely queues work.
func (b *LockFreeBatcher) queueWork(w work) {
	b.workQueuedCount.Add(1)

	select {
	case b.workCh <- w:
		// Successfully queued
	case <-b.done:
		// Batcher is shutting down
		return
	}
}

// addToBatch adds changes to the pending batch.
func (b *LockFreeBatcher) addToBatch(changes ...change.Change) {
	// Clean up any nodes being permanently removed from the system.
	//
	// This handles the case where a node is deleted from state but the batcher
	// still has it registered. By cleaning up here, we prevent "node not found"
	// errors when workers try to generate map responses for deleted nodes.
	//
	// Safety: change.Change.PeersRemoved is ONLY populated when nodes are actually
	// deleted from the system (via change.NodeRemoved in state.DeleteNode). Policy
	// changes that affect peer visibility do NOT use this field - they set
	// RequiresRuntimePeerComputation=true and compute removed peers at runtime,
	// putting them in tailcfg.MapResponse.PeersRemoved (a different struct).
	// Therefore, this cleanup only removes nodes that are truly being deleted,
	// not nodes that are still connected but have lost visibility of certain peers.
	//
	// See: https://github.com/juanfont/headscale/issues/2924
	for _, ch := range changes {
		for _, removedID := range ch.PeersRemoved {
			if _, existed := b.nodes.LoadAndDelete(removedID); existed {
				b.totalNodes.Add(-1)
				log.Debug().
					Uint64(zf.NodeID, removedID.Uint64()).
					Msg("removed deleted node from batcher")
			}

			b.connected.Delete(removedID)
			b.pendingChanges.Delete(removedID)
		}
	}

	// Short circuit if any of the changes is a full update, which
	// means we can skip sending individual changes.
	if change.HasFull(changes) {
		b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool {
			b.pendingChanges.Store(nodeID, []change.Change{change.FullUpdate()})

			return true
		})

		return
	}

	broadcast, targeted := change.SplitTargetedAndBroadcast(changes)

	// Handle targeted changes - send only to the specific node
	for _, ch := range targeted {
		pending, _ := b.pendingChanges.LoadOrStore(ch.TargetNode, []change.Change{})
		pending = append(pending, ch)
		b.pendingChanges.Store(ch.TargetNode, pending)
	}

	// Handle broadcast changes - send to all nodes, filtering as needed
	if len(broadcast) > 0 {
		b.nodes.Range(func(nodeID types.NodeID, _ *multiChannelNodeConn) bool {
			filtered := change.FilterForNode(nodeID, broadcast)

			if len(filtered) > 0 {
				pending, _ := b.pendingChanges.LoadOrStore(nodeID, []change.Change{})
				pending = append(pending, filtered...)
				b.pendingChanges.Store(nodeID, pending)
			}

			return true
		})
	}
}

// processBatchedChanges processes all pending batched changes.
func (b *LockFreeBatcher) processBatchedChanges() {
	if b.pendingChanges == nil {
		return
	}

	// Process all pending changes
	b.pendingChanges.Range(func(nodeID types.NodeID, pending []change.Change) bool {
		if len(pending) == 0 {
			return true
		}

		// Send all batched changes for this node
		for _, ch := range pending {
			b.queueWork(work{c: ch, nodeID: nodeID, resultCh: nil})
		}

		// Clear the pending changes for this node
		b.pendingChanges.Delete(nodeID)

		return true
	})
}

// cleanupOfflineNodes removes nodes that have been offline for too long to prevent memory leaks.
// TODO(kradalby): reevaluate if we want to keep this.
func (b *LockFreeBatcher) cleanupOfflineNodes() {
	cleanupThreshold := 15 * time.Minute
	now := time.Now()

	var nodesToCleanup []types.NodeID

	// Find nodes that have been offline for too long
	b.connected.Range(func(nodeID types.NodeID, disconnectTime *time.Time) bool {
		if disconnectTime != nil && now.Sub(*disconnectTime) > cleanupThreshold {
			// Double-check the node doesn't have active connections
			if nodeConn, exists := b.nodes.Load(nodeID); exists {
				if !nodeConn.hasActiveConnections() {
					nodesToCleanup = append(nodesToCleanup, nodeID)
				}
			}
		}

		return true
	})

	// Clean up the identified nodes
	for _, nodeID := range nodesToCleanup {
		log.Info().Uint64(zf.NodeID, nodeID.Uint64()).
			Dur("offline_duration", cleanupThreshold).
			Msg("cleaning up node that has been offline for too long")

		b.nodes.Delete(nodeID)
		b.connected.Delete(nodeID)
		b.totalNodes.Add(-1)
	}

	if len(nodesToCleanup) > 0 {
		log.Info().Int(zf.CleanedNodes, len(nodesToCleanup)).
			Msg("completed cleanup of long-offline nodes")
	}
}

// IsConnected is a lock-free read that checks if a node has any active connections.
func (b *LockFreeBatcher) IsConnected(id types.NodeID) bool {
	// First check if we have active connections for this node
	if nodeConn, exists := b.nodes.Load(id); exists {
		if nodeConn.hasActiveConnections() {
			return true
		}
	}

	// Check disconnected timestamp with grace period
	val, ok := b.connected.Load(id)
	if !ok {
		return false
	}

	// nil means connected
	if val == nil {
		return true
	}

	return false
}

// ConnectedMap returns a lock-free map of all connected nodes.
func (b *LockFreeBatcher) ConnectedMap() *xsync.Map[types.NodeID, bool] {
	ret := xsync.NewMap[types.NodeID, bool]()

	// First, add all nodes with active connections
	b.nodes.Range(func(id types.NodeID, nodeConn *multiChannelNodeConn) bool {
		if nodeConn.hasActiveConnections() {
			ret.Store(id, true)
		}

		return true
	})

	// Then add all entries from the connected map
	b.connected.Range(func(id types.NodeID, val *time.Time) bool {
		// Only add if not already added as connected above
		if _, exists := ret.Load(id); !exists {
			if val == nil {
				// nil means connected
				ret.Store(id, true)
			} else {
				// timestamp means disconnected
				ret.Store(id, false)
			}
		}

		return true
	})

	return ret
}

// MapResponseFromChange queues work to generate a map response and waits for the result.
|
||||
// This allows synchronous map generation using the same worker pool.
|
||||
func (b *LockFreeBatcher) MapResponseFromChange(id types.NodeID, ch change.Change) (*tailcfg.MapResponse, error) {
|
||||
resultCh := make(chan workResult, 1)
|
||||
|
||||
// Queue the work with a result channel using the safe queueing method
|
||||
b.queueWork(work{c: ch, nodeID: id, resultCh: resultCh})
|
||||
|
||||
// Wait for the result
|
||||
select {
|
||||
case result := <-resultCh:
|
||||
return result.mapResponse, result.err
|
||||
case <-b.done:
|
||||
return nil, fmt.Errorf("%w while generating map response for node %d", ErrBatcherShuttingDown, id)
|
||||
}
|
||||
}
|
||||
|
||||
// connectionEntry represents a single connection to a node.
type connectionEntry struct {
	id       string // unique connection ID
	c        chan<- *tailcfg.MapResponse
	version  tailcfg.CapabilityVersion
	created  time.Time
	stop     func()
	lastUsed atomic.Int64 // Unix timestamp of last successful send
	closed   atomic.Bool  // Indicates if this connection has been closed
}

// multiChannelNodeConn manages multiple concurrent connections for a single node.
type multiChannelNodeConn struct {
	id     types.NodeID
	mapper *mapper
	log    zerolog.Logger

	mutex       sync.RWMutex
	connections []*connectionEntry

	updateCount atomic.Int64

	// lastSentPeers tracks which peers were last sent to this node.
	// This enables computing diffs for policy changes instead of sending
	// full peer lists (which clients interpret as "no change" when empty).
	// Using xsync.Map for lock-free concurrent access.
	lastSentPeers *xsync.Map[tailcfg.NodeID, struct{}]
}

// generateConnectionID generates a unique connection identifier.
func generateConnectionID() string {
	bytes := make([]byte, 8)
	_, _ = rand.Read(bytes)

	return hex.EncodeToString(bytes)
}

// newMultiChannelNodeConn creates a new multi-channel node connection.
func newMultiChannelNodeConn(id types.NodeID, mapper *mapper) *multiChannelNodeConn {
	return &multiChannelNodeConn{
		id:            id,
		mapper:        mapper,
		lastSentPeers: xsync.NewMap[tailcfg.NodeID, struct{}](),
		log:           log.With().Uint64(zf.NodeID, id.Uint64()).Logger(),
	}
}
func (mc *multiChannelNodeConn) close() {
	mc.mutex.Lock()
	defer mc.mutex.Unlock()

	for _, conn := range mc.connections {
		mc.stopConnection(conn)
	}
}

// stopConnection marks a connection as closed and tears down the owning session
// at most once, even if multiple cleanup paths race to remove it.
func (mc *multiChannelNodeConn) stopConnection(conn *connectionEntry) {
	if conn.closed.CompareAndSwap(false, true) {
		if conn.stop != nil {
			conn.stop()
		}
	}
}

// removeConnectionAtIndexLocked removes the active connection at index i.
// If stopConnection is true, it also stops that session.
// Caller must hold mc.mutex.
func (mc *multiChannelNodeConn) removeConnectionAtIndexLocked(i int, stopConnection bool) *connectionEntry {
	conn := mc.connections[i]
	mc.connections = append(mc.connections[:i], mc.connections[i+1:]...)

	if stopConnection {
		mc.stopConnection(conn)
	}

	return conn
}
// addConnection adds a new connection.
func (mc *multiChannelNodeConn) addConnection(entry *connectionEntry) {
	mutexWaitStart := time.Now()

	mc.log.Debug().Caller().Str(zf.Chan, fmt.Sprintf("%p", entry.c)).Str(zf.ConnID, entry.id).
		Msg("addConnection: waiting for mutex - POTENTIAL CONTENTION POINT")

	mc.mutex.Lock()

	mutexWaitDur := time.Since(mutexWaitStart)

	defer mc.mutex.Unlock()

	mc.connections = append(mc.connections, entry)
	mc.log.Debug().Caller().Str(zf.Chan, fmt.Sprintf("%p", entry.c)).Str(zf.ConnID, entry.id).
		Int("total_connections", len(mc.connections)).
		Dur("mutex_wait_time", mutexWaitDur).
		Msg("successfully added connection after mutex wait")
}

// removeConnectionByChannel removes a connection by matching channel pointer.
func (mc *multiChannelNodeConn) removeConnectionByChannel(c chan<- *tailcfg.MapResponse) bool {
	mc.mutex.Lock()
	defer mc.mutex.Unlock()

	for i, entry := range mc.connections {
		if entry.c == c {
			mc.removeConnectionAtIndexLocked(i, false)
			mc.log.Debug().Caller().Str(zf.Chan, fmt.Sprintf("%p", c)).
				Int("remaining_connections", len(mc.connections)).
				Msg("successfully removed connection")

			return true
		}
	}

	return false
}

// hasActiveConnections checks if the node has any active connections.
func (mc *multiChannelNodeConn) hasActiveConnections() bool {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	return len(mc.connections) > 0
}

// getActiveConnectionCount returns the number of active connections.
func (mc *multiChannelNodeConn) getActiveConnectionCount() int {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	return len(mc.connections)
}
// send broadcasts data to all active connections for the node.
func (mc *multiChannelNodeConn) send(data *tailcfg.MapResponse) error {
	if data == nil {
		return nil
	}

	mc.mutex.Lock()
	defer mc.mutex.Unlock()

	if len(mc.connections) == 0 {
		// During rapid reconnection, nodes may temporarily have no active connections.
		// This is not an error - the node will receive a full map when it reconnects.
		mc.log.Debug().Caller().
			Msg("send: skipping send to node with no active connections (likely rapid reconnection)")

		return nil // Return success instead of error
	}

	mc.log.Debug().Caller().
		Int("total_connections", len(mc.connections)).
		Msg("send: broadcasting to all connections")

	var lastErr error

	successCount := 0

	var failedConnections []int // Track failed connections for removal

	// Send to all connections
	for i, conn := range mc.connections {
		mc.log.Debug().Caller().Str(zf.Chan, fmt.Sprintf("%p", conn.c)).
			Str(zf.ConnID, conn.id).Int(zf.ConnectionIndex, i).
			Msg("send: attempting to send to connection")

		err := conn.send(data)
		if err != nil {
			lastErr = err

			failedConnections = append(failedConnections, i)
			mc.log.Warn().Err(err).Str(zf.Chan, fmt.Sprintf("%p", conn.c)).
				Str(zf.ConnID, conn.id).Int(zf.ConnectionIndex, i).
				Msg("send: connection send failed")
		} else {
			successCount++

			mc.log.Debug().Caller().Str(zf.Chan, fmt.Sprintf("%p", conn.c)).
				Str(zf.ConnID, conn.id).Int(zf.ConnectionIndex, i).
				Msg("send: successfully sent to connection")
		}
	}

	// Remove failed connections (in reverse order to maintain indices)
	for i := len(failedConnections) - 1; i >= 0; i-- {
		idx := failedConnections[i]
		entry := mc.removeConnectionAtIndexLocked(idx, true)
		mc.log.Debug().Caller().
			Str(zf.ConnID, entry.id).
			Msg("send: removed failed connection")
	}

	mc.updateCount.Add(1)

	mc.log.Debug().
		Int("successful_sends", successCount).
		Int("failed_connections", len(failedConnections)).
		Int("remaining_connections", len(mc.connections)).
		Msg("send: completed broadcast")

	// Success if at least one send succeeded
	if successCount > 0 {
		return nil
	}

	return fmt.Errorf("node %d: all connections failed, last error: %w", mc.id, lastErr)
}
// send sends data to a single connection entry with timeout-based stale connection detection.
func (entry *connectionEntry) send(data *tailcfg.MapResponse) error {
	if data == nil {
		return nil
	}

	// Check if the connection has been closed to prevent send on closed channel panic.
	// This can happen during shutdown when Close() is called while workers are still processing.
	if entry.closed.Load() {
		return fmt.Errorf("connection %s: %w", entry.id, errConnectionClosed)
	}

	// Use a short timeout to detect stale connections where the client isn't reading the channel.
	// This is critical for detecting Docker containers that are forcefully terminated
	// but still have channels that appear open.
	select {
	case entry.c <- data:
		// Update last used timestamp on successful send
		entry.lastUsed.Store(time.Now().Unix())
		return nil
	case <-time.After(50 * time.Millisecond):
		// Connection is likely stale - client isn't reading from channel.
		// This catches the case where Docker containers are killed but channels remain open.
		return fmt.Errorf("connection %s: %w", entry.id, ErrConnectionSendTimeout)
	}
}
// nodeID returns the node ID.
func (mc *multiChannelNodeConn) nodeID() types.NodeID {
	return mc.id
}

// version returns the capability version from the first active connection.
// All connections for a node should have the same version in practice.
func (mc *multiChannelNodeConn) version() tailcfg.CapabilityVersion {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	if len(mc.connections) == 0 {
		return 0
	}

	return mc.connections[0].version
}
// updateSentPeers updates the tracked peer state based on a sent MapResponse.
// This must be called after successfully sending a response to keep track of
// what the client knows about, enabling accurate diffs for future updates.
func (mc *multiChannelNodeConn) updateSentPeers(resp *tailcfg.MapResponse) {
	if resp == nil {
		return
	}

	// Full peer list replaces tracked state entirely
	if resp.Peers != nil {
		mc.lastSentPeers.Clear()

		for _, peer := range resp.Peers {
			mc.lastSentPeers.Store(peer.ID, struct{}{})
		}
	}

	// Incremental additions
	for _, peer := range resp.PeersChanged {
		mc.lastSentPeers.Store(peer.ID, struct{}{})
	}

	// Incremental removals
	for _, id := range resp.PeersRemoved {
		mc.lastSentPeers.Delete(id)
	}
}

// computePeerDiff compares the current peer list against what was last sent
// and returns the peers that were removed (in lastSentPeers but not in current).
func (mc *multiChannelNodeConn) computePeerDiff(currentPeers []tailcfg.NodeID) []tailcfg.NodeID {
	currentSet := make(map[tailcfg.NodeID]struct{}, len(currentPeers))
	for _, id := range currentPeers {
		currentSet[id] = struct{}{}
	}

	var removed []tailcfg.NodeID

	// Find removed: in lastSentPeers but not in current
	mc.lastSentPeers.Range(func(id tailcfg.NodeID, _ struct{}) bool {
		if _, exists := currentSet[id]; !exists {
			removed = append(removed, id)
		}

		return true
	})

	return removed
}
// change applies a change to all active connections for the node.
func (mc *multiChannelNodeConn) change(r change.Change) error {
	return handleNodeChange(mc, mc.mapper, r)
}

// DebugNodeInfo contains debug information about a node's connections.
type DebugNodeInfo struct {
	Connected         bool `json:"connected"`
	ActiveConnections int  `json:"active_connections"`
}
// Debug returns a pre-baked map of node debug information for the debug interface.
func (b *LockFreeBatcher) Debug() map[types.NodeID]DebugNodeInfo {
	result := make(map[types.NodeID]DebugNodeInfo)

	// Get all nodes with their connection status using immediate connection logic
	// (no grace period) for debug purposes
	b.nodes.Range(func(id types.NodeID, nodeConn *multiChannelNodeConn) bool {
		nodeConn.mutex.RLock()
		activeConnCount := len(nodeConn.connections)
		nodeConn.mutex.RUnlock()

		// Use immediate connection status: if active connections exist, node is connected.
		// If not, check the connected map for nil (connected) vs timestamp (disconnected).
		connected := false
		if activeConnCount > 0 {
			connected = true
		} else {
			// Check connected map for immediate status
			if val, ok := b.connected.Load(id); ok && val == nil {
				connected = true
			}
		}

		result[id] = DebugNodeInfo{
			Connected:         connected,
			ActiveConnections: activeConnCount,
		}

		return true
	})

	// Add all entries from the connected map to capture both connected and disconnected nodes
	b.connected.Range(func(id types.NodeID, val *time.Time) bool {
		// Only add if not already processed above
		if _, exists := result[id]; !exists {
			// Use immediate connection status for debug (no grace period)
			connected := (val == nil) // nil means connected, timestamp means disconnected
			result[id] = DebugNodeInfo{
				Connected:         connected,
				ActiveConnections: 0,
			}
		}

		return true
	})

	return result
}
func (b *LockFreeBatcher) DebugMapResponses() (map[types.NodeID][]tailcfg.MapResponse, error) {
	return b.mapper.debugMapResponses()
}

// WorkErrors returns the count of work errors encountered.
// This is primarily useful for testing and debugging.
func (b *LockFreeBatcher) WorkErrors() int64 {
	return b.workErrors.Load()
}
@@ -1,948 +0,0 @@
package mapper

// Scale benchmarks for the batcher system.
//
// These benchmarks systematically increase node counts to find scaling limits
// and identify bottlenecks. Organized into tiers:
//
// Tier 1 - O(1) operations: should stay flat regardless of node count
// Tier 2 - O(N) lightweight: batch queuing and processing (no MapResponse generation)
// Tier 3 - O(N) heavier: map building, peer diff, peer tracking
// Tier 4 - Concurrent contention: multi-goroutine access under load
//
// Node count progression: 100, 500, 1000, 2000, 5000, 10000, 20000, 50000

import (
	"fmt"
	"strconv"
	"sync"
	"testing"
	"time"

	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/types/change"
	"github.com/rs/zerolog"
	"tailscale.com/tailcfg"
)

// scaleCounts defines the node counts used across all scaling benchmarks.
// Tier 1 (O(1)) tests up to 50k; Tiers 2-4 test up to 10k-20k.
var (
	scaleCountsO1     = []int{100, 500, 1000, 2000, 5000, 10000, 20000, 50000}
	scaleCountsLinear = []int{100, 500, 1000, 2000, 5000, 10000}
	scaleCountsHeavy  = []int{100, 500, 1000, 2000, 5000, 10000}
	scaleCountsConc   = []int{100, 500, 1000, 2000, 5000}
)
// ============================================================================
// Tier 1: O(1) Operations — should scale flat
// ============================================================================

// BenchmarkScale_IsConnected tests single-node lookup at increasing map sizes.
func BenchmarkScale_IsConnected(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsO1 {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				id := types.NodeID(1 + (i % n)) //nolint:gosec
				_ = batcher.IsConnected(id)
			}
		})
	}
}
// BenchmarkScale_AddToBatch_Targeted tests single-node targeted change at
// increasing map sizes. The map size should not affect per-operation cost.
func BenchmarkScale_AddToBatch_Targeted(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsO1 {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				targetID := types.NodeID(1 + (i % n)) //nolint:gosec
				ch := change.Change{
					Reason:     "scale-targeted",
					TargetNode: targetID,
					PeerPatches: []*tailcfg.PeerChange{
						{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec
					},
				}
				batcher.addToBatch(ch)
				// Drain every 100 ops to avoid unbounded growth
				if i%100 == 99 {
					batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
						nc.drainPending()
						return true
					})
				}
			}
		})
	}
}
// BenchmarkScale_ConnectionChurn tests the add/remove connection cycle.
// The map size should not affect per-operation cost for a single node.
func BenchmarkScale_ConnectionChurn(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsO1 {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, channels := benchBatcher(n, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for i := range b.N {
				id := types.NodeID(1 + (i % n)) //nolint:gosec

				mc, ok := batcher.nodes.Load(id)
				if !ok {
					continue
				}

				oldCh := channels[id]
				mc.removeConnectionByChannel(oldCh)

				newCh := make(chan *tailcfg.MapResponse, 10)
				entry := &connectionEntry{
					id:      fmt.Sprintf("sc-%d", i),
					c:       newCh,
					version: tailcfg.CapabilityVersion(100),
					created: time.Now(),
				}
				entry.lastUsed.Store(time.Now().Unix())
				mc.addConnection(entry)

				channels[id] = newCh
			}
		})
	}
}
// ============================================================================
// Tier 2: O(N) Lightweight — batch mechanics without MapResponse generation
// ============================================================================

// BenchmarkScale_AddToBatch_Broadcast tests broadcasting a change to ALL nodes.
// Cost should scale linearly with node count.
func BenchmarkScale_AddToBatch_Broadcast(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsLinear {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			ch := change.DERPMap()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(ch)
				// Drain to avoid unbounded growth
				batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
					nc.drainPending()
					return true
				})
			}
		})
	}
}
// BenchmarkScale_AddToBatch_FullUpdate tests FullUpdate broadcast cost.
func BenchmarkScale_AddToBatch_FullUpdate(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsLinear {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(change.FullUpdate())
			}
		})
	}
}
// BenchmarkScale_ProcessBatchedChanges tests draining pending changes into the work queue.
func BenchmarkScale_ProcessBatchedChanges(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsLinear {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)
			batcher.workCh = make(chan work, n*b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			b.ResetTimer()

			for range b.N {
				b.StopTimer()

				for i := 1; i <= n; i++ {
					if nc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec
						nc.appendPending(change.DERPMap())
					}
				}

				b.StartTimer()
				batcher.processBatchedChanges()
			}
		})
	}
}
// BenchmarkScale_BroadcastToN tests end-to-end: addToBatch + processBatchedChanges.
func BenchmarkScale_BroadcastToN(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsLinear {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)
			batcher.workCh = make(chan work, n*b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			ch := change.DERPMap()

			b.ResetTimer()

			for range b.N {
				batcher.addToBatch(ch)
				batcher.processBatchedChanges()
			}
		})
	}
}
// BenchmarkScale_SendToAll tests raw channel send cost to N nodes (no batching).
// This isolates the multiChannelNodeConn.send() cost.
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_SendToAll(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsLinear {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			// b.N+1 buffer so sends never block
			batcher, _ := benchBatcher(n, b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
					_ = mc.send(data)
					return true
				})
			}
		})
	}
}
// ============================================================================
// Tier 3: O(N) Heavier — map building, peer diff, peer tracking
// ============================================================================

// BenchmarkScale_ConnectedMap tests building the full connected/disconnected map.
func BenchmarkScale_ConnectedMap(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsHeavy {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, channels := benchBatcher(n, 1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			// 10% disconnected for realism
			for i := 1; i <= n; i++ {
				if i%10 == 0 {
					id := types.NodeID(i) //nolint:gosec
					if mc, ok := batcher.nodes.Load(id); ok {
						mc.removeConnectionByChannel(channels[id])
						mc.markDisconnected()
					}
				}
			}

			b.ResetTimer()

			for range b.N {
				_ = batcher.ConnectedMap()
			}
		})
	}
}
// BenchmarkScale_ComputePeerDiff tests peer diff computation at scale.
// Each node tracks N-1 peers, with 10% removed.
func BenchmarkScale_ComputePeerDiff(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsHeavy {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			// Track N peers
			for i := 1; i <= n; i++ {
				mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
			}

			// Current: 90% present (every 10th missing)
			current := make([]tailcfg.NodeID, 0, n)
			for i := 1; i <= n; i++ {
				if i%10 != 0 {
					current = append(current, tailcfg.NodeID(i))
				}
			}

			b.ResetTimer()

			for range b.N {
				_ = mc.computePeerDiff(current)
			}
		})
	}
}
// BenchmarkScale_UpdateSentPeers_Full tests full peer list update.
func BenchmarkScale_UpdateSentPeers_Full(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsHeavy {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			peerIDs := make([]tailcfg.NodeID, n)
			for i := range peerIDs {
				peerIDs[i] = tailcfg.NodeID(i + 1)
			}

			resp := testMapResponseWithPeers(peerIDs...)

			b.ResetTimer()

			for range b.N {
				mc.updateSentPeers(resp)
			}
		})
	}
}
// BenchmarkScale_UpdateSentPeers_Incremental tests incremental peer updates (10% new).
func BenchmarkScale_UpdateSentPeers_Incremental(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsHeavy {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			mc := newMultiChannelNodeConn(1, nil)

			// Pre-populate
			for i := 1; i <= n; i++ {
				mc.lastSentPeers.Store(tailcfg.NodeID(i), struct{}{})
			}

			addCount := n / 10
			if addCount == 0 {
				addCount = 1
			}

			resp := testMapResponse()

			resp.PeersChanged = make([]*tailcfg.Node, addCount)
			for i := range addCount {
				resp.PeersChanged[i] = &tailcfg.Node{ID: tailcfg.NodeID(n + i + 1)}
			}

			b.ResetTimer()

			for range b.N {
				mc.updateSentPeers(resp)
			}
		})
	}
}
// BenchmarkScale_MultiChannelBroadcast tests sending to N nodes, each with
// ~1.6 connections on average (every 3rd node has 3 connections).
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_MultiChannelBroadcast(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsHeavy {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			// Use b.N+1 buffer so sends never block
			batcher, _ := benchBatcher(n, b.N+1)

			defer func() {
				close(batcher.done)
				batcher.tick.Stop()
			}()

			// Add extra connections to every 3rd node (also buffered)
			for i := 1; i <= n; i++ {
				if i%3 == 0 {
					if mc, ok := batcher.nodes.Load(types.NodeID(i)); ok { //nolint:gosec
						for j := range 2 {
							ch := make(chan *tailcfg.MapResponse, b.N+1)
							entry := &connectionEntry{
								id:      fmt.Sprintf("extra-%d-%d", i, j),
								c:       ch,
								version: tailcfg.CapabilityVersion(100),
								created: time.Now(),
							}
							entry.lastUsed.Store(time.Now().Unix())
							mc.addConnection(entry)
						}
					}
				}
			}

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
					_ = mc.send(data)
					return true
				})
			}
		})
	}
}
// ============================================================================
// Tier 4: Concurrent Contention — multi-goroutine access
// ============================================================================

// BenchmarkScale_ConcurrentAddToBatch tests parallel addToBatch throughput.
func BenchmarkScale_ConcurrentAddToBatch(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsConc {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, _ := benchBatcher(n, 10)

			drainDone := make(chan struct{})

			go func() {
				defer close(drainDone)

				for {
					select {
					case <-batcher.done:
						return
					default:
						batcher.nodes.Range(func(_ types.NodeID, nc *multiChannelNodeConn) bool {
							nc.drainPending()
							return true
						})
						time.Sleep(time.Millisecond) //nolint:forbidigo
					}
				}
			}()

			ch := change.DERPMap()

			b.ResetTimer()
			b.RunParallel(func(pb *testing.PB) {
				for pb.Next() {
					batcher.addToBatch(ch)
				}
			})
			b.StopTimer()

			close(batcher.done)
			<-drainDone

			batcher.done = make(chan struct{})
			batcher.tick.Stop()
		})
	}
}
// BenchmarkScale_ConcurrentSendAndChurn tests the production hot path:
// sending to all nodes while 10% of connections are churning concurrently.
// Uses large buffered channels to avoid goroutine drain overhead.
func BenchmarkScale_ConcurrentSendAndChurn(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsConc {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, channels := benchBatcher(n, b.N+1)

			var mu sync.Mutex

			stopChurn := make(chan struct{})

			go func() {
				i := 0

				for {
					select {
					case <-stopChurn:
						return
					default:
						id := types.NodeID(1 + (i % n)) //nolint:gosec
						if i%10 == 0 {
							mc, ok := batcher.nodes.Load(id)
							if ok {
								mu.Lock()
								oldCh := channels[id]
								mu.Unlock()
								mc.removeConnectionByChannel(oldCh)

								newCh := make(chan *tailcfg.MapResponse, b.N+1)
								entry := &connectionEntry{
									id:      fmt.Sprintf("sc-churn-%d", i),
									c:       newCh,
									version: tailcfg.CapabilityVersion(100),
									created: time.Now(),
								}
								entry.lastUsed.Store(time.Now().Unix())
								mc.addConnection(entry)
								mu.Lock()
								channels[id] = newCh
								mu.Unlock()
							}
						}

						i++
					}
				}
			}()

			data := testMapResponse()

			b.ResetTimer()

			for range b.N {
				batcher.nodes.Range(func(_ types.NodeID, mc *multiChannelNodeConn) bool {
					_ = mc.send(data)
					return true
				})
			}

			b.StopTimer()
			close(stopChurn)
			close(batcher.done)
			batcher.tick.Stop()
		})
	}
}
// BenchmarkScale_MixedWorkload simulates a realistic production workload:
// - 70% targeted changes (single node updates)
// - 20% DERP map changes (broadcast)
// - 10% full updates (broadcast with full map)
// All while 10% of connections are churning.
func BenchmarkScale_MixedWorkload(b *testing.B) {
	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, n := range scaleCountsConc {
		b.Run(strconv.Itoa(n), func(b *testing.B) {
			batcher, channels := benchBatcher(n, 10)
			batcher.workCh = make(chan work, n*100+1)

			var mu sync.Mutex

			stopChurn := make(chan struct{})

			// Background churn on 10% of nodes
			go func() {
				i := 0

				for {
					select {
					case <-stopChurn:
						return
					default:
						id := types.NodeID(1 + (i % n)) //nolint:gosec
						if i%10 == 0 {
							mc, ok := batcher.nodes.Load(id)
							if ok {
								mu.Lock()
								oldCh := channels[id]
								mu.Unlock()
								mc.removeConnectionByChannel(oldCh)

								newCh := make(chan *tailcfg.MapResponse, 10)
								entry := &connectionEntry{
									id:      fmt.Sprintf("mix-churn-%d", i),
									c:       newCh,
									version: tailcfg.CapabilityVersion(100),
									created: time.Now(),
								}
								entry.lastUsed.Store(time.Now().Unix())
								mc.addConnection(entry)
								mu.Lock()
								channels[id] = newCh
								mu.Unlock()
							}
						}

						i++
					}
				}
			}()

			// Background batch processor
			stopProc := make(chan struct{})

			go func() {
				for {
					select {
					case <-stopProc:
						return
					default:
						batcher.processBatchedChanges()
						time.Sleep(time.Millisecond) //nolint:forbidigo
					}
				}
			}()

			// Background work channel consumer (simulates workers)
			stopWorkers := make(chan struct{})

			go func() {
				for {
					select {
					case <-batcher.workCh:
					case <-stopWorkers:
						return
					}
				}
			}()

			b.ResetTimer()

			for i := range b.N {
				switch {
				case i%10 < 7: // 70% targeted
					targetID := types.NodeID(1 + (i % n)) //nolint:gosec
					batcher.addToBatch(change.Change{
						Reason:     "mixed-targeted",
						TargetNode: targetID,
						PeerPatches: []*tailcfg.PeerChange{
							{NodeID: tailcfg.NodeID(targetID)}, //nolint:gosec
						},
					})
				case i%10 < 9: // 20% DERP map broadcast
					batcher.addToBatch(change.DERPMap())
				default: // 10% full update
					batcher.addToBatch(change.FullUpdate())
				}
			}

			b.StopTimer()
			close(stopChurn)
			close(stopProc)
			close(stopWorkers)
			close(batcher.done)
			batcher.tick.Stop()
		})
	}
}

// ============================================================================
// Tier 5: DB-dependent — AddNode with real MapResponse generation
// ============================================================================

// BenchmarkScale_AddAllNodes measures the cost of connecting ALL N nodes
// to a batcher backed by a real database. Each AddNode generates an initial
// MapResponse containing all peer data, so cost is O(N) per node, O(N²) total.
func BenchmarkScale_AddAllNodes(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 50, 100, 200, 500} {
		b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			b.ResetTimer()

			for range b.N {
				for i := range allNodes {
					node := &allNodes[i]
					_ = batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				}

				b.StopTimer()

				for i := range allNodes {
					node := &allNodes[i]
					batcher.RemoveNode(node.n.ID, node.ch)
				}

				for i := range allNodes {
					for {
						select {
						case <-allNodes[i].ch:
						default:
							goto drained
						}
					}

				drained:
				}

				b.StartTimer()
			}
		})
	}
}

// BenchmarkScale_SingleAddNode measures the cost of adding ONE node to an
// already-populated batcher. This is the real production scenario: a new node
// joins an existing network. The cost should scale with the number of existing
// peers since the initial MapResponse includes all peer data.
func BenchmarkScale_SingleAddNode(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 50, 100, 200, 500, 1000} {
		b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			// Connect all nodes except the last one
			for i := range len(allNodes) - 1 {
				node := &allNodes[i]

				err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				if err != nil {
					b.Fatalf("failed to add node %d: %v", i, err)
				}
			}

			time.Sleep(200 * time.Millisecond) //nolint:forbidigo

			// Benchmark: repeatedly add and remove the last node
			lastNode := &allNodes[len(allNodes)-1]

			b.ResetTimer()

			for range b.N {
				_ = batcher.AddNode(lastNode.n.ID, lastNode.ch, tailcfg.CapabilityVersion(100), nil)

				b.StopTimer()
				batcher.RemoveNode(lastNode.n.ID, lastNode.ch)

				for {
					select {
					case <-lastNode.ch:
					default:
						goto drainDone
					}
				}

			drainDone:
				b.StartTimer()
			}
		})
	}
}

// BenchmarkScale_MapResponse_DERPMap measures MapResponse generation for a
// DERPMap change. This is a lightweight change that doesn't touch peers.
func BenchmarkScale_MapResponse_DERPMap(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 50, 100, 200, 500} {
		b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			for i := range allNodes {
				node := &allNodes[i]

				err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				if err != nil {
					b.Fatalf("failed to add node %d: %v", i, err)
				}
			}

			time.Sleep(200 * time.Millisecond) //nolint:forbidigo

			ch := change.DERPMap()

			b.ResetTimer()

			for i := range b.N {
				nodeIdx := i % len(allNodes)
				_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
			}
		})
	}
}

// BenchmarkScale_MapResponse_FullUpdate measures MapResponse generation for a
// FullUpdate change. This forces full peer serialization — the primary bottleneck
// for large networks.
func BenchmarkScale_MapResponse_FullUpdate(b *testing.B) {
	if testing.Short() {
		b.Skip("skipping full pipeline benchmark in short mode")
	}

	zerolog.SetGlobalLevel(zerolog.Disabled)
	defer zerolog.SetGlobalLevel(zerolog.DebugLevel)

	for _, nodeCount := range []int{10, 50, 100, 200, 500} {
		b.Run(strconv.Itoa(nodeCount), func(b *testing.B) {
			testData, cleanup := setupBatcherWithTestData(b, NewBatcherAndMapper, 1, nodeCount, largeBufferSize)
			defer cleanup()

			batcher := testData.Batcher
			allNodes := testData.Nodes

			for i := range allNodes {
				allNodes[i].start()
			}

			defer func() {
				for i := range allNodes {
					allNodes[i].cleanup()
				}
			}()

			for i := range allNodes {
				node := &allNodes[i]

				err := batcher.AddNode(node.n.ID, node.ch, tailcfg.CapabilityVersion(100), nil)
				if err != nil {
					b.Fatalf("failed to add node %d: %v", i, err)
				}
			}

			time.Sleep(200 * time.Millisecond) //nolint:forbidigo

			ch := change.FullUpdate()

			b.ResetTimer()

			for i := range b.N {
				nodeIdx := i % len(allNodes)
				_, _ = batcher.MapResponseFromChange(allNodes[nodeIdx].n.ID, ch)
			}
		})
	}
}
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -2,7 +2,6 @@ package mapper

import (
	"net/netip"
	"slices"
	"sort"
	"time"

@@ -79,10 +78,7 @@ func (b *MapResponseBuilder) WithSelfNode() *MapResponseBuilder {
	tailnode, err := nv.TailNode(
		b.capVer,
		func(id types.NodeID) []netip.Prefix {
			// Self node: include own primaries + exit routes (no via steering for self).
			primaries := policy.ReduceRoutes(nv, b.mapper.state.GetNodePrimaryRoutes(id), matchers)

			return slices.Concat(primaries, nv.ExitRoutes())
			return policy.ReduceRoutes(nv, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
		},
		b.mapper.cfg)
	if err != nil {
@@ -255,18 +251,14 @@ func (b *MapResponseBuilder) buildTailPeers(peers views.Slice[types.NodeView]) (
		changedViews = peers
	}

	// Build tail nodes with per-peer via-aware route function.
	tailPeers := make([]*tailcfg.Node, 0, changedViews.Len())

	for _, peer := range changedViews.All() {
		tn, err := peer.TailNode(b.capVer, func(_ types.NodeID) []netip.Prefix {
			return b.mapper.state.RoutesForPeer(node, peer, matchers)
		}, b.mapper.cfg)
		if err != nil {
			return nil, err
		}

		tailPeers = append(tailPeers, tn)
	tailPeers, err := types.TailNodes(
		changedViews, b.capVer,
		func(id types.NodeID) []netip.Prefix {
			return policy.ReduceRoutes(node, b.mapper.state.GetNodePrimaryRoutes(id), matchers)
		},
		b.mapper.cfg)
	if err != nil {
		return nil, err
	}

	// Peers is always returned sorted by Node.ID.

@@ -44,7 +44,7 @@ type mapper struct {
	// Configuration
	state   *state.State
	cfg     *types.Config
	batcher *Batcher
	batcher Batcher

	created time.Time
}

@@ -1,445 +0,0 @@
package mapper

import (
	"errors"
	"fmt"
	"strconv"
	"sync"
	"sync/atomic"
	"time"

	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/types/change"
	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
	"github.com/puzpuzpuz/xsync/v4"
	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
	"tailscale.com/tailcfg"
)

// errNoActiveConnections is returned by send when a node has no active
// connections (disconnected but kept in the batcher for rapid reconnection).
// Callers must not update peer tracking state (lastSentPeers) after this
// error because the data was never delivered to any client.
var errNoActiveConnections = errors.New("no active connections")

// connectionEntry represents a single connection to a node.
type connectionEntry struct {
	id       string // unique connection ID
	c        chan<- *tailcfg.MapResponse
	version  tailcfg.CapabilityVersion
	created  time.Time
	stop     func()
	lastUsed atomic.Int64 // Unix timestamp of last successful send
	closed   atomic.Bool  // Indicates if this connection has been closed
}

// multiChannelNodeConn manages multiple concurrent connections for a single node.
type multiChannelNodeConn struct {
	id     types.NodeID
	mapper *mapper
	log    zerolog.Logger

	mutex       sync.RWMutex
	connections []*connectionEntry

	// pendingMu protects pending changes independently of the connection mutex.
	// This avoids contention between addToBatch (which appends changes) and
	// send() (which sends data to connections).
	pendingMu sync.Mutex
	pending   []change.Change

	// workMu serializes change processing for this node across batch ticks.
	// Without this, two workers could process consecutive ticks' bundles
	// concurrently, causing out-of-order MapResponse delivery and races
	// on lastSentPeers (Clear+Store in updateSentPeers vs Range in
	// computePeerDiff).
	workMu sync.Mutex

	closeOnce   sync.Once
	updateCount atomic.Int64

	// disconnectedAt records when the last connection was removed.
	// nil means the node is considered connected (or newly created);
	// non-nil means the node disconnected at the stored timestamp.
	// Used by cleanupOfflineNodes to evict stale entries.
	disconnectedAt atomic.Pointer[time.Time]

	// lastSentPeers tracks which peers were last sent to this node.
	// This enables computing diffs for policy changes instead of sending
	// full peer lists (which clients interpret as "no change" when empty).
	// Using xsync.Map for lock-free concurrent access.
	lastSentPeers *xsync.Map[tailcfg.NodeID, struct{}]
}

// connIDCounter is a monotonically increasing counter used to generate
// unique connection identifiers without the overhead of crypto/rand.
// Connection IDs are process-local and need not be cryptographically random.
var connIDCounter atomic.Uint64

// generateConnectionID generates a unique connection identifier.
func generateConnectionID() string {
	return strconv.FormatUint(connIDCounter.Add(1), 10)
}

// newMultiChannelNodeConn creates a new multi-channel node connection.
func newMultiChannelNodeConn(id types.NodeID, mapper *mapper) *multiChannelNodeConn {
	return &multiChannelNodeConn{
		id:            id,
		mapper:        mapper,
		lastSentPeers: xsync.NewMap[tailcfg.NodeID, struct{}](),
		log:           log.With().Uint64(zf.NodeID, id.Uint64()).Logger(),
	}
}

func (mc *multiChannelNodeConn) close() {
	mc.closeOnce.Do(func() {
		mc.mutex.Lock()
		defer mc.mutex.Unlock()

		for _, conn := range mc.connections {
			mc.stopConnection(conn)
		}
	})
}

// stopConnection marks a connection as closed and tears down the owning session
// at most once, even if multiple cleanup paths race to remove it.
func (mc *multiChannelNodeConn) stopConnection(conn *connectionEntry) {
	if conn.closed.CompareAndSwap(false, true) {
		if conn.stop != nil {
			conn.stop()
		}
	}
}

// removeConnectionAtIndexLocked removes the active connection at index.
// If stopConnection is true, it also stops that session.
// Caller must hold mc.mutex.
func (mc *multiChannelNodeConn) removeConnectionAtIndexLocked(i int, stopConnection bool) *connectionEntry {
	conn := mc.connections[i]
	copy(mc.connections[i:], mc.connections[i+1:])
	mc.connections[len(mc.connections)-1] = nil // release pointer for GC
	mc.connections = mc.connections[:len(mc.connections)-1]

	if stopConnection {
		mc.stopConnection(conn)
	}

	return conn
}
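The index removal above uses the standard in-place slice deletion idiom: shift the tail left, nil the vacated slot so the backing array does not pin the removed pointer, then shrink. A standalone sketch of just that idiom (hypothetical `[]*int` element type, not the production code):

```go
package main

import "fmt"

// removeAt removes the element at index i from s in place,
// niling the vacated tail slot so the backing array does not
// retain the removed pointer for the garbage collector.
func removeAt(s []*int, i int) []*int {
	copy(s[i:], s[i+1:])
	s[len(s)-1] = nil // release pointer for GC
	return s[:len(s)-1]
}

func main() {
	a, b, c := 1, 2, 3
	s := []*int{&a, &b, &c}
	s = removeAt(s, 1)
	fmt.Println(len(s), *s[0], *s[1]) // 2 1 3
}
```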

// addConnection adds a new connection.
func (mc *multiChannelNodeConn) addConnection(entry *connectionEntry) {
	mc.mutex.Lock()
	defer mc.mutex.Unlock()

	mc.connections = append(mc.connections, entry)
	mc.log.Debug().Str(zf.ConnID, entry.id).
		Int("total_connections", len(mc.connections)).
		Msg("connection added")
}

// removeConnectionByChannel removes a connection by matching channel pointer.
func (mc *multiChannelNodeConn) removeConnectionByChannel(c chan<- *tailcfg.MapResponse) bool {
	mc.mutex.Lock()
	defer mc.mutex.Unlock()

	for i, entry := range mc.connections {
		if entry.c == c {
			mc.removeConnectionAtIndexLocked(i, false)
			mc.log.Debug().Str(zf.ConnID, entry.id).
				Int("remaining_connections", len(mc.connections)).
				Msg("connection removed")

			return true
		}
	}

	return false
}

// hasActiveConnections checks if the node has any active connections.
func (mc *multiChannelNodeConn) hasActiveConnections() bool {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	return len(mc.connections) > 0
}

// getActiveConnectionCount returns the number of active connections.
func (mc *multiChannelNodeConn) getActiveConnectionCount() int {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	return len(mc.connections)
}

// markConnected clears the disconnect timestamp, indicating the node
// has an active connection.
func (mc *multiChannelNodeConn) markConnected() {
	mc.disconnectedAt.Store(nil)
}

// markDisconnected records the current time as the moment the node
// lost its last connection. Used by cleanupOfflineNodes to determine
// how long the node has been offline.
func (mc *multiChannelNodeConn) markDisconnected() {
	now := time.Now()
	mc.disconnectedAt.Store(&now)
}

// isConnected returns true if the node has active connections or has
// not been marked as disconnected.
func (mc *multiChannelNodeConn) isConnected() bool {
	if mc.hasActiveConnections() {
		return true
	}

	return mc.disconnectedAt.Load() == nil
}

// offlineDuration returns how long the node has been disconnected.
// Returns 0 if the node is connected or has never been marked as disconnected.
func (mc *multiChannelNodeConn) offlineDuration() time.Duration {
	t := mc.disconnectedAt.Load()
	if t == nil {
		return 0
	}

	return time.Since(*t)
}

// appendPending appends changes to this node's pending change list.
// Thread-safe via pendingMu; does not contend with the connection mutex.
func (mc *multiChannelNodeConn) appendPending(changes ...change.Change) {
	mc.pendingMu.Lock()
	mc.pending = append(mc.pending, changes...)
	mc.pendingMu.Unlock()
}

// drainPending atomically removes and returns all pending changes.
// Returns nil if there are no pending changes.
func (mc *multiChannelNodeConn) drainPending() []change.Change {
	mc.pendingMu.Lock()
	p := mc.pending
	mc.pending = nil
	mc.pendingMu.Unlock()

	return p
}

// send broadcasts data to all active connections for the node.
//
// To avoid holding the write lock during potentially slow sends (each stale
// connection can block for up to 50ms), the method snapshots connections under
// a read lock, sends without any lock held, then write-locks only to remove
// failures. New connections added between the snapshot and cleanup are safe:
// they receive a full initial map via AddNode, so missing this update causes
// no data loss.
func (mc *multiChannelNodeConn) send(data *tailcfg.MapResponse) error {
	if data == nil {
		return nil
	}

	// Snapshot connections under read lock.
	mc.mutex.RLock()

	if len(mc.connections) == 0 {
		mc.mutex.RUnlock()
		mc.log.Trace().
			Msg("send: no active connections, skipping")

		return errNoActiveConnections
	}

	// Copy the slice so we can release the read lock before sending.
	snapshot := make([]*connectionEntry, len(mc.connections))
	copy(snapshot, mc.connections)
	mc.mutex.RUnlock()

	mc.log.Trace().
		Int("total_connections", len(snapshot)).
		Msg("send: broadcasting")

	// Send to all connections without holding any lock.
	// Stale connection timeouts (50ms each) happen here without blocking
	// other goroutines that need the mutex.
	var (
		lastErr      error
		successCount int
		failed       []*connectionEntry
	)

	for _, conn := range snapshot {
		err := conn.send(data)
		if err != nil {
			lastErr = err

			failed = append(failed, conn)

			mc.log.Warn().Err(err).
				Str(zf.ConnID, conn.id).
				Msg("send: connection failed")
		} else {
			successCount++
		}
	}

	// Write-lock only to remove failed connections.
	if len(failed) > 0 {
		mc.mutex.Lock()
		// Remove by pointer identity: only remove entries that still exist
		// in the current connections slice and match a failed pointer.
		// New connections added since the snapshot are not affected.
		failedSet := make(map[*connectionEntry]struct{}, len(failed))
		for _, f := range failed {
			failedSet[f] = struct{}{}
		}

		clean := mc.connections[:0]
		for _, conn := range mc.connections {
			if _, isFailed := failedSet[conn]; !isFailed {
				clean = append(clean, conn)
			} else {
				mc.log.Debug().
					Str(zf.ConnID, conn.id).
					Msg("send: removing failed connection")
				// Tear down the owning session so the old serveLongPoll
				// goroutine exits instead of lingering as a stale session.
				mc.stopConnection(conn)
			}
		}

		// Nil out trailing slots so removed *connectionEntry values
		// are not retained by the backing array.
		for i := len(clean); i < len(mc.connections); i++ {
			mc.connections[i] = nil
		}

		mc.connections = clean
		mc.mutex.Unlock()
	}

	mc.updateCount.Add(1)

	mc.log.Trace().
		Int("successful_sends", successCount).
		Int("failed_connections", len(failed)).
		Msg("send: broadcast complete")

	// Success if at least one send succeeded
	if successCount > 0 {
		return nil
	}

	return fmt.Errorf("node %d: all connections failed, last error: %w", mc.id, lastErr)
}

// send sends data to a single connection entry with timeout-based stale connection detection.
func (entry *connectionEntry) send(data *tailcfg.MapResponse) error {
	if data == nil {
		return nil
	}

	// Check if the connection has been closed to prevent send on closed channel panic.
	// This can happen during shutdown when Close() is called while workers are still processing.
	if entry.closed.Load() {
		return fmt.Errorf("connection %s: %w", entry.id, errConnectionClosed)
	}

	// Use a short timeout to detect stale connections where the client isn't reading the channel.
	// This is critical for detecting Docker containers that are forcefully terminated
	// but still have channels that appear open.
	//
	// We use time.NewTimer + Stop instead of time.After to avoid leaking timers.
	// time.After creates a timer that lives in the runtime's timer heap until it fires,
	// even when the send succeeds immediately. On the hot path (1000+ nodes per tick),
	// this leaks thousands of timers per second.
	timer := time.NewTimer(50 * time.Millisecond) //nolint:mnd
	defer timer.Stop()

	select {
	case entry.c <- data:
		// Update last used timestamp on successful send
		entry.lastUsed.Store(time.Now().Unix())
		return nil
	case <-timer.C:
		// Connection is likely stale - client isn't reading from channel
		// This catches the case where Docker containers are killed but channels remain open
		return fmt.Errorf("connection %s: %w", entry.id, ErrConnectionSendTimeout)
	}
}

// nodeID returns the node ID.
func (mc *multiChannelNodeConn) nodeID() types.NodeID {
	return mc.id
}

// version returns the capability version from the first active connection.
// All connections for a node should have the same version in practice.
func (mc *multiChannelNodeConn) version() tailcfg.CapabilityVersion {
	mc.mutex.RLock()
	defer mc.mutex.RUnlock()

	if len(mc.connections) == 0 {
		return 0
	}

	return mc.connections[0].version
}

// updateSentPeers updates the tracked peer state based on a sent MapResponse.
// This must be called after successfully sending a response to keep track of
// what the client knows about, enabling accurate diffs for future updates.
func (mc *multiChannelNodeConn) updateSentPeers(resp *tailcfg.MapResponse) {
	if resp == nil {
		return
	}

	// Full peer list replaces tracked state entirely
	if resp.Peers != nil {
		mc.lastSentPeers.Clear()

		for _, peer := range resp.Peers {
			mc.lastSentPeers.Store(peer.ID, struct{}{})
		}
	}

	// Incremental additions
	for _, peer := range resp.PeersChanged {
		mc.lastSentPeers.Store(peer.ID, struct{}{})
	}

	// Incremental removals
	for _, id := range resp.PeersRemoved {
		mc.lastSentPeers.Delete(id)
	}
}

// computePeerDiff compares the current peer list against what was last sent
// and returns the peers that were removed (in lastSentPeers but not in current).
func (mc *multiChannelNodeConn) computePeerDiff(currentPeers []tailcfg.NodeID) []tailcfg.NodeID {
	currentSet := make(map[tailcfg.NodeID]struct{}, len(currentPeers))
	for _, id := range currentPeers {
		currentSet[id] = struct{}{}
	}

	var removed []tailcfg.NodeID

	// Find removed: in lastSentPeers but not in current
	mc.lastSentPeers.Range(func(id tailcfg.NodeID, _ struct{}) bool {
		if _, exists := currentSet[id]; !exists {
			removed = append(removed, id)
		}

		return true
	})

	return removed
}
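`computePeerDiff` is a plain set difference over peer IDs: build a lookup set from the current list, then collect every previously-sent ID that is no longer present. A minimal standalone sketch, with hypothetical `int64` IDs standing in for `tailcfg.NodeID` and a plain map standing in for the `xsync.Map`:

```go
package main

import "fmt"

// removedPeers returns the IDs present in lastSent but absent from current.
func removedPeers(lastSent map[int64]struct{}, current []int64) []int64 {
	cur := make(map[int64]struct{}, len(current))
	for _, id := range current {
		cur[id] = struct{}{}
	}

	var removed []int64
	for id := range lastSent {
		if _, ok := cur[id]; !ok {
			removed = append(removed, id)
		}
	}
	return removed
}

func main() {
	last := map[int64]struct{}{1: {}, 2: {}, 3: {}}
	fmt.Println(removedPeers(last, []int64{1, 3})) // [2]
}
```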

// change applies a change to all active connections for the node.
func (mc *multiChannelNodeConn) change(r change.Change) error {
	return handleNodeChange(mc, mc.mapper, r)
}
@@ -3,7 +3,6 @@ package mapper
import (
	"encoding/json"
	"net/netip"
	"slices"
	"testing"
	"time"

@@ -75,11 +74,9 @@ func TestTailNode(t *testing.T) {
				MachineAuthorized: true,

				CapMap: tailcfg.NodeCapMap{
					tailcfg.CapabilityFileSharing:    []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:          []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:            []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveShare:  []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveAccess: []tailcfg.RawMessage{},
					tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:       []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:         []tailcfg.RawMessage{},
				},
			},
			wantErr: false,
@@ -166,11 +163,9 @@ func TestTailNode(t *testing.T) {
				MachineAuthorized: true,

				CapMap: tailcfg.NodeCapMap{
					tailcfg.CapabilityFileSharing:    []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:          []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:            []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveShare:  []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveAccess: []tailcfg.RawMessage{},
					tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:       []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:         []tailcfg.RawMessage{},
				},
			},
			wantErr: false,
@@ -193,11 +188,9 @@ func TestTailNode(t *testing.T) {
				MachineAuthorized: true,

				CapMap: tailcfg.NodeCapMap{
					tailcfg.CapabilityFileSharing:    []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:          []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:            []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveShare:  []tailcfg.RawMessage{},
					tailcfg.NodeAttrsTaildriveAccess: []tailcfg.RawMessage{},
					tailcfg.CapabilityFileSharing: []tailcfg.RawMessage{},
					tailcfg.CapabilityAdmin:       []tailcfg.RawMessage{},
					tailcfg.CapabilitySSH:         []tailcfg.RawMessage{},
				},
			},
			wantErr: false,
@@ -221,13 +214,10 @@ func TestTailNode(t *testing.T) {
			// This is a hack to avoid having a second node to test the primary route.
			// This should be baked into the test case proper if it is extended in the future.
			_ = primary.SetRoutes(2, netip.MustParsePrefix("192.168.0.0/24"))
			nv := tt.node.View()
			got, err := nv.TailNode(
			got, err := tt.node.View().TailNode(
				0,
				func(id types.NodeID) []netip.Prefix {
					// Route function returns primaries + exit routes
					// (matching the real caller contract).
					return slices.Concat(primary.PrimaryRoutes(id), nv.ExitRoutes())
					return primary.PrimaryRoutes(id)
				},
				cfg,
			)

@@ -1,7 +1,6 @@
package hscontrol

import (
    "context"
    "encoding/binary"
    "encoding/json"
    "errors"
@@ -37,28 +36,6 @@ var ErrUnsupportedURLParameterType = errors.New("unsupported URL parameter type"
// ErrNoAuthSession is returned when an auth_id does not match any active auth session.
var ErrNoAuthSession = errors.New("no auth session found")

// ErrSSHDstNodeNotFound is returned when the dst node id on a Noise SSH
// action request does not match any registered node.
var ErrSSHDstNodeNotFound = errors.New("ssh action: unknown dst node id")

// ErrSSHMachineKeyMismatch is returned when the Noise session's machine
// key does not match the dst node referenced in the SSH action URL.
var ErrSSHMachineKeyMismatch = errors.New(
    "ssh action: noise session machine key does not match dst node",
)

// ErrSSHAuthSessionNotBound is returned when an SSH action follow-up
// references an auth session that is not bound to an SSH check pair.
var ErrSSHAuthSessionNotBound = errors.New(
    "ssh action: cached auth session is not an SSH-check binding",
)

// ErrSSHBindingMismatch is returned when an SSH action follow-up's
// (src, dst) pair does not match the cached binding for its auth_id.
var ErrSSHBindingMismatch = errors.New(
    "ssh action: cached binding does not match request src/dst",
)

const (
    // ts2021UpgradePath is the path that the server listens on for the WebSockets upgrade.
    ts2021UpgradePath = "/ts2021"
@@ -360,37 +337,6 @@ func (ns *noiseServer) SSHActionHandler(
        return
    }

    // Authenticate the Noise session: the destination node is the
    // tailscaled instance asking us whether to permit an incoming SSH
    // connection, so its Noise session must belong to dst. Without this
    // check any unauthenticated client could open a Noise tunnel with a
    // throwaway machine key and pollute lastSSHAuth for arbitrary
    // (src, dst) pairs, defeating SSH check-mode's stolen-key
    // protections.
    dstNode, ok := ns.headscale.state.GetNodeByID(dstNodeID)
    if !ok {
        httpError(writer, NewHTTPError(
            http.StatusNotFound,
            "dst node not found",
            fmt.Errorf("%w: %d", ErrSSHDstNodeNotFound, dstNodeID),
        ))

        return
    }

    if dstNode.MachineKey() != ns.machineKey {
        httpError(writer, NewHTTPError(
            http.StatusUnauthorized,
            "machine key does not match dst node",
            fmt.Errorf(
                "%w: machine key %s, dst node %d",
                ErrSSHMachineKeyMismatch, ns.machineKey.ShortString(), dstNodeID,
            ),
        ))

        return
    }

    reqLog := log.With().
        Uint64("src_node_id", srcNodeID.Uint64()).
        Uint64("dst_node_id", dstNodeID.Uint64()).
@@ -401,7 +347,6 @@ func (ns *noiseServer) SSHActionHandler(
    reqLog.Trace().Caller().Msg("SSH action request")

    action, err := ns.sshAction(
        req.Context(),
        reqLog,
        srcNodeID, dstNodeID,
        req.URL.Query().Get("auth_id"),
@@ -439,7 +384,6 @@ func (ns *noiseServer) SSHActionHandler(
// 3. Follow-up request — an auth_id is present, wait for the auth
// verdict and accept or reject.
func (ns *noiseServer) sshAction(
    ctx context.Context,
    reqLog zerolog.Logger,
    srcNodeID, dstNodeID types.NodeID,
    authIDStr string,
@@ -459,7 +403,7 @@ func (ns *noiseServer) sshAction(
    // Follow-up request with auth_id — wait for the auth verdict.
    if authIDStr != "" {
        return ns.sshActionFollowUp(
            ctx, reqLog, &action, authIDStr,
            reqLog, &action, authIDStr,
            srcNodeID, dstNodeID,
            checkFound,
        )
@@ -482,16 +426,14 @@ func (ns *noiseServer) sshAction(
    }

    // No auto-approval — create an auth session and hold.
    return ns.sshActionHoldAndDelegate(reqLog, &action, srcNodeID, dstNodeID)
    return ns.sshActionHoldAndDelegate(reqLog, &action)
}

// sshActionHoldAndDelegate creates a new auth session bound to the
// (src, dst) pair and returns a HoldAndDelegate action that directs the
// client to authenticate.
// sshActionHoldAndDelegate creates a new auth session and returns a
// HoldAndDelegate action that directs the client to authenticate.
func (ns *noiseServer) sshActionHoldAndDelegate(
    reqLog zerolog.Logger,
    action *tailcfg.SSHAction,
    srcNodeID, dstNodeID types.NodeID,
) (*tailcfg.SSHAction, error) {
    holdURL, err := url.Parse(
        ns.headscale.cfg.ServerURL +
@@ -515,10 +457,7 @@ func (ns *noiseServer) sshActionHoldAndDelegate(
        )
    }

    ns.headscale.state.SetAuthCacheEntry(
        authID,
        types.NewSSHCheckAuthRequest(srcNodeID, dstNodeID),
    )
    ns.headscale.state.SetAuthCacheEntry(authID, types.NewAuthRequest())

    authURL := ns.headscale.authProvider.AuthURL(authID)

@@ -545,10 +484,8 @@ func (ns *noiseServer) sshActionHoldAndDelegate(
}

// sshActionFollowUp handles follow-up requests where the client
// provides an auth_id. It blocks until the auth session resolves or
// the request context is cancelled (e.g. the client disconnects).
// provides an auth_id. It blocks until the auth session resolves.
func (ns *noiseServer) sshActionFollowUp(
    ctx context.Context,
    reqLog zerolog.Logger,
    action *tailcfg.SSHAction,
    authIDStr string,
@@ -575,49 +512,9 @@ func (ns *noiseServer) sshActionFollowUp(
        )
    }

    // Verify the cached binding matches the (src, dst) pair the
    // follow-up URL claims. Without this check an attacker who knew an
    // auth_id could submit a follow-up for any other (src, dst) pair
    // and have its verdict recorded against that pair instead.
    if !auth.IsSSHCheck() {
        return nil, NewHTTPError(
            http.StatusBadRequest,
            "auth session is not for SSH check",
            fmt.Errorf("%w: %s", ErrSSHAuthSessionNotBound, authID),
        )
    }

    binding := auth.SSHCheckBinding()
    if binding.SrcNodeID != srcNodeID || binding.DstNodeID != dstNodeID {
        return nil, NewHTTPError(
            http.StatusUnauthorized,
            "src/dst pair does not match auth session",
            fmt.Errorf(
                "%w: cached %d->%d, request %d->%d",
                ErrSSHBindingMismatch,
                binding.SrcNodeID, binding.DstNodeID,
                srcNodeID, dstNodeID,
            ),
        )
    }

    reqLog.Trace().Caller().Msg("SSH action follow-up")

    var verdict types.AuthVerdict
    select {
    case <-ctx.Done():
        // The client disconnected (or its request timed out) before the
        // auth session resolved. Return an error so the parked goroutine
        // is freed; without this select sshActionFollowUp would block
        // until the cache eviction callback signalled FinishAuth, which
        // could be up to register_cache_expiration (15 minutes).
        return nil, NewHTTPError(
            http.StatusUnauthorized,
            "ssh action follow-up cancelled",
            ctx.Err(),
        )
    case verdict = <-auth.WaitForAuth():
    }
    verdict := <-auth.WaitForAuth()

    if !verdict.Accept() {
        action.Reject = true

@@ -4,19 +4,15 @@ import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
    "strconv"
    "testing"

    "github.com/go-chi/chi/v5"
    "github.com/juanfont/headscale/hscontrol/types"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "tailscale.com/tailcfg"
    "tailscale.com/types/key"
)

// newNoiseRouterWithBodyLimit builds a chi router with the same body-limit
@@ -197,137 +193,3 @@ func TestRegistrationHandler_OversizedBody(t *testing.T) {
    // for version 0 → returns 400.
    assert.Equal(t, http.StatusBadRequest, rec.Code)
}

// newSSHActionRequest builds an httptest request with the chi URL params
// SSHActionHandler reads (src_node_id and dst_node_id), so the handler
// can be exercised directly without going through the chi router.
func newSSHActionRequest(t *testing.T, src, dst types.NodeID) *http.Request {
    t.Helper()

    url := fmt.Sprintf("/machine/ssh/action/from/%d/to/%d", src.Uint64(), dst.Uint64())
    req := httptest.NewRequestWithContext(context.Background(), http.MethodGet, url, nil)

    rctx := chi.NewRouteContext()
    rctx.URLParams.Add("src_node_id", strconv.FormatUint(src.Uint64(), 10))
    rctx.URLParams.Add("dst_node_id", strconv.FormatUint(dst.Uint64(), 10))
    req = req.WithContext(context.WithValue(req.Context(), chi.RouteCtxKey, rctx))

    return req
}

// putTestNodeInStore creates a node via the database test helper and
// also stages it into the in-memory NodeStore so handlers that read
// NodeStore-backed APIs (e.g. State.GetNodeByID) can see it.
func putTestNodeInStore(t *testing.T, app *Headscale, user *types.User, hostname string) *types.Node {
    t.Helper()

    node := app.state.CreateNodeForTest(user, hostname)
    app.state.PutNodeInStoreForTest(*node)

    return node
}

// TestSSHActionHandler_RejectsRogueMachineKey verifies that the SSH
// check action endpoint rejects a Noise session whose machine key does
// not match the dst node.
func TestSSHActionHandler_RejectsRogueMachineKey(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)
    user := app.state.CreateUserForTest("ssh-handler-user")

    src := putTestNodeInStore(t, app, user, "src-node")
    dst := putTestNodeInStore(t, app, user, "dst-node")

    // noiseServer carries the wrong machine key — a fresh throwaway key,
    // not dst.MachineKey.
    rogue := key.NewMachine().Public()
    require.NotEqual(t, dst.MachineKey, rogue, "test sanity: rogue key must differ from dst")

    ns := &noiseServer{
        headscale: app,
        machineKey: rogue,
    }

    rec := httptest.NewRecorder()
    ns.SSHActionHandler(rec, newSSHActionRequest(t, src.ID, dst.ID))

    assert.Equal(t, http.StatusUnauthorized, rec.Code,
        "rogue machine key must be rejected with 401")

    // And the auth cache must not have been mutated by the rejected request.
    if last, ok := app.state.GetLastSSHAuth(src.ID, dst.ID); ok {
        t.Fatalf("rejected SSH action must not record lastSSHAuth, got %v", last)
    }
}

// TestSSHActionHandler_RejectsUnknownDst verifies that the handler
// rejects a request for a dst_node_id that does not exist with 404.
func TestSSHActionHandler_RejectsUnknownDst(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)
    user := app.state.CreateUserForTest("ssh-handler-unknown-user")
    src := putTestNodeInStore(t, app, user, "src-node")

    ns := &noiseServer{
        headscale: app,
        machineKey: key.NewMachine().Public(),
    }

    rec := httptest.NewRecorder()
    ns.SSHActionHandler(rec, newSSHActionRequest(t, src.ID, 9999))

    assert.Equal(t, http.StatusNotFound, rec.Code,
        "unknown dst node id must be rejected with 404")
}

// TestSSHActionFollowUp_RejectsBindingMismatch verifies that the
// follow-up handler refuses to honour an auth_id whose cached binding
// does not match the (src, dst) pair on the request URL. Without this
// check an attacker holding any auth_id could route its verdict to a
// different node pair.
func TestSSHActionFollowUp_RejectsBindingMismatch(t *testing.T) {
    t.Parallel()

    app := createTestApp(t)
    user := app.state.CreateUserForTest("ssh-binding-user")

    srcCached := putTestNodeInStore(t, app, user, "src-cached")
    dstCached := putTestNodeInStore(t, app, user, "dst-cached")
    srcOther := putTestNodeInStore(t, app, user, "src-other")
    dstOther := putTestNodeInStore(t, app, user, "dst-other")

    // Mint an SSH-check auth request bound to (srcCached, dstCached).
    authID := types.MustAuthID()
    app.state.SetAuthCacheEntry(
        authID,
        types.NewSSHCheckAuthRequest(srcCached.ID, dstCached.ID),
    )

    // Build a follow-up that claims to be for (srcOther, dstOther) but
    // reuses the bound auth_id. The Noise machineKey matches dstOther so
    // the outer machine-key check passes — only the binding check
    // should reject it.
    ns := &noiseServer{
        headscale: app,
        machineKey: dstOther.MachineKey,
    }

    url := fmt.Sprintf(
        "/machine/ssh/action/from/%d/to/%d?auth_id=%s",
        srcOther.ID.Uint64(), dstOther.ID.Uint64(), authID.String(),
    )
    req := httptest.NewRequestWithContext(context.Background(), http.MethodGet, url, nil)

    rctx := chi.NewRouteContext()
    rctx.URLParams.Add("src_node_id", strconv.FormatUint(srcOther.ID.Uint64(), 10))
    rctx.URLParams.Add("dst_node_id", strconv.FormatUint(dstOther.ID.Uint64(), 10))
    req = req.WithContext(context.WithValue(req.Context(), chi.RouteCtxKey, rctx))

    rec := httptest.NewRecorder()
    ns.SSHActionHandler(rec, req)

    assert.Equal(t, http.StatusUnauthorized, rec.Code,
        "binding mismatch must be rejected with 401")
}

@@ -12,7 +12,6 @@ import (
    "time"

    "github.com/coreos/go-oidc/v3/oidc"
    "github.com/hashicorp/golang-lru/v2/expirable"
    "github.com/juanfont/headscale/hscontrol/db"
    "github.com/juanfont/headscale/hscontrol/templates"
    "github.com/juanfont/headscale/hscontrol/types"
@@ -20,28 +19,16 @@ import (
    "github.com/juanfont/headscale/hscontrol/util"
    "github.com/rs/zerolog/log"
    "golang.org/x/oauth2"
    "zgo.at/zcache/v2"
)

const (
    randomByteSize = 16
    defaultOAuthOptionsCount = 3
    authCacheExpiration = time.Minute * 15

    // authCacheMaxEntries bounds the OIDC state→AuthInfo cache to prevent
    // unauthenticated cache-fill DoS via repeated /register/{auth_id} or
    // /auth/{auth_id} GETs that mint OIDC state cookies.
    authCacheMaxEntries = 1024

    // cookieNamePrefixLen is the number of leading characters from a
    // state/nonce value that getCookieName splices into the cookie name.
    // State and nonce values that are shorter than this are rejected at
    // the callback boundary so getCookieName cannot panic on a slice
    // out-of-range.
    cookieNamePrefixLen = 6
    authCacheCleanup = time.Minute * 20
)

var errOIDCStateTooShort = errors.New("oidc state parameter is too short")

var (
    errEmptyOIDCCallbackParams = errors.New("empty OIDC callback params")
    errNoOIDCIDToken = errors.New("extracting ID token")
@@ -68,10 +55,9 @@ type AuthProviderOIDC struct {
    serverURL string
    cfg *types.OIDCConfig

    // authCache holds auth information between the auth and the callback
    // steps. It is a bounded LRU keyed by OIDC state, evicting oldest
    // entries to keep the cache footprint constant under attack.
    authCache *expirable.LRU[string, AuthInfo]
    // authCache holds auth information between
    // the auth and the callback steps.
    authCache *zcache.Cache[string, AuthInfo]

    oidcProvider *oidc.Provider
    oauth2Config *oauth2.Config
@@ -98,10 +84,9 @@ func NewAuthProviderOIDC(
        Scopes: cfg.Scope,
    }

    authCache := expirable.NewLRU[string, AuthInfo](
        authCacheMaxEntries,
        nil,
    authCache := zcache.New[string, AuthInfo](
        authCacheExpiration,
        authCacheCleanup,
    )

    return &AuthProviderOIDC{
@@ -203,7 +188,7 @@ func (a *AuthProviderOIDC) authHandler(
    extras = append(extras, oidc.Nonce(nonce))

    // Cache the registration info
    a.authCache.Add(state, registrationInfo)
    a.authCache.Set(state, registrationInfo)

    authURL := a.oauth2Config.AuthCodeURL(state, extras...)
    log.Debug().Caller().Msgf("redirecting to %s for authentication", authURL)
@@ -346,23 +331,36 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
        return
    }

    // If this is a registration flow, render the confirmation
    // interstitial instead of finalising the registration immediately.
    // Without an explicit user click, a single GET to
    // /register/{auth_id} could silently complete a registration when
    // the IdP allows silent SSO.
    // If this is a registration flow, then we need to register the node.
    if authInfo.Registration {
        a.renderRegistrationConfirmInterstitial(writer, req, authInfo.AuthID, user, nodeExpiry)
        newNode, err := a.handleRegistration(user, authInfo.AuthID, nodeExpiry)
        if err != nil {
            if errors.Is(err, db.ErrNodeNotFoundRegistrationCache) {
                log.Debug().Caller().Str("auth_id", authInfo.AuthID.String()).Msg("registration session expired before authorization completed")
                httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", err))

                return
            }

            httpError(writer, err)

            return
        }

        content := renderRegistrationSuccessTemplate(user, newNode)

        writer.Header().Set("Content-Type", "text/html; charset=utf-8")
        writer.WriteHeader(http.StatusOK)

        if _, err := writer.Write(content.Bytes()); err != nil { //nolint:noinlineerr
            util.LogErr(err, "Failed to write HTTP response")
        }

        return
    }

    // If this is not a registration callback, then it is an SSH
    // check-mode auth callback. Confirm the OIDC identity is the owner
    // of the SSH source node before recording approval; without this
    // check any tailnet user could approve a check-mode prompt for any
    // other user's node, defeating the stolen-key protection that
    // check-mode is meant to provide.
    // If this is not a registration callback, then its a regular authentication callback
    // and we need to send a response and confirm that the access was allowed.

    authReq, ok := a.h.state.GetAuthCacheEntry(authInfo.AuthID)
    if !ok {
@@ -372,57 +370,7 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
        return
    }

    if !authReq.IsSSHCheck() {
        log.Warn().Caller().
            Str("auth_id", authInfo.AuthID.String()).
            Msg("OIDC callback hit non-registration path with auth request that is not an SSH check binding")
        httpError(writer, NewHTTPError(http.StatusBadRequest, "auth session is not for SSH check", nil))

        return
    }

    binding := authReq.SSHCheckBinding()

    srcNode, ok := a.h.state.GetNodeByID(binding.SrcNodeID)
    if !ok {
        log.Warn().Caller().
            Str("auth_id", authInfo.AuthID.String()).
            Uint64("src_node_id", binding.SrcNodeID.Uint64()).
            Msg("SSH check src node no longer exists")
        httpError(writer, NewHTTPError(http.StatusGone, "src node no longer exists", nil))

        return
    }

    // Strict identity binding: only the user that owns the src node
    // may approve an SSH check for that node. Tagged source nodes are
    // rejected because they have no user owner to compare against.
    if srcNode.IsTagged() || !srcNode.UserID().Valid() {
        log.Warn().Caller().
            Str("auth_id", authInfo.AuthID.String()).
            Uint64("src_node_id", binding.SrcNodeID.Uint64()).
            Bool("src_is_tagged", srcNode.IsTagged()).
            Str("oidc_user", user.Username()).
            Msg("SSH check rejected: src node has no user owner")
        httpError(writer, NewHTTPError(http.StatusForbidden, "src node has no user owner", nil))

        return
    }

    if srcNode.UserID().Get() != user.ID {
        log.Warn().Caller().
            Str("auth_id", authInfo.AuthID.String()).
            Uint64("src_node_id", binding.SrcNodeID.Uint64()).
            Uint("src_owner_id", srcNode.UserID().Get()).
            Uint("oidc_user_id", user.ID).
            Str("oidc_user", user.Username()).
            Msg("SSH check rejected: OIDC user is not the owner of src node")
        httpError(writer, NewHTTPError(http.StatusForbidden, "OIDC user is not the owner of the SSH source node", nil))

        return
    }

    // Identity verified — record the verdict for the waiting follow-up.
    // Send a finish auth verdict with no errors to let the CLI know that the authentication was successful.
    authReq.FinishAuth(types.AuthVerdict{})

    content := renderAuthSuccessTemplate(user)
@@ -435,12 +383,12 @@ func (a *AuthProviderOIDC) OIDCCallbackHandler(
    }
}

func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) *time.Time {
func (a *AuthProviderOIDC) determineNodeExpiry(idTokenExpiration time.Time) time.Time {
    if a.cfg.UseExpiryFromToken {
        return &idTokenExpiration
        return idTokenExpiration
    }

    return nil
    return time.Now().Add(a.cfg.Expiry)
}

func extractCodeAndStateParamFromRequest(
@@ -453,14 +401,6 @@ func extractCodeAndStateParamFromRequest(
        return "", "", NewHTTPError(http.StatusBadRequest, "missing code or state parameter", errEmptyOIDCCallbackParams)
    }

    // Reject states that are too short for getCookieName to splice
    // into a cookie name. Without this guard a request with
    // ?state=abc panics on the slice out-of-range and is recovered by
    // chi's middleware.Recoverer, amplifying small-DoS log noise.
    if len(state) < cookieNamePrefixLen {
        return "", "", NewHTTPError(http.StatusBadRequest, "invalid state parameter", errOIDCStateTooShort)
    }

    return code, state, nil
}

@@ -659,211 +599,15 @@ func (a *AuthProviderOIDC) createOrUpdateUserFromClaim(
    return user, c, nil
}

// registerConfirmCSRFCookie is the cookie name used to bind the
// /register/confirm POST handler's CSRF token to the OIDC callback that
// rendered the interstitial. It includes a per-session prefix derived
// from the auth ID so cookies for unrelated registrations on the same
// browser do not collide.
const registerConfirmCSRFCookie = "headscale_register_confirm"

// renderRegistrationConfirmInterstitial captures the resolved OIDC
// identity and node expiry into the cached AuthRequest, sets the CSRF
// cookie, and renders the confirmation page that the user must
// explicitly submit before the registration is finalised.
func (a *AuthProviderOIDC) renderRegistrationConfirmInterstitial(
    writer http.ResponseWriter,
    req *http.Request,
    authID types.AuthID,
    user *types.User,
    nodeExpiry *time.Time,
) {
    authReq, ok := a.h.state.GetAuthCacheEntry(authID)
    if !ok {
        log.Debug().Caller().Str("auth_id", authID.String()).Msg("registration session expired before authorization completed")
        httpError(writer, NewHTTPError(http.StatusGone, "login session expired, try again", nil))

        return
    }

    if !authReq.IsRegistration() {
        log.Warn().Caller().
            Str("auth_id", authID.String()).
            Msg("OIDC callback hit registration path with auth request that is not a node registration")
        httpError(writer, NewHTTPError(http.StatusBadRequest, "auth session is not for node registration", nil))

        return
    }

    csrf, err := util.GenerateRandomStringURLSafe(32)
    if err != nil {
        httpError(writer, fmt.Errorf("generating csrf token: %w", err))

        return
    }

    authReq.SetPendingConfirmation(&types.PendingRegistrationConfirmation{
        UserID: user.ID,
        NodeExpiry: nodeExpiry,
        CSRF: csrf,
    })

    http.SetCookie(writer, &http.Cookie{
        Name: registerConfirmCSRFCookie,
        Value: csrf,
        Path: "/register/confirm/" + authID.String(),
        MaxAge: int(authCacheExpiration.Seconds()),
        Secure: req.TLS != nil,
        HttpOnly: true,
        SameSite: http.SameSiteStrictMode,
    })

    regData := authReq.RegistrationData()

    info := templates.RegisterConfirmInfo{
        FormAction: "/register/confirm/" + authID.String(),
        CSRFTokenName: registerConfirmCSRFCookie,
        CSRFToken: csrf,
        User: user.Display(),
        Hostname: regData.Hostname,
        MachineKey: regData.MachineKey.ShortString(),
    }
    if regData.Hostinfo != nil {
        info.OS = regData.Hostinfo.OS
    }

    writer.Header().Set("Content-Type", "text/html; charset=utf-8")
    writer.WriteHeader(http.StatusOK)

    if _, err := writer.Write([]byte(templates.RegisterConfirm(info).Render())); err != nil { //nolint:noinlineerr
        util.LogErr(err, "Failed to write HTTP response")
    }
}

// RegisterConfirmHandler is the POST endpoint behind the OIDC
// registration confirmation interstitial. It validates the CSRF cookie
// against the form-submitted token, finalises the registration via
// handleRegistration, and renders the success page.
func (a *AuthProviderOIDC) RegisterConfirmHandler(
    writer http.ResponseWriter,
    req *http.Request,
) {
    if req.Method != http.MethodPost {
        httpError(writer, errMethodNotAllowed)

        return
    }

    authID, err := authIDFromRequest(req)
    if err != nil {
        httpError(writer, err)

        return
    }

    // Cap the form body. The confirmation form is a single CSRF token,
    // so 4 KiB is generous and prevents an unauthenticated client from
    // submitting an arbitrarily large body to ParseForm.
    req.Body = http.MaxBytesReader(writer, req.Body, 4*1024)

    if err := req.ParseForm(); err != nil { //nolint:noinlineerr,gosec // body is bounded above
        httpError(writer, NewHTTPError(http.StatusBadRequest, "invalid form", err))

        return
    }

    formCSRF := req.PostFormValue(registerConfirmCSRFCookie) //nolint:gosec // body is bounded above
    if formCSRF == "" {
        httpError(writer, NewHTTPError(http.StatusBadRequest, "missing csrf token", nil))

        return
    }

    cookie, err := req.Cookie(registerConfirmCSRFCookie)
    if err != nil {
        httpError(writer, NewHTTPError(http.StatusForbidden, "missing csrf cookie", err))

        return
    }

    if cookie.Value != formCSRF {
        httpError(writer, NewHTTPError(http.StatusForbidden, "csrf token mismatch", nil))

        return
    }

    authReq, ok := a.h.state.GetAuthCacheEntry(authID)
    if !ok {
        httpError(writer, NewHTTPError(http.StatusGone, "registration session expired", nil))

        return
    }

    pending := authReq.PendingConfirmation()
    if pending == nil {
        httpError(writer, NewHTTPError(http.StatusForbidden, "registration not OIDC-authorized", nil))

        return
    }

    if pending.CSRF != cookie.Value {
        httpError(writer, NewHTTPError(http.StatusForbidden, "csrf token does not match cached registration", nil))

        return
    }

    user, err := a.h.state.GetUserByID(types.UserID(pending.UserID))
    if err != nil {
        httpError(writer, fmt.Errorf("looking up user: %w", err))

        return
    }

    newNode, err := a.handleRegistration(user, authID, pending.NodeExpiry)
    if err != nil {
        if errors.Is(err, db.ErrNodeNotFoundRegistrationCache) {
            httpError(writer, NewHTTPError(http.StatusGone, "registration session expired", err))

            return
        }

        httpError(writer, err)

        return
    }

    // Clear the CSRF cookie now that the registration is final.
    http.SetCookie(writer, &http.Cookie{
        Name: registerConfirmCSRFCookie,
        Value: "",
        Path: "/register/confirm/" + authID.String(),
        MaxAge: -1,
        Secure: req.TLS != nil,
        HttpOnly: true,
        SameSite: http.SameSiteStrictMode,
    })

    content := renderRegistrationSuccessTemplate(user, newNode)

    writer.Header().Set("Content-Type", "text/html; charset=utf-8")
    writer.WriteHeader(http.StatusOK)

    // renderRegistrationSuccessTemplate's output only embeds
    // HTML-escaped values from a server-side template, so the gosec
    // XSS warning is a false positive here.
    if _, err := writer.Write(content.Bytes()); err != nil { //nolint:noinlineerr,gosec
        util.LogErr(err, "Failed to write HTTP response")
    }
}

func (a *AuthProviderOIDC) handleRegistration(
    user *types.User,
    registrationID types.AuthID,
    expiry *time.Time,
    expiry time.Time,
) (bool, error) {
    node, nodeChange, err := a.h.state.HandleNodeFromAuthPath(
        registrationID,
        types.UserID(user.ID),
        expiry,
        &expiry,
        util.RegisterMethodOIDC,
    )
    if err != nil {
@@ -927,11 +671,8 @@ func renderAuthSuccessTemplate(
}

// getCookieName generates a unique cookie name based on a cookie value.
// Callers must ensure value has at least cookieNamePrefixLen bytes;
// extractCodeAndStateParamFromRequest enforces this for the state
// parameter, and setCSRFCookie always supplies a 64-byte random value.
func getCookieName(baseName, value string) string {
    return fmt.Sprintf("%s_%s", baseName, value[:cookieNamePrefixLen])
    return fmt.Sprintf("%s_%s", baseName, value[:6])
}

func setCSRFCookie(w http.ResponseWriter, r *http.Request, name string) (string, error) {

@@ -1,102 +0,0 @@
package hscontrol

import (
	"context"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/go-chi/chi/v5"
	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newConfirmRequest(t *testing.T, authID types.AuthID, formCSRF, cookieCSRF string) *http.Request {
	t.Helper()

	form := strings.NewReader(registerConfirmCSRFCookie + "=" + formCSRF)
	req := httptest.NewRequestWithContext(
		context.Background(),
		http.MethodPost,
		"/register/confirm/"+authID.String(),
		form,
	)
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.AddCookie(&http.Cookie{
		Name:  registerConfirmCSRFCookie,
		Value: cookieCSRF,
	})

	rctx := chi.NewRouteContext()
	rctx.URLParams.Add("auth_id", authID.String())
	req = req.WithContext(context.WithValue(req.Context(), chi.RouteCtxKey, rctx))

	return req
}

// TestRegisterConfirmHandler_RejectsCSRFMismatch verifies that the
// /register/confirm POST handler refuses to finalise a pending
// registration when the form CSRF token does not match the cookie.
func TestRegisterConfirmHandler_RejectsCSRFMismatch(t *testing.T) {
	t.Parallel()

	app := createTestApp(t)
	provider := &AuthProviderOIDC{h: app}

	// Mint a pending registration with a stashed pending-confirmation,
	// as the OIDC callback would have done after resolving the user
	// identity but before the user clicked the interstitial form.
	authID := types.MustAuthID()
	regReq := types.NewRegisterAuthRequest(&types.RegistrationData{
		Hostname: "phish-target",
	})
	regReq.SetPendingConfirmation(&types.PendingRegistrationConfirmation{
		UserID: 1,
		CSRF:   "expected-csrf",
	})
	app.state.SetAuthCacheEntry(authID, regReq)

	rec := httptest.NewRecorder()
	provider.RegisterConfirmHandler(rec,
		newConfirmRequest(t, authID, "wrong-csrf", "expected-csrf"),
	)

	assert.Equal(t, http.StatusForbidden, rec.Code,
		"CSRF cookie/form mismatch must be rejected with 403")

	// And the registration must still be pending — the rejected POST
	// must not have called handleRegistration.
	cached, ok := app.state.GetAuthCacheEntry(authID)
	require.True(t, ok, "rejected POST must not evict the cached registration")
	require.NotNil(t, cached.PendingConfirmation(),
		"rejected POST must not clear the pending confirmation")
}

// TestRegisterConfirmHandler_RejectsWithoutPending verifies that
// /register/confirm refuses to finalise a registration that did not
// first complete the OIDC interstitial. Without this check an attacker
// who knew an auth_id could POST directly to the confirm endpoint and
// claim the device.
func TestRegisterConfirmHandler_RejectsWithoutPending(t *testing.T) {
	t.Parallel()

	app := createTestApp(t)
	provider := &AuthProviderOIDC{h: app}

	authID := types.MustAuthID()
	// Cached registration with NO pending confirmation set — i.e. the
	// OIDC callback has not run yet.
	app.state.SetAuthCacheEntry(authID, types.NewRegisterAuthRequest(
		&types.RegistrationData{Hostname: "no-oidc-yet"},
	))

	rec := httptest.NewRecorder()
	provider.RegisterConfirmHandler(rec,
		newConfirmRequest(t, authID, "fake", "fake"),
	)

	assert.Equal(t, http.StatusForbidden, rec.Code,
		"confirm without prior OIDC pending state must be rejected with 403")
}
@@ -36,12 +36,6 @@ type PolicyManager interface {
	// NodeCanApproveRoute reports whether the given node can approve the given route.
	NodeCanApproveRoute(node types.NodeView, route netip.Prefix) bool

	// ViaRoutesForPeer computes via grant effects for a viewer-peer pair.
	// It returns which routes should be included (peer is via-designated for viewer)
	// and excluded (steered to a different peer). When no via grants apply,
	// both fields are empty and the caller falls back to existing behavior.
	ViaRoutesForPeer(viewer, peer types.NodeView) types.ViaRouteResult

	Version() int
	DebugString() string
}

@@ -19,11 +19,11 @@ import (
func TestApproveRoutesWithPolicy_NeverRemovesApprovedRoutes(t *testing.T) {
	user1 := types.User{
		Model: gorm.Model{ID: 1},
		Name:  "testuser",
		Name:  "testuser@",
	}
	user2 := types.User{
		Model: gorm.Model{ID: 2},
		Name:  "otheruser",
		Name:  "otheruser@",
	}
	users := []types.User{user1, user2}

@@ -1,11 +1,8 @@
package policyutil

import (
	"net/netip"

	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/util"
	"tailscale.com/net/tsaddr"
	"tailscale.com/tailcfg"
)

@@ -19,17 +16,6 @@ func ReduceFilterRules(node types.NodeView, rules []tailcfg.FilterRule) []tailcf
	ret := []tailcfg.FilterRule{}

	for _, rule := range rules {
		// Handle CapGrant rules separately — they use CapGrant[].Dsts
		// instead of DstPorts for destination matching.
		if len(rule.CapGrant) > 0 {
			reduced := reduceCapGrantRule(node, rule)
			if reduced != nil {
				ret = append(ret, *reduced)
			}

			continue
		}

		// record if the rule is actually relevant for the given node.
		var dests []tailcfg.NetPortRange

@@ -47,19 +33,12 @@ func ReduceFilterRules(node types.NodeView, rules []tailcfg.FilterRule) []tailcf
				continue DEST_LOOP
			}

			// If the node exposes routes, ensure they are not removed
			// when the filters are reduced. Exit routes (0.0.0.0/0, ::/0)
			// are skipped here because exit nodes handle traffic via
			// AllowedIPs/routing, not packet filter rules. This matches
			// Tailscale SaaS behavior where exit nodes do not receive
			// filter rules for destinations that only overlap via exit routes.
			// If the node exposes routes, ensure they are note removed
			// when the filters are reduced.
			if node.Hostinfo().Valid() {
				routableIPs := node.Hostinfo().RoutableIPs()
				if routableIPs.Len() > 0 {
					for _, routableIP := range routableIPs.All() {
						if tsaddr.IsExitRoute(routableIP) {
							continue
						}
						if expanded.OverlapsPrefix(routableIP) {
							dests = append(dests, dest)
							continue DEST_LOOP
@@ -91,77 +70,3 @@ func ReduceFilterRules(node types.NodeView, rules []tailcfg.FilterRule) []tailcf

	return ret
}

// reduceCapGrantRule filters a CapGrant rule to only include CapGrant
// entries whose Dsts match the given node's IPs. When a broad prefix
// (e.g. 100.64.0.0/10 from dst:*) contains a node's IP, it is
// narrowed to the node's specific /32 or /128 prefix. Returns nil if
// no CapGrant entries are relevant to this node.
func reduceCapGrantRule(
	node types.NodeView,
	rule tailcfg.FilterRule,
) *tailcfg.FilterRule {
	var capGrants []tailcfg.CapGrant

	nodeIPs := node.IPs()

	for _, cg := range rule.CapGrant {
		// Collect the node's IPs that fall within any of this
		// CapGrant's Dsts. Broad prefixes are narrowed to specific
		// /32 and /128 entries for the node.
		var matchingDsts []netip.Prefix

		for _, dst := range cg.Dsts {
			if dst.IsSingleIP() {
				// Already a specific IP — keep it if it matches.
				if dst.Addr() == nodeIPs[0] || (len(nodeIPs) > 1 && dst.Addr() == nodeIPs[1]) {
					matchingDsts = append(matchingDsts, dst)
				}
			} else {
				// Broad prefix — narrow to node's specific IPs.
				for _, ip := range nodeIPs {
					if dst.Contains(ip) {
						matchingDsts = append(matchingDsts, netip.PrefixFrom(ip, ip.BitLen()))
					}
				}
			}
		}

		// Also check routable IPs (subnet routes) — nodes that
		// advertise routes should receive CapGrant rules for
		// destinations that overlap their routes.
		if node.Hostinfo().Valid() {
			routableIPs := node.Hostinfo().RoutableIPs()
			if routableIPs.Len() > 0 {
				for _, dst := range cg.Dsts {
					for _, routableIP := range routableIPs.All() {
						if tsaddr.IsExitRoute(routableIP) {
							continue
						}

						if dst.Overlaps(routableIP) {
							// For route overlaps, keep the original prefix.
							matchingDsts = append(matchingDsts, dst)
						}
					}
				}
			}
		}

		if len(matchingDsts) > 0 {
			capGrants = append(capGrants, tailcfg.CapGrant{
				Dsts:   matchingDsts,
				CapMap: cg.CapMap,
			})
		}
	}

	if len(capGrants) == 0 {
		return nil
	}

	return &tailcfg.FilterRule{
		SrcIPs:   rule.SrcIPs,
		CapGrant: capGrants,
	}
}

@@ -9,6 +9,7 @@ import (
	"github.com/google/go-cmp/cmp"
	"github.com/juanfont/headscale/hscontrol/policy"
	"github.com/juanfont/headscale/hscontrol/policy/policyutil"
	v2 "github.com/juanfont/headscale/hscontrol/policy/v2"
	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/juanfont/headscale/hscontrol/util"
	"github.com/rs/zerolog/log"
@@ -205,19 +206,21 @@ func TestReduceFilterRules(t *testing.T) {
				},
			},
			want: []tailcfg.FilterRule{
				// Merged: Both ACL rules combined (same SrcIPs)
				// Merged: Both ACL rules combined (same SrcIPs and IPProto)
				{
					SrcIPs: []string{
						"100.64.0.1-100.64.0.2",
						"fd7a:115c:a1e0::1-fd7a:115c:a1e0::2",
						"100.64.0.1/32",
						"100.64.0.2/32",
						"fd7a:115c:a1e0::1/128",
						"fd7a:115c:a1e0::2/128",
					},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.1",
							IP:    "100.64.0.1/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::1",
							IP:    "fd7a:115c:a1e0::1/128",
							Ports: tailcfg.PortRangeAny,
						},
						{
@@ -225,6 +228,7 @@ func TestReduceFilterRules(t *testing.T) {
							Ports: tailcfg.PortRangeAny,
						},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},
@@ -352,16 +356,18 @@ func TestReduceFilterRules(t *testing.T) {
			// autogroup:internet does NOT generate packet filters - it's handled
			// by exit node routing via AllowedIPs, not by packet filtering.
				{
					SrcIPs: []string{
						"100.64.0.1-100.64.0.2",
						"fd7a:115c:a1e0::1-fd7a:115c:a1e0::2",
					},
					SrcIPs:   []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.100",
							IP:    "100.64.0.100/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::100/128",
							Ports: tailcfg.PortRangeAny,
						},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},
@@ -453,22 +459,50 @@ func TestReduceFilterRules(t *testing.T) {
				},
			},
			want: []tailcfg.FilterRule{
				// Exit routes (0.0.0.0/0, ::/0) are skipped when checking RoutableIPs
				// overlap, matching Tailscale SaaS behavior. Only destinations that
				// contain the node's own Tailscale IP (via InIPSet) are kept.
				// Here, 64.0.0.0/2 contains 100.64.0.100 (CGNAT range), so it matches.
				// Merged: Both ACL rules combined (same SrcIPs and IPProto)
				{
					SrcIPs: []string{
						"100.64.0.1-100.64.0.2",
						"fd7a:115c:a1e0::1-fd7a:115c:a1e0::2",
					},
					SrcIPs:   []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.100",
							IP:    "100.64.0.100/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::100/128",
							Ports: tailcfg.PortRangeAny,
						},
						{IP: "0.0.0.0/5", Ports: tailcfg.PortRangeAny},
						{IP: "8.0.0.0/7", Ports: tailcfg.PortRangeAny},
						{IP: "11.0.0.0/8", Ports: tailcfg.PortRangeAny},
						{IP: "12.0.0.0/6", Ports: tailcfg.PortRangeAny},
						{IP: "16.0.0.0/4", Ports: tailcfg.PortRangeAny},
						{IP: "32.0.0.0/3", Ports: tailcfg.PortRangeAny},
						{IP: "64.0.0.0/2", Ports: tailcfg.PortRangeAny},
						{IP: "128.0.0.0/3", Ports: tailcfg.PortRangeAny},
						{IP: "160.0.0.0/5", Ports: tailcfg.PortRangeAny},
						{IP: "168.0.0.0/6", Ports: tailcfg.PortRangeAny},
						{IP: "172.0.0.0/12", Ports: tailcfg.PortRangeAny},
						{IP: "172.32.0.0/11", Ports: tailcfg.PortRangeAny},
						{IP: "172.64.0.0/10", Ports: tailcfg.PortRangeAny},
						{IP: "172.128.0.0/9", Ports: tailcfg.PortRangeAny},
						{IP: "173.0.0.0/8", Ports: tailcfg.PortRangeAny},
						{IP: "174.0.0.0/7", Ports: tailcfg.PortRangeAny},
						{IP: "176.0.0.0/4", Ports: tailcfg.PortRangeAny},
						{IP: "192.0.0.0/9", Ports: tailcfg.PortRangeAny},
						{IP: "192.128.0.0/11", Ports: tailcfg.PortRangeAny},
						{IP: "192.160.0.0/13", Ports: tailcfg.PortRangeAny},
						{IP: "192.169.0.0/16", Ports: tailcfg.PortRangeAny},
						{IP: "192.170.0.0/15", Ports: tailcfg.PortRangeAny},
						{IP: "192.172.0.0/14", Ports: tailcfg.PortRangeAny},
						{IP: "192.176.0.0/12", Ports: tailcfg.PortRangeAny},
						{IP: "192.192.0.0/10", Ports: tailcfg.PortRangeAny},
						{IP: "193.0.0.0/8", Ports: tailcfg.PortRangeAny},
						{IP: "194.0.0.0/7", Ports: tailcfg.PortRangeAny},
						{IP: "196.0.0.0/6", Ports: tailcfg.PortRangeAny},
						{IP: "200.0.0.0/5", Ports: tailcfg.PortRangeAny},
						{IP: "208.0.0.0/4", Ports: tailcfg.PortRangeAny},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},
@@ -532,15 +566,16 @@ func TestReduceFilterRules(t *testing.T) {
				},
			},
			want: []tailcfg.FilterRule{
				// Merged: Both ACL rules combined (same SrcIPs)
				// Merged: Both ACL rules combined (same SrcIPs and IPProto)
				{
					SrcIPs: []string{
						"100.64.0.1-100.64.0.2",
						"fd7a:115c:a1e0::1-fd7a:115c:a1e0::2",
					},
					SrcIPs:   []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.100",
							IP:    "100.64.0.100/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::100/128",
							Ports: tailcfg.PortRangeAny,
						},
						{
@@ -552,6 +587,7 @@ func TestReduceFilterRules(t *testing.T) {
							Ports: tailcfg.PortRangeAny,
						},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},
@@ -615,15 +651,16 @@ func TestReduceFilterRules(t *testing.T) {
				},
			},
			want: []tailcfg.FilterRule{
				// Merged: Both ACL rules combined (same SrcIPs)
				// Merged: Both ACL rules combined (same SrcIPs and IPProto)
				{
					SrcIPs: []string{
						"100.64.0.1-100.64.0.2",
						"fd7a:115c:a1e0::1-fd7a:115c:a1e0::2",
					},
					SrcIPs:   []string{"100.64.0.1/32", "100.64.0.2/32", "fd7a:115c:a1e0::1/128", "fd7a:115c:a1e0::2/128"},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.100",
							IP:    "100.64.0.100/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::100/128",
							Ports: tailcfg.PortRangeAny,
						},
						{
@@ -635,6 +672,7 @@ func TestReduceFilterRules(t *testing.T) {
							Ports: tailcfg.PortRangeAny,
						},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},
@@ -687,24 +725,22 @@ func TestReduceFilterRules(t *testing.T) {
			},
			want: []tailcfg.FilterRule{
				{
					SrcIPs: []string{
						"100.64.0.1",
						"fd7a:115c:a1e0::1",
					},
					SrcIPs: []string{"100.64.0.1/32", "fd7a:115c:a1e0::1/128"},
					DstPorts: []tailcfg.NetPortRange{
						{
							IP:    "100.64.0.100",
							IP:    "100.64.0.100/32",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "fd7a:115c:a1e0::100",
							IP:    "fd7a:115c:a1e0::100/128",
							Ports: tailcfg.PortRangeAny,
						},
						{
							IP:    "172.16.0.21",
							IP:    "172.16.0.21/32",
							Ports: tailcfg.PortRangeAny,
						},
					},
					IPProto: []int{v2.ProtocolTCP, v2.ProtocolUDP, v2.ProtocolICMP, v2.ProtocolIPv6ICMP},
				},
			},
		},

File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -1,25 +0,0 @@
package v2

import (
	"os"
	"path/filepath"
	"runtime"
	"testing"
)

// TestMain ensures the working directory is set to the package source directory
// so that relative testdata/ paths resolve correctly when the test binary is
// executed from an arbitrary location (e.g., via "go tool stress").
func TestMain(m *testing.M) {
	_, filename, _, ok := runtime.Caller(0)
	if !ok {
		panic("could not determine test source directory")
	}

	err := os.Chdir(filepath.Dir(filename))
	if err != nil {
		panic("could not chdir to test source directory: " + err.Error())
	}

	os.Exit(m.Run())
}
@@ -51,12 +51,6 @@ type PolicyManager struct {
	// Lazy map of per-node filter rules (reduced, for packet filters)
	filterRulesMap map[types.NodeID][]tailcfg.FilterRule
	usesAutogroupSelf bool

	// needsPerNodeFilter is true when filter rules must be compiled
	// per-node rather than globally. This is required when the policy
	// uses autogroup:self (node-relative destinations) or via grants
	// (per-router filter rules for steered traffic).
	needsPerNodeFilter bool
}

// filterAndPolicy combines the compiled filter rules with policy content for hashing.
@@ -84,7 +78,6 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node
		compiledFilterRulesMap: make(map[types.NodeID][]tailcfg.FilterRule, nodes.Len()),
		filterRulesMap:         make(map[types.NodeID][]tailcfg.FilterRule, nodes.Len()),
		usesAutogroupSelf:      policy.usesAutogroupSelf(),
		needsPerNodeFilter:     policy.usesAutogroupSelf() || policy.hasViaGrants(),
	}

	_, err = pm.updateLocked()
@@ -98,9 +91,8 @@ func NewPolicyManager(b []byte, users []types.User, nodes views.Slice[types.Node
// updateLocked updates the filter rules based on the current policy and nodes.
// It must be called with the lock held.
func (pm *PolicyManager) updateLocked() (bool, error) {
	// Check if policy uses autogroup:self or via grants
	// Check if policy uses autogroup:self
	pm.usesAutogroupSelf = pm.pol.usesAutogroupSelf()
	pm.needsPerNodeFilter = pm.usesAutogroupSelf || pm.pol.hasViaGrants()

	var filter []tailcfg.FilterRule

@@ -379,10 +371,8 @@ func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[typ
	pm.mu.Lock()
	defer pm.mu.Unlock()

	// If we have a global filter, use it for all nodes (normal case).
	// Via grants require the per-node path because the global filter
	// skips via grants (compileFilterRules: if len(grant.Via) > 0 { continue }).
	if !pm.needsPerNodeFilter {
	// If we have a global filter, use it for all nodes (normal case)
	if !pm.usesAutogroupSelf {
		ret := make(map[types.NodeID][]types.NodeView, nodes.Len())

		// Build the map of all peers according to the matchers.
@@ -405,7 +395,7 @@ func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[typ
		return ret
	}

	// For autogroup:self or via grants, build per-node peer relationships
	// For autogroup:self (empty global filter), build per-node peer relationships
	ret := make(map[types.NodeID][]types.NodeView, nodes.Len())

	// Pre-compute per-node matchers using unreduced compiled rules
@@ -440,21 +430,15 @@ func (pm *PolicyManager) BuildPeerMap(nodes views.Slice[types.NodeView]) map[typ
			nodeJ := nodes.At(j)
			matchersJ, hasFilterJ := nodeMatchers[nodeJ.ID()]

			// Check all access directions for symmetric peer visibility.
			// For via grants, filter rules exist on the via-designated node
			// (e.g., router-a) with sources being the client (group-a).
			// We need to check BOTH:
			// 1. nodeI.CanAccess(matchersI, nodeJ) — can nodeI reach nodeJ?
			// 2. nodeJ.CanAccess(matchersI, nodeI) — can nodeJ reach nodeI
			//    using nodeI's matchers? (reverse direction: the matchers
			//    on the via node accept traffic FROM the source)
			// Same for matchersJ in both directions.
			// If either node can access the other, both should see each other as peers.
			// This symmetric visibility is required for proper network operation:
			// - Admin with *:* rule should see tagged servers (even if servers
			//   can't access admin)
			// - Servers should see admin so they can respond to admin's connections
			canIAccessJ := hasFilterI && nodeI.CanAccess(matchersI, nodeJ)
			canJAccessI := hasFilterJ && nodeJ.CanAccess(matchersJ, nodeI)
			canJReachI := hasFilterI && nodeJ.CanAccess(matchersI, nodeI)
			canIReachJ := hasFilterJ && nodeI.CanAccess(matchersJ, nodeJ)

			if canIAccessJ || canJAccessI || canJReachI || canIReachJ {
			if canIAccessJ || canJAccessI {
				ret[nodeI.ID()] = append(ret[nodeI.ID()], nodeJ)
				ret[nodeJ.ID()] = append(ret[nodeJ.ID()], nodeI)
			}
@@ -498,7 +482,7 @@ func (pm *PolicyManager) filterForNodeLocked(node types.NodeView) ([]tailcfg.Fil
		return nil, nil
	}

	if !pm.needsPerNodeFilter {
	if !pm.usesAutogroupSelf {
		// For global filters, reduce to only rules relevant to this node.
		// Cache the reduced filter per node for efficiency.
		if rules, ok := pm.filterRulesMap[node.ID()]; ok {
@@ -513,8 +497,7 @@ func (pm *PolicyManager) filterForNodeLocked(node types.NodeView) ([]tailcfg.Fil
		return reducedFilter, nil
	}

	// Per-node compilation is needed when the policy uses autogroup:self
	// (node-relative destinations) or via grants (per-router filter rules).
	// For autogroup:self, compile per-node rules then reduce them.
	// Check if we have cached reduced rules for this node.
	if rules, ok := pm.filterRulesMap[node.ID()]; ok {
		return rules, nil
@@ -564,14 +547,12 @@ func (pm *PolicyManager) MatchersForNode(node types.NodeView) ([]matcher.Match,
	pm.mu.Lock()
	defer pm.mu.Unlock()

	// For global policies, return the shared global matchers.
	// Via grants require per-node matchers because the global matchers
	// are empty for via-grant-only policies.
	if !pm.needsPerNodeFilter {
	// For global policies, return the shared global matchers
	if !pm.usesAutogroupSelf {
		return pm.matchers, nil
	}

	// For autogroup:self or via grants, get unreduced compiled rules and create matchers
	// For autogroup:self, get unreduced compiled rules and create matchers
	compiledRules, err := pm.compileFilterRulesForNodeLocked(node)
	if err != nil {
		return nil, err
@@ -679,14 +660,6 @@ func (pm *PolicyManager) nodesHavePolicyAffectingChanges(newNodes views.Slice[ty
		if newNode.HasPolicyChange(oldNode) {
			return true
		}

		// Via grants and autogroup:self compile filter rules per-node
		// that depend on the node's route state (SubnetRoutes, ExitRoutes).
		// Route changes are policy-affecting in this context because they
		// alter which filter rules get generated for the via-designated node.
		if pm.needsPerNodeFilter && newNode.HasNetworkChanges(oldNode) {
			return true
		}
	}

	return false
@@ -848,101 +821,6 @@ func (pm *PolicyManager) NodeCanApproveRoute(node types.NodeView, route netip.Pr
	return false
}

// ViaRoutesForPeer computes via grant effects for a viewer-peer pair.
// For each via grant where the viewer matches the source, it checks whether the
// peer advertises any of the grant's destination prefixes. If the peer has the
// via tag, those prefixes go into Include; otherwise into Exclude.
func (pm *PolicyManager) ViaRoutesForPeer(viewer, peer types.NodeView) types.ViaRouteResult {
	var result types.ViaRouteResult

	if pm == nil || pm.pol == nil {
		return result
	}

	pm.mu.Lock()
	defer pm.mu.Unlock()

	// Self-steering doesn't apply.
	if viewer.ID() == peer.ID() {
		return result
	}

	grants := pm.pol.Grants
	for _, acl := range pm.pol.ACLs {
		grants = append(grants, aclToGrants(acl)...)
	}

	for _, grant := range grants {
		if len(grant.Via) == 0 {
			continue
		}

		// Check if viewer matches any grant source.
		viewerMatches := false

		for _, src := range grant.Sources {
			ips, err := src.Resolve(pm.pol, pm.users, pm.nodes)
			if err != nil {
				continue
			}

			if ips != nil && slices.ContainsFunc(viewer.IPs(), ips.Contains) {
				viewerMatches = true

				break
			}
		}

		if !viewerMatches {
			continue
		}

		// Collect destination prefixes that the peer actually advertises.
		peerSubnetRoutes := peer.SubnetRoutes()

		var matchedPrefixes []netip.Prefix

		for _, dst := range grant.Destinations {
			switch d := dst.(type) {
			case *Prefix:
				dstPrefix := netip.Prefix(*d)
				if slices.Contains(peerSubnetRoutes, dstPrefix) {
					matchedPrefixes = append(matchedPrefixes, dstPrefix)
				}
			case *AutoGroup:
				// autogroup:internet via grants do NOT affect AllowedIPs or
				// route steering for exit nodes. Tailscale SaaS handles exit
				// traffic forwarding through the client's exit node selection
				// mechanism, not through AllowedIPs. Verified by golden
				// captures GRANT-V14 through GRANT-V36.
			}
		}

		if len(matchedPrefixes) == 0 {
			continue
		}

		// Check if peer has any of the via tags.
		peerHasVia := false

		for _, viaTag := range grant.Via {
			if peer.HasTag(string(viaTag)) {
				peerHasVia = true

				break
			}
		}

		if peerHasVia {
			result.Include = append(result.Include, matchedPrefixes...)
		} else {
			result.Exclude = append(result.Exclude, matchedPrefixes...)
		}
	}

	return result
}

func (pm *PolicyManager) Version() int {
	return 2
}

@@ -1320,11 +1198,7 @@ func resolveTagOwners(p *Policy, users types.Users, nodes views.Slice[types.Node
		case Alias:
			// If it does not resolve, that means the tag is not associated with any IP addresses.
			resolved, _ := o.Resolve(p, users, nodes)
			if resolved != nil {
				for _, pref := range resolved.Prefixes() {
					ips.AddPrefix(pref)
				}
			}
			ips.AddSet(resolved)

		default:
			// Should never happen - after flattening, all owners should be Alias types

@@ -1339,457 +1339,3 @@ func TestIssue2990SameUserTaggedDevice(t *testing.T) {
		t.Logf("  rule %d: SrcIPs=%v DstPorts=%v", i, rule.SrcIPs, rule.DstPorts)
	}
}

func TestViaRoutesForPeer(t *testing.T) {
	t.Parallel()

	users := types.Users{
		{Model: gorm.Model{ID: 1}, Name: "user1", Email: "user1@"},
		{Model: gorm.Model{ID: 2}, Name: "user2", Email: "user2@"},
	}

	t.Run("self_returns_empty", func(t *testing.T) {
		t.Parallel()

		nodes := types.Nodes{
			{
				ID:       1,
				Hostname: "router",
				IPv4:     ap("100.64.0.1"),
				User:     users[0],
				UserID:   users[0].ID,
				Tags:     []string{"tag:router"},
				Hostinfo: &tailcfg.Hostinfo{
					RoutableIPs: []netip.Prefix{mp("10.0.0.0/24")},
				},
				ApprovedRoutes: []netip.Prefix{mp("10.0.0.0/24")},
			},
		}

		//nolint:goconst
		pol := `{
			"tagOwners": {
				"tag:router": ["user1@"]
			},
			"grants": [{
				"src": ["user1@"],
				"dst": ["10.0.0.0/24"],
				"ip": ["*"],
				"via": ["tag:router"]
			}]
		}`

		pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
		require.NoError(t, err)

		result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[0].View())
		require.Empty(t, result.Include)
		require.Empty(t, result.Exclude)
	})

	t.Run("viewer_not_in_source", func(t *testing.T) {
		t.Parallel()

		nodes := types.Nodes{
			{
				ID:       1,
				Hostname: "viewer",
				IPv4:     ap("100.64.0.1"),
				User:     users[1],
				UserID:   users[1].ID,
				Hostinfo: &tailcfg.Hostinfo{},
			},
			{
				ID:       2,
				Hostname: "router",
				IPv4:     ap("100.64.0.2"),
				User:     users[0],
				UserID:   users[0].ID,
				Tags:     []string{"tag:router"},
				Hostinfo: &tailcfg.Hostinfo{
					RoutableIPs: []netip.Prefix{mp("10.0.0.0/24")},
				},
				ApprovedRoutes: []netip.Prefix{mp("10.0.0.0/24")},
			},
		}

		//nolint:goconst
		pol := `{
			"tagOwners": {
				"tag:router": ["user1@"]
			},
			"grants": [{
				"src": ["user1@"],
				"dst": ["10.0.0.0/24"],
				"ip": ["*"],
				"via": ["tag:router"]
			}]
		}`

		pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
		require.NoError(t, err)

		// user2 is not in the grant source (user1@), so result should be empty.
		result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
		require.Empty(t, result.Include)
		require.Empty(t, result.Exclude)
	})

	t.Run("peer_does_not_advertise_destination", func(t *testing.T) {
		t.Parallel()

		nodes := types.Nodes{
			{
				ID:       1,
				Hostname: "viewer",
				IPv4:     ap("100.64.0.1"),
				User:     users[0],
				UserID:   users[0].ID,
				Hostinfo: &tailcfg.Hostinfo{},
			},
			{
				ID:       2,
				Hostname: "router",
				IPv4:     ap("100.64.0.2"),
				User:     users[0],
				UserID:   users[0].ID,
				Tags:     []string{"tag:router"},
				Hostinfo: &tailcfg.Hostinfo{
					// Advertises 192.168.0.0/24, not 10.0.0.0/24.
					RoutableIPs: []netip.Prefix{mp("192.168.0.0/24")},
				},
				ApprovedRoutes: []netip.Prefix{mp("192.168.0.0/24")},
			},
		}

		pol := `{
			"tagOwners": {
				"tag:router": ["user1@"]
			},
			"grants": [{
				"src": ["user1@"],
				"dst": ["10.0.0.0/24"],
				"ip": ["*"],
				"via": ["tag:router"]
			}]
		}`

		pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
		require.NoError(t, err)

		result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
		require.Empty(t, result.Include)
		require.Empty(t, result.Exclude)
	})

	t.Run("peer_with_via_tag_include", func(t *testing.T) {
		t.Parallel()

		nodes := types.Nodes{
			{
				ID:       1,
				Hostname: "viewer",
				IPv4:     ap("100.64.0.1"),
				User:     users[0],
				UserID:   users[0].ID,
				Hostinfo: &tailcfg.Hostinfo{},
			},
			{
				ID:       2,
				Hostname: "router",
				IPv4:     ap("100.64.0.2"),
				User:     users[0],
				UserID:   users[0].ID,
				Tags:     []string{"tag:router"},
				Hostinfo: &tailcfg.Hostinfo{
					RoutableIPs: []netip.Prefix{mp("10.0.0.0/24")},
				},
				ApprovedRoutes: []netip.Prefix{mp("10.0.0.0/24")},
			},
		}

		pol := `{
			"tagOwners": {
				"tag:router": ["user1@"]
			},
			"grants": [{
				"src": ["user1@"],
				"dst": ["10.0.0.0/24"],
				"ip": ["*"],
				"via": ["tag:router"]
			}]
		}`

		pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
		require.NoError(t, err)

		result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
		require.Equal(t, []netip.Prefix{mp("10.0.0.0/24")}, result.Include)
		require.Empty(t, result.Exclude)
	})

	t.Run("peer_without_via_tag_exclude", func(t *testing.T) {
		t.Parallel()

		nodes := types.Nodes{
			{
				ID:       1,
				Hostname: "viewer",
				IPv4:     ap("100.64.0.1"),
				User:     users[0],
				UserID:   users[0].ID,
				Hostinfo: &tailcfg.Hostinfo{},
			},
			{
				ID:       2,
				Hostname: "other-router",
				IPv4:     ap("100.64.0.2"),
				User:     users[0],
				UserID:   users[0].ID,
				Tags:     []string{"tag:other"},
				Hostinfo: &tailcfg.Hostinfo{
					RoutableIPs: []netip.Prefix{mp("10.0.0.0/24")},
				},
				ApprovedRoutes: []netip.Prefix{mp("10.0.0.0/24")},
			},
		}

		pol := `{
			"tagOwners": {
				"tag:router": ["user1@"],
				"tag:other": ["user1@"]
			},
			"grants": [{
				"src": ["user1@"],
				"dst": ["10.0.0.0/24"],
				"ip": ["*"],
				"via": ["tag:router"]
			}]
		}`

		pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
		require.NoError(t, err)

		// Peer has tag:other, not tag:router, so route goes to Exclude.
		result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
		require.Empty(t, result.Include)
|
||||
require.Equal(t, []netip.Prefix{mp("10.0.0.0/24")}, result.Exclude)
|
||||
})
|
||||
|
||||
t.Run("mixed_prefix_and_autogroup_internet", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
nodes := types.Nodes{
|
||||
{
|
||||
ID: 1,
|
||||
Hostname: "viewer",
|
||||
IPv4: ap("100.64.0.1"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
},
|
||||
{
|
||||
ID: 2,
|
||||
Hostname: "router",
|
||||
IPv4: ap("100.64.0.2"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Tags: []string{"tag:router"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{
|
||||
mp("10.0.0.0/24"),
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{
|
||||
mp("10.0.0.0/24"),
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
pol := `{
|
||||
"tagOwners": {
|
||||
"tag:router": ["user1@"]
|
||||
},
|
||||
"grants": [{
|
||||
"src": ["user1@"],
|
||||
"dst": ["10.0.0.0/24", "autogroup:internet"],
|
||||
"ip": ["*"],
|
||||
"via": ["tag:router"]
|
||||
}]
|
||||
}`
|
||||
|
||||
pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
|
||||
require.NoError(t, err)
|
||||
|
||||
result := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
|
||||
// Include should have only the subnet route.
|
||||
// autogroup:internet does not produce via route effects.
|
||||
require.Contains(t, result.Include, mp("10.0.0.0/24"))
|
||||
require.Len(t, result.Include, 1)
|
||||
require.Empty(t, result.Exclude)
|
||||
})
|
||||
|
||||
t.Run("autogroup_internet_exit_routes", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
nodes := types.Nodes{
|
||||
{
|
||||
ID: 1,
|
||||
Hostname: "viewer",
|
||||
IPv4: ap("100.64.0.1"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
},
|
||||
{
|
||||
ID: 2,
|
||||
Hostname: "exit-node",
|
||||
IPv4: ap("100.64.0.2"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Tags: []string{"tag:exit"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
{
|
||||
ID: 3,
|
||||
Hostname: "non-exit",
|
||||
IPv4: ap("100.64.0.3"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Tags: []string{"tag:other"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{
|
||||
mp("0.0.0.0/0"),
|
||||
mp("::/0"),
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
pol := `{
|
||||
"tagOwners": {
|
||||
"tag:exit": ["user1@"],
|
||||
"tag:other": ["user1@"]
|
||||
},
|
||||
"grants": [{
|
||||
"src": ["user1@"],
|
||||
"dst": ["autogroup:internet"],
|
||||
"ip": ["*"],
|
||||
"via": ["tag:exit"]
|
||||
}]
|
||||
}`
|
||||
|
||||
pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
|
||||
require.NoError(t, err)
|
||||
|
||||
// autogroup:internet via grants do NOT affect AllowedIPs or
|
||||
// route steering. Tailscale SaaS handles exit traffic through
|
||||
// the client's exit node mechanism, not ViaRoutesForPeer.
|
||||
// Verified by golden captures GRANT-V14 through GRANT-V36.
|
||||
resultExit := pm.ViaRoutesForPeer(nodes[0].View(), nodes[1].View())
|
||||
require.Empty(t, resultExit.Include)
|
||||
require.Empty(t, resultExit.Exclude)
|
||||
|
||||
resultOther := pm.ViaRoutesForPeer(nodes[0].View(), nodes[2].View())
|
||||
require.Empty(t, resultOther.Include)
|
||||
require.Empty(t, resultOther.Exclude)
|
||||
})
|
||||
|
||||
t.Run("via_routes_survive_reduce_routes", func(t *testing.T) {
|
||||
t.Parallel()
|
||||
|
||||
// This test validates that via-included routes are not
|
||||
// filtered out by ReduceRoutes. The viewer's matchers
|
||||
// allow tag-to-tag IP connectivity but don't explicitly
|
||||
// cover the subnet prefix, so ReduceRoutes alone would
|
||||
// drop it. The fix in state.RoutesForPeer applies
|
||||
// ReduceRoutes first, then appends via-included routes.
|
||||
|
||||
nodes := types.Nodes{
|
||||
{
|
||||
ID: 1,
|
||||
Hostname: "client",
|
||||
IPv4: ap("100.64.0.1"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Tags: []string{"tag:group-a"},
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
},
|
||||
{
|
||||
ID: 2,
|
||||
Hostname: "router",
|
||||
IPv4: ap("100.64.0.2"),
|
||||
User: new(users[0]),
|
||||
UserID: new(users[0].ID),
|
||||
Tags: []string{"tag:router-a"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{mp("10.0.0.0/24")},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{mp("10.0.0.0/24")},
|
||||
},
|
||||
}
|
||||
|
||||
pol := `{
|
||||
"tagOwners": {
|
||||
"tag:router-a": ["user1@"],
|
||||
"tag:group-a": ["user1@"]
|
||||
},
|
||||
"grants": [
|
||||
{
|
||||
"src": ["tag:group-a", "tag:router-a"],
|
||||
"dst": ["tag:group-a", "tag:router-a"],
|
||||
"ip": ["*"]
|
||||
},
|
||||
{
|
||||
"src": ["tag:group-a"],
|
||||
"dst": ["10.0.0.0/24"],
|
||||
"ip": ["*"],
|
||||
"via": ["tag:router-a"]
|
||||
}
|
||||
]
|
||||
}`
|
||||
|
||||
pm, err := NewPolicyManager([]byte(pol), users, nodes.ViewSlice())
|
||||
require.NoError(t, err)
|
||||
|
||||
client := nodes[0].View()
|
||||
router := nodes[1].View()
|
||||
|
||||
// ViaRoutesForPeer says router should include 10.0.0.0/24.
|
||||
viaResult := pm.ViaRoutesForPeer(client, router)
|
||||
require.Equal(t, []netip.Prefix{mp("10.0.0.0/24")}, viaResult.Include)
|
||||
require.Empty(t, viaResult.Exclude)
|
||||
|
||||
// Matchers for the client cover tag-to-tag connectivity
|
||||
// but do NOT cover the 10.0.0.0/24 subnet prefix.
|
||||
matchers, err := pm.MatchersForNode(client)
|
||||
require.NoError(t, err)
|
||||
require.NotEmpty(t, matchers)
|
||||
|
||||
// CanAccessRoute with the client's matchers returns false for
|
||||
// 10.0.0.0/24 because the matchers only cover tag-to-tag IPs.
|
||||
// This means ReduceRoutes would filter it out, which is why
|
||||
// state.RoutesForPeer must add via routes AFTER ReduceRoutes.
|
||||
canAccess := client.CanAccessRoute(matchers, mp("10.0.0.0/24"))
|
||||
require.False(t, canAccess,
|
||||
"client should NOT be able to access 10.0.0.0/24 via matchers alone; "+
|
||||
"state.RoutesForPeer adds via routes after ReduceRoutes to fix this")
|
||||
})
|
||||
}
@@ -1,485 +0,0 @@
// This file implements a data-driven test runner for ACL compatibility tests.
// It loads HuJSON golden files from testdata/acl_results/ACL-*.hujson and
// compares headscale's ACL engine output against the expected packet filter
// rules.
//
// The files were converted from the original inline Go struct test cases
// in tailscale_acl_compat_test.go. Each file contains:
// - A full policy (groups, tagOwners, hosts, acls)
// - Expected packet_filter_rules per node (5 nodes)
// - Or an error response for invalid policies
//
// Test data source: testdata/acl_results/ACL-*.hujson
// Original source: Tailscale SaaS API captures + headscale-generated expansions

package v2

import (
	"encoding/json"
	"net/netip"
	"os"
	"path/filepath"
	"strings"
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/juanfont/headscale/hscontrol/policy/policyutil"
	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/tailscale/hujson"
	"gorm.io/gorm"
	"tailscale.com/tailcfg"
)

// ptrAddr is a helper to create a pointer to a netip.Addr.
func ptrAddr(s string) *netip.Addr {
	addr := netip.MustParseAddr(s)

	return &addr
}

// setupACLCompatUsers returns the 3 test users for ACL compatibility tests.
// Email addresses use @example.com domain, matching the converted Tailscale
// policy format (Tailscale uses @passkey and @dalby.cc).
func setupACLCompatUsers() types.Users {
	return types.Users{
		{Model: gorm.Model{ID: 1}, Name: "kratail2tid", Email: "kratail2tid@example.com"},
		{Model: gorm.Model{ID: 2}, Name: "kristoffer", Email: "kristoffer@example.com"},
		{Model: gorm.Model{ID: 3}, Name: "monitorpasskeykradalby", Email: "monitorpasskeykradalby@example.com"},
	}
}

// setupACLCompatNodes returns the 8 test nodes for ACL compatibility tests.
// Uses the same topology as the grants compat tests.
func setupACLCompatNodes(users types.Users) types.Nodes {
	return types.Nodes{
		{
			ID: 1, GivenName: "user1",
			User: &users[0], UserID: &users[0].ID,
			IPv4: ptrAddr("100.90.199.68"), IPv6: ptrAddr("fd7a:115c:a1e0::2d01:c747"),
			Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 2, GivenName: "user-kris",
			User: &users[1], UserID: &users[1].ID,
			IPv4: ptrAddr("100.110.121.96"), IPv6: ptrAddr("fd7a:115c:a1e0::1737:7960"),
			Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 3, GivenName: "user-mon",
			User: &users[2], UserID: &users[2].ID,
			IPv4: ptrAddr("100.103.90.82"), IPv6: ptrAddr("fd7a:115c:a1e0::9e37:5a52"),
			Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 4, GivenName: "tagged-server",
			IPv4: ptrAddr("100.108.74.26"), IPv6: ptrAddr("fd7a:115c:a1e0::b901:4a87"),
			Tags: []string{"tag:server"}, Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 5, GivenName: "tagged-prod",
			IPv4: ptrAddr("100.103.8.15"), IPv6: ptrAddr("fd7a:115c:a1e0::5b37:80f"),
			Tags: []string{"tag:prod"}, Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 6, GivenName: "tagged-client",
			IPv4: ptrAddr("100.83.200.69"), IPv6: ptrAddr("fd7a:115c:a1e0::c537:c845"),
			Tags: []string{"tag:client"}, Hostinfo: &tailcfg.Hostinfo{},
		},
		{
			ID: 7, GivenName: "subnet-router",
			IPv4: ptrAddr("100.92.142.61"), IPv6: ptrAddr("fd7a:115c:a1e0::3e37:8e3d"),
			Tags: []string{"tag:router"},
			Hostinfo: &tailcfg.Hostinfo{
				RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
			},
			ApprovedRoutes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
		},
		{
			ID: 8, GivenName: "exit-node",
			IPv4: ptrAddr("100.85.66.106"), IPv6: ptrAddr("fd7a:115c:a1e0::7c37:426a"),
			Tags: []string{"tag:exit"}, Hostinfo: &tailcfg.Hostinfo{},
		},
	}
}

// findNodeByGivenName finds a node by its GivenName field.
func findNodeByGivenName(nodes types.Nodes, name string) *types.Node {
	for _, n := range nodes {
		if n.GivenName == name {
			return n
		}
	}

	return nil
}

// cmpOptions returns comparison options for FilterRule slices.
// It sorts SrcIPs and DstPorts to handle ordering differences.
func cmpOptions() []cmp.Option {
	return []cmp.Option{
		cmpopts.EquateComparable(netip.Prefix{}, netip.Addr{}),
		cmpopts.SortSlices(func(a, b string) bool { return a < b }),
		cmpopts.SortSlices(func(a, b tailcfg.NetPortRange) bool {
			if a.IP != b.IP {
				return a.IP < b.IP
			}

			if a.Ports.First != b.Ports.First {
				return a.Ports.First < b.Ports.First
			}

			return a.Ports.Last < b.Ports.Last
		}),
		cmpopts.SortSlices(func(a, b int) bool { return a < b }),
		cmpopts.SortSlices(func(a, b netip.Prefix) bool {
			if a.Addr() != b.Addr() {
				return a.Addr().Less(b.Addr())
			}

			return a.Bits() < b.Bits()
		}),
		// Compare json.RawMessage semantically rather than by exact
		// bytes to handle indentation differences between the policy
		// source and the golden capture data.
		cmp.Comparer(func(a, b json.RawMessage) bool {
			var va, vb any

			err := json.Unmarshal(a, &va)
			if err != nil {
				return string(a) == string(b)
			}

			err = json.Unmarshal(b, &vb)
			if err != nil {
				return string(a) == string(b)
			}

			ja, _ := json.Marshal(va)
			jb, _ := json.Marshal(vb)

			return string(ja) == string(jb)
		}),
		// Compare tailcfg.RawMessage semantically (it's a string type
		// containing JSON) to handle indentation differences.
		cmp.Comparer(func(a, b tailcfg.RawMessage) bool {
			var va, vb any

			err := json.Unmarshal([]byte(a), &va)
			if err != nil {
				return a == b
			}

			err = json.Unmarshal([]byte(b), &vb)
			if err != nil {
				return a == b
			}

			ja, _ := json.Marshal(va)
			jb, _ := json.Marshal(vb)

			return string(ja) == string(jb)
		}),
	}
}
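The two custom comparers above share one idea: normalize both JSON documents before comparing so that key order and whitespace differences are ignored. As a standalone, stdlib-only sketch of that normalization (the helper name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonSemanticEqual reports whether two JSON documents carry the same
// content: both are unmarshaled and re-marshaled, which discards
// whitespace and (because Go sorts map keys when encoding) object key
// order. It falls back to byte equality when either input is not valid
// JSON, mirroring the comparers above.
func jsonSemanticEqual(a, b []byte) bool {
	var va, vb any
	if json.Unmarshal(a, &va) != nil || json.Unmarshal(b, &vb) != nil {
		return string(a) == string(b)
	}

	ja, _ := json.Marshal(va)
	jb, _ := json.Marshal(vb)

	return string(ja) == string(jb)
}

func main() {
	a := []byte(`{"dst":"10.0.0.0/24","src":["user1@"]}`)
	b := []byte("{\n  \"src\": [\"user1@\"],\n  \"dst\": \"10.0.0.0/24\"\n}")

	fmt.Println(jsonSemanticEqual(a, b)) // same content, different layout
}
```

Note this treats numbers as float64 after round-tripping, which is fine for equality checks but would lose precision on very large integers.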

// aclTestFile represents the JSON structure of a captured ACL test file.
type aclTestFile struct {
	TestID           string `json:"test_id"`
	Source           string `json:"source"` // "tailscale_saas" or "headscale_adapted"
	Error            bool   `json:"error"`
	HeadscaleDiffers bool   `json:"headscale_differs"`
	ParentTest       string `json:"parent_test"`
	Input            struct {
		FullPolicy      json.RawMessage `json:"full_policy"`
		APIResponseCode int             `json:"api_response_code"`
		APIResponseBody *struct {
			Message string `json:"message"`
		} `json:"api_response_body"`
	} `json:"input"`
	Topology struct {
		Nodes map[string]struct {
			Hostname       string   `json:"hostname"`
			Tags           []string `json:"tags"`
			IPv4           string   `json:"ipv4"`
			IPv6           string   `json:"ipv6"`
			User           string   `json:"user"`
			RoutableIPs    []string `json:"routable_ips"`
			ApprovedRoutes []string `json:"approved_routes"`
		} `json:"nodes"`
	} `json:"topology"`
	Captures map[string]struct {
		PacketFilterRules json.RawMessage `json:"packet_filter_rules"`
	} `json:"captures"`
}

// loadACLTestFile loads and parses a single ACL test JSON file.
func loadACLTestFile(t *testing.T, path string) aclTestFile {
	t.Helper()

	content, err := os.ReadFile(path)
	require.NoError(t, err, "failed to read test file %s", path)

	ast, err := hujson.Parse(content)
	require.NoError(t, err, "failed to parse HuJSON in %s", path)
	ast.Standardize()

	var tf aclTestFile

	err = json.Unmarshal(ast.Pack(), &tf)
	require.NoError(t, err, "failed to unmarshal test file %s", path)

	return tf
}

// aclSkipReasons documents WHY tests are expected to fail and WHAT needs to be
// implemented to fix them. Tests are grouped by root cause.
//
// Impact summary:
//
//	SRCIPS_FORMAT - tests: SrcIPs use adapted format (100.64.0.0/10 vs partitioned CIDRs)
//	DSTPORTS_FORMAT - tests: DstPorts IP format differences
//	IPPROTO_FORMAT - tests: IPProto nil vs [6,17,1,58]
//	IMPLEMENTATION_PENDING - tests: Not yet implemented in headscale
var aclSkipReasons = map[string]string{
	// Currently all tests are in the skip list because the ACL engine
	// output format changed with the ResolvedAddresses refactor.
	// Tests will be removed from this list as the implementation is
	// updated to match the expected output.
}

// TestACLCompat is a data-driven test that loads all ACL-*.hujson test files
// and compares headscale's ACL engine output against the expected behavior.
//
// Each file contains:
// - A full policy with groups, tagOwners, hosts, and acls
// - For success cases: expected packet_filter_rules per node (5 nodes)
// - For error cases: expected error message
func TestACLCompat(t *testing.T) {
	t.Parallel()

	files, err := filepath.Glob(
		filepath.Join("testdata", "acl_results", "ACL-*.hujson"),
	)
	require.NoError(t, err, "failed to glob test files")
	require.NotEmpty(
		t,
		files,
		"no ACL-*.hujson test files found in testdata/acl_results/",
	)

	t.Logf("Loaded %d ACL test files", len(files))

	users := setupACLCompatUsers()
	nodes := setupACLCompatNodes(users)

	for _, file := range files {
		tf := loadACLTestFile(t, file)

		t.Run(tf.TestID, func(t *testing.T) {
			t.Parallel()

			// Check skip list
			if reason, ok := aclSkipReasons[tf.TestID]; ok {
				t.Skipf(
					"TODO: %s — see aclSkipReasons for details",
					reason,
				)

				return
			}

			if tf.Error {
				testACLError(t, tf)

				return
			}

			testACLSuccess(t, tf, users, nodes)
		})
	}
}

// testACLError verifies that an invalid policy produces the expected error.
func testACLError(t *testing.T, tf aclTestFile) {
	t.Helper()

	policyJSON := convertPolicyUserEmails(tf.Input.FullPolicy)

	pol, err := unmarshalPolicy(policyJSON)
	if err != nil {
		// Parse-time error — valid for some error tests
		if tf.Input.APIResponseBody != nil {
			wantMsg := tf.Input.APIResponseBody.Message
			if wantMsg != "" {
				assert.Contains(
					t,
					err.Error(),
					wantMsg,
					"%s: error message should contain expected substring",
					tf.TestID,
				)
			}
		}

		return
	}

	err = pol.validate()
	if err != nil {
		if tf.Input.APIResponseBody != nil {
			wantMsg := tf.Input.APIResponseBody.Message
			if wantMsg != "" {
				// Allow partial match — headscale error messages differ
				// from Tailscale's
				errStr := err.Error()
				if !strings.Contains(errStr, wantMsg) {
					// Try matching key parts
					matched := false

					for _, part := range []string{
						"autogroup:self",
						"not valid on the src",
						"port range",
						"tag not found",
						"undefined",
					} {
						if strings.Contains(wantMsg, part) &&
							strings.Contains(errStr, part) {
							matched = true

							break
						}
					}

					if !matched {
						t.Logf(
							"%s: error message difference\n want (tailscale): %q\n got (headscale): %q",
							tf.TestID,
							wantMsg,
							errStr,
						)
					}
				}
			}
		}

		return
	}

	// For headscale_differs tests, headscale may accept what Tailscale rejects
	if tf.HeadscaleDiffers {
		t.Logf(
			"%s: headscale accepts this policy (Tailscale rejects it)",
			tf.TestID,
		)

		return
	}

	t.Errorf(
		"%s: expected error but policy parsed and validated successfully",
		tf.TestID,
	)
}

// testACLSuccess verifies that a valid policy produces the expected
// packet filter rules for each node.
func testACLSuccess(
	t *testing.T,
	tf aclTestFile,
	users types.Users,
	nodes types.Nodes,
) {
	t.Helper()

	// Convert Tailscale SaaS user emails to headscale @example.com format.
	policyJSON := convertPolicyUserEmails(tf.Input.FullPolicy)

	pol, err := unmarshalPolicy(policyJSON)
	require.NoError(
		t,
		err,
		"%s: policy should parse successfully",
		tf.TestID,
	)

	err = pol.validate()
	require.NoError(
		t,
		err,
		"%s: policy should validate successfully",
		tf.TestID,
	)

	for nodeName, capture := range tf.Captures {
		t.Run(nodeName, func(t *testing.T) {
			captureIsNull := len(capture.PacketFilterRules) == 0 ||
				string(capture.PacketFilterRules) == "null" //nolint:goconst

			node := findNodeByGivenName(nodes, nodeName)
			if node == nil {
				t.Skipf(
					"node %s not found in test setup",
					nodeName,
				)

				return
			}

			// Compile headscale filter rules for this node
			compiledRules, err := pol.compileFilterRulesForNode(
				users,
				node.View(),
				nodes.ViewSlice(),
			)
			require.NoError(
				t,
				err,
				"%s/%s: failed to compile filter rules",
				tf.TestID,
				nodeName,
			)

			gotRules := policyutil.ReduceFilterRules(
				node.View(),
				compiledRules,
			)

			// Parse expected rules from JSON
			var wantRules []tailcfg.FilterRule
			if !captureIsNull {
				err = json.Unmarshal(
					capture.PacketFilterRules,
					&wantRules,
				)
				require.NoError(
					t,
					err,
					"%s/%s: failed to unmarshal expected rules",
					tf.TestID,
					nodeName,
				)
			}

			// Compare
			opts := append(
				cmpOptions(),
				cmpopts.EquateEmpty(),
			)
			if diff := cmp.Diff(
				wantRules,
				gotRules,
				opts...,
			); diff != "" {
				t.Errorf(
					"%s/%s: filter rules mismatch (-want +got):\n%s",
					tf.TestID,
					nodeName,
					diff,
				)
			}
		})
	}
}
9936
hscontrol/policy/v2/tailscale_compat_test.go
Normal file
File diff suppressed because it is too large
@@ -1,620 +0,0 @@
|
||||
// This file is "generated" by Claude.
|
||||
// It contains a data-driven test that reads 237 GRANT-*.json test files
|
||||
// captured from Tailscale SaaS. Each file contains:
|
||||
// - A policy with grants (and optionally ACLs)
|
||||
// - The expected packet_filter_rules for each of 8 test nodes
|
||||
// - Or an error response for invalid policies
|
||||
//
|
||||
// The test loads each JSON file, applies the policy through headscale's
|
||||
// grants engine, and compares the output against Tailscale's actual behavior.
|
||||
//
|
||||
// Tests that are known to fail due to unimplemented features or known
|
||||
// differences are skipped with a TODO comment explaining the root cause.
|
||||
// As headscale's grants implementation improves, tests should be removed
|
||||
// from the skip list.
|
||||
//
|
||||
// Test data source: testdata/grant_results/GRANT-*.json
|
||||
// Captured from: Tailscale SaaS API + tailscale debug localapi
|
||||
|
||||
package v2
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"net/netip"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"github.com/google/go-cmp/cmp/cmpopts"
|
||||
"github.com/juanfont/headscale/hscontrol/policy/policyutil"
|
||||
"github.com/juanfont/headscale/hscontrol/types"
|
||||
"github.com/stretchr/testify/require"
|
||||
"github.com/tailscale/hujson"
|
||||
"gorm.io/gorm"
|
||||
"tailscale.com/tailcfg"
|
||||
)
|
||||
|
||||
// grantTestFile represents the JSON structure of a captured grant test file.
|
||||
type grantTestFile struct {
|
||||
TestID string `json:"test_id"`
|
||||
Error bool `json:"error"`
|
||||
Input struct {
|
||||
FullPolicy json.RawMessage `json:"full_policy"`
|
||||
APIResponseCode int `json:"api_response_code"`
|
||||
APIResponseBody *struct {
|
||||
Message string `json:"message"`
|
||||
} `json:"api_response_body"`
|
||||
} `json:"input"`
|
||||
Topology struct {
|
||||
Nodes map[string]struct {
|
||||
Hostname string `json:"hostname"`
|
||||
Tags []string `json:"tags"`
|
||||
IPv4 string `json:"ipv4"`
|
||||
IPv6 string `json:"ipv6"`
|
||||
} `json:"nodes"`
|
||||
} `json:"topology"`
|
||||
Captures map[string]struct {
|
||||
PacketFilterRules json.RawMessage `json:"packet_filter_rules"`
|
||||
} `json:"captures"`
|
||||
}
|
||||
|
||||
// setupGrantsCompatUsers returns the 3 test users for grants compatibility tests.
|
||||
// Email addresses use @example.com domain, matching the converted Tailscale policy format.
|
||||
func setupGrantsCompatUsers() types.Users {
|
||||
return types.Users{
|
||||
{Model: gorm.Model{ID: 1}, Name: "kratail2tid", Email: "kratail2tid@example.com"},
|
||||
{Model: gorm.Model{ID: 2}, Name: "kristoffer", Email: "kristoffer@example.com"},
|
||||
{Model: gorm.Model{ID: 3}, Name: "monitorpasskeykradalby", Email: "monitorpasskeykradalby@example.com"},
|
||||
}
|
||||
}
|
||||
|
||||
// setupGrantsCompatNodes returns the 8 test nodes for grants compatibility tests.
|
||||
// The node configuration matches the Tailscale test environment:
|
||||
// - 3 user-owned nodes (user1, user-kris, user-mon)
|
||||
// - 5 tagged nodes (tagged-server, tagged-prod, tagged-client, subnet-router, exit-node)
|
||||
func setupGrantsCompatNodes(users types.Users) types.Nodes {
|
||||
nodeUser1 := &types.Node{
|
||||
ID: 1,
|
||||
GivenName: "user1",
|
||||
User: &users[0],
|
||||
UserID: &users[0].ID,
|
||||
IPv4: ptrAddr("100.90.199.68"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::2d01:c747"),
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeUserKris := &types.Node{
|
||||
ID: 2,
|
||||
GivenName: "user-kris",
|
||||
User: &users[1],
|
||||
UserID: &users[1].ID,
|
||||
IPv4: ptrAddr("100.110.121.96"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::1737:7960"),
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeUserMon := &types.Node{
|
||||
ID: 3,
|
||||
GivenName: "user-mon",
|
||||
User: &users[2],
|
||||
UserID: &users[2].ID,
|
||||
IPv4: ptrAddr("100.103.90.82"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::9e37:5a52"),
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeTaggedServer := &types.Node{
|
||||
ID: 4,
|
||||
GivenName: "tagged-server",
|
||||
IPv4: ptrAddr("100.108.74.26"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::b901:4a87"),
|
||||
Tags: []string{"tag:server"},
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeTaggedProd := &types.Node{
|
||||
ID: 5,
|
||||
GivenName: "tagged-prod",
|
||||
IPv4: ptrAddr("100.103.8.15"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::5b37:80f"),
|
||||
Tags: []string{"tag:prod"},
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeTaggedClient := &types.Node{
|
||||
ID: 6,
|
||||
GivenName: "tagged-client",
|
||||
IPv4: ptrAddr("100.83.200.69"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::c537:c845"),
|
||||
Tags: []string{"tag:client"},
|
||||
Hostinfo: &tailcfg.Hostinfo{},
|
||||
}
|
||||
|
||||
nodeSubnetRouter := &types.Node{
|
||||
ID: 7,
|
||||
GivenName: "subnet-router",
|
||||
IPv4: ptrAddr("100.92.142.61"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::3e37:8e3d"),
|
||||
Tags: []string{"tag:router"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{netip.MustParsePrefix("10.33.0.0/16")},
|
||||
}
|
||||
|
||||
nodeExitNode := &types.Node{
|
||||
ID: 8,
|
||||
GivenName: "exit-node",
|
||||
IPv4: ptrAddr("100.85.66.106"),
|
||||
IPv6: ptrAddr("fd7a:115c:a1e0::7c37:426a"),
|
||||
Tags: []string{"tag:exit"},
|
||||
Hostinfo: &tailcfg.Hostinfo{
|
||||
RoutableIPs: []netip.Prefix{
|
||||
netip.MustParsePrefix("0.0.0.0/0"),
|
||||
netip.MustParsePrefix("::/0"),
|
||||
},
|
||||
},
|
||||
ApprovedRoutes: []netip.Prefix{
|
||||
netip.MustParsePrefix("0.0.0.0/0"),
|
||||
netip.MustParsePrefix("::/0"),
|
||||
},
|
||||
}
|
||||
|
||||
// --- New nodes for expanded via grant topology ---
|
||||
|
||||
nodeExitA := &types.Node{
|
||||
ID: 9,
|
||||
GivenName: "exit-a",
|
||||
IPv4: ptrAddr("100.124.195.93"),
|
||||
		IPv6:      ptrAddr("fd7a:115c:a1e0::7837:c35d"),
		Tags:      []string{"tag:exit-a"},
		Hostinfo: &tailcfg.Hostinfo{
			RoutableIPs: []netip.Prefix{
				netip.MustParsePrefix("0.0.0.0/0"),
				netip.MustParsePrefix("::/0"),
			},
		},
		ApprovedRoutes: []netip.Prefix{
			netip.MustParsePrefix("0.0.0.0/0"),
			netip.MustParsePrefix("::/0"),
		},
	}

	nodeExitB := &types.Node{
		ID:        10,
		GivenName: "exit-b",
		IPv4:      ptrAddr("100.116.18.24"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::ff37:1218"),
		Tags:      []string{"tag:exit-b"},
		Hostinfo: &tailcfg.Hostinfo{
			RoutableIPs: []netip.Prefix{
				netip.MustParsePrefix("0.0.0.0/0"),
				netip.MustParsePrefix("::/0"),
			},
		},
		ApprovedRoutes: []netip.Prefix{
			netip.MustParsePrefix("0.0.0.0/0"),
			netip.MustParsePrefix("::/0"),
		},
	}

	nodeGroupA := &types.Node{
		ID:        11,
		GivenName: "group-a-client",
		IPv4:      ptrAddr("100.107.162.14"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::a237:a20e"),
		Tags:      []string{"tag:group-a"},
		Hostinfo:  &tailcfg.Hostinfo{},
	}

	nodeGroupB := &types.Node{
		ID:        12,
		GivenName: "group-b-client",
		IPv4:      ptrAddr("100.77.135.18"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::4b37:8712"),
		Tags:      []string{"tag:group-b"},
		Hostinfo:  &tailcfg.Hostinfo{},
	}

	nodeRouterA := &types.Node{
		ID:        13,
		GivenName: "router-a",
		IPv4:      ptrAddr("100.109.43.124"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::a537:2b7c"),
		Tags:      []string{"tag:router-a"},
		Hostinfo: &tailcfg.Hostinfo{
			RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.44.0.0/16")},
		},
		ApprovedRoutes: []netip.Prefix{netip.MustParsePrefix("10.44.0.0/16")},
	}

	nodeRouterB := &types.Node{
		ID:        14,
		GivenName: "router-b",
		IPv4:      ptrAddr("100.65.172.123"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::5a37:ac7c"),
		Tags:      []string{"tag:router-b"},
		Hostinfo: &tailcfg.Hostinfo{
			RoutableIPs: []netip.Prefix{netip.MustParsePrefix("10.55.0.0/16")},
		},
		ApprovedRoutes: []netip.Prefix{netip.MustParsePrefix("10.55.0.0/16")},
	}

	nodeMultiExitRouter := &types.Node{
		ID:        15,
		GivenName: "multi-exit-router",
		IPv4:      ptrAddr("100.105.127.107"),
		IPv6:      ptrAddr("fd7a:115c:a1e0::9537:7f6b"),
		Tags:      []string{"tag:exit", "tag:router"},
		Hostinfo: &tailcfg.Hostinfo{
			RoutableIPs: []netip.Prefix{
				netip.MustParsePrefix("10.33.0.0/16"),
				netip.MustParsePrefix("0.0.0.0/0"),
				netip.MustParsePrefix("::/0"),
			},
		},
		ApprovedRoutes: []netip.Prefix{
			netip.MustParsePrefix("10.33.0.0/16"),
			netip.MustParsePrefix("0.0.0.0/0"),
			netip.MustParsePrefix("::/0"),
		},
	}

	return types.Nodes{
		nodeUser1,
		nodeUserKris,
		nodeUserMon,
		nodeTaggedServer,
		nodeTaggedProd,
		nodeTaggedClient,
		nodeSubnetRouter,
		nodeExitNode,
		nodeExitA,
		nodeExitB,
		nodeGroupA,
		nodeGroupB,
		nodeRouterA,
		nodeRouterB,
		nodeMultiExitRouter,
	}
}

// findGrantsNode finds a node by its GivenName in the grants test environment.
func findGrantsNode(nodes types.Nodes, name string) *types.Node {
	for _, n := range nodes {
		if n.GivenName == name {
			return n
		}
	}

	return nil
}

// convertPolicyUserEmails converts Tailscale SaaS user email formats to
// headscale-compatible @example.com format in the raw policy JSON.
//
// Tailscale uses provider-specific email formats:
//   - kratail2tid@passkey (passkey auth)
//   - kristoffer@dalby.cc (email auth)
//   - monitorpasskeykradalby@passkey (passkey auth)
//
// Headscale resolves users by Email field, so we convert all to @example.com.
func convertPolicyUserEmails(policyJSON []byte) []byte {
	s := string(policyJSON)
	s = strings.ReplaceAll(s, "kratail2tid@passkey", "kratail2tid@example.com")
	s = strings.ReplaceAll(s, "kristoffer@dalby.cc", "kristoffer@example.com")
	s = strings.ReplaceAll(s, "monitorpasskeykradalby@passkey", "monitorpasskeykradalby@example.com")

	return []byte(s)
}

// loadGrantTestFile loads and parses a single grant test JSON file.
func loadGrantTestFile(t *testing.T, path string) grantTestFile {
	t.Helper()

	content, err := os.ReadFile(path)
	require.NoError(t, err, "failed to read test file %s", path)

	ast, err := hujson.Parse(content)
	require.NoError(t, err, "failed to parse HuJSON in %s", path)
	ast.Standardize()

	var tf grantTestFile

	err = json.Unmarshal(ast.Pack(), &tf)
	require.NoError(t, err, "failed to unmarshal test file %s", path)

	return tf
}

// Skip categories document WHY tests are expected to differ from Tailscale SaaS.
// Tests are grouped by root cause.
//
// USER_PASSKEY_WILDCARD - 2 tests: user:*@passkey wildcard pattern not supported
//
// Total: 2 tests skipped, ~246 tests expected to pass.
var grantSkipReasons = map[string]string{
	// USER_PASSKEY_WILDCARD (2 tests)
	//
	// Tailscale SaaS policies can use user:*@passkey as a wildcard matching
	// all passkey-authenticated users. headscale does not support passkey
	// authentication and has no equivalent for this wildcard pattern.
	"GRANT-K20": "USER_PASSKEY_WILDCARD: src=user:*@passkey not supported in headscale",
	"GRANT-K21": "USER_PASSKEY_WILDCARD: dst=user:*@passkey not supported in headscale",
}

// TestGrantsCompat is a data-driven test that loads all GRANT-*.json
// test files captured from Tailscale SaaS and compares headscale's grants
// engine output against the real Tailscale behavior.
//
// Each JSON file contains:
//   - A full policy (groups, tagOwners, hosts, autoApprovers, grants, optionally acls)
//   - For success cases: expected packet_filter_rules per node
//   - For error cases: expected error message
//
// The test converts Tailscale user email formats (@passkey, @dalby.cc) to
// headscale format (@example.com) and runs the policy through unmarshalPolicy,
// validate, compileFilterRulesForNode, and ReduceFilterRules.
//
// 2 tests are skipped for user:*@passkey wildcard (not supported in headscale).
func TestGrantsCompat(t *testing.T) {
	t.Parallel()

	files, err := filepath.Glob(filepath.Join("testdata", "grant_results", "GRANT-*.hujson"))
	require.NoError(t, err, "failed to glob test files")
	require.NotEmpty(t, files, "no GRANT-*.hujson test files found in testdata/grant_results/")

	t.Logf("Loaded %d grant test files", len(files))

	users := setupGrantsCompatUsers()
	allNodes := setupGrantsCompatNodes(users)

	for _, file := range files {
		tf := loadGrantTestFile(t, file)

		t.Run(tf.TestID, func(t *testing.T) {
			t.Parallel()

			// Check if this test is in the skip list
			if reason, ok := grantSkipReasons[tf.TestID]; ok {
				t.Skipf("TODO: %s — see grantSkipReasons comments for details", reason)
				return
			}

			// Determine which node set to use based on the test's topology.
			// Tests captured with the expanded 15-node topology (V26+) have
			// nodes like exit-a, group-a-client, etc. Tests from the original
			// 8-node topology should only use the first 8 nodes to avoid
			// resolving extra IPs from nodes that weren't present during capture.
			nodes := allNodes
			if _, hasNewNodes := tf.Captures["exit-a"]; !hasNewNodes {
				nodes = allNodes[:8]
			}

			// Convert Tailscale user emails to headscale @example.com format
			policyJSON := convertPolicyUserEmails(tf.Input.FullPolicy)

			if tf.Input.APIResponseCode == 400 || tf.Error {
				testGrantError(t, policyJSON, tf)
				return
			}

			testGrantSuccess(t, policyJSON, tf, users, nodes)
		})
	}
}

// testGrantError verifies that an invalid policy produces the expected error.
func testGrantError(t *testing.T, policyJSON []byte, tf grantTestFile) {
	t.Helper()

	wantMsg := ""
	if tf.Input.APIResponseBody != nil {
		wantMsg = tf.Input.APIResponseBody.Message
	}

	pol, err := unmarshalPolicy(policyJSON)
	if err != nil {
		// Parse-time error
		if wantMsg != "" {
			assertGrantErrorContains(t, err, wantMsg, tf.TestID)
		}

		return
	}

	err = pol.validate()
	if err != nil {
		// Validation error
		if wantMsg != "" {
			assertGrantErrorContains(t, err, wantMsg, tf.TestID)
		}

		return
	}

	t.Errorf("%s: expected error (api_response_code=400) but policy parsed and validated successfully; want message: %q",
		tf.TestID, wantMsg)
}

// grantErrorMessageMap maps Tailscale error messages to their headscale equivalents
// where the wording differs but the meaning is the same.
var grantErrorMessageMap = map[string]string{
	// Tailscale says "ip and app can not both be empty",
	// headscale says "grants must specify either 'ip' or 'app' field"
	"ip and app can not both be empty": "grants must specify either",
	// Tailscale says "via can only be a tag",
	// headscale rejects at unmarshal time via Tag.UnmarshalJSON: "tag must start with 'tag:'"
	"via can only be a tag": "tag must start with",
}

// assertGrantErrorContains checks that an error message contains the expected
// Tailscale error message (or its headscale equivalent).
func assertGrantErrorContains(t *testing.T, err error, wantMsg string, testID string) {
	t.Helper()

	errStr := err.Error()

	// First try direct substring match
	if strings.Contains(errStr, wantMsg) {
		return
	}

	// Try mapped equivalent
	if mapped, ok := grantErrorMessageMap[wantMsg]; ok {
		if strings.Contains(errStr, mapped) {
			return
		}
	}

	// Try matching key parts of the error message
	// Extract the most distinctive part of the Tailscale message
	keyParts := extractErrorKeyParts(wantMsg)
	for _, part := range keyParts {
		if strings.Contains(errStr, part) {
			return
		}
	}

	t.Errorf("%s: error message mismatch\n tailscale wants: %q\n headscale got: %q",
		testID, wantMsg, errStr)
}

// extractErrorKeyParts extracts distinctive substrings from an error message
// that should appear in any equivalent error message.
func extractErrorKeyParts(msg string) []string {
	var parts []string

	// Common patterns to extract
	if strings.Contains(msg, "tag:") {
		// Extract tag references like tag:nonexistent
		for word := range strings.FieldsSeq(msg) {
			word = strings.Trim(word, `"'`)
			if strings.HasPrefix(word, "tag:") {
				parts = append(parts, word)
			}
		}
	}

	if strings.Contains(msg, "autogroup:") {
		for word := range strings.FieldsSeq(msg) {
			word = strings.Trim(word, `"'`)
			if strings.HasPrefix(word, "autogroup:") {
				parts = append(parts, word)
			}
		}
	}

	if strings.Contains(msg, "capability name") {
		parts = append(parts, "capability")
	}

	if strings.Contains(msg, "port range") {
		parts = append(parts, "port")
	}

	return parts
}

// testGrantSuccess verifies that a valid policy produces the expected
// packet filter rules for each node.
func testGrantSuccess(
	t *testing.T,
	policyJSON []byte,
	tf grantTestFile,
	users types.Users,
	nodes types.Nodes,
) {
	t.Helper()

	pol, err := unmarshalPolicy(policyJSON)
	require.NoError(t, err, "%s: policy should parse successfully", tf.TestID)

	err = pol.validate()
	require.NoError(t, err, "%s: policy should validate successfully", tf.TestID)

	for nodeName, capture := range tf.Captures {
		t.Run(nodeName, func(t *testing.T) {
			// Check if this node was offline during capture.
			// tagged-prod was frequently offline (132 of 188 success tests).
			// When offline, packet_filter_rules is null and topology shows
			// hostname="unknown" with empty tags.
			captureIsNull := len(capture.PacketFilterRules) == 0 ||
				string(capture.PacketFilterRules) == "null"

			if captureIsNull {
				topoNode, exists := tf.Topology.Nodes[nodeName]
				if exists && (topoNode.Hostname == "unknown" || topoNode.Hostname == "") {
					t.Skipf(
						"node %s was offline during Tailscale capture (hostname=%q)",
						nodeName,
						topoNode.Hostname,
					)

					return
				}
				// Node was online but has null/empty rules — means Tailscale
				// produced no rules. headscale should also produce no rules.
			}

			node := findGrantsNode(nodes, nodeName)
			if node == nil {
				t.Skipf(
					"node %s not found in test setup (may be a test-specific node)",
					nodeName,
				)

				return
			}

			// Compile headscale filter rules for this node
			gotRules, err := pol.compileFilterRulesForNode(
				users,
				node.View(),
				nodes.ViewSlice(),
			)
			require.NoError(
				t,
				err,
				"%s/%s: failed to compile filter rules",
				tf.TestID,
				nodeName,
			)

			gotRules = policyutil.ReduceFilterRules(node.View(), gotRules)

			// Unmarshal Tailscale expected rules from JSON capture
			var wantRules []tailcfg.FilterRule
			if !captureIsNull {
				err = json.Unmarshal(
					[]byte(capture.PacketFilterRules),
					&wantRules,
				)
				require.NoError(
					t,
					err,
					"%s/%s: failed to unmarshal expected rules from JSON",
					tf.TestID,
					nodeName,
				)
			}

			// Compare headscale output against Tailscale expected output.
			// The diff labels show (-tailscale +headscale) to make clear
			// which side produced which output.
			// EquateEmpty treats nil and empty slices as equal since
			// Tailscale's JSON null -> nil, headscale may return empty slice.
			opts := append(cmpOptions(), cmpopts.EquateEmpty())
			if diff := cmp.Diff(wantRules, gotRules, opts...); diff != "" {
				t.Errorf(
					"%s/%s: filter rules mismatch (-tailscale +headscale):\n%s",
					tf.TestID,
					nodeName,
					diff,
				)
			}
		})
	}
}
8286 hscontrol/policy/v2/tailscale_routes_compat_test.go (new file)
File diff suppressed because it is too large
@@ -1,313 +0,0 @@
// This file implements a data-driven test runner for routes compatibility tests.
// It loads JSON golden files from testdata/routes_results/ROUTES-*.json and
// compares headscale's route-aware ACL engine output against the expected
// packet filter rules.
//
// Each JSON file contains:
//   - A full policy (groups, tagOwners, hosts, acls)
//   - A topology section with nodes, including routable_ips and approved_routes
//   - Expected packet_filter_rules per node
//
// Test data source: testdata/routes_results/ROUTES-*.json
// Original source: Tailscale SaaS captures + headscale-generated expansions

package v2

import (
	"encoding/json"
	"net/netip"
	"os"
	"path/filepath"
	"testing"

	"github.com/google/go-cmp/cmp"
	"github.com/google/go-cmp/cmp/cmpopts"
	"github.com/juanfont/headscale/hscontrol/policy/policyutil"
	"github.com/juanfont/headscale/hscontrol/types"
	"github.com/stretchr/testify/require"
	"github.com/tailscale/hujson"
	"gorm.io/gorm"
	"tailscale.com/tailcfg"
)

// routesTestFile represents the JSON structure of a captured routes test file.
type routesTestFile struct {
	TestID     string `json:"test_id"`
	Source     string `json:"source"`
	ParentTest string `json:"parent_test"`
	Input      struct {
		FullPolicy json.RawMessage `json:"full_policy"`
	} `json:"input"`
	Topology routesTopology `json:"topology"`
	Captures map[string]struct {
		PacketFilterRules json.RawMessage `json:"packet_filter_rules"`
	} `json:"captures"`
}

// routesTopology describes the node topology for a routes test.
type routesTopology struct {
	Users []struct {
		ID   uint   `json:"id"`
		Name string `json:"name"`
	} `json:"users"`
	Nodes map[string]routesNodeDef `json:"nodes"`
}

// routesNodeDef describes a single node in the routes test topology.
type routesNodeDef struct {
	ID             int      `json:"id"`
	Hostname       string   `json:"hostname"`
	IPv4           string   `json:"ipv4"`
	IPv6           string   `json:"ipv6"`
	Tags           []string `json:"tags"`
	User           string   `json:"user,omitempty"`
	RoutableIPs    []string `json:"routable_ips"`
	ApprovedRoutes []string `json:"approved_routes"`
}

// loadRoutesTestFile loads and parses a single routes test JSON file.
func loadRoutesTestFile(t *testing.T, path string) routesTestFile {
	t.Helper()

	content, err := os.ReadFile(path)
	require.NoError(t, err, "failed to read test file %s", path)

	ast, err := hujson.Parse(content)
	require.NoError(t, err, "failed to parse HuJSON in %s", path)
	ast.Standardize()

	var tf routesTestFile

	err = json.Unmarshal(ast.Pack(), &tf)
	require.NoError(t, err, "failed to unmarshal test file %s", path)

	return tf
}

// buildRoutesUsersAndNodes constructs types.Users and types.Nodes from the
// JSON topology definition. This allows each test file to define its own
// topology (e.g., the IPv6 tests use different nodes than the standard tests).
func buildRoutesUsersAndNodes(
	t *testing.T,
	topo routesTopology,
) (types.Users, types.Nodes) {
	t.Helper()

	// Build users — if topology has users section, use it.
	// Otherwise fall back to the standard 3-user setup matching
	// the grant topology (used by Tailscale SaaS captures).
	var users types.Users
	if len(topo.Users) > 0 {
		users = make(types.Users, 0, len(topo.Users))
		for _, u := range topo.Users {
			users = append(users, types.User{
				Model: gorm.Model{ID: u.ID},
				Name:  u.Name,
			})
		}
	} else {
		users = types.Users{
			{Model: gorm.Model{ID: 1}, Name: "kratail2tid", Email: "kratail2tid@example.com"},
			{Model: gorm.Model{ID: 2}, Name: "kristoffer", Email: "kristoffer@example.com"},
			{Model: gorm.Model{ID: 3}, Name: "monitorpasskeykradalby", Email: "monitorpasskeykradalby@example.com"},
		}
	}

	// Build nodes
	nodes := make(types.Nodes, 0, len(topo.Nodes))

	for _, nodeDef := range topo.Nodes {
		node := &types.Node{
			ID:        types.NodeID(nodeDef.ID), //nolint:gosec
			GivenName: nodeDef.Hostname,
			IPv4:      ptrAddr(nodeDef.IPv4),
			IPv6:      ptrAddr(nodeDef.IPv6),
			Tags:      nodeDef.Tags,
		}

		// Set up Hostinfo with RoutableIPs
		hostinfo := &tailcfg.Hostinfo{}

		if len(nodeDef.RoutableIPs) > 0 {
			routableIPs := make(
				[]netip.Prefix,
				0,
				len(nodeDef.RoutableIPs),
			)
			for _, r := range nodeDef.RoutableIPs {
				routableIPs = append(
					routableIPs,
					netip.MustParsePrefix(r),
				)
			}

			hostinfo.RoutableIPs = routableIPs
		}

		node.Hostinfo = hostinfo

		// Set ApprovedRoutes
		if len(nodeDef.ApprovedRoutes) > 0 {
			approvedRoutes := make(
				[]netip.Prefix,
				0,
				len(nodeDef.ApprovedRoutes),
			)
			for _, r := range nodeDef.ApprovedRoutes {
				approvedRoutes = append(
					approvedRoutes,
					netip.MustParsePrefix(r),
				)
			}

			node.ApprovedRoutes = approvedRoutes
		} else {
			node.ApprovedRoutes = []netip.Prefix{}
		}

		// Assign user if specified
		if nodeDef.User != "" {
			for i := range users {
				if users[i].Name == nodeDef.User {
					node.User = &users[i]
					node.UserID = &users[i].ID

					break
				}
			}
		}

		nodes = append(nodes, node)
	}

	return users, nodes
}

// routesSkipReasons documents WHY tests are expected to fail.
var routesSkipReasons = map[string]string{}

// TestRoutesCompat is a data-driven test that loads all ROUTES-*.json test
// files and compares headscale's route-aware ACL engine output against the
// expected behavior.
func TestRoutesCompat(t *testing.T) {
	t.Parallel()

	files, err := filepath.Glob(
		filepath.Join("testdata", "routes_results", "ROUTES-*.hujson"),
	)
	require.NoError(t, err, "failed to glob test files")
	require.NotEmpty(
		t,
		files,
		"no ROUTES-*.hujson test files found in testdata/routes_results/",
	)

	t.Logf("Loaded %d routes test files", len(files))

	for _, file := range files {
		tf := loadRoutesTestFile(t, file)

		t.Run(tf.TestID, func(t *testing.T) {
			t.Parallel()

			if reason, ok := routesSkipReasons[tf.TestID]; ok {
				t.Skipf(
					"TODO: %s — see routesSkipReasons for details",
					reason,
				)

				return
			}

			// Build topology from JSON
			users, nodes := buildRoutesUsersAndNodes(t, tf.Topology)

			// Convert Tailscale SaaS user emails to headscale format
			policyJSON := convertPolicyUserEmails(tf.Input.FullPolicy)

			// Parse and validate policy
			pol, err := unmarshalPolicy(policyJSON)
			require.NoError(
				t,
				err,
				"%s: policy should parse successfully",
				tf.TestID,
			)

			err = pol.validate()
			require.NoError(
				t,
				err,
				"%s: policy should validate successfully",
				tf.TestID,
			)

			for nodeName, capture := range tf.Captures {
				t.Run(nodeName, func(t *testing.T) {
					captureIsNull := len(capture.PacketFilterRules) == 0 ||
						string(capture.PacketFilterRules) == "null" //nolint:goconst

					node := findNodeByGivenName(nodes, nodeName)
					if node == nil {
						t.Skipf(
							"node %s not found in topology",
							nodeName,
						)

						return
					}

					compiledRules, err := pol.compileFilterRulesForNode(
						users,
						node.View(),
						nodes.ViewSlice(),
					)
					require.NoError(
						t,
						err,
						"%s/%s: failed to compile filter rules",
						tf.TestID,
						nodeName,
					)

					gotRules := policyutil.ReduceFilterRules(
						node.View(),
						compiledRules,
					)

					var wantRules []tailcfg.FilterRule
					if !captureIsNull {
						err = json.Unmarshal(
							capture.PacketFilterRules,
							&wantRules,
						)
						require.NoError(
							t,
							err,
							"%s/%s: failed to unmarshal expected rules",
							tf.TestID,
							nodeName,
						)
					}

					opts := append(
						cmpOptions(),
						cmpopts.EquateEmpty(),
					)
					if diff := cmp.Diff(
						wantRules,
						gotRules,
						opts...,
					); diff != "" {
						t.Errorf(
							"%s/%s: filter rules mismatch (-want +got):\n%s",
							tf.TestID,
							nodeName,
							diff,
						)
					}
				})
			}
		})
	}
}
@@ -29,7 +29,6 @@ import (
 	"github.com/google/go-cmp/cmp/cmpopts"
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/stretchr/testify/require"
-	"github.com/tailscale/hujson"
 	"gorm.io/gorm"
 	"tailscale.com/tailcfg"
 )
@@ -192,14 +191,10 @@ func loadSSHTestFile(t *testing.T, path string) sshTestFile {
 	content, err := os.ReadFile(path)
 	require.NoError(t, err, "failed to read test file %s", path)
 
-	ast, err := hujson.Parse(content)
-	require.NoError(t, err, "failed to parse HuJSON in %s", path)
-	ast.Standardize()
-
 	var tf sshTestFile
 
-	err = json.Unmarshal(ast.Pack(), &tf)
-	require.NoError(t, err, "failed to unmarshal test file %s", path)
+	err = json.Unmarshal(content, &tf)
+	require.NoError(t, err, "failed to parse test file %s", path)
 
 	return tf
 }
@@ -209,11 +204,12 @@ func loadSSHTestFile(t *testing.T, path string) sshTestFile {
 //
 // 37 of 39 tests are expected to pass.
 var sshSkipReasons = map[string]string{
-	// user:*@passkey wildcard pattern not supported in headscale.
-	// headscale does not support passkey authentication and has no
-	// equivalent for this wildcard pattern.
-	"SSH-B5":  "user:*@passkey wildcard not supported in headscale",
-	"SSH-D10": "user:*@passkey wildcard not supported in headscale",
+	// user:*@domain source alias not yet implemented.
+	// These tests use "src": ["user:*@passkey"] which requires UserWildcard
+	// alias type support. Will be added in a follow-up PR that implements
+	// user:*@domain across all contexts (ACLs, grants, tagOwners, autoApprovers).
+	"SSH-B5":  "user:*@domain source alias not yet implemented",
+	"SSH-D10": "user:*@domain source alias not yet implemented",
 }
 
 // TestSSHDataCompat is a data-driven test that loads all SSH-*.json test files
@@ -231,13 +227,13 @@ func TestSSHDataCompat(t *testing.T) {
 	t.Parallel()
 
 	files, err := filepath.Glob(
-		filepath.Join("testdata", "ssh_results", "SSH-*.hujson"),
+		filepath.Join("testdata", "ssh_results", "SSH-*.json"),
 	)
 	require.NoError(t, err, "failed to glob test files")
 	require.NotEmpty(
 		t,
 		files,
-		"no SSH-*.hujson test files found in testdata/ssh_results/",
+		"no SSH-*.json test files found in testdata/ssh_results/",
 	)
 
 	t.Logf("Loaded %d SSH test files", len(files))
@@ -1,258 +0,0 @@
// ACL-A01
//
// ACL: accept: src=['autogroup:member'] dst=['*:*']
//
// Expected: Rules on 8 of 8 nodes
{
  "test_id": "ACL-A01",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["autogroup:member"],
          "dst": ["*:*"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "tagged-client": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "tagged-prod": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.103.90.82", "100.110.121.96", "100.90.199.68", "fd7a:115c:a1e0::1737:7960", "fd7a:115c:a1e0::2d01:c747", "fd7a:115c:a1e0::9e37:5a52"],
          "DstPorts": [{"IP": "*", "Ports": {"First": 0, "Last": 65535}}]
        }
      ]
    }
  }
}
@@ -1,290 +0,0 @@
// ACL-A02
//
// ACL: accept: src=['autogroup:tagged'] dst=['*:*']
//
// Expected: Rules on 8 of 8 nodes
{
  "test_id": "ACL-A02",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "autogroup:tagged"
          ],
          "dst": [
            "*:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-client": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-prod": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.108.74.26",
            "100.83.200.69",
            "100.85.66.106",
            "100.92.142.61",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,128 +0,0 @@
// ACL-A03
//
// ACL: accept: src=['autogroup:member', 'tag:client'] dst=['tag:server:22']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-A03",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "autogroup:member",
            "tag:client"
          ],
          "dst": [
            "tag:server:22"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "100.110.121.96",
            "100.83.200.69",
            "100.90.199.68",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": null
    }
  }
}
@@ -1,167 +0,0 @@
// ACL-A04
//
// ACL: accept: src=['*'] dst=['autogroup:self:*']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A04",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,98 +0,0 @@
// ACL-A05
//
// ACL: accept: src=['*'] dst=['autogroup:internet:*']
//
// Expected: No filter rules
{
  "test_id": "ACL-A05",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:internet:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": null
    }
  }
}
@@ -1,173 +0,0 @@
// ACL-A06
//
// ACL: accept: src=['*'] dst=['autogroup:member:*']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A06",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:member:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,193 +0,0 @@
// ACL-A07
//
// ACL: accept: src=['*'] dst=['autogroup:self:*', 'tag:server:22']
//
// Expected: Rules on tagged-server, user-kris, user-mon, user1
{
  "test_id": "ACL-A07",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:*",
            "tag:server:22"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,223 +0,0 @@
// ACL-A08
//
// ACL: accept: src=['*'] dst=['autogroup:tagged:*']
//
// Expected: Rules on exit-node, subnet-router, tagged-client, tagged-prod, tagged-server
{
  "test_id": "ACL-A08",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:tagged:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.85.66.106",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::7c37:426a",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.92.142.61",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::3e37:8e3d",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-client": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.83.200.69",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::c537:c845",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-prod": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.103.8.15",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::5b37:80f",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": null
    }
  }
}
@@ -1,167 +0,0 @@
// ACL-A09
//
// ACL: accept: src=['autogroup:member'] dst=['autogroup:self:*']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A09",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "autogroup:member"
          ],
          "dst": [
            "autogroup:self:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,121 +0,0 @@
// ACL-A10
//
// ACL: accept: src=['kratail2tid@passkey'] dst=['autogroup:self:*']
//
// Expected: Rules on user1
{
  "test_id": "ACL-A10",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "kratail2tid@passkey"
          ],
          "dst": [
            "autogroup:self:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,121 +0,0 @@
// ACL-A11
//
// ACL: accept: src=['group:admins'] dst=['autogroup:self:*']
//
// Expected: Rules on user1
{
  "test_id": "ACL-A11",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "group:admins"
          ],
          "dst": [
            "autogroup:self:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,167 +0,0 @@
// ACL-A12
//
// ACL: accept: src=['*'] dst=['autogroup:self:22']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A12",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:22"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,167 +0,0 @@
// ACL-A13
//
// ACL: accept: src=['*'] dst=['autogroup:self:80-443']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A13",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:80-443"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 80,
                "Last": 443
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,251 +0,0 @@
// ACL-A14
//
// ACL: accept: src=['*'] dst=['autogroup:self:22,80,443']
//
// Expected: Rules on user-kris, user-mon, user1
{
  "test_id": "ACL-A14",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:22,80,443"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": null
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 443,
                "Last": 443
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,339 +0,0 @@
// ACL-A15
//
// ACL: accept: src=['autogroup:member', 'autogroup:tagged'] dst=['*:*']
//
// Expected: Rules on 8 of 8 nodes
{
  "test_id": "ACL-A15",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "autogroup:member",
            "autogroup:tagged"
          ],
          "dst": [
            "*:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-client": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-prod": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,136 +0,0 @@
// ACL-A16
//
// ACL: accept: src=['autogroup:member', 'autogroup:tagged'] dst=['tag:server:22']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-A16",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "autogroup:member",
            "autogroup:tagged"
          ],
          "dst": [
            "tag:server:22"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "100.103.8.15",
            "100.103.90.82",
            "100.108.74.26",
            "100.110.121.96",
            "100.83.200.69",
            "100.85.66.106",
            "100.90.199.68",
            "100.92.142.61",
            "fd7a:115c:a1e0::1737:7960",
            "fd7a:115c:a1e0::2d01:c747",
            "fd7a:115c:a1e0::3e37:8e3d",
            "fd7a:115c:a1e0::5b37:80f",
            "fd7a:115c:a1e0::7c37:426a",
            "fd7a:115c:a1e0::9e37:5a52",
            "fd7a:115c:a1e0::b901:4a87",
            "fd7a:115c:a1e0::c537:c845"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": null
    }
  }
}
@@ -1,266 +0,0 @@
// ACL-A17
//
// ACL: accept: src=['*'] dst=['autogroup:self:*', 'tag:server:22', 'autogroup:member:80']
//
// Expected: Rules on tagged-server, user-kris, user-mon, user1
{
  "test_id": "ACL-A17",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "*"
          ],
          "dst": [
            "autogroup:self:*",
            "tag:server:22",
            "autogroup:member:80"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            }
          ]
        },
        {
          "SrcIPs": [
            "100.110.121.96",
            "fd7a:115c:a1e0::1737:7960"
          ],
          "DstPorts": [
            {
              "IP": "100.110.121.96",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::1737:7960",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            }
          ]
        },
        {
          "SrcIPs": [
            "100.103.90.82",
            "fd7a:115c:a1e0::9e37:5a52"
          ],
          "DstPorts": [
            {
              "IP": "100.103.90.82",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::9e37:5a52",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 80,
                "Last": 80
              }
            }
          ]
        },
        {
          "SrcIPs": [
            "100.90.199.68",
            "fd7a:115c:a1e0::2d01:c747"
          ],
          "DstPorts": [
            {
              "IP": "100.90.199.68",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            },
            {
              "IP": "fd7a:115c:a1e0::2d01:c747",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,227 +0,0 @@
// ACL-AH01
//
// ACL: accept: src=['internal', 'subnet24'] dst=['*:*']
//
// Expected: Rules on 8 of 8 nodes
{
  "test_id": "ACL-AH01",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "internal",
            "subnet24"
          ],
          "dst": [
            "*:*"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-client": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-prod": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user-mon": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    },
    "user1": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "192.168.1.0/24"
          ],
          "DstPorts": [
            {
              "IP": "*",
              "Ports": {
                "First": 0,
                "Last": 65535
              }
            }
          ]
        }
      ]
    }
  }
}
@@ -1,122 +0,0 @@
// ACL-AH02
//
// ACL: accept: src=['internal', '100.108.74.26'] dst=['tag:server:22']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-AH02",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": [
          "kratail2tid@passkey"
        ],
        "group:developers": [
          "kristoffer@dalby.cc",
          "kratail2tid@passkey"
        ],
        "group:monitors": [
          "monitorpasskeykradalby@passkey"
        ],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": [
          "kratail2tid@passkey"
        ],
        "tag:prod": [
          "kratail2tid@passkey"
        ],
        "tag:client": [
          "kratail2tid@passkey"
        ],
        "tag:router": [
          "kratail2tid@passkey"
        ],
        "tag:exit": [
          "kratail2tid@passkey"
        ]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": [
            "tag:router"
          ],
          "0.0.0.0/0": [
            "tag:exit"
          ],
          "::/0": [
            "tag:exit"
          ]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": [
            "internal",
            "100.108.74.26"
          ],
          "dst": [
            "tag:server:22"
          ]
        }
      ]
    }
  },
  "captures": {
    "exit-node": {
      "packet_filter_rules": null
    },
    "subnet-router": {
      "packet_filter_rules": null
    },
    "tagged-client": {
      "packet_filter_rules": null
    },
    "tagged-prod": {
      "packet_filter_rules": null
    },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.0.0.0/8",
            "100.108.74.26"
          ],
          "DstPorts": [
            {
              "IP": "100.108.74.26",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            },
            {
              "IP": "fd7a:115c:a1e0::b901:4a87",
              "Ports": {
                "First": 22,
                "Last": 22
              }
            }
          ]
        }
      ]
    },
    "user-kris": {
      "packet_filter_rules": null
    },
    "user-mon": {
      "packet_filter_rules": null
    },
    "user1": {
      "packet_filter_rules": null
    }
  }
}
@@ -1,143 +0,0 @@
// ACL-AH03
//
// ACL: accept: src=['*'] dst=['internal:22', 'subnet24:80', 'tag:server:443']
//
// Expected: Rules on subnet-router, tagged-server
{
  "test_id": "ACL-AH03",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["*"],
          "dst": ["internal:22", "subnet24:80", "tag:server:443"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            { "IP": "10.0.0.0/8", "Ports": { "First": 22, "Last": 22 } }
          ]
        }
      ]
    },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 443, "Last": 443 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 443, "Last": 443 } }
          ]
        }
      ]
    },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,121 +0,0 @@
// ACL-AH04
//
// ACL: accept: src=['internal', '10.0.0.0/8'] dst=['tag:server:22']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-AH04",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["internal", "10.0.0.0/8"],
          "dst": ["tag:server:22"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": { "packet_filter_rules": null },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["10.0.0.0/8"],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 22, "Last": 22 } }
          ]
        }
      ]
    },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,116 +0,0 @@
// ACL-AH05
//
// ACL: accept: src=['*'] dst=['internal:22']
//
// Expected: Rules on subnet-router
{
  "test_id": "ACL-AH05",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["*"],
          "dst": ["internal:22"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            { "IP": "10.0.0.0/8", "Ports": { "First": 22, "Last": 22 } }
          ]
        }
      ]
    },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": { "packet_filter_rules": null },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,116 +0,0 @@
// ACL-AH06
//
// ACL: accept: src=['*'] dst=['10.0.0.0/8:22']
//
// Expected: Rules on subnet-router
{
  "test_id": "ACL-AH06",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["*"],
          "dst": ["10.0.0.0/8:22"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": {
      "packet_filter_rules": [
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            { "IP": "10.0.0.0/8", "Ports": { "First": 22, "Last": 22 } }
          ]
        }
      ]
    },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": { "packet_filter_rules": null },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,160 +0,0 @@
// ACL-AR01
//
// ACLs:
//   accept: src=['tag:client'] dst=['tag:server:22']
//   accept: src=['tag:client'] dst=['tag:server:80,443']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-AR01",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:22"]
        },
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:80,443"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": { "packet_filter_rules": null },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.83.200.69", "fd7a:115c:a1e0::c537:c845"],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "100.108.74.26", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "100.108.74.26", "Ports": { "First": 443, "Last": 443 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 443, "Last": 443 } }
          ]
        }
      ]
    },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,198 +0,0 @@
// ACL-AR02
//
// ACLs:
//   accept: src=['tag:client'] dst=['tag:server:22']
//   accept: src=['tag:client'] dst=['tag:server:80,443']
//   accept: src=['*'] dst=['tag:server:53'] proto=udp
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-AR02",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:22"]
        },
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:80,443"]
        },
        {
          "action": "accept",
          "src": ["*"],
          "proto": "udp",
          "dst": ["tag:server:53"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": { "packet_filter_rules": null },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.83.200.69", "fd7a:115c:a1e0::c537:c845"],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "100.108.74.26", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "100.108.74.26", "Ports": { "First": 443, "Last": 443 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 443, "Last": 443 } }
          ]
        },
        {
          "SrcIPs": [
            "10.33.0.0/16",
            "100.115.94.0-100.127.255.255",
            "100.64.0.0-100.115.91.255",
            "fd7a:115c:a1e0::/48"
          ],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 53, "Last": 53 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 53, "Last": 53 } }
          ],
          "IPProto": [17]
        }
      ]
    },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
@@ -1,170 +0,0 @@
// ACL-AR03
//
// ACLs:
//   accept: src=['tag:client'] dst=['tag:server:22']
//   accept: src=['tag:client'] dst=['tag:server:80']
//   accept: src=['tag:client'] dst=['tag:server:443']
//
// Expected: Rules on tagged-server
{
  "test_id": "ACL-AR03",
  "input": {
    "full_policy": {
      "groups": {
        "group:admins": ["kratail2tid@passkey"],
        "group:developers": ["kristoffer@dalby.cc", "kratail2tid@passkey"],
        "group:monitors": ["monitorpasskeykradalby@passkey"],
        "group:empty": []
      },
      "tagOwners": {
        "tag:server": ["kratail2tid@passkey"],
        "tag:prod": ["kratail2tid@passkey"],
        "tag:client": ["kratail2tid@passkey"],
        "tag:router": ["kratail2tid@passkey"],
        "tag:exit": ["kratail2tid@passkey"]
      },
      "hosts": {
        "webserver": "100.108.74.26",
        "prodbox": "100.103.8.15",
        "internal": "10.0.0.0/8",
        "subnet24": "192.168.1.0/24"
      },
      "autoApprovers": {
        "routes": {
          "10.33.0.0/16": ["tag:router"],
          "0.0.0.0/0": ["tag:exit"],
          "::/0": ["tag:exit"]
        }
      },
      "acls": [
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:22"]
        },
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:80"]
        },
        {
          "action": "accept",
          "src": ["tag:client"],
          "dst": ["tag:server:443"]
        }
      ]
    }
  },
  "captures": {
    "exit-node": { "packet_filter_rules": null },
    "subnet-router": { "packet_filter_rules": null },
    "tagged-client": { "packet_filter_rules": null },
    "tagged-prod": { "packet_filter_rules": null },
    "tagged-server": {
      "packet_filter_rules": [
        {
          "SrcIPs": ["100.83.200.69", "fd7a:115c:a1e0::c537:c845"],
          "DstPorts": [
            { "IP": "100.108.74.26", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 22, "Last": 22 } },
            { "IP": "100.108.74.26", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 80, "Last": 80 } },
            { "IP": "100.108.74.26", "Ports": { "First": 443, "Last": 443 } },
            { "IP": "fd7a:115c:a1e0::b901:4a87", "Ports": { "First": 443, "Last": 443 } }
          ]
        }
      ]
    },
    "user-kris": { "packet_filter_rules": null },
    "user-mon": { "packet_filter_rules": null },
    "user1": { "packet_filter_rules": null }
  }
}
Some files were not shown because too many files have changed in this diff