Compare commits

1 commit (main...dependabot)

| Author | SHA1 | Date |
|---|---|---|
|  | ab70b4e37e |  |
```diff
@@ -1,16 +0,0 @@
-root = true
-
-[*]
-charset = utf-8
-end_of_line = lf
-indent_size = 2
-indent_style = space
-insert_final_newline = true
-trim_trailing_whitespace = true
-max_line_length = 120
-
-[*.go]
-indent_style = tab
-
-[Makefile]
-indent_style = tab
```
**.github/ISSUE_TEMPLATE/bug_report.yaml** (9 changed lines, vendored)

```diff
@@ -6,7 +6,8 @@ body:
   - type: checkboxes
     attributes:
       label: Is this a support request?
-      description: This issue tracker is for bugs and feature requests only. If you need
+      description:
+        This issue tracker is for bugs and feature requests only. If you need
         help, please use ask in our Discord community
       options:
         - label: This is not a support request
@@ -14,7 +15,8 @@ body:
   - type: checkboxes
     attributes:
       label: Is there an existing issue for this?
-      description: Please search to see if an issue already exists for the bug you
+      description:
+        Please search to see if an issue already exists for the bug you
         encountered.
       options:
         - label: I have searched the existing issues
@@ -50,15 +52,12 @@ body:
         If you are using a container, always provide the headscale version and not only the Docker image version.
         Please do not put "latest".
 
-        Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
-
         If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
 
         examples:
         - **OS**: Ubuntu 24.04
         - **Headscale version**: 0.24.3
         - **Tailscale version**: 1.80.0
-        - **Number of nodes**: 20
       value: |
         - OS:
         - Headscale version:
```
**.github/ISSUE_TEMPLATE/config.yml** (8 changed lines, vendored)

```diff
@@ -3,9 +3,9 @@ blank_issues_enabled: false
 
 # Contact links
 contact_links:
-  - name: "headscale Discord community"
-    url: "https://discord.gg/c84AZQhmpx"
-    about: "Please ask and answer questions about usage of headscale here."
   - name: "headscale usage documentation"
-    url: "https://headscale.net/"
+    url: "https://github.com/juanfont/headscale/blob/main/docs"
     about: "Find documentation about how to configure and run headscale."
+  - name: "headscale Discord community"
+    url: "https://discord.gg/xGj2TuqyxY"
+    about: "Please ask and answer questions about usage of headscale here."
```
**.github/ISSUE_TEMPLATE/feature_request.yaml** (9 changed lines, vendored)

```diff
@@ -16,13 +16,15 @@ body:
   - type: textarea
     attributes:
       label: Description
-      description: A clear and precise description of what new or changed feature you want.
+      description:
+        A clear and precise description of what new or changed feature you want.
     validations:
       required: true
   - type: checkboxes
     attributes:
       label: Contribution
-      description: Are you willing to contribute to the implementation of this feature?
+      description:
+        Are you willing to contribute to the implementation of this feature?
       options:
         - label: I can write the design doc for this feature
           required: false
@@ -31,6 +33,7 @@ body:
   - type: textarea
     attributes:
       label: How can it be implemented?
-      description: Free text for your ideas on how this feature could be implemented.
+      description:
+        Free text for your ideas on how this feature could be implemented.
     validations:
       required: false
```
**.github/label-response/needs-more-info.md** (80 changed lines, vendored)

````diff
@@ -1,80 +0,0 @@
-Thank you for taking the time to report this issue.
-
-To help us investigate and resolve this, we need more information. Please provide the following:
-
-> [!TIP]
-> Most issues turn out to be configuration errors rather than bugs. We encourage you to discuss your problem in our [Discord community](https://discord.gg/c84AZQhmpx) **before** opening an issue. The community can often help identify misconfigurations quickly, saving everyone time.
-
-## Required Information
-
-### Environment Details
-
-- **Headscale version**: (run `headscale version`)
-- **Tailscale client version**: (run `tailscale version`)
-- **Operating System**: (e.g., Ubuntu 24.04, macOS 14, Windows 11)
-- **Deployment method**: (binary, Docker, Kubernetes, etc.)
-- **Reverse proxy**: (if applicable: nginx, Traefik, Caddy, etc. - include configuration)
-
-### Debug Information
-
-Please follow our [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/) and provide:
-
-1. **Client netmap dump** (from affected Tailscale client):
-
-   ```bash
-   tailscale debug netmap > netmap.json
-   ```
-
-2. **Client status dump** (from affected Tailscale client):
-
-   ```bash
-   tailscale status --json > status.json
-   ```
-
-3. **Tailscale client logs** (if experiencing client issues):
-
-   ```bash
-   tailscale debug daemon-logs
-   ```
-
-> [!IMPORTANT]
-> We need logs from **multiple nodes** to understand the full picture:
->
-> - The node(s) initiating connections
-> - The node(s) being connected to
->
-> Without logs from both sides, we cannot diagnose connectivity issues.
-
-4. **Headscale server logs** with `log.level: trace` enabled
-
-5. **Headscale configuration** (with sensitive values redacted - see rules below)
-
-6. **ACL/Policy configuration** (if using ACLs)
-
-7. **Proxy/Docker configuration** (if applicable - nginx.conf, docker-compose.yml, Traefik config, etc.)
-
-## Formatting Requirements
-
-- **Attach long files** - Do not paste large logs or configurations inline. Use GitHub file attachments or GitHub Gists.
-- **Use proper Markdown** - Format code blocks, logs, and configurations with appropriate syntax highlighting.
-- **Structure your response** - Use the headings above to organize your information clearly.
-
-## Redaction Rules
-
-> [!CAUTION]
-> **Replace, do not remove.** Removing information makes debugging impossible.
-
-When redacting sensitive information:
-
-- ✅ **Replace consistently** - If you change `alice@company.com` to `user1@example.com`, use `user1@example.com` everywhere (logs, config, policy, etc.)
-- ✅ **Use meaningful placeholders** - `user1@example.com`, `bob@example.com`, `my-secret-key` are acceptable
-- ❌ **Never remove information** - Gaps in data prevent us from correlating events across logs
-- ❌ **Never redact IP addresses** - We need the actual IPs to trace network paths and identify issues
-
-**If redaction rules are not followed, we will be unable to debug the issue and will have to close it.**
-
----
-
-**Note:** This issue will be automatically closed in 3 days if no additional information is provided. Once you reply with the requested information, the `needs-more-info` label will be removed automatically.
-
-If you need help gathering this information, please visit our [Discord community](https://discord.gg/c84AZQhmpx).
````
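The "replace consistently" redaction rule in the removed template above can be sketched as a small helper that assigns each real identifier a stable placeholder, so the same value is rewritten the same way across logs, config, and policy. This is an illustrative sketch only, not code from the repository; the `redactor` type, the `user1@example.com` numbering scheme, and the sample strings are all hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// redactor remembers which placeholder was assigned to each real value,
// so a value is always replaced by the same placeholder (hypothetical helper).
type redactor struct {
	replacements map[string]string
}

func newRedactor() *redactor {
	return &redactor{replacements: map[string]string{}}
}

// placeholderFor assigns user1@example.com, user2@example.com, ... on first
// sight and returns the same placeholder on every later call.
func (r *redactor) placeholderFor(value string) string {
	if p, ok := r.replacements[value]; ok {
		return p
	}
	p := fmt.Sprintf("user%d@example.com", len(r.replacements)+1)
	r.replacements[value] = p
	return p
}

// redact rewrites every listed identifier in text. IP addresses are left
// untouched on purpose, matching the "never redact IP addresses" rule.
func (r *redactor) redact(text string, identifiers []string) string {
	for _, id := range identifiers {
		text = strings.ReplaceAll(text, id, r.placeholderFor(id))
	}
	return text
}

func main() {
	r := newRedactor()
	ids := []string{"alice@company.com"}
	// The same identifier gets the same placeholder in both inputs.
	fmt.Println(r.redact("node of alice@company.com at 100.64.0.1", ids))
	fmt.Println(r.redact("user: alice@company.com", ids))
}
```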
**.github/label-response/support-request.md** (15 changed lines, vendored)

```diff
@@ -1,15 +0,0 @@
-Thank you for reaching out.
-
-This issue tracker is used for **bug reports and feature requests** only. Your question appears to be a support or configuration question rather than a bug report.
-
-For help with setup, configuration, or general questions, please visit our [Discord community](https://discord.gg/c84AZQhmpx) where the community and maintainers can assist you in real-time.
-
-**Before posting in Discord, please check:**
-
-- [Documentation](https://headscale.net/)
-- [FAQ](https://headscale.net/stable/faq/)
-- [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/)
-
-If after troubleshooting you determine this is actually a bug, please open a new issue with the required debug information from the troubleshooting guide.
-
-This issue has been automatically closed.
```
**.github/workflows/build.yml** (27 changed lines, vendored)

```diff
@@ -5,6 +5,8 @@ on:
     branches:
       - main
   pull_request:
+    branches:
+      - main
 
 concurrency:
   group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -15,7 +17,7 @@ jobs:
     runs-on: ubuntu-latest
     permissions: write-all
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
       - name: Get changed files
@@ -29,12 +31,13 @@ jobs:
             - '**/*.go'
             - 'integration_test/'
             - 'config-example.yaml'
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
 
@@ -54,7 +57,7 @@ jobs:
           exit $BUILD_STATUS
 
       - name: Nix gosum diverging
-        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
+        uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
         if: failure() && steps.build.outcome == 'failure'
         with:
           github-token: ${{secrets.GITHUB_TOKEN}}
@@ -66,7 +69,7 @@ jobs:
             body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
           })
 
-      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+      - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
         if: steps.changed-files.outputs.files == 'true'
         with:
           name: headscale-linux
@@ -81,20 +84,20 @@ jobs:
           - "GOARCH=arm64 GOOS=darwin"
           - "GOARCH=amd64 GOOS=darwin"
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
 
       - name: Run go cross compile
-        env:
-          CGO_ENABLED: 0
-        run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
+        run:
+          env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
           ./cmd/headscale
-      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+      - uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
         with:
           name: "headscale-${{ matrix.env }}"
           path: "headscale"
```
**.github/workflows/check-generated.yml** (4 changed lines, vendored)

```diff
@@ -16,7 +16,7 @@ jobs:
   check-generated:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
       - name: Get changed files
@@ -31,7 +31,7 @@ jobs:
             - '**/*.proto'
             - 'buf.gen.yaml'
             - 'tools/**'
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
```
**.github/workflows/check-tests.yaml** (7 changed lines, vendored)

```diff
@@ -10,7 +10,7 @@ jobs:
   check-tests:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
       - name: Get changed files
@@ -24,12 +24,13 @@ jobs:
             - '**/*.go'
             - 'integration_test/'
             - 'config-example.yaml'
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
 
```
**.github/workflows/container-main.yml** (112 changed lines, vendored)

```diff
@@ -1,112 +0,0 @@
----
-name: Build (main)
-
-on:
-  push:
-    branches:
-      - main
-    paths:
-      - "*.nix"
-      - "go.*"
-      - "**/*.go"
-      - ".github/workflows/container-main.yml"
-  workflow_dispatch:
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.sha }}
-  cancel-in-progress: true
-
-jobs:
-  container:
-    if: github.repository == 'juanfont/headscale'
-    runs-on: ubuntu-latest
-    permissions:
-      packages: write
-      contents: read
-    steps:
-      - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-
-      - name: Login to DockerHub
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
-        with:
-          username: ${{ secrets.DOCKERHUB_USERNAME }}
-          password: ${{ secrets.DOCKERHUB_TOKEN }}
-
-      - name: Login to GHCR
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
-        with:
-          registry: ghcr.io
-          username: ${{ github.repository_owner }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
-      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
-        with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
-            '**/flake.lock') }}
-          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
-
-      - name: Set commit timestamp
-        run: echo "SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)" >> $GITHUB_ENV
-
-      - name: Build and push to GHCR
-        env:
-          KO_DOCKER_REPO: ghcr.io/juanfont/headscale
-          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
-          CGO_ENABLED: "0"
-        run: |
-          nix develop --command -- ko build \
-            --bare \
-            --platform=linux/amd64,linux/arm64 \
-            --tags=main-${GITHUB_SHA::7},development \
-            ./cmd/headscale
-
-      - name: Push to Docker Hub
-        env:
-          KO_DOCKER_REPO: headscale/headscale
-          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
-          CGO_ENABLED: "0"
-        run: |
-          nix develop --command -- ko build \
-            --bare \
-            --platform=linux/amd64,linux/arm64 \
-            --tags=main-${GITHUB_SHA::7},development \
-            ./cmd/headscale
-
-  binaries:
-    if: github.repository == 'juanfont/headscale'
-    runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        include:
-          - goos: linux
-            goarch: amd64
-          - goos: linux
-            goarch: arm64
-          - goos: darwin
-            goarch: amd64
-          - goos: darwin
-            goarch: arm64
-    steps:
-      - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
-      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
-        with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
-            '**/flake.lock') }}
-          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
-
-      - name: Build binary
-        env:
-          CGO_ENABLED: "0"
-          GOOS: ${{ matrix.goos }}
-          GOARCH: ${{ matrix.goarch }}
-        run: nix develop --command -- go build -o headscale ./cmd/headscale
-
-      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
-        with:
-          name: headscale-${{ matrix.goos }}-${{ matrix.goarch }}
-          path: headscale
```
**.github/workflows/docs-deploy.yml** (6 changed lines, vendored)

```diff
@@ -21,15 +21,15 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 0
       - name: Install python
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
         with:
           python-version: 3.x
       - name: Setup cache
-        uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
+        uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
         with:
           key: ${{ github.ref }}
           path: .cache
```
**.github/workflows/docs-test.yml** (6 changed lines, vendored)

```diff
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
       - name: Install python
-        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
+        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
         with:
           python-version: 3.x
       - name: Setup cache
-        uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
+        uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684 # v4.2.3
         with:
           key: ${{ github.ref }}
           path: .cache
```
```diff
@@ -10,55 +10,6 @@ import (
 	"strings"
 )
 
-// testsToSplit defines tests that should be split into multiple CI jobs.
-// Key is the test function name, value is a list of subtest prefixes.
-// Each prefix becomes a separate CI job as "TestName/prefix".
-//
-// Example: TestAutoApproveMultiNetwork has subtests like:
-//   - TestAutoApproveMultiNetwork/authkey-tag-advertiseduringup-false-pol-database
-//   - TestAutoApproveMultiNetwork/webauth-user-advertiseduringup-true-pol-file
-//
-// Splitting by approver type (tag, user, group) creates 6 CI jobs with 4 tests each:
-//   - TestAutoApproveMultiNetwork/authkey-tag.* (4 tests)
-//   - TestAutoApproveMultiNetwork/authkey-user.* (4 tests)
-//   - TestAutoApproveMultiNetwork/authkey-group.* (4 tests)
-//   - TestAutoApproveMultiNetwork/webauth-tag.* (4 tests)
-//   - TestAutoApproveMultiNetwork/webauth-user.* (4 tests)
-//   - TestAutoApproveMultiNetwork/webauth-group.* (4 tests)
-//
-// This reduces load per CI job (4 tests instead of 12) to avoid infrastructure
-// flakiness when running many sequential Docker-based integration tests.
-var testsToSplit = map[string][]string{
-	"TestAutoApproveMultiNetwork": {
-		"authkey-tag",
-		"authkey-user",
-		"authkey-group",
-		"webauth-tag",
-		"webauth-user",
-		"webauth-group",
-	},
-}
-
-// expandTests takes a list of test names and expands any that need splitting
-// into multiple subtest patterns.
-func expandTests(tests []string) []string {
-	var expanded []string
-	for _, test := range tests {
-		if prefixes, ok := testsToSplit[test]; ok {
-			// This test should be split into multiple jobs.
-			// We append ".*" to each prefix because the CI runner wraps patterns
-			// with ^...$ anchors. Without ".*", a pattern like "authkey$" wouldn't
-			// match "authkey-tag-advertiseduringup-false-pol-database".
-			for _, prefix := range prefixes {
-				expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
-			}
-		} else {
-			expanded = append(expanded, test)
-		}
-	}
-	return expanded
-}
-
 func findTests() []string {
 	rgBin, err := exec.LookPath("rg")
 	if err != nil {
@@ -66,7 +17,6 @@ func findTests() []string {
 	}
 
 	args := []string{
-		"--type", "go",
 		"--regexp", "func (Test.+)\\(.*",
 		"../../integration/",
 		"--replace", "$1",
@@ -116,11 +66,8 @@ func updateYAML(tests []string, jobName string, testPath string) {
 func main() {
 	tests := findTests()
 
-	// Expand tests that should be split into multiple jobs
-	expandedTests := expandTests(tests)
-
-	quotedTests := make([]string, len(expandedTests))
-	for i, test := range expandedTests {
+	quotedTests := make([]string, len(tests))
+	for i, test := range tests {
 		quotedTests[i] = fmt.Sprintf("\"%s\"", test)
 	}
 
```
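For reference, the splitting helper removed in the hunk above can be exercised as a standalone program. The snippet below copies the removed `testsToSplit` map and `expandTests` function verbatim; only the `main` function and the `TestExample` test name are added here for illustration.

```go
package main

import "fmt"

// testsToSplit maps a test function name to subtest prefixes; each prefix
// becomes a separate CI job pattern "TestName/prefix.*" (copied from the
// removed code above).
var testsToSplit = map[string][]string{
	"TestAutoApproveMultiNetwork": {
		"authkey-tag", "authkey-user", "authkey-group",
		"webauth-tag", "webauth-user", "webauth-group",
	},
}

// expandTests expands any test that needs splitting into multiple subtest
// patterns; ".*" keeps each pattern matching once the CI runner wraps it
// in ^...$ anchors.
func expandTests(tests []string) []string {
	var expanded []string
	for _, test := range tests {
		if prefixes, ok := testsToSplit[test]; ok {
			for _, prefix := range prefixes {
				expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
			}
		} else {
			expanded = append(expanded, test)
		}
	}
	return expanded
}

func main() {
	// One unsplit test plus the split one yields 1 + 6 job patterns.
	// "TestExample" is a made-up name standing in for any other test.
	for _, pattern := range expandTests([]string{"TestExample", "TestAutoApproveMultiNetwork"}) {
		fmt.Println(pattern)
	}
}
```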
**.github/workflows/gh-actions-updater.yaml** (4 changed lines, vendored)

```diff
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest
 
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           # [Required] Access token with `workflow` scope.
           token: ${{ secrets.WORKFLOW_SECRET }}
 
       - name: Run GitHub Actions Version Updater
-        uses: saadmk11/github-actions-version-updater@d8781caf11d11168579c8e5e94f62b068038f442 # v0.9.0
+        uses: saadmk11/github-actions-version-updater@64be81ba69383f81f2be476703ea6570c4c8686e # v0.8.1
         with:
           # [Required] Access token with `workflow` scope.
           token: ${{ secrets.WORKFLOW_SECRET }}
```
**.github/workflows/integration-test-template.yml** (125 changed lines, vendored)

```diff
@@ -16,7 +16,7 @@ on:
 
 jobs:
   test:
-    runs-on: ubuntu-24.04-arm
+    runs-on: ubuntu-latest
     env:
       # Github does not allow us to access secrets in pull requests,
       # so this env var is used to check if we have the secret or not.
@@ -28,12 +28,23 @@ jobs:
       # that triggered the build.
       HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
+      - name: Get changed files
+        id: changed-files
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
+        with:
+          filters: |
+            files:
+              - '*.nix'
+              - 'go.*'
+              - '**/*.go'
+              - 'integration_test/'
+              - 'config-example.yaml'
       - name: Tailscale
         if: ${{ env.HAS_TAILSCALE_SECRET }}
-        uses: tailscale/github-action@a392da0a182bba0e9613b6243ebd69529b1878aa # v4.1.0
+        uses: tailscale/github-action@6986d2c82a91fbac2949fe01f5bab95cf21b5102 # v3.2.2
         with:
           oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
           oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
@@ -41,90 +52,44 @@ jobs:
       - name: Setup SSH server for Actor
         if: ${{ env.HAS_TAILSCALE_SECRET }}
         uses: alexellis/setup-sshd-actor@master
-      - name: Download headscale image
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: headscale-image
-          path: /tmp/artifacts
-      - name: Download tailscale HEAD image
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: tailscale-head-image
-          path: /tmp/artifacts
-      - name: Download hi binary
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: hi-binary
-          path: /tmp/artifacts
-      - name: Download Go cache
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: go-cache
-          path: /tmp/artifacts
-      - name: Download postgres image
-        if: ${{ inputs.postgres_flag == '--postgres=1' }}
-        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
-        with:
-          name: postgres-image
-          path: /tmp/artifacts
-      - name: Pin Docker to v28 (avoid v29 breaking changes)
-        run: |
-          # Docker 29 breaks docker build via Go client libraries and
-          # docker load/save with certain tarball formats.
-          # Pin to Docker 28.x until our tooling is updated.
-          # https://github.com/actions/runner-images/issues/13474
-          sudo install -m 0755 -d /etc/apt/keyrings
-          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
-            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
+        if: steps.changed-files.outputs.files == 'true'
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+            '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
```
|
|
||||||
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
|
|
||||||
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
|
|
||||||
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
|
||||||
sudo apt-get update -qq
|
|
||||||
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
|
|
||||||
sudo apt-get install -y --allow-downgrades \
|
|
||||||
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
|
|
||||||
sudo systemctl restart docker
|
|
||||||
docker version
|
|
||||||
- name: Load Docker images, Go cache, and prepare binary
|
|
||||||
run: |
|
|
||||||
gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load
|
|
||||||
gunzip -c /tmp/artifacts/tailscale-head-image.tar.gz | docker load
|
|
||||||
if [ -f /tmp/artifacts/postgres-image.tar.gz ]; then
|
|
||||||
gunzip -c /tmp/artifacts/postgres-image.tar.gz | docker load
|
|
||||||
fi
|
|
||||||
chmod +x /tmp/artifacts/hi
|
|
||||||
docker images
|
|
||||||
# Extract Go cache to host directories for bind mounting
|
|
||||||
mkdir -p /tmp/go-cache
|
|
||||||
tar -xzf /tmp/artifacts/go-cache.tar.gz -C /tmp/go-cache
|
|
||||||
ls -la /tmp/go-cache/ /tmp/go-cache/.cache/
|
|
||||||
- name: Run Integration Test
|
- name: Run Integration Test
|
||||||
env:
|
uses: Wandalen/wretry.action@e68c23e6309f2871ca8ae4763e7629b9c258e1ea # v3.8.0
|
||||||
HEADSCALE_INTEGRATION_HEADSCALE_IMAGE: headscale:${{ github.sha }}
|
if: steps.changed-files.outputs.files == 'true'
|
||||||
HEADSCALE_INTEGRATION_TAILSCALE_IMAGE: tailscale-head:${{ github.sha }}
|
with:
|
||||||
HEADSCALE_INTEGRATION_POSTGRES_IMAGE: ${{ inputs.postgres_flag == '--postgres=1' && format('postgres:{0}', github.sha) || '' }}
|
# Our integration tests are started like a thundering herd, often
|
||||||
HEADSCALE_INTEGRATION_GO_CACHE: /tmp/go-cache/go
|
# hitting limits of the various external repositories we depend on
|
||||||
HEADSCALE_INTEGRATION_GO_BUILD_CACHE: /tmp/go-cache/.cache/go-build
|
# like docker hub. This will retry jobs every 5 min, 10 times,
|
||||||
run: /tmp/artifacts/hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
|
# hopefully letting us avoid manual intervention and restarting jobs.
|
||||||
|
# One could of course argue that we should invest in trying to avoid
|
||||||
|
# this, but currently it seems like a larger investment to be cleverer
|
||||||
|
# about this.
|
||||||
|
# Some of the jobs might still require manual restart as they are really
|
||||||
|
# slow and this will cause them to eventually be killed by Github actions.
|
||||||
|
attempt_delay: 300000 # 5 min
|
||||||
|
attempt_limit: 2
|
||||||
|
command: |
|
||||||
|
nix develop --command -- hi run --stats --ts-memory-limit=300 --hs-memory-limit=500 "^${{ inputs.test }}$" \
|
||||||
--timeout=120m \
|
--timeout=120m \
|
||||||
${{ inputs.postgres_flag }}
|
${{ inputs.postgres_flag }}
|
||||||
# Sanitize test name for artifact upload (replace invalid characters: " : < > | * ? \ / with -)
|
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
|
||||||
- name: Sanitize test name for artifacts
|
if: always() && steps.changed-files.outputs.files == 'true'
|
||||||
if: always()
|
|
||||||
id: sanitize
|
|
||||||
run: echo "name=${TEST_NAME//[\":<>|*?\\\/]/-}" >> $GITHUB_OUTPUT
|
|
||||||
env:
|
|
||||||
TEST_NAME: ${{ inputs.test }}
|
|
||||||
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
if: always()
|
|
||||||
with:
|
with:
|
||||||
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-logs
|
name: ${{ inputs.database_name }}-${{ inputs.test }}-logs
|
||||||
path: "control_logs/*/*.log"
|
path: "control_logs/*/*.log"
|
||||||
- uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
- uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
|
||||||
if: always()
|
if: always() && steps.changed-files.outputs.files == 'true'
|
||||||
with:
|
with:
|
||||||
name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-artifacts
|
name: ${{ inputs.database_name }}-${{ inputs.test }}-archives
|
||||||
path: control_logs/
|
path: "control_logs/*/*.tar"
|
||||||
- name: Setup a blocking tmux session
|
- name: Setup a blocking tmux session
|
||||||
if: ${{ env.HAS_TAILSCALE_SECRET }}
|
if: ${{ env.HAS_TAILSCALE_SECRET }}
|
||||||
uses: alexellis/block-with-tmux-action@master
|
uses: alexellis/block-with-tmux-action@master
|
||||||
|
|||||||
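The base side of this template removes a `Sanitize test name` step whose whole job is one bash parameter expansion that maps artifact-name-invalid characters to `-`. A standalone sketch of exactly that substitution (the test name below is a made-up example):

```shell
#!/usr/bin/env bash
# Replace every character that is invalid in an artifact name
# (" : < > | * ? \ /) with "-", using bash pattern substitution.
TEST_NAME='TestSubnet:Router/IPv6'   # hypothetical example value
sanitized="${TEST_NAME//[\":<>|*?\\\/]/-}"
echo "$sanitized"   # -> TestSubnet-Router-IPv6
```

In the workflow the result is written to `$GITHUB_OUTPUT` so later upload steps can reference it as `steps.sanitize.outputs.name`.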
.github/workflows/lint.yml (21 changed lines)

@@ -10,7 +10,7 @@ jobs:
   golangci-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
       - name: Get changed files
@@ -24,12 +24,13 @@ jobs:
             - '**/*.go'
             - 'integration_test/'
             - 'config-example.yaml'
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

@@ -45,7 +46,7 @@ jobs:
   prettier-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2
       - name: Get changed files
@@ -64,12 +65,13 @@ jobs:
             - '**/*.css'
            - '**/*.scss'
             - '**/*.html'
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

@@ -81,11 +83,12 @@ jobs:
   proto-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
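Every lint job above keys its Nix cache on `hashFiles('**/*.nix', '**/flake.lock')`, which yields one hash for the whole set of matched files. As a rough illustration only (GitHub's exact algorithm differs), a key with the same ingredients can be sketched in shell; the file names, contents, and `nix-Linux-X64` prefix below are placeholders:

```shell
#!/usr/bin/env bash
# Rough stand-in for hashFiles('**/*.nix', '**/flake.lock'):
# hash every matched file, then hash the sorted list of per-file hashes,
# so the key changes whenever any matched file changes.
# NOT GitHub's exact algorithm; placeholder files for illustration.
set -euo pipefail
cd "$(mktemp -d)"
printf 'example lock contents' > flake.lock
printf '{ }' > default.nix
key=$(find . -name '*.nix' -o -name 'flake.lock' \
  | sort | xargs sha256sum | sha256sum | awk '{print $1}')
echo "nix-Linux-X64-${key}"
```

The point of keying on these files is that the cache is reused until the Nix inputs actually change.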
.github/workflows/needs-more-info-comment.yml (deleted, 28 lines)

@@ -1,28 +0,0 @@
-name: Needs More Info - Post Comment
-
-on:
-  issues:
-    types: [labeled]
-
-jobs:
-  post-comment:
-    if: >-
-      github.event.label.name == 'needs-more-info' &&
-      github.repository == 'juanfont/headscale'
-    runs-on: ubuntu-latest
-    permissions:
-      issues: write
-      contents: read
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-        with:
-          sparse-checkout: .github/label-response/needs-more-info.md
-          sparse-checkout-cone-mode: false
-
-      - name: Post instruction comment
-        run: gh issue comment "$NUMBER" --body-file .github/label-response/needs-more-info.md
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GH_REPO: ${{ github.repository }}
-          NUMBER: ${{ github.event.issue.number }}
.github/workflows/needs-more-info-timer.yml (deleted, 98 lines)

@@ -1,98 +0,0 @@
-name: Needs More Info - Timer
-
-on:
-  schedule:
-    - cron: "0 0 * * *" # Daily at midnight UTC
-  issue_comment:
-    types: [created]
-  workflow_dispatch:
-
-jobs:
-  # When a non-bot user comments on a needs-more-info issue, remove the label.
-  remove-label-on-response:
-    if: >-
-      github.repository == 'juanfont/headscale' &&
-      github.event_name == 'issue_comment' &&
-      github.event.comment.user.type != 'Bot' &&
-      contains(github.event.issue.labels.*.name, 'needs-more-info')
-    runs-on: ubuntu-latest
-    permissions:
-      issues: write
-    steps:
-      - name: Remove needs-more-info label
-        run: gh issue edit "$NUMBER" --remove-label needs-more-info
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GH_REPO: ${{ github.repository }}
-          NUMBER: ${{ github.event.issue.number }}
-
-  # On schedule, close issues that have had no human response for 3 days.
-  close-stale:
-    if: >-
-      github.repository == 'juanfont/headscale' &&
-      github.event_name != 'issue_comment'
-    runs-on: ubuntu-latest
-    permissions:
-      issues: write
-    steps:
-      - uses: hustcer/setup-nu@920172d92eb04671776f3ba69d605d3b09351c30 # v3.22
-        with:
-          version: "*"
-
-      - name: Close stale needs-more-info issues
-        shell: nu {0}
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GH_REPO: ${{ github.repository }}
-        run: |
-          let issues = (gh issue list
-            --repo $env.GH_REPO
-            --label "needs-more-info"
-            --state open
-            --json number
-            | from json)
-
-          for issue in $issues {
-            let number = $issue.number
-            print $"Checking issue #($number)"
-
-            # Find when needs-more-info was last added
-            let events = (gh api $"repos/($env.GH_REPO)/issues/($number)/events"
-              --paginate | from json | flatten)
-            let label_event = ($events
-              | where event == "labeled" and label.name == "needs-more-info"
-              | last)
-            let label_added_at = ($label_event.created_at | into datetime)
-
-            # Check for non-bot comments after the label was added
-            let comments = (gh api $"repos/($env.GH_REPO)/issues/($number)/comments"
-              --paginate | from json | flatten)
-            let human_responses = ($comments
-              | where user.type != "Bot"
-              | where { ($in.created_at | into datetime) > $label_added_at })
-
-            if ($human_responses | length) > 0 {
-              print $"  Human responded, removing label"
-              gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
-              continue
-            }
-
-            # Check if 3 days have passed
-            let elapsed = (date now) - $label_added_at
-            if $elapsed < 3day {
-              print $"  Only ($elapsed | format duration day) elapsed, skipping"
-              continue
-            }
-
-            print $"  No response for ($elapsed | format duration day), closing"
-            let message = [
-              "This issue has been automatically closed because no additional information was provided within 3 days."
-              ""
-              "If you have the requested information, please open a new issue and include the debug information requested above."
-              ""
-              "Thank you for your understanding."
-            ] | str join "\n"
-            gh issue comment $number --repo $env.GH_REPO --body $message
-            gh issue close $number --repo $env.GH_REPO --reason "not planned"
-            gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
-          }
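The deleted timer above makes a three-way decision per issue: remove the label once a human replies after it was applied, close after three silent days, otherwise wait. A minimal shell sketch of just that decision, assuming epoch-second timestamps in place of the workflow's nushell datetimes (function name and values are illustrative):

```shell
#!/usr/bin/env bash
# stale_action LABEL_ADDED LAST_HUMAN_COMMENT NOW  (all epoch seconds;
# pass 0 for LAST_HUMAN_COMMENT when no human has commented).
stale_action() {
  local label_added=$1 last_human_comment=$2 now=$3 grace=$((3*24*3600))
  if [ "$last_human_comment" -gt "$label_added" ]; then
    echo "remove-label"                      # human replied after labeling
  elif [ $((now - label_added)) -ge "$grace" ]; then
    echo "close"                             # 3+ days of silence
  else
    echo "wait"                              # still inside the grace period
  fi
}
t0=1700000000
stale_action "$t0" 0 $((t0 + 4*24*3600))     # -> close
stale_action "$t0" $((t0 + 3600)) "$t0"      # -> remove-label
stale_action "$t0" 0 $((t0 + 24*3600))       # -> wait
```

The real workflow derives `label_added` from the issue's `labeled` events and the comment times from the GitHub API, then acts via `gh issue edit/comment/close`.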
.github/workflows/nix-module-test.yml (deleted, 55 lines)

@@ -1,55 +0,0 @@
-name: NixOS Module Tests
-
-on:
-  push:
-    branches:
-      - main
-  pull_request:
-    branches:
-      - main
-
-concurrency:
-  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
-  cancel-in-progress: true
-
-jobs:
-  nix-module-check:
-    runs-on: ubuntu-latest
-    permissions:
-      contents: read
-
-    steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-        with:
-          fetch-depth: 2
-
-      - name: Get changed files
-        id: changed-files
-        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
-        with:
-          filters: |
-            nix:
-              - 'nix/**'
-              - 'flake.nix'
-              - 'flake.lock'
-            go:
-              - 'go.*'
-              - '**/*.go'
-              - 'cmd/**'
-              - 'hscontrol/**'
-
-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
-        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
-
-      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
-        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
-        with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
-            '**/flake.lock') }}
-          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
-
-      - name: Run NixOS module tests
-        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
-        run: |
-          echo "Running NixOS module integration test..."
-          nix build .#checks.x86_64-linux.headscale -L
.github/workflows/release.yml (30 changed lines)

@@ -13,46 +13,28 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 0

-      - name: Pin Docker to v28 (avoid v29 breaking changes)
-        run: |
-          # Docker 29 breaks docker build via Go client libraries and
-          # docker load/save with certain tarball formats.
-          # Pin to Docker 28.x until our tooling is updated.
-          # https://github.com/actions/runner-images/issues/13474
-          sudo install -m 0755 -d /etc/apt/keyrings
-          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
-            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
-          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
-            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
-            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
-          sudo apt-get update -qq
-          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
-          sudo apt-get install -y --allow-downgrades \
-            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
-          sudo systemctl restart docker
-          docker version
-
       - name: Login to DockerHub
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}

       - name: Login to GHCR
-        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
           password: ${{ secrets.GITHUB_TOKEN }}

-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
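The `Pin Docker to v28` step that main adds (and that this branch drops) picks a pinnable package version by filtering `apt-cache madison` output. A sketch of that same filter against canned madison lines (the version strings and mirror paths below are invented for illustration; the real step pipes live `apt-cache madison docker-ce` into the identical pipeline):

```shell
#!/usr/bin/env bash
# Keep only 28.5.x candidates, take the first (newest), and print the
# third whitespace-separated field, which madison uses for the version.
set -euo pipefail
madison='docker-ce | 5:29.0.1-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:28.5.2-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:28.5.1-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages'
VERSION=$(printf '%s\n' "$madison" | grep '28\.5' | head -1 | awk '{print $3}')
echo "$VERSION"   # -> 5:28.5.2-1~ubuntu.24.04~noble
```

The extracted string is exactly what `apt-get install docker-ce=${VERSION}` expects, which is why the step prints field 3 rather than parsing the version out of it.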
.github/workflows/stale.yml (10 changed lines)

@@ -12,16 +12,18 @@ jobs:
       issues: write
       pull-requests: write
     steps:
-      - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
+      - uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
         with:
           days-before-issue-stale: 90
           days-before-issue-close: 7
           stale-issue-label: "stale"
-          stale-issue-message: "This issue is stale because it has been open for 90 days with no
+          stale-issue-message:
+            "This issue is stale because it has been open for 90 days with no
             activity."
-          close-issue-message: "This issue was closed because it has been inactive for 14 days
+          close-issue-message:
+            "This issue was closed because it has been inactive for 14 days
             since being marked as stale."
           days-before-pr-stale: -1
           days-before-pr-close: -1
-          exempt-issue-labels: "no-stale-bot,needs-more-info"
+          exempt-issue-labels: "no-stale-bot"
           repo-token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/support-request.yml (deleted, 30 lines)

@@ -1,30 +0,0 @@
-name: Support Request - Close Issue
-
-on:
-  issues:
-    types: [labeled]
-
-jobs:
-  close-support-request:
-    if: >-
-      github.event.label.name == 'support-request' &&
-      github.repository == 'juanfont/headscale'
-    runs-on: ubuntu-latest
-    permissions:
-      issues: write
-      contents: read
-    steps:
-      - name: Checkout repository
-        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-        with:
-          sparse-checkout: .github/label-response/support-request.md
-          sparse-checkout-cone-mode: false
-
-      - name: Post comment and close issue
-        run: |
-          gh issue comment "$NUMBER" --body-file .github/label-response/support-request.md
-          gh issue close "$NUMBER" --reason "not planned"
-        env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          GH_REPO: ${{ github.repository }}
-          NUMBER: ${{ github.event.issue.number }}
231
.github/workflows/test-integration.yaml
vendored
@@ -7,154 +7,7 @@ concurrency:
|
|||||||
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
|
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
|
||||||
cancel-in-progress: true
|
cancel-in-progress: true
|
||||||
jobs:
|
jobs:
|
||||||
# build: Builds binaries and Docker images once, uploads as artifacts for reuse.
|
|
||||||
# build-postgres: Pulls postgres image separately to avoid Docker Hub rate limits.
|
|
||||||
# sqlite: Runs all integration tests with SQLite backend.
|
|
||||||
# postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
|
|
||||||
build:
|
|
||||||
runs-on: ubuntu-24.04-arm
|
|
||||||
outputs:
|
|
||||||
files-changed: ${{ steps.changed-files.outputs.files }}
|
|
||||||
steps:
|
|
||||||
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
|
|
||||||
with:
|
|
||||||
fetch-depth: 2
|
|
||||||
- name: Get changed files
|
|
||||||
id: changed-files
|
|
||||||
uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
|
|
||||||
with:
|
|
||||||
filters: |
|
|
||||||
files:
|
|
||||||
- '*.nix'
|
|
||||||
- 'go.*'
|
|
||||||
- '**/*.go'
|
|
||||||
- 'integration/**'
|
|
||||||
- 'config-example.yaml'
|
|
||||||
- '.github/workflows/test-integration.yaml'
|
|
||||||
- '.github/workflows/integration-test-template.yml'
|
|
||||||
- 'Dockerfile.*'
|
|
||||||
- uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
- uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
with:
|
|
||||||
primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
|
|
||||||
restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
|
|
||||||
- name: Build binaries and warm Go cache
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
run: |
|
|
||||||
# Build all Go binaries in one nix shell to maximize cache reuse
|
|
||||||
nix develop --command -- bash -c '
|
|
||||||
go build -o hi ./cmd/hi
|
|
||||||
CGO_ENABLED=0 GOOS=linux go build -o headscale ./cmd/headscale
|
|
||||||
# Build integration test binary to warm the cache with all dependencies
|
|
||||||
go test -c ./integration -o /dev/null 2>/dev/null || true
|
|
||||||
'
|
|
||||||
- name: Upload hi binary
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
with:
|
|
||||||
name: hi-binary
|
|
||||||
path: hi
|
|
||||||
retention-days: 10
|
|
||||||
- name: Package Go cache
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
run: |
|
|
||||||
# Package Go module cache and build cache
|
|
||||||
tar -czf go-cache.tar.gz -C ~ go .cache/go-build
|
|
||||||
- name: Upload Go cache
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
with:
|
|
||||||
name: go-cache
|
|
||||||
path: go-cache.tar.gz
|
|
||||||
retention-days: 10
|
|
||||||
- name: Pin Docker to v28 (avoid v29 breaking changes)
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
run: |
|
|
||||||
# Docker 29 breaks docker build via Go client libraries and
|
|
||||||
# docker load/save with certain tarball formats.
|
|
||||||
# Pin to Docker 28.x until our tooling is updated.
|
|
||||||
# https://github.com/actions/runner-images/issues/13474
|
|
||||||
sudo install -m 0755 -d /etc/apt/keyrings
|
|
||||||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
|
|
||||||
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
|
||||||
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
|
|
||||||
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
|
|
||||||
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
|
||||||
sudo apt-get update -qq
|
|
||||||
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
|
|
||||||
sudo apt-get install -y --allow-downgrades \
|
|
||||||
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
|
|
||||||
sudo systemctl restart docker
|
|
||||||
docker version
|
|
||||||
- name: Build headscale image
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
run: |
|
|
||||||
docker build \
|
|
||||||
--file Dockerfile.integration-ci \
|
|
||||||
--tag headscale:${{ github.sha }} \
|
|
||||||
.
|
|
||||||
docker save headscale:${{ github.sha }} | gzip > headscale-image.tar.gz
|
|
||||||
- name: Build tailscale HEAD image
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
run: |
|
|
||||||
docker build \
|
|
||||||
--file Dockerfile.tailscale-HEAD \
|
|
||||||
--tag tailscale-head:${{ github.sha }} \
|
|
||||||
.
|
|
||||||
docker save tailscale-head:${{ github.sha }} | gzip > tailscale-head-image.tar.gz
|
|
||||||
- name: Upload headscale image
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
with:
|
|
||||||
name: headscale-image
|
|
||||||
path: headscale-image.tar.gz
|
|
||||||
retention-days: 10
|
|
||||||
- name: Upload tailscale HEAD image
|
|
||||||
if: steps.changed-files.outputs.files == 'true'
|
|
||||||
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
with:
|
|
||||||
name: tailscale-head-image
|
|
||||||
path: tailscale-head-image.tar.gz
|
|
||||||
retention-days: 10
|
|
||||||
build-postgres:
|
|
||||||
runs-on: ubuntu-24.04-arm
|
|
||||||
needs: build
|
|
||||||
if: needs.build.outputs.files-changed == 'true'
|
|
||||||
steps:
|
|
||||||
- name: Pin Docker to v28 (avoid v29 breaking changes)
|
|
||||||
run: |
|
|
||||||
# Docker 29 breaks docker build via Go client libraries and
|
|
||||||
# docker load/save with certain tarball formats.
|
|
||||||
# Pin to Docker 28.x until our tooling is updated.
|
|
||||||
# https://github.com/actions/runner-images/issues/13474
|
|
||||||
sudo install -m 0755 -d /etc/apt/keyrings
|
|
||||||
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
|
|
||||||
| sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
|
|
||||||
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
|
|
||||||
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
|
|
||||||
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
|
|
||||||
sudo apt-get update -qq
|
|
||||||
VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
|
|
||||||
sudo apt-get install -y --allow-downgrades \
|
|
||||||
"docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
|
|
||||||
sudo systemctl restart docker
|
|
||||||
docker version
|
|
||||||
- name: Pull and save postgres image
|
|
||||||
run: |
|
|
||||||
docker pull postgres:latest
|
|
||||||
docker tag postgres:latest postgres:${{ github.sha }}
|
|
||||||
docker save postgres:${{ github.sha }} | gzip > postgres-image.tar.gz
|
|
||||||
- name: Upload postgres image
|
|
||||||
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
|
|
||||||
with:
|
|
||||||
name: postgres-image
|
|
||||||
path: postgres-image.tar.gz
|
|
||||||
retention-days: 10
|
|
||||||
sqlite:
|
sqlite:
|
||||||
needs: build
|
|
||||||
if: needs.build.outputs.files-changed == 'true'
|
|
||||||
strategy:
|
strategy:
|
||||||
fail-fast: false
|
fail-fast: false
|
||||||
matrix:
|
matrix:
|
||||||
@@ -170,48 +23,28 @@ jobs:
           - TestPolicyUpdateWhileRunningWithCLIInDatabase
           - TestACLAutogroupMember
           - TestACLAutogroupTagged
-          - TestACLAutogroupSelf
-          - TestACLPolicyPropagationOverTime
-          - TestACLTagPropagation
-          - TestACLTagPropagationPortSpecific
-          - TestACLGroupWithUnknownUser
-          - TestACLGroupAfterUserDeletion
-          - TestACLGroupDeletionExactReproduction
-          - TestACLDynamicUnknownUserAddition
-          - TestACLDynamicUnknownUserRemoval
-          - TestAPIAuthenticationBypass
-          - TestAPIAuthenticationBypassCurl
-          - TestGRPCAuthenticationBypass
-          - TestCLIWithConfigAuthenticationBypass
           - TestAuthKeyLogoutAndReloginSameUser
           - TestAuthKeyLogoutAndReloginNewUser
           - TestAuthKeyLogoutAndReloginSameUserExpiredKey
-          - TestAuthKeyDeleteKey
-          - TestAuthKeyLogoutAndReloginRoutesPreserved
           - TestOIDCAuthenticationPingAll
           - TestOIDCExpireNodesBasedOnTokenExpiry
           - TestOIDC024UserCreation
           - TestOIDCAuthenticationWithPKCE
           - TestOIDCReloginSameNodeNewUser
-          - TestOIDCFollowUpUrl
-          - TestOIDCMultipleOpenedLoginUrls
-          - TestOIDCReloginSameNodeSameUser
-          - TestOIDCExpiryAfterRestart
-          - TestOIDCACLPolicyOnJoin
-          - TestOIDCReloginSameUserRoutesPreserved
           - TestAuthWebFlowAuthenticationPingAll
-          - TestAuthWebFlowLogoutAndReloginSameUser
+          - TestAuthWebFlowLogoutAndRelogin
-          - TestAuthWebFlowLogoutAndReloginNewUser
           - TestUserCommand
           - TestPreAuthKeyCommand
           - TestPreAuthKeyCommandWithoutExpiry
           - TestPreAuthKeyCommandReusableEphemeral
           - TestPreAuthKeyCorrectUserLoggedInCommand
-          - TestTaggedNodesCLIOutput
           - TestApiKeyCommand
+          - TestNodeTagCommand
+          - TestNodeAdvertiseTagCommand
           - TestNodeCommand
           - TestNodeExpireCommand
           - TestNodeRenameCommand
+          - TestNodeMoveCommand
           - TestPolicyCommand
           - TestPolicyBrokenConfigCommand
           - TestDERPVerifyEndpoint
@@ -228,27 +61,17 @@ jobs:
           - TestTaildrop
           - TestUpdateHostnameFromClient
           - TestExpireNode
-          - TestSetNodeExpiryInFuture
-          - TestDisableNodeExpiry
           - TestNodeOnlineStatus
           - TestPingAllByIPManyUpDown
           - Test2118DeletingOnlineNodePanics
-          - TestGrantCapRelay
-          - TestGrantCapDrive
           - TestEnablingRoutes
           - TestHASubnetRouterFailover
           - TestSubnetRouteACL
           - TestEnablingExitRoutes
           - TestSubnetRouterMultiNetwork
           - TestSubnetRouterMultiNetworkExitNode
-          - TestAutoApproveMultiNetwork/authkey-tag.*
+          - TestAutoApproveMultiNetwork
-          - TestAutoApproveMultiNetwork/authkey-user.*
-          - TestAutoApproveMultiNetwork/authkey-group.*
-          - TestAutoApproveMultiNetwork/webauth-tag.*
-          - TestAutoApproveMultiNetwork/webauth-user.*
-          - TestAutoApproveMultiNetwork/webauth-group.*
           - TestSubnetRouteACLFiltering
-          - TestGrantViaSubnetSteering
           - TestHeadscale
           - TestTailscaleNodesJoiningHeadcale
           - TestSSHOneUserToAll
@@ -256,55 +79,12 @@ jobs:
           - TestSSHNoSSHConfigured
           - TestSSHIsBlockedInACL
           - TestSSHUserOnlyIsolation
-          - TestSSHAutogroupSelf
-          - TestSSHOneUserToOneCheckModeCLI
-          - TestSSHOneUserToOneCheckModeOIDC
-          - TestSSHCheckModeUnapprovedTimeout
-          - TestSSHCheckModeCheckPeriodCLI
-          - TestSSHCheckModeAutoApprove
-          - TestSSHCheckModeNegativeCLI
-          - TestSSHLocalpart
-          - TestTagsAuthKeyWithTagRequestDifferentTag
-          - TestTagsAuthKeyWithTagNoAdvertiseFlag
-          - TestTagsAuthKeyWithTagCannotAddViaCLI
-          - TestTagsAuthKeyWithTagCannotChangeViaCLI
-          - TestTagsAuthKeyWithTagAdminOverrideReauthPreserves
-          - TestTagsAuthKeyWithTagCLICannotModifyAdminTags
-          - TestTagsAuthKeyWithoutTagCannotRequestTags
-          - TestTagsAuthKeyWithoutTagRegisterNoTags
-          - TestTagsAuthKeyWithoutTagCannotAddViaCLI
-          - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset
-          - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise
-          - TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag
-          - TestTagsUserLoginOwnedTagAtRegistration
-          - TestTagsUserLoginNonExistentTagAtRegistration
-          - TestTagsUserLoginUnownedTagAtRegistration
-          - TestTagsUserLoginAddTagViaCLIReauth
-          - TestTagsUserLoginRemoveTagViaCLIReauth
-          - TestTagsUserLoginCLINoOpAfterAdminAssignment
-          - TestTagsUserLoginCLICannotRemoveAdminTags
-          - TestTagsAuthKeyWithTagRequestNonExistentTag
-          - TestTagsAuthKeyWithTagRequestUnownedTag
-          - TestTagsAuthKeyWithoutTagRequestNonExistentTag
-          - TestTagsAuthKeyWithoutTagRequestUnownedTag
-          - TestTagsAdminAPICannotSetNonExistentTag
-          - TestTagsAdminAPICanSetUnownedTag
-          - TestTagsAdminAPICannotRemoveAllTags
-          - TestTagsIssue2978ReproTagReplacement
-          - TestTagsAdminAPICannotSetInvalidFormat
-          - TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags
-          - TestTagsAuthKeyWithoutUserInheritsTags
-          - TestTagsAuthKeyWithoutUserRejectsAdvertisedTags
-          - TestTagsAuthKeyConvertToUserViaCLIRegister
     uses: ./.github/workflows/integration-test-template.yml
-    secrets: inherit
     with:
       test: ${{ matrix.test }}
       postgres_flag: "--postgres=0"
       database_name: "sqlite"
   postgres:
-    needs: [build, build-postgres]
-    if: needs.build.outputs.files-changed == 'true'
     strategy:
       fail-fast: false
       matrix:
@@ -315,7 +95,6 @@ jobs:
           - TestPingAllByIPManyUpDown
           - TestSubnetRouterMultiNetwork
     uses: ./.github/workflows/integration-test-template.yml
-    secrets: inherit
     with:
       test: ${{ matrix.test }}
       postgres_flag: "--postgres=1"
7 .github/workflows/test.yml vendored
@@ -11,7 +11,7 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
           fetch-depth: 2

@@ -27,12 +27,13 @@ jobs:
             - 'integration_test/'
             - 'config-example.yaml'

-      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nixbuild/nix-quick-install-action@889f3180bb5f064ee9e3201428d04ae9e41d54ad # v31
         if: steps.changed-files.outputs.files == 'true'
       - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
         with:
-          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+          primary-key:
+            nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
             '**/flake.lock') }}
           restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

2 .gitignore vendored
@@ -2,7 +2,6 @@ ignored/
 tailscale/
 .vscode/
 .claude/
-logs/

 *.prof

@@ -29,7 +28,6 @@ config*.yaml
 !config-example.yaml
 derp.yaml
 *.hujson
-!hscontrol/policy/v2/testdata/*/*.hujson
 *.key
 /db.sqlite
 *.sqlite3
@@ -7,7 +7,6 @@ linters:
     - depguard
     - dupl
     - exhaustruct
-    - funcorder
     - funlen
     - gochecknoglobals
     - gochecknoinits
@@ -18,7 +17,6 @@ linters:
     - lll
     - maintidx
     - makezero
-    - mnd
     - musttag
     - nestif
     - nolintlint
@@ -30,32 +28,6 @@ linters:
     - wrapcheck
     - wsl
   settings:
-    forbidigo:
-      forbid:
-        # Forbid time.Sleep everywhere with context-appropriate alternatives
-        - pattern: 'time\.Sleep'
-          msg: >-
-            time.Sleep is forbidden.
-            In tests: use assert.EventuallyWithT for polling/waiting patterns.
-            In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives.
-        # Forbid inline string literals in zerolog field methods - use zf.* constants
-        - pattern: '\.(Str|Int|Int8|Int16|Int32|Int64|Uint|Uint8|Uint16|Uint32|Uint64|Float32|Float64|Bool|Dur|Time|TimeDiff|Strs|Ints|Uints|Floats|Bools|Any|Interface)\("[^"]+"'
-          msg: >-
-            Use zf.* constants for zerolog field names instead of string literals.
-            Import "github.com/juanfont/headscale/hscontrol/util/zlog/zf" and use
-            constants like zf.NodeID, zf.UserName, etc. Add new constants to
-            hscontrol/util/zlog/zf/fields.go if needed.
-        # Forbid ptr.To - use Go 1.26 new(expr) instead
-        - pattern: 'ptr\.To\('
-          msg: >-
-            ptr.To is forbidden. Use Go 1.26's new(expr) syntax instead.
-            Example: ptr.To(value) → new(value)
-        # Forbid tsaddr.SortPrefixes - use slices.SortFunc with netip.Prefix.Compare
-        - pattern: 'tsaddr\.SortPrefixes'
-          msg: >-
-            tsaddr.SortPrefixes is forbidden. Use Go 1.26's netip.Prefix.Compare instead.
-            Example: slices.SortFunc(prefixes, netip.Prefix.Compare)
-      analyze-types: true
     gocritic:
       disabled-checks:
         - appendAssign
@@ -2,16 +2,12 @@
 version: 2
 before:
   hooks:
-    - go mod tidy -compat=1.26
+    - go mod tidy -compat=1.24
     - go mod vendor

 release:
   prerelease: auto
   draft: true
-  header: |
-    ## Upgrade
-
-    Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
-
 builds:
   - id: headscale
@@ -27,6 +23,12 @@ builds:
       - linux_arm64
     flags:
       - -mod=readonly
+    ldflags:
+      - -s -w
+      - -X github.com/juanfont/headscale/hscontrol/types.Version={{ .Version }}
+      - -X github.com/juanfont/headscale/hscontrol/types.GitCommitHash={{ .Commit }}
+    tags:
+      - ts2019

 archives:
   - id: golang-cross
@@ -42,9 +44,10 @@ source:
     - "vendor/"

 nfpms:
-  # Configure nFPM for .deb releases
+  # Configure nFPM for .deb and .rpm releases
   #
-  # See https://goreleaser.com/customization/package/nfpm/
+  # See https://nfpm.goreleaser.com/configuration/
+  # and https://goreleaser.com/customization/nfpm/
   #
   # Useful tools for debugging .debs:
   # List file contents: dpkg -c dist/headscale...deb
@@ -78,8 +81,6 @@ nfpms:
       dst: /usr/lib/systemd/system/headscale.service
     - dst: /var/lib/headscale
       type: dir
-    - src: ./config-example.yaml
-      dst: /usr/share/doc/headscale/examples/config-example.yaml
     - src: LICENSE
       dst: /usr/share/doc/headscale/copyright
   scripts:
@@ -101,7 +102,7 @@ kos:
     # bare tells KO to only use the repository
     # for tagging and naming the container.
     bare: true
-    base_image: gcr.io/distroless/base-debian13
+    base_image: gcr.io/distroless/base-debian12
     build: headscale
     main: ./cmd/headscale
     env:
@@ -121,8 +122,6 @@ kos:
       - "{{ .Tag }}"
      - '{{ trimprefix .Tag "v" }}'
       - "sha-{{ .ShortCommit }}"
-    creation_time: "{{.CommitTimestamp}}"
-    ko_data_creation_time: "{{.CommitTimestamp}}"

   - id: ghcr-debug
     repositories:
@@ -130,7 +129,7 @@ kos:
       - headscale/headscale

     bare: true
-    base_image: gcr.io/distroless/base-debian13:debug
+    base_image: gcr.io/distroless/base-debian12:debug
     build: headscale
     main: ./cmd/headscale
     env:
34 .mcp.json
@@ -1,34 +0,0 @@
-{
-  "mcpServers": {
-    "claude-code-mcp": {
-      "type": "stdio",
-      "command": "npx",
-      "args": ["-y", "@steipete/claude-code-mcp@latest"],
-      "env": {}
-    },
-    "sequential-thinking": {
-      "type": "stdio",
-      "command": "npx",
-      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
-      "env": {}
-    },
-    "nixos": {
-      "type": "stdio",
-      "command": "uvx",
-      "args": ["mcp-nixos"],
-      "env": {}
-    },
-    "context7": {
-      "type": "stdio",
-      "command": "npx",
-      "args": ["-y", "@upstash/context7-mcp"],
-      "env": {}
-    },
-    "git": {
-      "type": "stdio",
-      "command": "npx",
-      "args": ["-y", "@cyanheads/git-mcp-server"],
-      "env": {}
-    }
-  }
-}
@@ -1,2 +0,0 @@
-[plugin.mkdocs]
-align_semantic_breaks_in_lists = true
@@ -1,62 +0,0 @@
-# prek/pre-commit configuration for headscale
-# See: https://prek.j178.dev/quickstart/
-# See: https://prek.j178.dev/builtin/
-
-# Global exclusions - ignore generated code
-exclude: ^gen/
-
-repos:
-  # Built-in hooks from pre-commit/pre-commit-hooks
-  # prek will use fast-path optimized versions automatically
-  # See: https://prek.j178.dev/builtin/
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v6.0.0
-    hooks:
-      - id: check-added-large-files
-      - id: check-case-conflict
-      - id: check-executables-have-shebangs
-      - id: check-json
-      - id: check-merge-conflict
-      - id: check-symlinks
-      - id: check-toml
-      - id: check-xml
-      - id: check-yaml
-      - id: detect-private-key
-      - id: end-of-file-fixer
-      - id: fix-byte-order-marker
-      - id: mixed-line-ending
-      - id: trailing-whitespace
-
-  # Local hooks for project-specific tooling
-  - repo: local
-    hooks:
-      # nixpkgs-fmt for Nix files
-      - id: nixpkgs-fmt
-        name: nixpkgs-fmt
-        entry: nixpkgs-fmt
-        language: system
-        files: \.nix$
-
-      # Prettier for formatting
-      - id: prettier
-        name: prettier
-        entry: prettier --write --list-different
-        language: system
-        exclude: ^docs/
-        types_or: [javascript, jsx, ts, tsx, yaml, json, toml, html, css, scss, sass, markdown]
-
-      # mdformat for docs
-      - id: mdformat
-        name: mdformat
-        entry: mdformat
-        language: system
-        types_or: [markdown]
-        files: ^docs/
-
-      # golangci-lint for Go code quality
-      - id: golangci-lint
-        name: golangci-lint
-        entry: nix develop --command -- golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
-        language: system
-        types: [go]
-        pass_filenames: false
@@ -1,2 +1,5 @@
 .github/workflows/test-integration-v2*
-docs/
+docs/about/features.md
+docs/ref/configuration.md
+docs/ref/oidc.md
+docs/ref/remote-cli.md
291
AGENTS.md
@@ -1,291 +0,0 @@
|
|||||||
# AGENTS.md
|
|
||||||
|
|
||||||
Behavioural guidance for AI agents working in this repository. Reference
|
|
||||||
material for complex procedures lives next to the code — integration
|
|
||||||
testing is documented in [`cmd/hi/README.md`](cmd/hi/README.md) and
|
|
||||||
[`integration/README.md`](integration/README.md). Read those files
|
|
||||||
before running tests or writing new ones.
|
|
||||||
|
|
||||||
Headscale is an open-source implementation of the Tailscale control server
|
|
||||||
written in Go. It manages node registration, IP allocation, policy
|
|
||||||
enforcement, and DERP routing for self-hosted tailnets.
|
|
||||||
|
|
||||||
## Interaction Rules
|
|
||||||
|
|
||||||
These rules govern how you work in this repo. They are listed first
|
|
||||||
because they shape every other decision.
|
|
||||||
|
|
||||||
### Ask with comprehensive multiple-choice options
|
|
||||||
|
|
||||||
When you need to clarify intent, scope, or approach, use the
|
|
||||||
`AskUserQuestion` tool (or a numbered list fallback) and present the user
|
|
||||||
with a comprehensive set of options. Cover the likely branches explicitly
|
|
||||||
and include an "other — please describe" escape.
|
|
||||||
|
|
||||||
- Bad: _"How should I handle expired nodes?"_
|
|
||||||
- Good: _"How should expired nodes be handled? (a) Remain visible to peers
|
|
||||||
but marked expired (current behaviour); (b) Hidden from peers entirely;
|
|
||||||
(c) Hidden from peers but visible in admin API; (d) Other."_
|
|
||||||
|
|
||||||
This matters more than you think — open-ended questions waste a round
|
|
||||||
trip and often produce a misaligned answer.
|
|
||||||
|
|
||||||
### Read the documented procedure before running complex commands
|
|
||||||
|
|
||||||
Before invoking any `hi` command, integration test, generator, or
|
|
||||||
migration tool, read the referenced README in full —
|
|
||||||
`cmd/hi/README.md` for running tests, `integration/README.md` for
|
|
||||||
writing them. Never guess flags. If the procedure is not documented
|
|
||||||
anywhere, ask the user rather than inventing one.
|
|
||||||
|
|
||||||
### Map once, then act
|
|
||||||
|
|
||||||
Use `Glob` / `Grep` to understand file structure, then execute. Do not
|
|
||||||
re-explore the same area to "double-check" once you have a plan. Do not
|
|
||||||
re-read files you edited in this session — the harness tracks state for
|
|
||||||
you.
|
|
||||||
|
|
||||||
### Fail fast, report up
|
|
||||||
|
|
||||||
If a command fails twice with the same error, stop and report the exact
|
|
||||||
error to the user with context. Do not loop through variants or
|
|
||||||
"try one more thing". A repeated failure means your model of the problem
|
|
||||||
is wrong.
|
|
||||||
|
|
||||||
### Confirm scope for multi-file changes
|
|
||||||
|
|
||||||
Before touching more than three files, show the user which files will
|
|
||||||
change and why. Use plan mode (`ExitPlanMode`) for non-trivial work.
|
|
||||||
|
|
||||||
### Prefer editing existing files
|
|
||||||
|
|
||||||
Do not create new files unless strictly necessary. Do not generate helper
|
|
||||||
abstractions, wrapper utilities, or "just in case" configuration. Three
|
|
||||||
similar lines of code is better than a premature abstraction.
|
|
||||||
|
|
||||||
## Quick Start
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# Enter the nix dev shell (Go 1.26.1, buf, golangci-lint, prek)
|
|
||||||
nix develop
|
|
||||||
|
|
||||||
# Full development workflow: fmt + lint + test + build
|
|
||||||
make dev
|
|
||||||
|
|
||||||
# Individual targets
|
|
||||||
make build # build the headscale binary
|
|
||||||
make test # go test ./...
|
|
||||||
make fmt # format Go, docs, proto
|
|
||||||
make lint # lint Go, proto
|
|
||||||
make generate # regenerate protobuf code (after changes to proto/)
|
|
||||||
make clean # remove build artefacts
|
|
||||||
|
|
||||||
# Direct go test invocations
|
|
||||||
go test ./...
|
|
||||||
go test -race ./...
|
|
||||||
|
|
||||||
# Integration tests — read cmd/hi/README.md first
|
|
||||||
go run ./cmd/hi doctor
|
|
||||||
go run ./cmd/hi run "TestName"
|
|
||||||
```
|
|
||||||
|
|
||||||
Go 1.26.1 minimum (per `go.mod:3`). `nix develop` pins the exact toolchain
|
|
||||||
used in CI.
|
|
||||||
|
|
||||||
## Pre-Commit with prek
|
|
||||||
|
|
||||||
`prek` installs git hooks that run the same checks as CI.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
nix develop
|
|
||||||
prek install # one-time setup
|
|
||||||
prek run # run hooks on staged files
|
|
||||||
prek run --all-files # run hooks on the full tree
|
|
||||||
```
|
|
||||||
|
|
||||||
Hooks cover: file hygiene (trailing whitespace, line endings, BOM),
|
|
||||||
syntax validation (JSON/YAML/TOML/XML), merge-conflict markers, private
|
|
||||||
key detection, nixpkgs-fmt, prettier, and `golangci-lint` via
|
|
||||||
`--new-from-rev=HEAD~1` (see `.pre-commit-config.yaml:59`). A manual
|
|
||||||
invocation with an `upstream/main` remote is equivalent:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix
|
|
||||||
```
|
|
||||||
|
|
||||||
`git commit --no-verify` is acceptable only for WIP commits on feature
|
|
||||||
branches — never on `main`.
|
|
||||||
|
|
||||||
## Project Layout
|
|
||||||
|
|
||||||
```
|
|
||||||
headscale/
|
|
||||||
├── cmd/
|
|
||||||
│ ├── headscale/ # Main headscale server binary
|
|
||||||
│ └── hi/ # Integration test runner (see cmd/hi/README.md)
|
|
||||||
├── hscontrol/ # Core control plane
|
|
||||||
├── integration/ # End-to-end Docker-based tests (see integration/README.md)
|
|
||||||
├── proto/ # Protocol buffer definitions
|
|
||||||
├── gen/ # Generated code (buf output — do not edit)
|
|
||||||
├── docs/ # User and ACL reference documentation
|
|
||||||
└── packaging/ # Distribution packaging
|
|
||||||
```
|
|
||||||
|
|
||||||
### `hscontrol/` packages
|
|
||||||
|
|
||||||
- `app.go`, `handlers.go`, `grpcv1.go`, `noise.go`, `auth.go`, `oidc.go`,
|
|
||||||
`poll.go`, `metrics.go`, `debug.go`, `tailsql.go`, `platform_config.go`
|
|
||||||
— top-level server files
|
|
||||||
- `state/` — central coordinator (`state.go`) and the copy-on-write
|
|
||||||
`NodeStore` (`node_store.go`). All cross-subsystem operations go
|
|
||||||
through `State`.
|
|
||||||
- `db/` — GORM layer, migrations, schema. `node.go`, `users.go`,
|
|
||||||
`api_key.go`, `preauth_keys.go`, `ip.go`, `policy.go`.
|
|
||||||
- `mapper/` — streaming batcher that distributes MapResponses to
|
|
||||||
clients: `batcher.go`, `node_conn.go`, `builder.go`, `mapper.go`.
|
|
||||||
Performance-critical.
|
|
||||||
- `policy/` — `policy/v2/` is **the** policy implementation. The
|
|
||||||
top-level `policy.go` is thin wrappers. There is no v1 directory.
|
|
||||||
- `routes/`, `dns/`, `derp/`, `types/`, `util/`, `templates/`, `capver/`
|
|
||||||
— routing, MagicDNS, relay, core types, helpers, client templates,
|
|
||||||
capability versioning.
|
|
||||||
- `servertest/` — in-memory test harness for server-level tests that
|
|
||||||
don't need Docker. Prefer this over `integration/` when possible.
|
|
||||||
- `assets/` — embedded UI assets.
|
|
||||||
|
|
||||||
### `cmd/hi/` files
|
|
||||||
|
|
||||||
`main.go`, `run.go`, `doctor.go`, `docker.go`, `cleanup.go`, `stats.go`,
|
|
||||||
`README.md`. **Read `cmd/hi/README.md` before running any `hi` command.**
|
|
||||||
|
|
||||||
## Architecture Essentials
|
|
||||||
|
|
||||||
- **`hscontrol/state/state.go`** is the central coordinator. Cross-cutting
|
|
||||||
operations (node updates, policy evaluation, IP allocation) go through
|
|
||||||
the `State` type, not directly to the database.
|
|
||||||
- **`NodeStore`** in `hscontrol/state/node_store.go` is a copy-on-write
|
|
||||||
in-memory cache backed by `atomic.Pointer[Snapshot]`. Every read is a
|
|
||||||
pointer load; writes rebuild a new snapshot and atomically swap. It is
|
|
||||||
the hot path for `MapRequest` processing and peer visibility.
|
|
||||||
- **The map-request sync point** is
|
|
||||||
`State.UpdateNodeFromMapRequest()` in
|
|
||||||
`hscontrol/state/state.go:2351`. This is where Hostinfo changes,
|
|
||||||
endpoint updates, and route advertisements land in the NodeStore.
|
|
||||||
- **Mapper subsystem** streams MapResponses via `batcher.go` and
|
|
||||||
`node_conn.go`. Changes here affect all connected clients.
|
|
||||||
- **Node registration flow**: noise handshake (`noise.go`) → auth
|
|
||||||
(`auth.go`) → state/DB persistence (`state/`, `db/`) → initial map
|
|
||||||
(`mapper/`).
|
|
||||||
|
|
||||||
## Database Migration Rules
|
|
||||||
|
|
||||||
These rules are load-bearing — violating them corrupts production
|
|
||||||
databases. The `migrationsRequiringFKDisabled` map in
|
|
||||||
`hscontrol/db/db.go:962` is frozen as of 2025-07-02 (see the comment at
|
|
||||||
`db.go:989`). All new migrations must:
|
|
||||||
|
|
||||||
1. **Never reorder existing migrations.** Migration order is immutable
|
|
||||||
once committed.
|
|
||||||
2. **Only add new migrations to the end** of the migrations array.
|
|
||||||
3. **Never disable foreign keys.** No new entries in
|
|
||||||
`migrationsRequiringFKDisabled`.
|
|
||||||
4. **Use the migration ID format** `YYYYMMDDHHMM-short-description`
|
|
||||||
(timestamp + descriptive suffix). Example: `202602201200-clear-tagged-node-user-id`.
|
|
||||||
5. **Never rename columns** that later migrations reference. Let
|
|
||||||
`AutoMigrate` create a new column if needed.
|
|
||||||
|
|
||||||
## Tags-as-Identity
|
|
||||||
|
|
||||||
Headscale enforces **tags XOR user ownership**: every node is either
|
|
||||||
tagged (owned by tags) or user-owned (owned by a user namespace), never
|
|
||||||
both. This is a load-bearing architectural invariant.
|
|
||||||
|
|
||||||
- **Use `node.IsTagged()`** (`hscontrol/types/node.go:221`) to determine
|
|
||||||
ownership, not `node.UserID().Valid()`. A tagged node may still have
|
|
||||||
`UserID` set for "created by" tracking — `IsTagged()` is authoritative.
|
|
||||||
- `IsUserOwned()` (`node.go:227`) returns `!IsTagged()`.
|
|
||||||
- Tagged nodes are presented to Tailscale as the special
|
|
||||||
`TaggedDevices` user (`hscontrol/types/users.go`, ID `2147455555`).
|
|
||||||
- `SetTags` validation is enforced by `validateNodeOwnership()` in
|
|
||||||
`hscontrol/state/tags.go`.
|
|
||||||
- Examples and edge cases live in `hscontrol/types/node_tags_test.go`
|
|
||||||
and `hscontrol/grpcv1_test.go` (`TestSetTags_*`).
|
|
||||||
|
|
||||||
**Don't do this**:
|
|
||||||
|
|
||||||
```go
|
|
||||||
if node.UserID().Valid() { /* assume user-owned */ } // WRONG
|
|
||||||
if node.UserID().Valid() && !node.IsTagged() { /* ok */ } // correct
|
|
||||||
```
|
|
||||||
|
|
||||||
## Policy Engine
|
|
||||||
|
|
||||||
`hscontrol/policy/v2/policy.go` is the policy implementation. The
|
|
||||||
top-level `hscontrol/policy/policy.go` contains only wrapper functions
|
|
||||||
around v2. There is no v1 directory.
|
|
||||||
|
|
||||||
Key concepts an agent will encounter:
|
|
||||||
|
|
||||||
- **Autogroups**: `autogroup:self`, `autogroup:member`, `autogroup:internet`
|
|
||||||
- **Tag owners**: IP-based authorization for who can claim a tag
|
|
||||||
- **Route approvals**: auto-approval of subnet routes by policy
|
|
||||||
- **SSH policies**: SSH access control via grants
|
|
||||||
- **HuJSON** parsing for policy files
|
|
||||||
|
|
||||||
For usage examples, read `hscontrol/policy/v2/policy_test.go`. For ACL
|
|
||||||
reference documentation, see `docs/`.
|
|
||||||
|
|
||||||
## Integration Testing
|
|
||||||
|
|
||||||
**Before running any `hi` command, read `cmd/hi/README.md` in full.**
|
|
||||||
Guessing at `hi` flags leads to broken runs and stale containers.
|
|
||||||
|
|
||||||
Test-authoring patterns (`EventuallyWithT`, `IntegrationSkip`, helper
|
|
||||||
variants, scenario setup) are documented in `integration/README.md`.
|
|
||||||
|
|
||||||
Key reminders:
|
|
||||||
|
|
||||||
- Integration test functions **must** start with `IntegrationSkip(t)`.
|
|
||||||
- External calls (`client.Status`, `headscale.ListNodes`, etc.) belong
|
|
||||||
inside `EventuallyWithT`; state-mutating commands (`tailscale set`)
|
|
||||||
must not.
|
|
||||||
- Tests generate ~100 MB of logs per run under `control_logs/{runID}/`.
|
|
||||||
Prune old runs if disk is tight.
|
|
||||||
- Flakes are almost always code, not infrastructure. Read `hs-*.stderr.log`
|
|
||||||
before blaming Docker.
|
|
||||||
|
|
||||||
## Code Conventions

- **Commit messages** follow Go-style `package: imperative description`.
  Recent examples from `git log`:

  - `db: scope DestroyUser to only delete the target user's pre-auth keys`
  - `state: fix policy change race in UpdateNodeFromMapRequest`
  - `integration: fix ACL tests for address-family-specific resolve`

  These are not Conventional Commits: no `feat:`/`chore:`/`docs:` prefixes.

- **Protobuf regeneration**: changes under `proto/` require
  `make generate` (which runs `buf generate`) and should land in a
  **separate commit** from the callers that use the regenerated types.
- **Formatting** is enforced by `golangci-lint` with `golines` (width 88)
  and `gofumpt`. Run `make fmt` or rely on the pre-commit hook.
- **Logging** uses `zerolog`. Prefer single-line chains
  (`log.Info().Str(...).Msg(...)`). For 4+ fields or conditional fields,
  build the event incrementally and **reassign** the event variable:
  `e = e.Str("k", v)`. Forgetting to reassign silently drops the field.
- **Tests**: prefer `hscontrol/servertest/` for server-level tests that
  don't need Docker; they are faster than full integration tests.

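The reassignment gotcha above can be reproduced with a minimal value-semantics builder (an illustrative mock, not zerolog's actual types; zerolog's `Event` API differs in detail):

```go
package main

import "fmt"

// event mimics a builder with value semantics: Str returns a new
// builder, so the result must be reassigned or the field is lost.
type event struct{ fields []string }

func (e event) Str(k, v string) event {
	e.fields = append(e.fields, k+"="+v)
	return e
}

func (e event) Msg(msg string) string {
	return fmt.Sprintf("%s %v", msg, e.fields)
}

func main() {
	e := event{}
	e.Str("dropped", "yes")     // WRONG: return value ignored, field silently lost
	e = e.Str("kept", "yes")    // correct: reassign the builder
	fmt.Println(e.Msg("login")) // login [kept=yes]
}
```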
## Gotchas

- **Database**: SQLite for local dev, PostgreSQL for integration-heavy
  tests (`go run ./cmd/hi run "..." --postgres`). Some race conditions
  only surface on one backend.
- **NodeStore writes** rebuild a full snapshot. Measure before changing
  hot-path code.
- **`.claude/agents/` is deprecated.** Do not create new agent files
  there. Put behavioural guidance in this file and procedural guidance
  in the nearest README.
- **Do not edit `gen/`**: it is regenerated from `proto/` by
  `make generate`.
- **Proto changes + code changes should be two commits**, not one.

906	CHANGELOG.md
396	CLAUDE.md

@@ -1 +1,395 @@
-@AGENTS.md
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Overview

Headscale is an open-source implementation of the Tailscale control server written in Go. It provides self-hosted coordination for Tailscale networks (tailnets), managing node registration, IP allocation, policy enforcement, and DERP routing.

## Development Commands

### Quick Setup

```bash
# Recommended: Use Nix for dependency management
nix develop

# Full development workflow
make dev  # runs fmt + lint + test + build
```

### Essential Commands

```bash
# Build headscale binary
make build

# Run tests
make test
go test ./...        # All unit tests
go test -race ./...  # With race detection

# Run specific integration test
go run ./cmd/hi run "TestName" --postgres

# Code formatting and linting
make fmt      # Format all code (Go, docs, proto)
make lint     # Lint all code (Go, proto)
make fmt-go   # Format Go code only
make lint-go  # Lint Go code only

# Protocol buffer generation (after modifying proto/)
make generate

# Clean build artifacts
make clean
```

### Integration Testing

```bash
# Use the hi (Headscale Integration) test runner
go run ./cmd/hi doctor                        # Check system requirements
go run ./cmd/hi run "TestPattern"             # Run specific test
go run ./cmd/hi run "TestPattern" --postgres  # With PostgreSQL backend

# Test artifacts are saved to control_logs/ with logs and debug data
```

## Project Structure & Architecture

### Top-Level Organization

```
headscale/
├── cmd/            # Command-line applications
│   ├── headscale/  # Main headscale server binary
│   └── hi/         # Headscale Integration test runner
├── hscontrol/      # Core control plane logic
├── integration/    # End-to-end Docker-based tests
├── proto/          # Protocol buffer definitions
├── gen/            # Generated code (protobuf)
├── docs/           # Documentation
└── packaging/      # Distribution packaging
```

### Core Packages (`hscontrol/`)

**Main Server (`hscontrol/`)**

- `app.go`: Application setup, dependency injection, server lifecycle
- `handlers.go`: HTTP/gRPC API endpoints for management operations
- `grpcv1.go`: gRPC service implementation for headscale API
- `poll.go`: **Critical** - Handles Tailscale MapRequest/MapResponse protocol
- `noise.go`: Noise protocol implementation for secure client communication
- `auth.go`: Authentication flows (web, OIDC, command-line)
- `oidc.go`: OpenID Connect integration for user authentication

**State Management (`hscontrol/state/`)**

- `state.go`: Central coordinator for all subsystems (database, policy, IP allocation, DERP)
- `node_store.go`: **Performance-critical** - In-memory cache with copy-on-write semantics
- Thread-safe operations with deadlock detection
- Coordinates between database persistence and real-time operations

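The copy-on-write idea behind the NodeStore can be sketched with a toy store (illustrative only; the real NodeStore types and API differ): readers take an immutable snapshot, and every write builds a replacement map instead of mutating in place.

```go
package main

import (
	"fmt"
	"sync"
)

// store is a toy copy-on-write cache: snapshots handed to readers are
// never mutated afterwards, so reads need no lock once taken.
type store struct {
	mu   sync.Mutex
	snap map[int]string
}

func (s *store) Snapshot() map[int]string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.snap
}

// Put rebuilds the whole map on every write; this is what makes
// NodeStore-style writes comparatively expensive.
func (s *store) Put(id int, name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	next := make(map[int]string, len(s.snap)+1)
	for k, v := range s.snap {
		next[k] = v
	}
	next[id] = name
	s.snap = next
}

func main() {
	s := &store{snap: map[int]string{}}
	before := s.Snapshot() // old snapshot stays valid and unchanged
	s.Put(1, "node-a")
	fmt.Println(len(before), len(s.Snapshot())) // 0 1
}
```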
**Database Layer (`hscontrol/db/`)**

- `db.go`: Database abstraction, GORM setup, migration management
- `node.go`: Node lifecycle, registration, expiration, IP assignment
- `users.go`: User management, namespace isolation
- `api_key.go`: API authentication tokens
- `preauth_keys.go`: Pre-authentication keys for automated node registration
- `ip.go`: IP address allocation and management
- `policy.go`: Policy storage and retrieval
- Schema migrations in `schema.sql` with extensive test data coverage

**Policy Engine (`hscontrol/policy/`)**

- `policy.go`: Core ACL evaluation logic, HuJSON parsing
- `v2/`: Next-generation policy system with improved filtering
- `matcher/`: ACL rule matching and evaluation engine
- Determines peer visibility, route approval, and network access rules
- Supports both file-based and database-stored policies

**Network Management (`hscontrol/`)**

- `derp/`: DERP (Designated Encrypted Relay for Packets) server implementation
  - NAT traversal when direct connections fail
  - Fallback relay for firewall-restricted environments
- `mapper/`: Converts internal Headscale state to Tailscale's wire protocol format
  - `tail.go`: Tailscale-specific data structure generation
- `routes/`: Subnet route management and primary route selection
- `dns/`: DNS record management and MagicDNS implementation

**Utilities & Support (`hscontrol/`)**

- `types/`: Core data structures, configuration, validation
- `util/`: Helper functions for networking, DNS, key management
- `templates/`: Client configuration templates (Apple, Windows, etc.)
- `notifier/`: Event notification system for real-time updates
- `metrics.go`: Prometheus metrics collection
- `capver/`: Tailscale capability version management

### Key Subsystem Interactions

**Node Registration Flow**

1. **Client Connection**: `noise.go` handles secure protocol handshake
2. **Authentication**: `auth.go` validates credentials (web/OIDC/preauth)
3. **State Creation**: `state.go` coordinates IP allocation via `db/ip.go`
4. **Storage**: `db/node.go` persists node, `NodeStore` caches in memory
5. **Network Setup**: `mapper/` generates initial Tailscale network map

**Ongoing Operations**

1. **Poll Requests**: `poll.go` receives periodic client updates
2. **State Updates**: `NodeStore` maintains real-time node information
3. **Policy Application**: `policy/` evaluates ACL rules for peer relationships
4. **Map Distribution**: `mapper/` sends network topology to all affected clients

**Route Management**

1. **Advertisement**: Clients announce routes via `poll.go` Hostinfo updates
2. **Storage**: `db/` persists routes, `NodeStore` caches for performance
3. **Approval**: `policy/` auto-approves routes based on ACL rules
4. **Distribution**: `routes/` selects primary routes, `mapper/` distributes to peers

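The selection step in the route-management flow can be illustrated with a toy primary-route picker (hypothetical types and policy; the real logic in `routes/` also handles failover and route state):

```go
package main

import "fmt"

// advertisedRoute is a hypothetical stand-in for a node advertising a prefix.
type advertisedRoute struct {
	nodeID int
	prefix string
	online bool
}

// pickPrimary chooses the first online advertiser per prefix, a
// simplified stand-in for primary-route selection: later online
// advertisers remain available as standby routes.
func pickPrimary(routes []advertisedRoute) map[string]int {
	primary := make(map[string]int)
	for _, r := range routes {
		if !r.online {
			continue // offline nodes can never be primary
		}
		if _, done := primary[r.prefix]; !done {
			primary[r.prefix] = r.nodeID
		}
	}
	return primary
}

func main() {
	routes := []advertisedRoute{
		{1, "10.0.0.0/24", false}, // offline: skipped
		{2, "10.0.0.0/24", true},  // becomes primary
		{3, "10.0.0.0/24", true},  // standby
	}
	fmt.Println(pickPrimary(routes)) // map[10.0.0.0/24:2]
}
```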
### Command-Line Tools (`cmd/`)

**Main Server (`cmd/headscale/`)**

- `headscale.go`: CLI parsing, configuration loading, server startup
- Supports daemon mode, CLI operations (user/node management), database operations

**Integration Test Runner (`cmd/hi/`)**

- `main.go`: Test execution framework with Docker orchestration
- `run.go`: Individual test execution with artifact collection
- `doctor.go`: System requirements validation
- `docker.go`: Container lifecycle management
- Essential for validating changes against real Tailscale clients

### Generated & External Code

**Protocol Buffers (`proto/` → `gen/`)**

- Defines the gRPC API for headscale management operations
- Client libraries can be generated from these definitions
- Run `make generate` after modifying `.proto` files

**Integration Testing (`integration/`)**

- `scenario.go`: Docker test environment setup
- `tailscale.go`: Tailscale client container management
- Individual test files for specific functionality areas
- Real end-to-end validation with network isolation

### Critical Performance Paths

**High-Frequency Operations**

1. **MapRequest Processing** (`poll.go`): Every 15-60 seconds per client
2. **NodeStore Reads** (`node_store.go`): Every operation requiring node data
3. **Policy Evaluation** (`policy/`): On every peer relationship calculation
4. **Route Lookups** (`routes/`): During network map generation

**Database Write Patterns**

- **Frequent**: Node heartbeats, endpoint updates, route changes
- **Moderate**: User operations, policy updates, API key management
- **Rare**: Schema migrations, bulk operations

### Configuration & Deployment

**Configuration (`hscontrol/types/config.go`)**

- Database connection settings (SQLite/PostgreSQL)
- Network configuration (IP ranges, DNS settings)
- Policy mode (file vs database)
- DERP relay configuration
- OIDC provider settings

**Key Dependencies**

- **GORM**: Database ORM with migration support
- **Tailscale Libraries**: Core networking and protocol code
- **Zerolog**: Structured logging throughout the application
- **Buf**: Protocol buffer toolchain for code generation

### Development Workflow Integration

The architecture supports incremental development:

- **Unit Tests**: Focus on individual packages (`*_test.go` files)
- **Integration Tests**: Validate cross-component interactions
- **Database Tests**: Extensive migration and data integrity validation
- **Policy Tests**: ACL rule evaluation and edge cases
- **Performance Tests**: NodeStore and high-frequency operation validation

## Integration Test System

### Overview

Integration tests use Docker containers running real Tailscale clients against a Headscale server. Tests validate end-to-end functionality including routing, ACLs, node lifecycle, and network coordination.

### Running Integration Tests

**System Requirements**

```bash
# Check if your system is ready
go run ./cmd/hi doctor
```

This verifies Docker, Go, required images, and disk space.

**Test Execution Patterns**

```bash
# Run a single test (recommended for development)
go run ./cmd/hi run "TestSubnetRouterMultiNetwork"

# Run with PostgreSQL backend (for database-heavy tests)
go run ./cmd/hi run "TestExpireNode" --postgres

# Run multiple tests with pattern matching
go run ./cmd/hi run "TestSubnet*"

# Run all integration tests (CI/full validation)
go test ./integration -timeout 30m
```

**Test Categories & Timing**

- **Fast tests** (< 2 min): Basic functionality, CLI operations
- **Medium tests** (2-5 min): Route management, ACL validation
- **Slow tests** (5+ min): Node expiration, HA failover
- **Long-running tests** (10+ min): `TestNodeOnlineStatus` (12 min duration)

### Test Infrastructure

**Docker Setup**

- Headscale server container with configurable database backend
- Multiple Tailscale client containers with different versions
- Isolated networks per test scenario
- Automatic cleanup after test completion

**Test Artifacts**

All test runs save artifacts to `control_logs/TIMESTAMP-ID/`:

```
control_logs/20250713-213106-iajsux/
├── hs-testname-abc123.stderr.log    # Headscale server logs
├── hs-testname-abc123.stdout.log
├── hs-testname-abc123.db            # Database snapshot
├── hs-testname-abc123_metrics.txt   # Prometheus metrics
├── hs-testname-abc123-mapresponses/ # Protocol debug data
├── ts-client-xyz789.stderr.log      # Tailscale client logs
├── ts-client-xyz789.stdout.log
└── ts-client-xyz789_status.json     # Client status dump
```

### Test Development Guidelines

**Timing Considerations**

Integration tests involve real network operations and Docker container lifecycle:

```go
// ❌ Wrong: Immediate assertions after async operations
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
nodes, _ := headscale.ListNodes()
require.Len(t, nodes[0].GetAvailableRoutes(), 1) // May fail due to timing

// ✅ Correct: Wait for async operations to complete
client.Execute([]string{"tailscale", "set", "--advertise-routes=10.0.0.0/24"})
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	assert.Len(c, nodes[0].GetAvailableRoutes(), 1)
}, 10*time.Second, 100*time.Millisecond, "route should be advertised")
```

**Common Test Patterns**

- **Route Advertisement**: Use `EventuallyWithT` for route propagation
- **Node State Changes**: Wait for NodeStore synchronization
- **ACL Policy Changes**: Allow time for policy recalculation
- **Network Connectivity**: Use ping tests with retries

**Test Data Management**

```go
// Node identification: don't assume array ordering
expectedRoutes := map[string]string{"1": "10.33.0.0/16"}
for _, node := range nodes {
	nodeIDStr := fmt.Sprintf("%d", node.GetId())
	if route, shouldHaveRoute := expectedRoutes[nodeIDStr]; shouldHaveRoute {
		// Test the node that should have the route
		_ = route // e.g. assert the node advertises this route
	}
}
```

### Troubleshooting Integration Tests

**Common Failure Patterns**

1. **Timing Issues**: Test assertions run before async operations complete
   - **Solution**: Use `EventuallyWithT` with appropriate timeouts
   - **Timeout Guidelines**: 3-5 s for route operations, 10 s for complex scenarios
2. **Infrastructure Problems**: Disk space, Docker issues, network conflicts
   - **Check**: `go run ./cmd/hi doctor` for system health
   - **Clean**: Remove old test containers and networks
3. **NodeStore Synchronization**: Tests expecting immediate data availability
   - **Key Point**: Route advertisements must propagate through poll requests
   - **Fix**: Wait for NodeStore updates after Hostinfo changes
4. **Database Backend Differences**: SQLite vs PostgreSQL behavior differences
   - **Use**: `--postgres` flag for database-intensive tests
   - **Note**: Some timing characteristics differ between backends

**Debugging Failed Tests**

1. **Check test artifacts** in `control_logs/` for detailed logs
2. **Examine MapResponse JSON** files for protocol-level debugging
3. **Review Headscale stderr logs** for server-side error messages
4. **Check Tailscale client status** for network-level issues

**Resource Management**

- Tests require significant disk space (each run produces ~100 MB of logs)
- Docker containers are cleaned up automatically on success
- Failed tests may leave containers running; clean up manually if needed
- Use `docker system prune` periodically to reclaim space

### Best Practices for Test Modifications

1. **Always test locally** before committing integration test changes
2. **Use appropriate timeouts**: too short causes flaky tests, too long slows CI
3. **Clean up properly**: ensure tests don't leave persistent state
4. **Handle both success and failure paths** in test scenarios
5. **Document timing requirements** for complex test scenarios

## NodeStore Implementation Details

**Key Insight from Recent Work**: The NodeStore is a critical performance optimization that caches node data in memory while ensuring consistency with the database. When working with route advertisements or node state changes:

1. **Timing Considerations**: Route advertisements need time to propagate from clients to server. Use `require.EventuallyWithT()` patterns in tests instead of immediate assertions.

2. **Synchronization Points**: NodeStore updates happen at specific points like `poll.go:420` after Hostinfo changes. Ensure these are maintained when modifying the polling logic.

3. **Peer Visibility**: The NodeStore's `peersFunc` determines which nodes are visible to each other. Policy-based filtering is separate from monitoring visibility: expired nodes should remain visible for debugging but marked as expired.

## Testing Guidelines

### Integration Test Patterns

```go
// Use EventuallyWithT for async operations
require.EventuallyWithT(t, func(c *assert.CollectT) {
	nodes, err := headscale.ListNodes()
	assert.NoError(c, err)
	// Check expected state
}, 10*time.Second, 100*time.Millisecond, "description")

// Node route checking by actual node properties, not array position
var routeNode *v1.Node
for _, node := range nodes {
	if nodeIDStr := fmt.Sprintf("%d", node.GetId()); expectedRoutes[nodeIDStr] != "" {
		routeNode = node
		break
	}
}
```

### Running Problematic Tests

- Some tests require significant time (e.g., `TestNodeOnlineStatus` runs for 12 minutes)
- Infrastructure issues like disk space can cause test failures unrelated to code changes
- Use the `--postgres` flag when testing database-heavy scenarios

## Important Notes

- **Dependencies**: Use `nix develop` for a consistent toolchain (Go, buf, protobuf tools, linting)
- **Protocol Buffers**: Changes to `proto/` require `make generate` and should be committed separately
- **Code Style**: Enforced via golangci-lint with golines (width 88) and gofumpt formatting
- **Database**: Supports both SQLite (development) and PostgreSQL (production/testing)
- **Integration Tests**: Require Docker and can consume significant disk space
- **Performance**: NodeStore optimizations are critical for scale; be careful with changes to state management

## Debugging Integration Tests

Test artifacts are preserved in `control_logs/TIMESTAMP-ID/`, including:

- Headscale server logs (stderr/stdout)
- Tailscale client logs and status
- Database dumps and network captures
- MapResponse JSON files for protocol debugging

When tests fail, check these artifacts first before assuming code issues.
@@ -1,6 +1,6 @@
 # For testing purposes only

-FROM golang:1.26.2-alpine AS build-env
+FROM golang:alpine AS build-env

 WORKDIR /go/src
@@ -12,7 +12,7 @@ WORKDIR /go/src/tailscale
 ARG TARGETARCH
 RUN GOARCH=$TARGETARCH go install -v ./cmd/derper

-FROM alpine:3.22
+FROM alpine:3.18
 RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl

 COPY --from=build-env /go/bin/* /usr/local/bin/
@@ -2,43 +2,29 @@
 # and are in no way endorsed by Headscale's maintainers as an
 # official nor supported release or distribution.

-FROM docker.io/golang:1.26.1-trixie AS builder
+FROM docker.io/golang:1.24-bookworm
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale

-# Install delve debugger first - rarely changes, good cache candidate
+RUN apt-get update \
+    && apt-get install --no-install-recommends --yes less jq sqlite3 dnsutils \
+    && rm -rf /var/lib/apt/lists/* \
+    && apt-get clean
+RUN mkdir -p /var/run/headscale
+
+# Install delve debugger
 RUN go install github.com/go-delve/delve/cmd/dlv@latest

-# Download dependencies - only invalidated when go.mod/go.sum change
 COPY go.mod go.sum /go/src/headscale/
 RUN go mod download

-# Copy source and build - invalidated on any source change
 COPY . .

 # Build debug binary with debug symbols for delve
 RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale

-# Runtime stage
-FROM debian:trixie-slim
-
-RUN apt-get --update install --no-install-recommends --yes \
-    bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
-    && apt-get dist-clean
-
-RUN mkdir -p /var/run/headscale
-
-# Copy binaries from builder
-COPY --from=builder /go/bin/headscale /usr/local/bin/headscale
-COPY --from=builder /go/bin/dlv /usr/local/bin/dlv
-
-# Copy source code for delve source-level debugging
-COPY --from=builder /go/src/headscale /go/src/headscale
-
-WORKDIR /go/src/headscale
-
 # Need to reset the entrypoint or everything will run as a busybox script
 ENTRYPOINT []
 EXPOSE 8080/tcp 40000/tcp
-CMD ["dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/usr/local/bin/headscale", "--"]
+CMD ["/go/bin/dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/go/bin/headscale", "--"]
@@ -1,17 +0,0 @@
-# Minimal CI image - expects pre-built headscale binary in build context
-# For local development with delve debugging, use Dockerfile.integration instead
-
-FROM debian:trixie-slim
-
-RUN apt-get --update install --no-install-recommends --yes \
-    bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
-    && apt-get dist-clean
-
-RUN mkdir -p /var/run/headscale
-
-# Copy pre-built headscale binary from build context
-COPY headscale /usr/local/bin/headscale
-
-ENTRYPOINT []
-EXPOSE 8080/tcp
-CMD ["/usr/local/bin/headscale"]
@@ -4,7 +4,7 @@
 # This Dockerfile is more or less lifted from tailscale/tailscale
 # to ensure a similar build process when testing the HEAD of tailscale.

-FROM golang:1.26.2-alpine AS build-env
+FROM golang:1.24-alpine AS build-env

 WORKDIR /go/src
@@ -36,10 +36,8 @@ RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
     -X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
     -v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot

-FROM alpine:3.22
-# Upstream: ca-certificates ip6tables iptables iproute2
-# Tests: curl python3 (traceroute via BusyBox)
-RUN apk add --no-cache ca-certificates curl ip6tables iptables iproute2 python3
+FROM alpine:3.18
+RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl

 COPY --from=build-env /go/bin/* /usr/local/bin/
 # For compat with the previous run.sh, although ideally you should be
23	Makefile
@@ -21,7 +21,7 @@ endef
 # Source file collections using shell find for better performance
 GO_SOURCES := $(shell find . -name '*.go' -not -path './gen/*' -not -path './vendor/*')
 PROTO_SOURCES := $(shell find . -name '*.proto' -not -path './gen/*' -not -path './vendor/*')
-PRETTIER_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
+DOC_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')

 # Default target
 .PHONY: all
@@ -33,7 +33,6 @@ check-deps:
 	$(call check_tool,go)
 	$(call check_tool,golangci-lint)
 	$(call check_tool,gofumpt)
-	$(call check_tool,mdformat)
 	$(call check_tool,prettier)
 	$(call check_tool,clang-format)
 	$(call check_tool,buf)
@@ -53,7 +52,7 @@ test: check-deps $(GO_SOURCES) go.mod go.sum

 # Formatting targets
 .PHONY: fmt
-fmt: fmt-go fmt-mdformat fmt-prettier fmt-proto
+fmt: fmt-go fmt-prettier fmt-proto

 .PHONY: fmt-go
 fmt-go: check-deps $(GO_SOURCES)
@@ -61,15 +60,11 @@ fmt-go: check-deps $(GO_SOURCES)
 	gofumpt -l -w .
 	golangci-lint run --fix

-.PHONY: fmt-mdformat
-fmt-mdformat: check-deps
-	@echo "Formatting documentation..."
-	mdformat docs/
-
 .PHONY: fmt-prettier
-fmt-prettier: check-deps $(PRETTIER_SOURCES)
-	@echo "Formatting markup and config files..."
+fmt-prettier: check-deps $(DOC_SOURCES)
+	@echo "Formatting documentation and config files..."
 	prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
+	prettier --write --print-width 80 --prose-wrap always CHANGELOG.md

 .PHONY: fmt-proto
 fmt-proto: check-deps $(PROTO_SOURCES)
@@ -105,11 +100,6 @@ clean:
 .PHONY: dev
 dev: fmt lint test build

-# Start a local headscale dev server (use mts to add nodes)
-.PHONY: dev-server
-dev-server:
-	go run ./cmd/dev
-
 # Help target
 .PHONY: help
 help:
@@ -127,8 +117,7 @@ help:
 	@echo ""
 	@echo "Specific targets:"
 	@echo "  fmt-go       - Format Go code only"
-	@echo "  fmt-mdformat - Format documentation only"
-	@echo "  fmt-prettier - Format markup and config files only"
+	@echo "  fmt-prettier - Format documentation only"
 	@echo "  lint-go      - Lint Go code only"
 	@echo "  lint-proto   - Lint Protocol Buffer files only"
16
README.md
@@ -1,4 +1,4 @@
 

 

@@ -63,18 +63,8 @@ and container to run Headscale.**

 Please have a look at the [`documentation`](https://headscale.net/stable/).

-For NixOS users, a module is available in [`nix/`](./nix/).
-
-## Builds from `main`
-
-Development builds from the `main` branch are available as container images and
-binaries. See the [development builds](https://headscale.net/stable/setup/install/main/)
-documentation for details.
-
 ## Talks

-- Fosdem 2026 (video): [Headscale & Tailscale: The complementary open source clone](https://fosdem.org/2026/schedule/event/KYQ3LL-headscale-the-complementary-open-source-clone/)
-  - presented by Kristoffer Dalby
 - Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
   - presented by Juan Font Alonso and Kristoffer Dalby

@@ -113,8 +103,6 @@ run `make lint` and `make fmt` before committing any code.
 The **Proto** code is linted with [`buf`](https://docs.buf.build/lint/overview) and
 formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).

-The **docs** are formatted with [`mdformat`](https://mdformat.readthedocs.io).
-
 The **rest** (Markdown, YAML, etc) is formatted with [`prettier`](https://prettier.io).

 Check out the `.golangci.yaml` and `Makefile` to see the specific configuration.
@@ -159,7 +147,6 @@ make build
 We recommend using Nix for dependency management to ensure you have all required tools. If you prefer to manage dependencies yourself, you can use Make directly:

 **With Nix (recommended):**

 ```shell
 nix develop
 make test
@@ -167,7 +154,6 @@ make build
 ```

 **With your own dependencies:**

 ```shell
 make test
 make build
@@ -1,96 +0,0 @@
# cmd/dev -- Local Development Environment

Starts a headscale server on localhost with a pre-created user and
pre-auth key. Pair with `mts` to add real tailscale nodes.

## Quick start

```bash
# Terminal 1: start headscale
go run ./cmd/dev

# Terminal 2: start mts server
go tool mts server run

# Terminal 3: add and connect nodes
go tool mts server add node1
go tool mts server add node2

# Disable logtail (avoids startup delays, see "Known issues" below)
for n in node1 node2; do
  cat > ~/.config/multi-tailscale-dev/$n/env.txt << 'EOF'
TS_NO_LOGS_NO_SUPPORT=true
EOF
done

# Restart nodes so env.txt takes effect
go tool mts server stop node1 && go tool mts server start node1
go tool mts server stop node2 && go tool mts server start node2

# Connect to headscale (use the auth key printed by cmd/dev)
go tool mts node1 up --login-server=http://127.0.0.1:8080 --authkey=<KEY> --reset
go tool mts node2 up --login-server=http://127.0.0.1:8080 --authkey=<KEY> --reset

# Verify
go tool mts node1 status
```

## Flags

| Flag     | Default | Description                  |
| -------- | ------- | ---------------------------- |
| `--port` | 8080    | Headscale listen port        |
| `--keep` | false   | Keep state directory on exit |

The metrics/debug port is `port + 1010` (default 9090) and the gRPC
port is `port + 42363` (default 50443).

## What it does

1. Builds the headscale binary into a temp directory
2. Writes a minimal dev config (SQLite, public DERP, debug logging)
3. Starts `headscale serve` as a subprocess
4. Creates a "dev" user and a reusable 24h pre-auth key via the CLI
5. Prints a banner with server URL, auth key, and usage instructions
6. Blocks until Ctrl+C, then kills headscale

State lives in `/tmp/headscale-dev-*/`. Pass `--keep` to preserve it
across restarts (useful for inspecting the database or reusing keys).

## Useful endpoints

- `http://127.0.0.1:8080/health` -- health check
- `http://127.0.0.1:9090/debug/ping` -- interactive ping UI
- `http://127.0.0.1:9090/debug/ping?node=1` -- quick-ping a node
- `POST http://127.0.0.1:9090/debug/ping` with `node=<id>` -- trigger ping

## Managing headscale

The banner prints the full path to the built binary and config. Use it
for any headscale CLI command:

```bash
/tmp/headscale-dev-*/headscale -c /tmp/headscale-dev-*/config.yaml nodes list
/tmp/headscale-dev-*/headscale -c /tmp/headscale-dev-*/config.yaml users list
```

## Known issues

### Logtail delays on mts nodes

Freshly created `mts` instances may take 30+ seconds to start if
`~/.local/share/tailscale/` contains stale logtail cache from previous
tailscaled runs. The daemon blocks trying to upload old logs before
creating its socket.

Fix: write `TS_NO_LOGS_NO_SUPPORT=true` to each instance's `env.txt`
before starting (or restart after writing). See the quick start above.

### mts node cleanup

`mts` stores state in `~/.config/multi-tailscale-dev/`. Old instances
accumulate over time. Clean them with:

```bash
go tool mts server rm <name>
```
302
cmd/dev/main.go
@@ -1,302 +0,0 @@
// cmd/dev starts a local headscale development server with a pre-created
// user and pre-auth key, ready for connecting tailscale nodes via mts.
package main

import (
	"context"
	"encoding/json"
	"errors"
	"flag"
	"fmt"
	"log"
	"net/http"
	"os"
	"os/exec"
	"os/signal"
	"path/filepath"
	"strconv"
	"syscall"
	"time"
)

var (
	port = flag.Int("port", 8080, "headscale listen port")
	keep = flag.Bool("keep", false, "keep state directory on exit")
)

var errHealthTimeout = errors.New("health check timed out")

var errEmptyAuthKey = errors.New("empty auth key in response")

const devConfig = `---
server_url: http://127.0.0.1:%d
listen_addr: 127.0.0.1:%d
metrics_listen_addr: 127.0.0.1:%d
grpc_listen_addr: 127.0.0.1:%d
grpc_allow_insecure: true

noise:
  private_key_path: %s/noise_private.key

prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48
  allocation: sequential

database:
  type: sqlite
  sqlite:
    path: %s/db.sqlite
    write_ahead_log: true

derp:
  server:
    enabled: false
  urls:
    - https://controlplane.tailscale.com/derpmap/default
  auto_update_enabled: false

dns:
  magic_dns: true
  base_domain: headscale.dev
  override_local_dns: false

log:
  level: debug
  format: text

policy:
  mode: database

unix_socket: %s/headscale.sock
unix_socket_permission: "0770"
`

func main() {
	flag.Parse()
	log.SetFlags(0)

	http.DefaultClient.Timeout = 2 * time.Second
	http.DefaultClient.CheckRedirect = func(*http.Request, []*http.Request) error {
		return http.ErrUseLastResponse
	}

	err := run()
	if err != nil {
		log.Fatal(err)
	}
}

func run() error {
	metricsPort := *port + 1010 // default 9090
	grpcPort := *port + 42363   // default 50443

	tmpDir, err := os.MkdirTemp("", "headscale-dev-")
	if err != nil {
		return fmt.Errorf("creating temp dir: %w", err)
	}

	if !*keep {
		defer os.RemoveAll(tmpDir)
	}

	// Write config.
	configPath := filepath.Join(tmpDir, "config.yaml")
	configContent := fmt.Sprintf(devConfig,
		*port, *port, metricsPort, grpcPort,
		tmpDir, tmpDir, tmpDir,
	)

	err = os.WriteFile(configPath, []byte(configContent), 0o600)
	if err != nil {
		return fmt.Errorf("writing config: %w", err)
	}

	// Build headscale.
	fmt.Println("Building headscale...")

	hsBin := filepath.Join(tmpDir, "headscale")

	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	defer stop()

	build := exec.CommandContext(ctx, "go", "build", "-o", hsBin, "./cmd/headscale")
	build.Stdout = os.Stdout
	build.Stderr = os.Stderr

	err = build.Run()
	if err != nil {
		return fmt.Errorf("building headscale: %w", err)
	}

	// Start headscale serve.
	fmt.Println("Starting headscale server...")

	serve := exec.CommandContext(ctx, hsBin, "serve", "-c", configPath)
	serve.Stdout = os.Stdout
	serve.Stderr = os.Stderr

	err = serve.Start()
	if err != nil {
		return fmt.Errorf("starting headscale: %w", err)
	}

	// Wait for server to be ready.
	healthURL := fmt.Sprintf("http://127.0.0.1:%d/health", *port)

	err = waitForHealth(ctx, healthURL, 30*time.Second)
	if err != nil {
		return fmt.Errorf("waiting for headscale: %w", err)
	}

	// Create user.
	fmt.Println("Creating user and pre-auth key...")

	userJSON, err := runHS(ctx, hsBin, configPath, "users", "create", "dev", "-o", "json")
	if err != nil {
		return fmt.Errorf("creating user: %w", err)
	}

	userID, err := extractUserID(userJSON)
	if err != nil {
		return fmt.Errorf("parsing user: %w", err)
	}

	// Create pre-auth key.
	keyJSON, err := runHS(
		ctx, hsBin, configPath,
		"preauthkeys", "create",
		"-u", strconv.FormatUint(userID, 10),
		"--reusable",
		"-e", "24h",
		"-o", "json",
	)
	if err != nil {
		return fmt.Errorf("creating pre-auth key: %w", err)
	}

	authKey, err := extractAuthKey(keyJSON)
	if err != nil {
		return fmt.Errorf("parsing pre-auth key: %w", err)
	}

	// Print banner.
	fmt.Printf(`
=== Headscale Dev Environment ===
Server:  http://127.0.0.1:%d
Metrics: http://127.0.0.1:%d
Debug:   http://127.0.0.1:%d/debug/ping
Config:  %s
State:   %s

Pre-auth key: %s

Connect nodes with mts:
  go tool mts server run       # start mts (once, another terminal)
  go tool mts server add node1 # create a node
  go tool mts node1 up --login-server=http://127.0.0.1:%d --authkey=%s
  go tool mts node1 status     # check connection

Manage headscale:
  %s -c %s nodes list
  %s -c %s users list

Press Ctrl+C to stop.
`,
		*port, metricsPort, metricsPort,
		configPath, tmpDir,
		authKey,
		*port, authKey,
		hsBin, configPath,
		hsBin, configPath,
	)

	// Wait for headscale to exit.
	err = serve.Wait()
	if err != nil {
		// Context cancellation is expected on Ctrl+C.
		if ctx.Err() != nil {
			fmt.Println("\nShutting down...")

			return nil
		}

		return fmt.Errorf("headscale exited: %w", err)
	}

	return nil
}

// waitForHealth polls the health endpoint until it returns 200 or the
// timeout expires.
func waitForHealth(ctx context.Context, url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)

	for time.Now().Before(deadline) {
		if ctx.Err() != nil {
			return ctx.Err()
		}

		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			return fmt.Errorf("creating request: %w", err)
		}

		resp, err := http.DefaultClient.Do(req)
		if err == nil {
			resp.Body.Close()

			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}

		// Busy-wait is acceptable for a dev tool polling a local server.
		time.Sleep(200 * time.Millisecond) //nolint:forbidigo
	}

	return errHealthTimeout
}

// runHS executes a headscale CLI command and returns its stdout.
func runHS(ctx context.Context, bin, config string, args ...string) ([]byte, error) {
	fullArgs := append([]string{"-c", config}, args...)
	cmd := exec.CommandContext(ctx, bin, fullArgs...)
	cmd.Stderr = os.Stderr

	return cmd.Output()
}

// extractUserID parses the JSON output of "users create" and returns the
// user ID.
func extractUserID(data []byte) (uint64, error) {
	var user struct {
		ID uint64 `json:"id"`
	}

	err := json.Unmarshal(data, &user)
	if err != nil {
		return 0, fmt.Errorf("unmarshalling user JSON: %w (raw: %s)", err, data)
	}

	return user.ID, nil
}

// extractAuthKey parses the JSON output of "preauthkeys create" and
// returns the key string.
func extractAuthKey(data []byte) (string, error) {
	var key struct {
		Key string `json:"key"`
	}

	err := json.Unmarshal(data, &key)
	if err != nil {
		return "", fmt.Errorf("unmarshalling key JSON: %w (raw: %s)", err, data)
	}

	if key.Key == "" {
		return "", errEmptyAuthKey
	}

	return key.Key, nil
}
@@ -1,18 +1,21 @@
 package cli

 import (
-	"context"
 	"fmt"
 	"strconv"
+	"time"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol/util"
+	"github.com/prometheus/common/model"
 	"github.com/pterm/pterm"
+	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
+	"google.golang.org/protobuf/types/known/timestamppb"
 )

 const (
-	// DefaultAPIKeyExpiry is 90 days.
+	// 90 days.
 	DefaultAPIKeyExpiry = "90d"
 )

@@ -26,11 +29,15 @@ func init() {
 	apiKeysCmd.AddCommand(createAPIKeyCmd)

 	expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
-	expireAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
+	if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
 	apiKeysCmd.AddCommand(expireAPIKeyCmd)

 	deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
-	deleteAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
+	if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
 	apiKeysCmd.AddCommand(deleteAPIKeyCmd)
 }

@@ -44,17 +51,31 @@ var listAPIKeys = &cobra.Command{
 	Use:     "list",
 	Short:   "List the Api keys for headscale",
 	Aliases: []string{"ls", "show"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		response, err := client.ListApiKeys(ctx, &v1.ListApiKeysRequest{})
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		request := &v1.ListApiKeysRequest{}
+
+		response, err := client.ListApiKeys(ctx, request)
 		if err != nil {
-			return fmt.Errorf("listing api keys: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting the list of keys: %s", err),
+				output,
+			)
 		}

+		if output != "" {
+			SuccessOutput(response.GetApiKeys(), "", output)
+		}

-		return printListOutput(cmd, response.GetApiKeys(), func() error {
 		tableData := pterm.TableData{
 			{"ID", "Prefix", "Expiration", "Created"},
 		}

 		for _, key := range response.GetApiKeys() {
 			expiration := "-"

@@ -68,11 +89,17 @@ var listAPIKeys = &cobra.Command{
 				expiration,
 				key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
 			})
 		}

-		return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-	})
-}),
+		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Failed to render pterm table: %s", err),
+				output,
+			)
+		}
+	},
 }

 var createAPIKeyCmd = &cobra.Command{
@@ -83,79 +110,113 @@ Creates a new Api key, the Api key is only visible on creation
 and cannot be retrieved again.
 If you loose a key, create a new one and revoke (expire) the old one.`,
 	Aliases: []string{"c", "new"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		expiration, err := expirationFromFlag(cmd)
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		request := &v1.CreateApiKeyRequest{}
+
+		durationStr, _ := cmd.Flags().GetString("expiration")
+
+		duration, err := model.ParseDuration(durationStr)
 		if err != nil {
-			return err
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Could not parse duration: %s\n", err),
+				output,
+			)
 		}

-		response, err := client.CreateApiKey(ctx, &v1.CreateApiKeyRequest{
-			Expiration: expiration,
-		})
+		expiration := time.Now().UTC().Add(time.Duration(duration))
+
+		request.Expiration = timestamppb.New(expiration)
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		response, err := client.CreateApiKey(ctx, request)
 		if err != nil {
-			return fmt.Errorf("creating api key: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot create Api Key: %s\n", err),
+				output,
+			)
 		}

-		return printOutput(cmd, response.GetApiKey(), response.GetApiKey())
-	}),
+		SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
+	},
 }

-// apiKeyIDOrPrefix reads --id and --prefix from cmd and validates that
-// exactly one is provided.
-func apiKeyIDOrPrefix(cmd *cobra.Command) (uint64, string, error) {
-	id, _ := cmd.Flags().GetUint64("id")
-	prefix, _ := cmd.Flags().GetString("prefix")
-
-	switch {
-	case id == 0 && prefix == "":
-		return 0, "", fmt.Errorf("either --id or --prefix must be provided: %w", errMissingParameter)
-	case id != 0 && prefix != "":
-		return 0, "", fmt.Errorf("only one of --id or --prefix can be provided: %w", errMissingParameter)
-	}
-
-	return id, prefix, nil
-}

 var expireAPIKeyCmd = &cobra.Command{
 	Use:     "expire",
 	Short:   "Expire an ApiKey",
 	Aliases: []string{"revoke", "exp", "e"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, prefix, err := apiKeyIDOrPrefix(cmd)
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		prefix, err := cmd.Flags().GetString("prefix")
 		if err != nil {
-			return err
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
+				output,
+			)
 		}

-		response, err := client.ExpireApiKey(ctx, &v1.ExpireApiKeyRequest{
-			Id:     id,
-			Prefix: prefix,
-		})
-		if err != nil {
-			return fmt.Errorf("expiring api key: %w", err)
-		}
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		request := &v1.ExpireApiKeyRequest{
+			Prefix: prefix,
+		}

-		return printOutput(cmd, response, "Key expired")
-	}),
+		response, err := client.ExpireApiKey(ctx, request)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot expire Api Key: %s\n", err),
+				output,
+			)
+		}
+
+		SuccessOutput(response, "Key expired", output)
+	},
 }

 var deleteAPIKeyCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete an ApiKey",
 	Aliases: []string{"remove", "del"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, prefix, err := apiKeyIDOrPrefix(cmd)
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		prefix, err := cmd.Flags().GetString("prefix")
 		if err != nil {
-			return err
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
+				output,
+			)
 		}

-		response, err := client.DeleteApiKey(ctx, &v1.DeleteApiKeyRequest{
-			Id:     id,
-			Prefix: prefix,
-		})
-		if err != nil {
-			return fmt.Errorf("deleting api key: %w", err)
-		}
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		request := &v1.DeleteApiKeyRequest{
+			Prefix: prefix,
+		}

-		return printOutput(cmd, response, "Key deleted")
-	}),
+		response, err := client.DeleteApiKey(ctx, request)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot delete Api Key: %s\n", err),
+				output,
+			)
+		}
+
+		SuccessOutput(response, "Key deleted", output)
+	},
 }
@@ -1,93 +0,0 @@
package cli

import (
	"context"
	"fmt"

	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
	"github.com/spf13/cobra"
)

func init() {
	rootCmd.AddCommand(authCmd)

	authRegisterCmd.Flags().StringP("user", "u", "", "User")
	authRegisterCmd.Flags().String("auth-id", "", "Auth ID")
	mustMarkRequired(authRegisterCmd, "user", "auth-id")
	authCmd.AddCommand(authRegisterCmd)

	authApproveCmd.Flags().String("auth-id", "", "Auth ID")
	mustMarkRequired(authApproveCmd, "auth-id")
	authCmd.AddCommand(authApproveCmd)

	authRejectCmd.Flags().String("auth-id", "", "Auth ID")
	mustMarkRequired(authRejectCmd, "auth-id")
	authCmd.AddCommand(authRejectCmd)
}

var authCmd = &cobra.Command{
	Use:   "auth",
	Short: "Manage node authentication and approval",
}

var authRegisterCmd = &cobra.Command{
	Use:   "register",
	Short: "Register a node to your network",
	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
		user, _ := cmd.Flags().GetString("user")
		authID, _ := cmd.Flags().GetString("auth-id")

		request := &v1.AuthRegisterRequest{
			AuthId: authID,
			User:   user,
		}

		response, err := client.AuthRegister(ctx, request)
		if err != nil {
			return fmt.Errorf("registering node: %w", err)
		}

		return printOutput(
			cmd,
			response.GetNode(),
			fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
	}),
}

var authApproveCmd = &cobra.Command{
	Use:   "approve",
	Short: "Approve a pending authentication request",
	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
		authID, _ := cmd.Flags().GetString("auth-id")

		request := &v1.AuthApproveRequest{
			AuthId: authID,
		}

		response, err := client.AuthApprove(ctx, request)
		if err != nil {
			return fmt.Errorf("approving auth request: %w", err)
		}

		return printOutput(cmd, response, "Auth request approved")
	}),
}

var authRejectCmd = &cobra.Command{
	Use:   "reject",
	Short: "Reject a pending authentication request",
	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
		authID, _ := cmd.Flags().GetString("auth-id")

		request := &v1.AuthRejectRequest{
			AuthId: authID,
		}

		response, err := client.AuthReject(ctx, request)
		if err != nil {
			return fmt.Errorf("rejecting auth request: %w", err)
		}

		return printOutput(cmd, response, "Auth request rejected")
	}),
}
@@ -1,8 +1,7 @@
 package cli

 import (
-	"fmt"
+	"github.com/rs/zerolog/log"

 	"github.com/spf13/cobra"
 )

@@ -14,12 +13,10 @@ var configTestCmd = &cobra.Command{
 	Use:   "configtest",
 	Short: "Test the configuration.",
 	Long:  "Run a test of the configuration and exit.",
-	RunE: func(cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
 		_, err := newHeadscaleServerWithConfig()
 		if err != nil {
-			return fmt.Errorf("configuration error: %w", err)
+			log.Fatal().Caller().Err(err).Msg("Error initializing")
 		}
-
-		return nil
 	},
 }
@@ -1,22 +1,48 @@
 package cli

 import (
-	"context"
 	"fmt"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol/types"
+	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
+	"google.golang.org/grpc/status"
 )

+const (
+	errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
+)
+
+// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
+type Error string
+
+func (e Error) Error() string { return string(e) }
+
 func init() {
 	rootCmd.AddCommand(debugCmd)

 	createNodeCmd.Flags().StringP("name", "", "", "Name")
+	err := createNodeCmd.MarkFlagRequired("name")
+	if err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
 	createNodeCmd.Flags().StringP("user", "u", "", "User")
-	createNodeCmd.Flags().StringP("key", "k", "", "Key")
-	mustMarkRequired(createNodeCmd, "name", "user", "key")
+
+	createNodeCmd.Flags().StringP("namespace", "n", "", "User")
+	createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
+	createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
+	createNodeNamespaceFlag.Hidden = true
+
+	err = createNodeCmd.MarkFlagRequired("user")
+	if err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
+	createNodeCmd.Flags().StringP("key", "k", "", "Key")
+	err = createNodeCmd.MarkFlagRequired("key")
+	if err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
 	createNodeCmd.Flags().
 		StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to advertise")

@@ -31,18 +57,54 @@ var debugCmd = &cobra.Command{

 var createNodeCmd = &cobra.Command{
 	Use:   "create-node",
-	Short: "Create a node that can be registered with `auth register <>` command",
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		user, _ := cmd.Flags().GetString("user")
-		name, _ := cmd.Flags().GetString("name")
-		registrationID, _ := cmd.Flags().GetString("key")
-
-		_, err := types.AuthIDFromString(registrationID)
+	Short: "Create a node that can be registered with `nodes register <>` command",
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+		user, err := cmd.Flags().GetString("user")
 		if err != nil {
-			return fmt.Errorf("parsing machine key: %w", err)
+			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
 		}

-		routes, _ := cmd.Flags().GetStringSlice("route")
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		name, err := cmd.Flags().GetString("name")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting node from flag: %s", err),
+				output,
+			)
+		}
+
+		registrationID, err := cmd.Flags().GetString("key")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting key from flag: %s", err),
+				output,
+			)
+		}
+
+		_, err = types.RegistrationIDFromString(registrationID)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Failed to parse machine key from flag: %s", err),
+				output,
+			)
+		}
+
+		routes, err := cmd.Flags().GetStringSlice("route")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting routes from flag: %s", err),
+				output,
+			)
+		}

 		request := &v1.DebugCreateNodeRequest{
 			Key: registrationID,
@@ -53,9 +115,13 @@ var createNodeCmd = &cobra.Command{

 		response, err := client.DebugCreateNode(ctx, request)
 		if err != nil {
-			return fmt.Errorf("creating node: %w", err)
+			ErrorOutput(
+				err,
+				"Cannot create node: "+status.Convert(err).Message(),
+				output,
+			)
 		}

-		return printOutput(cmd, response.GetNode(), "Node created")
-	}),
+		SuccessOutput(response.GetNode(), "Node created", output)
+	},
 }
@@ -15,12 +15,14 @@ var dumpConfigCmd = &cobra.Command{
 	Use:    "dumpConfig",
 	Short:  "dump current config to /etc/headscale/config.dump.yaml, integration test only",
 	Hidden: true,
-	RunE: func(cmd *cobra.Command, args []string) error {
-		err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
-		if err != nil {
-			return fmt.Errorf("dumping config: %w", err)
-		}
-
+	Args: func(cmd *cobra.Command, args []string) error {
 		return nil
 	},
+	Run: func(cmd *cobra.Command, args []string) {
+		err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
+		if err != nil {
+			//nolint
+			fmt.Println("Failed to dump config")
+		}
+	},
 }
@@ -21,17 +21,22 @@ var generateCmd = &cobra.Command{
 var generatePrivateKeyCmd = &cobra.Command{
 	Use:   "private-key",
 	Short: "Generate a private key for the headscale server",
-	RunE: func(cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 		machineKey := key.NewMachine()

 		machineKeyStr, err := machineKey.MarshalText()
 		if err != nil {
-			return fmt.Errorf("marshalling machine key: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting machine key from flag: %s", err),
+				output,
+			)
 		}

-		return printOutput(cmd, map[string]string{
+		SuccessOutput(map[string]string{
 			"private_key": string(machineKeyStr),
 		},
-			string(machineKeyStr))
+			string(machineKeyStr), output)
 	},
 }
@@ -1,27 +0,0 @@
-package cli
-
-import (
-	"context"
-	"fmt"
-
-	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/spf13/cobra"
-)
-
-func init() {
-	rootCmd.AddCommand(healthCmd)
-}
-
-var healthCmd = &cobra.Command{
-	Use:   "health",
-	Short: "Check the health of the Headscale server",
-	Long:  "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		response, err := client.Health(ctx, &v1.HealthRequest{})
-		if err != nil {
-			return fmt.Errorf("checking health: %w", err)
-		}
-
-		return printOutput(cmd, response, "")
-	}),
-}
@@ -1,8 +1,8 @@
 package cli

 import (
-	"context"
 	"encoding/json"
+	"errors"
 	"fmt"
 	"net"
 	"net/http"
@@ -10,22 +10,15 @@ import (
 	"strconv"
 	"time"

-	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
 	"github.com/oauth2-proxy/mockoidc"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 )

-// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
-type Error string
-
-func (e Error) Error() string { return string(e) }
-
 const (
 	errMockOidcClientIDNotDefined     = Error("MOCKOIDC_CLIENT_ID not defined")
 	errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
 	errMockOidcPortNotDefined         = Error("MOCKOIDC_PORT not defined")
-	errMockOidcUsersNotDefined        = Error("MOCKOIDC_USERS not defined")

 	refreshTTL = 60 * time.Minute
 )

@@ -39,13 +32,12 @@ var mockOidcCmd = &cobra.Command{
 	Use:   "mockoidc",
 	Short: "Runs a mock OIDC server for testing",
 	Long:  "This internal command runs a OpenID Connect for testing purposes",
-	RunE: func(cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
 		err := mockOIDC()
 		if err != nil {
-			return fmt.Errorf("running mock OIDC server: %w", err)
+			log.Error().Err(err).Msgf("Error running mock OIDC server")
+			os.Exit(1)
 		}
-
-		return nil
 	},
 }

@@ -54,47 +46,41 @@ func mockOIDC() error {
 	if clientID == "" {
 		return errMockOidcClientIDNotDefined
 	}

 	clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
 	if clientSecret == "" {
 		return errMockOidcClientSecretNotDefined
 	}

 	addrStr := os.Getenv("MOCKOIDC_ADDR")
 	if addrStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	portStr := os.Getenv("MOCKOIDC_PORT")
 	if portStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
 	if accessTTLOverride != "" {
 		newTTL, err := time.ParseDuration(accessTTLOverride)
 		if err != nil {
 			return err
 		}

 		accessTTL = newTTL
 	}

 	userStr := os.Getenv("MOCKOIDC_USERS")
 	if userStr == "" {
-		return errMockOidcUsersNotDefined
+		return errors.New("MOCKOIDC_USERS not defined")
 	}

 	var users []mockoidc.MockUser

 	err := json.Unmarshal([]byte(userStr), &users)
 	if err != nil {
 		return fmt.Errorf("unmarshalling users: %w", err)
 	}

-	log.Info().Interface(zf.Users, users).Msg("loading users from JSON")
+	log.Info().Interface("users", users).Msg("loading users from JSON")

-	log.Info().Msgf("access token TTL: %s", accessTTL)
+	log.Info().Msgf("Access token TTL: %s", accessTTL)

 	port, err := strconv.Atoi(portStr)
 	if err != nil {
@@ -106,7 +92,7 @@ func mockOIDC() error {
 		return err
 	}

-	listener, err := new(net.ListenConfig).Listen(context.Background(), "tcp", fmt.Sprintf("%s:%d", addrStr, port))
+	listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
 	if err != nil {
 		return err
 	}
@@ -115,10 +101,8 @@ func mockOIDC() error {
 	if err != nil {
 		return err
 	}
-	log.Info().Msgf("mock OIDC server listening on %s", listener.Addr().String())
-	log.Info().Msgf("issuer: %s", mock.Issuer())
+	log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
+	log.Info().Msgf("Issuer: %s", mock.Issuer())

 	c := make(chan struct{})
 	<-c

@@ -149,13 +133,12 @@ func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser
 		ErrorQueue:   &mockoidc.ErrorQueue{},
 	}

-	_ = mock.AddMiddleware(func(h http.Handler) http.Handler {
+	mock.AddMiddleware(func(h http.Handler) http.Handler {
 		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			log.Info().Msgf("request: %+v", r)
+			log.Info().Msgf("Request: %+v", r)
 			h.ServeHTTP(w, r)

 			if r.Response != nil {
-				log.Info().Msgf("response: %+v", r.Response)
+				log.Info().Msgf("Response: %+v", r.Response)
 			}
 		})
 	})
@@ -1,56 +1,104 @@
|
|||||||
package cli
|
package cli
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"log"
|
||||||
"net/netip"
|
"net/netip"
|
||||||
|
"slices"
|
||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
survey "github.com/AlecAivazis/survey/v2"
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/juanfont/headscale/hscontrol/util"
|
"github.com/juanfont/headscale/hscontrol/util"
|
||||||
"github.com/pterm/pterm"
|
"github.com/pterm/pterm"
|
||||||
"github.com/samber/lo"
|
"github.com/samber/lo"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"google.golang.org/protobuf/types/known/timestamppb"
|
"google.golang.org/grpc/status"
|
||||||
"tailscale.com/types/key"
|
"tailscale.com/types/key"
|
||||||
)
|
)
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(nodeCmd)
|
rootCmd.AddCommand(nodeCmd)
|
||||||
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
|
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
|
||||||
|
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
|
||||||
|
|
||||||
|
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
|
||||||
|
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
|
||||||
|
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
||||||
|
listNodesNamespaceFlag.Hidden = true
|
||||||
nodeCmd.AddCommand(listNodesCmd)
|
nodeCmd.AddCommand(listNodesCmd)
|
||||||
|
|
||||||
listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
nodeCmd.AddCommand(listNodeRoutesCmd)
|
nodeCmd.AddCommand(listNodeRoutesCmd)
|
||||||
|
|
||||||
registerNodeCmd.Flags().StringP("user", "u", "", "User")
|
registerNodeCmd.Flags().StringP("user", "u", "", "User")
|
||||||
|
|
||||||
|
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
||||||
|
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
|
||||||
|
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
||||||
|
registerNodeNamespaceFlag.Hidden = true
|
||||||
|
|
||||||
|
err := registerNodeCmd.MarkFlagRequired("user")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
|
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
|
||||||
mustMarkRequired(registerNodeCmd, "user", "key")
|
err = registerNodeCmd.MarkFlagRequired("key")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
nodeCmd.AddCommand(registerNodeCmd)
|
nodeCmd.AddCommand(registerNodeCmd)
|
||||||
|
|
||||||
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
|
err = expireNodeCmd.MarkFlagRequired("identifier")
|
||||||
expireNodeCmd.Flags().BoolP("disable", "d", false, "Disable key expiry (node will never expire)")
|
if err != nil {
|
||||||
mustMarkRequired(expireNodeCmd, "identifier")
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
nodeCmd.AddCommand(expireNodeCmd)
|
nodeCmd.AddCommand(expireNodeCmd)
|
||||||
|
|
||||||
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
mustMarkRequired(renameNodeCmd, "identifier")
|
err = renameNodeCmd.MarkFlagRequired("identifier")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
nodeCmd.AddCommand(renameNodeCmd)
|
nodeCmd.AddCommand(renameNodeCmd)
|
||||||
|
|
||||||
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
mustMarkRequired(deleteNodeCmd, "identifier")
|
err = deleteNodeCmd.MarkFlagRequired("identifier")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
nodeCmd.AddCommand(deleteNodeCmd)
|
nodeCmd.AddCommand(deleteNodeCmd)
|
||||||
|
|
||||||
|
moveNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
|
||||||
|
err = moveNodeCmd.MarkFlagRequired("identifier")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
|
|
||||||
|
moveNodeCmd.Flags().Uint64P("user", "u", 0, "New user")
|
||||||
|
|
||||||
|
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
||||||
|
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
|
||||||
|
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
||||||
|
moveNodeNamespaceFlag.Hidden = true
|
||||||
|
|
||||||
|
err = moveNodeCmd.MarkFlagRequired("user")
|
||||||
|
if err != nil {
|
||||||
|
log.Fatal(err.Error())
|
||||||
|
}
|
||||||
|
nodeCmd.AddCommand(moveNodeCmd)
|
||||||
|
|
||||||
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
mustMarkRequired(tagCmd, "identifier")
|
tagCmd.MarkFlagRequired("identifier")
|
||||||
tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
|
tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
|
||||||
nodeCmd.AddCommand(tagCmd)
|
nodeCmd.AddCommand(tagCmd)
|
||||||
|
|
||||||
approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
mustMarkRequired(approveRoutesCmd, "identifier")
|
approveRoutesCmd.MarkFlagRequired("identifier")
|
||||||
approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
|
approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
|
||||||
nodeCmd.AddCommand(approveRoutesCmd)
|
nodeCmd.AddCommand(approveRoutesCmd)
|
||||||
|
|
||||||
@@ -60,16 +108,31 @@ func init() {
|
|||||||
var nodeCmd = &cobra.Command{
|
var nodeCmd = &cobra.Command{
|
||||||
Use: "nodes",
|
Use: "nodes",
|
||||||
Short: "Manage the nodes of Headscale",
|
Short: "Manage the nodes of Headscale",
|
||||||
Aliases: []string{"node"},
|
Aliases: []string{"node", "machine", "machines"},
|
||||||
}
|
}
|
||||||
|
|
||||||
var registerNodeCmd = &cobra.Command{
|
var registerNodeCmd = &cobra.Command{
|
||||||
Use: "register",
|
Use: "register",
|
||||||
Short: "Registers a node to your network",
|
Short: "Registers a node to your network",
|
||||||
Deprecated: "use 'headscale auth register --auth-id <id> --user <user>' instead",
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
output, _ := cmd.Flags().GetString("output")
|
||||||
user, _ := cmd.Flags().GetString("user")
|
user, err := cmd.Flags().GetString("user")
|
||||||
registrationID, _ := cmd.Flags().GetString("key")
|
if err != nil {
|
||||||
|
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
||||||
|
}
|
||||||
|
|
||||||
|
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
||||||
|
defer cancel()
|
||||||
|
defer conn.Close()
|
||||||
|
|
||||||
|
registrationID, err := cmd.Flags().GetString("key")
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Error getting node key from flag: %s", err),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
request := &v1.RegisterNodeRequest{
|
request := &v1.RegisterNodeRequest{
|
||||||
Key: registrationID,
|
Key: registrationID,
|
||||||
@@ -78,49 +141,108 @@ var registerNodeCmd = &cobra.Command{
|
|||||||
|
|
||||||
response, err := client.RegisterNode(ctx, request)
|
response, err := client.RegisterNode(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("registering node: %w", err)
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf(
|
||||||
|
"Cannot register node: %s\n",
|
||||||
|
status.Convert(err).Message(),
|
||||||
|
),
|
||||||
|
output,
|
||||||
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
return printOutput(
|
SuccessOutput(
|
||||||
cmd,
|
|
||||||
response.GetNode(),
|
response.GetNode(),
|
||||||
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
|
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()), output)
|
||||||
}),
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var listNodesCmd = &cobra.Command{
|
var listNodesCmd = &cobra.Command{
|
||||||
Use: "list",
|
Use: "list",
|
||||||
Short: "List nodes",
|
Short: "List nodes",
|
||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
user, _ := cmd.Flags().GetString("user")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
user, err := cmd.Flags().GetString("user")
|
||||||
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{User: user})
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("listing nodes: %w", err)
|
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
||||||
|
}
|
||||||
|
showTags, err := cmd.Flags().GetBool("tags")
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output)
|
||||||
}
|
}
|
||||||
|
|
||||||
return printListOutput(cmd, response.GetNodes(), func() error {
|
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
||||||
tableData, err := nodesToPtables(user, response.GetNodes())
|
defer cancel()
|
||||||
if err != nil {
|
defer conn.Close()
|
||||||
return fmt.Errorf("converting to table: %w", err)
|
|
||||||
|
request := &v1.ListNodesRequest{
|
||||||
|
User: user,
|
||||||
}
|
}
|
||||||
|
|
||||||
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
response, err := client.ListNodes(ctx, request)
|
||||||
})
|
if err != nil {
|
||||||
}),
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
"Cannot get nodes: "+status.Convert(err).Message(),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
if output != "" {
|
||||||
|
SuccessOutput(response.GetNodes(), "", output)
|
||||||
|
}
|
||||||
|
|
||||||
|
tableData, err := nodesToPtables(user, showTags, response.GetNodes())
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
||||||
|
}
|
||||||
|
|
||||||
|
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Failed to render pterm table: %s", err),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var listNodeRoutesCmd = &cobra.Command{
|
var listNodeRoutesCmd = &cobra.Command{
|
||||||
Use: "list-routes",
|
Use: "list-routes",
|
||||||
Short: "List routes available on nodes",
|
Short: "List routes available on nodes",
|
||||||
Aliases: []string{"lsr", "routes"},
|
Aliases: []string{"lsr", "routes"},
|
||||||
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
identifier, _ := cmd.Flags().GetUint64("identifier")
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
identifier, err := cmd.Flags().GetUint64("identifier")
|
||||||
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{})
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("listing nodes: %w", err)
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Error converting ID to integer: %s", err),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
||||||
|
defer cancel()
|
||||||
|
defer conn.Close()
|
||||||
|
|
||||||
|
request := &v1.ListNodesRequest{}
|
||||||
|
|
||||||
|
response, err := client.ListNodes(ctx, request)
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
"Cannot get nodes: "+status.Convert(err).Message(),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
if output != "" {
|
||||||
|
SuccessOutput(response.GetNodes(), "", output)
|
||||||
}
|
}
|
||||||
|
|
||||||
nodes := response.GetNodes()
|
nodes := response.GetNodes()
|
||||||
@@ -128,7 +250,6 @@ var listNodeRoutesCmd = &cobra.Command{
|
|||||||
for _, node := range response.GetNodes() {
|
for _, node := range response.GetNodes() {
|
||||||
if node.GetId() == identifier {
|
if node.GetId() == identifier {
|
||||||
nodes = []*v1.Node{node}
|
nodes = []*v1.Node{node}
|
||||||
|
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -138,82 +259,92 @@ var listNodeRoutesCmd = &cobra.Command{
|
|||||||
return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0)
|
return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0)
|
||||||
})
|
})
|
||||||
|
|
||||||
return printListOutput(cmd, nodes, func() error {
|
tableData, err := nodeRoutesToPtables(nodes)
|
||||||
return pterm.DefaultTable.WithHasHeader().WithData(nodeRoutesToPtables(nodes)).Render()
|
if err != nil {
|
||||||
})
|
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
||||||
}),
|
}
|
||||||
|
|
||||||
|
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(
|
||||||
|
err,
|
||||||
|
fmt.Sprintf("Failed to render pterm table: %s", err),
|
||||||
|
output,
|
||||||
|
)
|
||||||
|
}
|
||||||
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
var expireNodeCmd = &cobra.Command{
|
var expireNodeCmd = &cobra.Command{
|
||||||
Use: "expire",
|
Use: "expire",
|
||||||
Short: "Expire (log out) a node in your network",
|
Short: "Expire (log out) a node in your network",
|
`expireNodeCmd` (continued):

```diff
-	Long: `Expiring a node will keep the node in the database and force it to reauthenticate.
-
-Use --disable to disable key expiry (node will never expire).`,
+	Long:    "Expiring a node will keep the node in the database and force it to reauthenticate.",
 	Aliases: []string{"logout", "exp", "e"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		identifier, _ := cmd.Flags().GetUint64("identifier")
-		disableExpiry, _ := cmd.Flags().GetBool("disable")
-
-		// Handle disable expiry - node will never expire.
-		if disableExpiry {
-			request := &v1.ExpireNodeRequest{
-				NodeId:        identifier,
-				DisableExpiry: true,
-			}
-
-			response, err := client.ExpireNode(ctx, request)
-			if err != nil {
-				return fmt.Errorf("disabling node expiry: %w", err)
-			}
-
-			return printOutput(cmd, response.GetNode(), "Node expiry disabled")
-		}
-
-		expiry, _ := cmd.Flags().GetString("expiry")
-
-		now := time.Now()
-		expiryTime := now
-		if expiry != "" {
-			var err error
-
-			expiryTime, err = time.Parse(time.RFC3339, expiry)
-			if err != nil {
-				return fmt.Errorf("parsing expiry time: %w", err)
-			}
-		}
-
-		request := &v1.ExpireNodeRequest{
-			NodeId: identifier,
-			Expiry: timestamppb.New(expiryTime),
-		}
-
-		response, err := client.ExpireNode(ctx, request)
-		if err != nil {
-			return fmt.Errorf("expiring node: %w", err)
-		}
-
-		if now.Equal(expiryTime) || now.After(expiryTime) {
-			return printOutput(cmd, response.GetNode(), "Node expired")
-		}
-
-		return printOutput(cmd, response.GetNode(), "Node expiration updated")
-	}),
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		request := &v1.ExpireNodeRequest{
+			NodeId: identifier,
+		}
+
+		response, err := client.ExpireNode(ctx, request)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf(
+					"Cannot expire node: %s\n",
+					status.Convert(err).Message(),
+				),
+				output,
+			)
+
+			return
+		}
+
+		SuccessOutput(response.GetNode(), "Node expired", output)
+	},
 }
```
`renameNodeCmd`:

```diff
 var renameNodeCmd = &cobra.Command{
 	Use:   "rename NEW_NAME",
 	Short: "Renames a node in your network",
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		identifier, _ := cmd.Flags().GetUint64("identifier")
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
 
 		newName := ""
 		if len(args) > 0 {
 			newName = args[0]
 		}
 
 		request := &v1.RenameNodeRequest{
 			NodeId:  identifier,
 			NewName: newName,
@@ -221,19 +352,43 @@ var renameNodeCmd = &cobra.Command{
 
 		response, err := client.RenameNode(ctx, request)
 		if err != nil {
-			return fmt.Errorf("renaming node: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf(
+					"Cannot rename node: %s\n",
+					status.Convert(err).Message(),
+				),
+				output,
+			)
+
+			return
 		}
 
-		return printOutput(cmd, response.GetNode(), "Node renamed")
-	}),
+		SuccessOutput(response.GetNode(), "Node renamed", output)
+	},
 }
```
`deleteNodeCmd`, and the added `moveNodeCmd`:

```diff
 var deleteNodeCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete a node",
 	Aliases: []string{"del"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		identifier, _ := cmd.Flags().GetUint64("identifier")
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
 
 		getRequest := &v1.GetNodeRequest{
 			NodeId: identifier,
@@ -241,31 +396,127 @@ var deleteNodeCmd = &cobra.Command{
 
 		getResponse, err := client.GetNode(ctx, getRequest)
 		if err != nil {
-			return fmt.Errorf("getting node: %w", err)
+			ErrorOutput(
+				err,
+				"Error getting node node: "+status.Convert(err).Message(),
+				output,
+			)
+
+			return
 		}
 
 		deleteRequest := &v1.DeleteNodeRequest{
 			NodeId: identifier,
 		}
 
-		if !confirmAction(cmd, fmt.Sprintf(
-			"Do you want to remove the node %s?",
-			getResponse.GetNode().GetName(),
-		)) {
-			return printOutput(cmd, map[string]string{"Result": "Node not deleted"}, "Node not deleted")
-		}
-
-		_, err = client.DeleteNode(ctx, deleteRequest)
-		if err != nil {
-			return fmt.Errorf("deleting node: %w", err)
-		}
-
-		return printOutput(
-			cmd,
-			map[string]string{"Result": "Node deleted"},
-			"Node deleted",
-		)
-	}),
+		confirm := false
+		force, _ := cmd.Flags().GetBool("force")
+		if !force {
+			prompt := &survey.Confirm{
+				Message: fmt.Sprintf(
+					"Do you want to remove the node %s?",
+					getResponse.GetNode().GetName(),
+				),
+			}
+			err = survey.AskOne(prompt, &confirm)
+			if err != nil {
+				return
+			}
+		}
+
+		if confirm || force {
+			response, err := client.DeleteNode(ctx, deleteRequest)
+			if output != "" {
+				SuccessOutput(response, "", output)
+
+				return
+			}
+			if err != nil {
+				ErrorOutput(
+					err,
+					"Error deleting node: "+status.Convert(err).Message(),
+					output,
+				)
+
+				return
+			}
+			SuccessOutput(
+				map[string]string{"Result": "Node deleted"},
+				"Node deleted",
+				output,
+			)
+		} else {
+			SuccessOutput(map[string]string{"Result": "Node not deleted"}, "Node not deleted", output)
+		}
+	},
+}
+
+var moveNodeCmd = &cobra.Command{
+	Use:     "move",
+	Short:   "Move node to another user",
+	Aliases: []string{"mv"},
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		user, err := cmd.Flags().GetUint64("user")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting user: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		getRequest := &v1.GetNodeRequest{
+			NodeId: identifier,
+		}
+
+		_, err = client.GetNode(ctx, getRequest)
+		if err != nil {
+			ErrorOutput(
+				err,
+				"Error getting node: "+status.Convert(err).Message(),
+				output,
+			)
+
+			return
+		}
+
+		moveRequest := &v1.MoveNodeRequest{
+			NodeId: identifier,
+			User:   user,
+		}
+
+		moveResponse, err := client.MoveNode(ctx, moveRequest)
+		if err != nil {
+			ErrorOutput(
+				err,
+				"Error moving node: "+status.Convert(err).Message(),
+				output,
+			)
+
+			return
+		}
+
+		SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output)
+	},
 }
```
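The added delete handler only proceeds when `confirm || force` holds: `--force` skips the `survey.Confirm` prompt entirely, otherwise the user's answer decides. A standalone sketch of that gate, with a plain answer string standing in for the interactive `survey` result (the function `shouldDelete` and its yes/no parsing are mine, not from the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// shouldDelete mirrors the confirm-or-force decision in the delete
// handler: force bypasses the prompt; otherwise a yes-like answer
// (stand-in for the survey.Confirm result) is required.
func shouldDelete(force bool, answer string) bool {
	if force {
		return true
	}
	switch strings.ToLower(strings.TrimSpace(answer)) {
	case "y", "yes":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(shouldDelete(true, ""))    // force wins without a prompt
	fmt.Println(shouldDelete(false, "y"))  // explicit confirmation
	fmt.Println(shouldDelete(false, "no")) // declined
}
```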
`backfillNodeIPsCmd`:

```diff
 var backfillNodeIPsCmd = &cobra.Command{
@@ -283,29 +534,42 @@ all nodes that are missing.
 If you remove IPv4 or IPv6 prefixes from the config,
 it can be run to remove the IPs that should no longer
 be assigned to nodes.`,
-	RunE: func(cmd *cobra.Command, args []string) error {
-		if !confirmAction(cmd, "Are you sure that you want to assign/remove IPs to/from nodes?") {
-			return nil
-		}
-
-		ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
-		if err != nil {
-			return fmt.Errorf("connecting to headscale: %w", err)
-		}
-		defer cancel()
-		defer conn.Close()
-
-		changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: true})
-		if err != nil {
-			return fmt.Errorf("backfilling IPs: %w", err)
-		}
-
-		return printOutput(cmd, changes, "Node IPs backfilled successfully")
+	Run: func(cmd *cobra.Command, args []string) {
+		var err error
+		output, _ := cmd.Flags().GetString("output")
+
+		confirm := false
+		prompt := &survey.Confirm{
+			Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
+		}
+		err = survey.AskOne(prompt, &confirm)
+		if err != nil {
+			return
+		}
+		if confirm {
+			ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+			defer cancel()
+			defer conn.Close()
+
+			changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
+			if err != nil {
+				ErrorOutput(
+					err,
+					"Error backfilling IPs: "+status.Convert(err).Message(),
+					output,
+				)
+
+				return
+			}
+
+			SuccessOutput(changes, "Node IPs backfilled successfully", output)
+		}
 	},
 }
```
`nodesToPtables`:

```diff
 func nodesToPtables(
 	currentUser string,
+	showTags bool,
 	nodes []*v1.Node,
 ) (pterm.TableData, error) {
 	tableHeader := []string{
@@ -315,7 +579,6 @@ func nodesToPtables(
 		"MachineKey",
 		"NodeKey",
 		"User",
-		"Tags",
 		"IP addresses",
 		"Ephemeral",
 		"Last seen",
@@ -323,6 +586,13 @@ func nodesToPtables(
 		"Connected",
 		"Expired",
 	}
+	if showTags {
+		tableHeader = append(tableHeader, []string{
+			"ForcedTags",
+			"InvalidTags",
+			"ValidTags",
+		}...)
+	}
 	tableData := pterm.TableData{tableHeader}
 
 	for _, node := range nodes {
@@ -331,30 +601,23 @@ func nodesToPtables(
 			ephemeral = true
 		}
 
-		var (
-			lastSeen     time.Time
-			lastSeenTime string
-		)
+		var lastSeen time.Time
+		var lastSeenTime string
 		if node.GetLastSeen() != nil {
 			lastSeen = node.GetLastSeen().AsTime()
-			lastSeenTime = lastSeen.Format(HeadscaleDateTimeFormat)
+			lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
 		}
 
-		var (
-			expiry     time.Time
-			expiryTime string
-		)
+		var expiry time.Time
+		var expiryTime string
 		if node.GetExpiry() != nil {
 			expiry = node.GetExpiry().AsTime()
-			expiryTime = expiry.Format(HeadscaleDateTimeFormat)
+			expiryTime = expiry.Format("2006-01-02 15:04:05")
 		} else {
 			expiryTime = "N/A"
 		}
 
 		var machineKey key.MachinePublic
 		err := machineKey.UnmarshalText(
 			[]byte(node.GetMachineKey()),
 		)
@@ -363,7 +626,6 @@ func nodesToPtables(
 		}
 
 		var nodeKey key.NodePublic
 		err = nodeKey.UnmarshalText(
 			[]byte(node.GetNodeKey()),
 		)
@@ -379,40 +641,50 @@ func nodesToPtables(
 		}
 
 		var expired string
-		if node.GetExpiry() != nil && node.GetExpiry().AsTime().Before(time.Now()) {
-			expired = pterm.LightRed("yes")
-		} else {
+		if expiry.IsZero() || expiry.After(time.Now()) {
 			expired = pterm.LightGreen("no")
+		} else {
+			expired = pterm.LightRed("yes")
 		}
 
-		var tagsBuilder strings.Builder
-		for _, tag := range node.GetTags() {
-			tagsBuilder.WriteString("\n" + tag)
-		}
-		tags := strings.TrimLeft(tagsBuilder.String(), "\n")
+		var forcedTags string
+		for _, tag := range node.GetForcedTags() {
+			forcedTags += "," + tag
+		}
+		forcedTags = strings.TrimLeft(forcedTags, ",")
+		var invalidTags string
+		for _, tag := range node.GetInvalidTags() {
+			if !slices.Contains(node.GetForcedTags(), tag) {
+				invalidTags += "," + pterm.LightRed(tag)
+			}
+		}
+		invalidTags = strings.TrimLeft(invalidTags, ",")
+		var validTags string
+		for _, tag := range node.GetValidTags() {
+			if !slices.Contains(node.GetForcedTags(), tag) {
+				validTags += "," + pterm.LightGreen(tag)
+			}
+		}
+		validTags = strings.TrimLeft(validTags, ",")
 
 		var user string
-		if node.GetUser() != nil {
-			user = node.GetUser().GetName()
+		if currentUser == "" || (currentUser == node.GetUser().GetName()) {
+			user = pterm.LightMagenta(node.GetUser().GetName())
+		} else {
+			// Shared into this user
+			user = pterm.LightYellow(node.GetUser().GetName())
 		}
 
-		var ipBuilder strings.Builder
+		var IPV4Address string
+		var IPV6Address string
 		for _, addr := range node.GetIpAddresses() {
-			ip, err := netip.ParseAddr(addr)
-			if err == nil {
-				if ipBuilder.Len() > 0 {
-					ipBuilder.WriteString("\n")
-				}
-
-				ipBuilder.WriteString(ip.String())
+			if netip.MustParseAddr(addr).Is4() {
+				IPV4Address = addr
+			} else {
+				IPV6Address = addr
 			}
 		}
 
-		ipAddresses := ipBuilder.String()
-
 		nodeData := []string{
 			strconv.FormatUint(node.GetId(), util.Base10),
 			node.GetName(),
@@ -420,14 +692,16 @@ func nodesToPtables(
 			machineKey.ShortString(),
 			nodeKey.ShortString(),
 			user,
-			tags,
-			ipAddresses,
+			strings.Join([]string{IPV4Address, IPV6Address}, ", "),
 			strconv.FormatBool(ephemeral),
 			lastSeenTime,
 			expiryTime,
 			online,
 			expired,
 		}
+		if showTags {
+			nodeData = append(nodeData, []string{forcedTags, invalidTags, validTags}...)
+		}
 		tableData = append(
 			tableData,
 			nodeData,
```
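The added tag columns in `nodesToPtables` all use the same accumulate-then-trim pattern: prepend a comma before every tag, then strip the leading comma with `strings.TrimLeft`. A minimal standalone sketch of that pattern (the helper name `joinTags` is mine, not from the repository):

```go
package main

import (
	"fmt"
	"strings"
)

// joinTags reproduces the comma-accumulate-then-trim pattern used for
// the ForcedTags/InvalidTags/ValidTags columns.
func joinTags(tags []string) string {
	var joined string
	for _, tag := range tags {
		joined += "," + tag
	}
	// TrimLeft removes the comma that was prepended before the first tag;
	// an empty slice yields an empty string.
	return strings.TrimLeft(joined, ",")
}

func main() {
	fmt.Println(joinTags([]string{"tag:web", "tag:db"})) // tag:web,tag:db
}
```

For this use the pattern is equivalent to `strings.Join(tags, ",")`; the explicit loop only matters once each element is wrapped in a pterm color call first.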
`nodeRoutesToPtables`:

```diff
@@ -439,7 +713,7 @@ func nodesToPtables(
 
 func nodeRoutesToPtables(
 	nodes []*v1.Node,
-) pterm.TableData {
+) (pterm.TableData, error) {
 	tableHeader := []string{
 		"ID",
 		"Hostname",
@@ -453,9 +727,9 @@ func nodeRoutesToPtables(
 		nodeData := []string{
 			strconv.FormatUint(node.GetId(), util.Base10),
 			node.GetGivenName(),
-			strings.Join(node.GetApprovedRoutes(), "\n"),
-			strings.Join(node.GetAvailableRoutes(), "\n"),
-			strings.Join(node.GetSubnetRoutes(), "\n"),
+			strings.Join(node.GetApprovedRoutes(), ", "),
+			strings.Join(node.GetAvailableRoutes(), ", "),
+			strings.Join(node.GetSubnetRoutes(), ", "),
 		}
 		tableData = append(
 			tableData,
@@ -463,50 +737,120 @@ func nodeRoutesToPtables(
 		)
 	}
 
-	return tableData
+	return tableData, nil
 }
```
`tagCmd`:

```diff
 var tagCmd = &cobra.Command{
 	Use:     "tag",
 	Short:   "Manage the tags of a node",
 	Aliases: []string{"tags", "t"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		identifier, _ := cmd.Flags().GetUint64("identifier")
-		tagsToSet, _ := cmd.Flags().GetStringSlice("tags")
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		// retrieve flags from CLI
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+		tagsToSet, err := cmd.Flags().GetStringSlice("tags")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
+				output,
+			)
+
+			return
+		}
 
 		// Sending tags to node
 		request := &v1.SetTagsRequest{
 			NodeId: identifier,
 			Tags:   tagsToSet,
 		}
 
 		resp, err := client.SetTags(ctx, request)
 		if err != nil {
-			return fmt.Errorf("setting tags: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error while sending tags to headscale: %s", err),
+				output,
+			)
+
+			return
 		}
 
-		return printOutput(cmd, resp.GetNode(), "Node updated")
-	}),
+		if resp != nil {
+			SuccessOutput(
+				resp.GetNode(),
+				"Node updated",
+				output,
+			)
+		}
+	},
 }
```
`approveRoutesCmd`:

```diff
 var approveRoutesCmd = &cobra.Command{
 	Use:   "approve-routes",
 	Short: "Manage the approved routes of a node",
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		identifier, _ := cmd.Flags().GetUint64("identifier")
-		routes, _ := cmd.Flags().GetStringSlice("routes")
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		// retrieve flags from CLI
+		identifier, err := cmd.Flags().GetUint64("identifier")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error converting ID to integer: %s", err),
+				output,
+			)
+
+			return
+		}
+		routes, err := cmd.Flags().GetStringSlice("routes")
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error retrieving list of routes to add to node, %v", err),
+				output,
+			)
+
+			return
+		}
 
 		// Sending routes to node
 		request := &v1.SetApprovedRoutesRequest{
 			NodeId: identifier,
 			Routes: routes,
 		}
 
 		resp, err := client.SetApprovedRoutes(ctx, request)
 		if err != nil {
-			return fmt.Errorf("setting approved routes: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error while sending routes to headscale: %s", err),
+				output,
+			)
+
+			return
 		}
 
-		return printOutput(cmd, resp.GetNode(), "Node updated")
-	}),
+		if resp != nil {
+			SuccessOutput(
+				resp.GetNode(),
+				"Node updated",
+				output,
+			)
+		}
+	},
 }
```
@@ -1,54 +1,32 @@
|
|||||||
package cli
|
package cli
|
||||||
|
|
||||||
import (
|
import (
|
||||||
"errors"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"io"
|
||||||
"os"
|
"os"
|
||||||
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/juanfont/headscale/hscontrol/db"
|
|
||||||
"github.com/juanfont/headscale/hscontrol/policy"
|
"github.com/juanfont/headscale/hscontrol/policy"
|
||||||
"github.com/juanfont/headscale/hscontrol/types"
|
"github.com/juanfont/headscale/hscontrol/types"
|
||||||
|
"github.com/rs/zerolog/log"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"tailscale.com/types/views"
|
"tailscale.com/types/views"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
|
||||||
bypassFlag = "bypass-grpc-and-access-database-directly" //nolint:gosec // not a credential
|
|
||||||
)
|
|
||||||
|
|
||||||
var errAborted = errors.New("command aborted by user")
|
|
||||||
|
|
||||||
// bypassDatabase loads the server config and opens the database directly,
|
|
||||||
// bypassing the gRPC server. The caller is responsible for closing the
|
|
||||||
// returned database handle.
|
|
||||||
func bypassDatabase() (*db.HSDatabase, error) {
|
|
||||||
cfg, err := types.LoadServerConfig()
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("loading config: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
d, err := db.NewHeadscaleDatabase(cfg)
|
|
||||||
if err != nil {
|
|
||||||
return nil, fmt.Errorf("opening database: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return d, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(policyCmd)
|
rootCmd.AddCommand(policyCmd)
|
||||||
|
|
||||||
getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
|
|
||||||
policyCmd.AddCommand(getPolicy)
|
policyCmd.AddCommand(getPolicy)
|
||||||
|
|
||||||
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
|
setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
|
||||||
setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
|
if err := setPolicy.MarkFlagRequired("file"); err != nil {
|
||||||
mustMarkRequired(setPolicy, "file")
|
log.Fatal().Err(err).Msg("")
|
||||||
|
}
|
||||||
policyCmd.AddCommand(setPolicy)
|
policyCmd.AddCommand(setPolicy)
|
||||||
|
|
||||||
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
|
checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
|
||||||
mustMarkRequired(checkPolicy, "file")
|
if err := checkPolicy.MarkFlagRequired("file"); err != nil {
|
||||||
|
log.Fatal().Err(err).Msg("")
|
||||||
|
}
|
||||||
policyCmd.AddCommand(checkPolicy)
|
policyCmd.AddCommand(checkPolicy)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -61,47 +39,23 @@ var getPolicy = &cobra.Command{
|
|||||||
Use: "get",
|
Use: "get",
|
||||||
Short: "Print the current ACL Policy",
|
Short: "Print the current ACL Policy",
|
||||||
Aliases: []string{"show", "view", "fetch"},
|
Aliases: []string{"show", "view", "fetch"},
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
var policyData string
|
output, _ := cmd.Flags().GetString("output")
|
||||||
|
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
||||||
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
|
|
||||||
if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
|
|
||||||
return errAborted
|
|
||||||
}
|
|
||||||
|
|
||||||
d, err := bypassDatabase()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer d.Close()
|
|
||||||
|
|
||||||
pol, err := d.GetPolicy()
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("loading policy from database: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
policyData = pol.Data
|
|
||||||
} else {
|
|
||||||
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("connecting to headscale: %w", err)
|
|
||||||
}
|
|
||||||
defer cancel()
|
defer cancel()
|
||||||
defer conn.Close()
|
defer conn.Close()
|
||||||
|
|
||||||
response, err := client.GetPolicy(ctx, &v1.GetPolicyRequest{})
|
request := &v1.GetPolicyRequest{}
|
||||||
|
|
||||||
|
response, err := client.GetPolicy(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("loading ACL policy: %w", err)
|
ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
|
||||||
}
|
}
|
||||||
|
|
||||||
policyData = response.GetPolicy()
|
// TODO(pallabpain): Maybe print this better?
|
||||||
}
|
// This does not pass output as we dont support yaml, json or json-line
|
||||||
|
// output for this command. It is HuJSON already.
|
||||||
// This does not pass output format as we don't support yaml, json or
|
SuccessOutput("", response.GetPolicy(), "")
|
||||||
// json-line output for this command. It is HuJSON already.
|
|
||||||
fmt.Println(policyData)
|
|
||||||
|
|
||||||
return nil
|
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -112,79 +66,58 @@ var setPolicy = &cobra.Command{
|
|||||||
Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
|
Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
|
||||||
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
|
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
|
||||||
Aliases: []string{"put", "update"},
|
Aliases: []string{"put", "update"},
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
Run: func(cmd *cobra.Command, args []string) {
|
||||||
|
output, _ := cmd.Flags().GetString("output")
|
||||||
policyPath, _ := cmd.Flags().GetString("file")
|
policyPath, _ := cmd.Flags().GetString("file")
|
||||||
|
|
||||||
policyBytes, err := os.ReadFile(policyPath)
|
f, err := os.Open(policyPath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("reading policy file: %w", err)
|
ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
|
||||||
|
}
|
||||||
|
defer f.Close()
|
||||||
|
|
||||||
|
policyBytes, err := io.ReadAll(f)
|
||||||
|
if err != nil {
|
||||||
|
ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
|
||||||
}
|
}
|
||||||
|
|
||||||
if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
|
|
||||||
if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
|
|
||||||
return errAborted
|
|
||||||
}
|
|
||||||
|
|
||||||
d, err := bypassDatabase()
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
defer d.Close()
|
|
||||||
|
|
||||||
users, err := d.ListUsers()
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("loading users for policy validation: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("parsing policy file: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
_, err = d.SetPolicy(string(policyBytes))
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("setting ACL policy: %w", err)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
|
request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
|
||||||
|
|
||||||
-		ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
-		if err != nil {
-			return fmt.Errorf("connecting to headscale: %w", err)
-		}
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
 		defer cancel()
 		defer conn.Close()
 
-		_, err = client.SetPolicy(ctx, request)
-		if err != nil {
-			return fmt.Errorf("setting ACL policy: %w", err)
+		if _, err := client.SetPolicy(ctx, request); err != nil {
+			ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
 		}
 
-		fmt.Println("Policy updated.")
+		SuccessOutput(nil, "Policy updated.", "")
 
-		return nil
 	},
 }
 
 var checkPolicy = &cobra.Command{
 	Use:   "check",
 	Short: "Check the Policy file for errors",
-	RunE: func(cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 		policyPath, _ := cmd.Flags().GetString("file")
 
-		policyBytes, err := os.ReadFile(policyPath)
+		f, err := os.Open(policyPath)
 		if err != nil {
-			return fmt.Errorf("reading policy file: %w", err)
+			ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
+		}
+		defer f.Close()
+
+		policyBytes, err := io.ReadAll(f)
+		if err != nil {
+			ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
 		}
 
 		_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
 		if err != nil {
-			return fmt.Errorf("parsing policy file: %w", err)
+			ErrorOutput(err, fmt.Sprintf("Error parsing the policy file: %s", err), output)
 		}
 
-		fmt.Println("Policy is valid")
+		SuccessOutput(nil, "Policy is valid", "")
 
-		return nil
 	},
 }
@@ -1,15 +1,17 @@
 package cli
 
 import (
-	"context"
 	"fmt"
 	"strconv"
 	"strings"
+	"time"
 
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol/util"
+	"github.com/prometheus/common/model"
 	"github.com/pterm/pterm"
+	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
+	"google.golang.org/protobuf/types/known/timestamppb"
 )
 
 const (
@@ -18,10 +20,20 @@ const (
 
 func init() {
 	rootCmd.AddCommand(preauthkeysCmd)
+	preauthkeysCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
+
+	preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
+	pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
+	pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
+	pakNamespaceFlag.Hidden = true
+
+	err := preauthkeysCmd.MarkPersistentFlagRequired("user")
+	if err != nil {
+		log.Fatal().Err(err).Msg("")
+	}
 	preauthkeysCmd.AddCommand(listPreAuthKeys)
 	preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
 	preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
-	preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
 	createPreAuthKeyCmd.PersistentFlags().
 		Bool("reusable", false, "Make the preauthkey reusable")
 	createPreAuthKeyCmd.PersistentFlags().
@@ -30,9 +42,6 @@ func init() {
 		StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
 	createPreAuthKeyCmd.Flags().
 		StringSlice("tags", []string{}, "Tags to automatically assign to node")
-	createPreAuthKeyCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
-	expirePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
-	deletePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
 }
 
 var preauthkeysCmd = &cobra.Command{
@@ -43,136 +52,183 @@ var preauthkeysCmd = &cobra.Command{
 
 var listPreAuthKeys = &cobra.Command{
 	Use:     "list",
-	Short:   "List all preauthkeys",
+	Short:   "List the preauthkeys for this user",
 	Aliases: []string{"ls", "show"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{})
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 
+		user, err := cmd.Flags().GetUint64("user")
 		if err != nil {
-			return fmt.Errorf("listing preauthkeys: %w", err)
+			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+		}
 
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		request := &v1.ListPreAuthKeysRequest{
+			User: user,
+		}
+
+		response, err := client.ListPreAuthKeys(ctx, request)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Error getting the list of keys: %s", err),
+				output,
+			)
+
+			return
+		}
+
+		if output != "" {
+			SuccessOutput(response.GetPreAuthKeys(), "", output)
 		}
 
-		return printListOutput(cmd, response.GetPreAuthKeys(), func() error {
 		tableData := pterm.TableData{
 			{
 				"ID",
-				"Key/Prefix",
+				"Key",
 				"Reusable",
 				"Ephemeral",
 				"Used",
 				"Expiration",
 				"Created",
-				"Owner",
+				"Tags",
 			},
 		}
 
 		for _, key := range response.GetPreAuthKeys() {
 			expiration := "-"
 			if key.GetExpiration() != nil {
 				expiration = ColourTime(key.GetExpiration().AsTime())
 			}
 
-			var owner string
-			if len(key.GetAclTags()) > 0 {
-				owner = strings.Join(key.GetAclTags(), "\n")
-			} else if key.GetUser() != nil {
-				owner = key.GetUser().GetName()
-			} else {
-				owner = "-"
+			aclTags := ""
+			for _, tag := range key.GetAclTags() {
+				aclTags += "," + tag
 			}
 
+			aclTags = strings.TrimLeft(aclTags, ",")
+
 			tableData = append(tableData, []string{
-				strconv.FormatUint(key.GetId(), util.Base10),
+				strconv.FormatUint(key.GetId(), 10),
 				key.GetKey(),
 				strconv.FormatBool(key.GetReusable()),
 				strconv.FormatBool(key.GetEphemeral()),
 				strconv.FormatBool(key.GetUsed()),
 				expiration,
-				key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
-				owner,
+				key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
+				aclTags,
 			})
 		}
 
-			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		})
-	}),
+		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Failed to render pterm table: %s", err),
+				output,
+			)
+		}
+	},
 }
 
 var createPreAuthKeyCmd = &cobra.Command{
 	Use:     "create",
-	Short:   "Creates a new preauthkey",
+	Short:   "Creates a new preauthkey in the specified user",
 	Aliases: []string{"c", "new"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		user, _ := cmd.Flags().GetUint64("user")
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 
+		user, err := cmd.Flags().GetUint64("user")
+		if err != nil {
+			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+		}
+
 		reusable, _ := cmd.Flags().GetBool("reusable")
 		ephemeral, _ := cmd.Flags().GetBool("ephemeral")
 		tags, _ := cmd.Flags().GetStringSlice("tags")
 
-		expiration, err := expirationFromFlag(cmd)
-		if err != nil {
-			return err
-		}
-
 		request := &v1.CreatePreAuthKeyRequest{
 			User:      user,
 			Reusable:  reusable,
 			Ephemeral: ephemeral,
 			AclTags:   tags,
-			Expiration: expiration,
 		}
 
+		durationStr, _ := cmd.Flags().GetString("expiration")
+
+		duration, err := model.ParseDuration(durationStr)
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Could not parse duration: %s\n", err),
+				output,
+			)
+		}
+
+		expiration := time.Now().UTC().Add(time.Duration(duration))
+
+		log.Trace().
+			Dur("expiration", time.Duration(duration)).
+			Msg("expiration has been set")
+
+		request.Expiration = timestamppb.New(expiration)
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
 		response, err := client.CreatePreAuthKey(ctx, request)
 		if err != nil {
-			return fmt.Errorf("creating preauthkey: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
+				output,
+			)
 		}
 
-		return printOutput(cmd, response.GetPreAuthKey(), response.GetPreAuthKey().GetKey())
-	}),
+		SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
+	},
 }
 
 var expirePreAuthKeyCmd = &cobra.Command{
-	Use:     "expire",
+	Use:     "expire KEY",
 	Short:   "Expire a preauthkey",
 	Aliases: []string{"revoke", "exp", "e"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, _ := cmd.Flags().GetUint64("id")
-
-		if id == 0 {
-			return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
+	Args: func(cmd *cobra.Command, args []string) error {
+		if len(args) < 1 {
+			return errMissingParameter
 		}
 
+		return nil
+	},
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+		user, err := cmd.Flags().GetUint64("user")
+		if err != nil {
+			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+		}
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
 		request := &v1.ExpirePreAuthKeyRequest{
-			Id: id,
+			User: user,
+			Key:  args[0],
 		}
 
 		response, err := client.ExpirePreAuthKey(ctx, request)
 		if err != nil {
-			return fmt.Errorf("expiring preauthkey: %w", err)
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
+				output,
+			)
 		}
 
-		return printOutput(cmd, response, "Key expired")
-	}),
-}
-
-var deletePreAuthKeyCmd = &cobra.Command{
-	Use:     "delete",
-	Short:   "Delete a preauthkey",
-	Aliases: []string{"del", "rm", "d"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, _ := cmd.Flags().GetUint64("id")
-
-		if id == 0 {
-			return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
-		}
-
-		request := &v1.DeletePreAuthKeyRequest{
-			Id: id,
-		}
-
-		response, err := client.DeletePreAuthKey(ctx, request)
-		if err != nil {
-			return fmt.Errorf("deleting preauthkey: %w", err)
-		}
-
-		return printOutput(cmd, response, "Key deleted")
-	}),
+		SuccessOutput(response, "Key expired", output)
+	},
 }
@@ -7,7 +7,7 @@ import (
 )
 
 func ColourTime(date time.Time) string {
-	dateStr := date.Format(HeadscaleDateTimeFormat)
+	dateStr := date.Format("2006-01-02 15:04:05")
 
 	if date.After(time.Now()) {
 		dateStr = pterm.LightGreen(dateStr)
@@ -1,10 +1,10 @@
 package cli
 
 import (
+	"fmt"
 	"os"
 	"runtime"
 	"slices"
-	"strings"
 
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/rs/zerolog"
@@ -14,6 +14,10 @@ import (
 	"github.com/tcnksm/go-latest"
 )
 
+const (
+	deprecateNamespaceMessage = "use --user"
+)
+
 var cfgFile string = ""
 
 func init() {
@@ -34,34 +38,25 @@ func init() {
 		StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
 	rootCmd.PersistentFlags().
 		Bool("force", false, "Disable prompts and forces the execution")
-
-	// Re-enable usage output only for flag-parsing errors; runtime errors
-	// from RunE should never dump usage text.
-	rootCmd.SetFlagErrorFunc(func(cmd *cobra.Command, err error) error {
-		cmd.SilenceUsage = false
-
-		return err
-	})
 }
 
 func initConfig() {
 	if cfgFile == "" {
 		cfgFile = os.Getenv("HEADSCALE_CONFIG")
 	}
 
 	if cfgFile != "" {
 		err := types.LoadConfig(cfgFile, true)
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msgf("error loading config file %s", cfgFile)
+			log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile)
 		}
 	} else {
 		err := types.LoadConfig("", false)
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msgf("error loading config")
+			log.Fatal().Caller().Err(err).Msgf("Error loading config")
 		}
 	}
 
-	machineOutput := hasMachineOutputFlag()
+	machineOutput := HasMachineOutputFlag()
 
 	// If the user has requested a "node" readable format,
 	// then disable login so the output remains valid.
@@ -76,66 +71,25 @@ func initConfig() {
 
 	disableUpdateCheck := viper.GetBool("disable_check_updates")
 	if !disableUpdateCheck && !machineOutput {
-		versionInfo := types.GetVersionInfo()
 		if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
-			!versionInfo.Dirty {
+			types.Version != "dev" {
 			githubTag := &latest.GithubTag{
 				Owner:      "juanfont",
 				Repository: "headscale",
-				TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
 			}
-			res, err := latest.Check(githubTag, versionInfo.Version)
+			res, err := latest.Check(githubTag, types.Version)
 			if err == nil && res.Outdated {
 				//nolint
 				log.Warn().Msgf(
 					"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
 					res.Current,
-					versionInfo.Version,
+					types.Version,
 				)
 			}
 		}
 	}
 }
 
-var prereleases = []string{"alpha", "beta", "rc", "dev"}
-
-func isPreReleaseVersion(version string) bool {
-	for _, unstable := range prereleases {
-		if strings.Contains(version, unstable) {
-			return true
-		}
-	}
-
-	return false
-}
-
-// filterPreReleasesIfStable returns a function that filters out
-// pre-release tags if the current version is stable.
-// If the current version is a pre-release, it does not filter anything.
-// versionFunc is a function that returns the current version string, it is
-// a func for testability.
-func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
-	return func(tag string) bool {
-		version := versionFunc()
-
-		// If we are on a pre-release version, then we do not filter anything
-		// as we want to recommend the user the latest pre-release.
-		if isPreReleaseVersion(version) {
-			return false
-		}
-
-		// If we are on a stable release, filter out pre-releases.
-		for _, ignore := range prereleases {
-			if strings.Contains(tag, ignore) {
-				return true
-			}
-		}
-
-		return false
-	}
-}
-
 var rootCmd = &cobra.Command{
 	Use:   "headscale",
 	Short: "headscale - a Tailscale control server",
@@ -143,15 +97,11 @@ var rootCmd = &cobra.Command{
 headscale is an open source implementation of the Tailscale control server
 
 https://github.com/juanfont/headscale`,
-	SilenceErrors: true,
-	SilenceUsage:  true,
 }
 
 func Execute() {
-	cmd, err := rootCmd.ExecuteC()
-	if err != nil {
-		outputFormat, _ := cmd.Flags().GetString("output")
-		printError(err, outputFormat)
+	if err := rootCmd.Execute(); err != nil {
+		fmt.Fprintln(os.Stderr, err)
 		os.Exit(1)
 	}
 }
@@ -1,293 +0,0 @@
-package cli
-
-import (
-	"testing"
-)
-
-func TestFilterPreReleasesIfStable(t *testing.T) {
-	tests := []struct {
-		name           string
-		currentVersion string
-		tag            string
-		expectedFilter bool
-		description    string
-	}{
-		{
-			name:           "stable version filters alpha tag",
-			currentVersion: "0.23.0",
-			tag:            "v0.24.0-alpha.1",
-			expectedFilter: true,
-			description:    "When on stable release, alpha tags should be filtered",
-		},
-		{
-			name:           "stable version filters beta tag",
-			currentVersion: "0.23.0",
-			tag:            "v0.24.0-beta.2",
-			expectedFilter: true,
-			description:    "When on stable release, beta tags should be filtered",
-		},
-		{
-			name:           "stable version filters rc tag",
-			currentVersion: "0.23.0",
-			tag:            "v0.24.0-rc.1",
-			expectedFilter: true,
-			description:    "When on stable release, rc tags should be filtered",
-		},
-		{
-			name:           "stable version allows stable tag",
-			currentVersion: "0.23.0",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "When on stable release, stable tags should not be filtered",
-		},
-		{
-			name:           "alpha version allows alpha tag",
-			currentVersion: "0.23.0-alpha.1",
-			tag:            "v0.24.0-alpha.2",
-			expectedFilter: false,
-			description:    "When on alpha release, alpha tags should not be filtered",
-		},
-		{
-			name:           "alpha version allows beta tag",
-			currentVersion: "0.23.0-alpha.1",
-			tag:            "v0.24.0-beta.1",
-			expectedFilter: false,
-			description:    "When on alpha release, beta tags should not be filtered",
-		},
-		{
-			name:           "alpha version allows rc tag",
-			currentVersion: "0.23.0-alpha.1",
-			tag:            "v0.24.0-rc.1",
-			expectedFilter: false,
-			description:    "When on alpha release, rc tags should not be filtered",
-		},
-		{
-			name:           "alpha version allows stable tag",
-			currentVersion: "0.23.0-alpha.1",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "When on alpha release, stable tags should not be filtered",
-		},
-		{
-			name:           "beta version allows alpha tag",
-			currentVersion: "0.23.0-beta.1",
-			tag:            "v0.24.0-alpha.1",
-			expectedFilter: false,
-			description:    "When on beta release, alpha tags should not be filtered",
-		},
-		{
-			name:           "beta version allows beta tag",
-			currentVersion: "0.23.0-beta.2",
-			tag:            "v0.24.0-beta.3",
-			expectedFilter: false,
-			description:    "When on beta release, beta tags should not be filtered",
-		},
-		{
-			name:           "beta version allows rc tag",
-			currentVersion: "0.23.0-beta.1",
-			tag:            "v0.24.0-rc.1",
-			expectedFilter: false,
-			description:    "When on beta release, rc tags should not be filtered",
-		},
-		{
-			name:           "beta version allows stable tag",
-			currentVersion: "0.23.0-beta.1",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "When on beta release, stable tags should not be filtered",
-		},
-		{
-			name:           "rc version allows alpha tag",
-			currentVersion: "0.23.0-rc.1",
-			tag:            "v0.24.0-alpha.1",
-			expectedFilter: false,
-			description:    "When on rc release, alpha tags should not be filtered",
-		},
-		{
-			name:           "rc version allows beta tag",
-			currentVersion: "0.23.0-rc.1",
-			tag:            "v0.24.0-beta.1",
-			expectedFilter: false,
-			description:    "When on rc release, beta tags should not be filtered",
-		},
-		{
-			name:           "rc version allows rc tag",
-			currentVersion: "0.23.0-rc.2",
-			tag:            "v0.24.0-rc.3",
-			expectedFilter: false,
-			description:    "When on rc release, rc tags should not be filtered",
-		},
-		{
-			name:           "rc version allows stable tag",
-			currentVersion: "0.23.0-rc.1",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "When on rc release, stable tags should not be filtered",
-		},
-		{
-			name:           "stable version with patch filters alpha",
-			currentVersion: "0.23.1",
-			tag:            "v0.24.0-alpha.1",
-			expectedFilter: true,
-			description:    "Stable version with patch number should filter alpha tags",
-		},
-		{
-			name:           "stable version with patch allows stable",
-			currentVersion: "0.23.1",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "Stable version with patch number should allow stable tags",
-		},
-		{
-			name:           "tag with alpha substring in version number",
-			currentVersion: "0.23.0",
-			tag:            "v1.0.0-alpha.1",
-			expectedFilter: true,
-			description:    "Tags with alpha in version string should be filtered on stable",
-		},
-		{
-			name:           "tag with beta substring in version number",
-			currentVersion: "0.23.0",
-			tag:            "v1.0.0-beta.1",
-			expectedFilter: true,
-			description:    "Tags with beta in version string should be filtered on stable",
-		},
-		{
-			name:           "tag with rc substring in version number",
-			currentVersion: "0.23.0",
-			tag:            "v1.0.0-rc.1",
-			expectedFilter: true,
-			description:    "Tags with rc in version string should be filtered on stable",
-		},
-		{
-			name:           "empty tag on stable version",
-			currentVersion: "0.23.0",
-			tag:            "",
-			expectedFilter: false,
-			description:    "Empty tags should not be filtered",
-		},
-		{
-			name:           "dev version allows all tags",
-			currentVersion: "0.23.0-dev",
-			tag:            "v0.24.0-alpha.1",
-			expectedFilter: false,
-			description:    "Dev versions should not filter any tags (pre-release allows all)",
-		},
-		{
-			name:           "stable version filters dev tag",
-			currentVersion: "0.23.0",
-			tag:            "v0.24.0-dev",
-			expectedFilter: true,
-			description:    "When on stable release, dev tags should be filtered",
-		},
-		{
-			name:           "dev version allows dev tag",
-			currentVersion: "0.23.0-dev",
-			tag:            "v0.24.0-dev.1",
-			expectedFilter: false,
-			description:    "When on dev release, dev tags should not be filtered",
-		},
-		{
-			name:           "dev version allows stable tag",
-			currentVersion: "0.23.0-dev",
-			tag:            "v0.24.0",
-			expectedFilter: false,
-			description:    "When on dev release, stable tags should not be filtered",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
-			if result != tt.expectedFilter {
-				t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
-					tt.name,
-					result,
-					tt.expectedFilter,
-					tt.description,
-					tt.currentVersion,
-					tt.tag,
-				)
-			}
-		})
-	}
-}
-
-func TestIsPreReleaseVersion(t *testing.T) {
-	tests := []struct {
-		name        string
-		version     string
-		expected    bool
-		description string
-	}{
-		{
-			name:        "stable version",
-			version:     "0.23.0",
-			expected:    false,
-			description: "Stable version should not be pre-release",
-		},
-		{
-			name:        "alpha version",
-			version:     "0.23.0-alpha.1",
-			expected:    true,
-			description: "Alpha version should be pre-release",
-		},
-		{
-			name:        "beta version",
-			version:     "0.23.0-beta.1",
-			expected:    true,
-			description: "Beta version should be pre-release",
-		},
-		{
-			name:        "rc version",
-			version:     "0.23.0-rc.1",
-			expected:    true,
-			description: "RC version should be pre-release",
-		},
-		{
-			name:        "version with alpha substring",
-			version:     "0.23.0-alphabetical",
-			expected:    true,
-			description: "Version containing 'alpha' should be pre-release",
-		},
-		{
-			name:        "version with beta substring",
-			version:     "0.23.0-betamax",
-			expected:    true,
-			description: "Version containing 'beta' should be pre-release",
-		},
-		{
-			name:        "dev version",
-			version:     "0.23.0-dev",
-			expected:    true,
-			description: "Dev version should be pre-release",
-		},
-		{
-			name:        "empty version",
-			version:     "",
-			expected:    false,
-			description: "Empty version should not be pre-release",
-		},
-		{
-			name:        "version with patch number",
-			version:     "0.23.1",
-			expected:    false,
-			description: "Stable version with patch should not be pre-release",
-		},
-	}
-
-	for _, tt := range tests {
-		t.Run(tt.name, func(t *testing.T) {
-			result := isPreReleaseVersion(tt.version)
-			if result != tt.expected {
-				t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
-					tt.name,
-					result,
-					tt.expected,
-					tt.description,
-					tt.version,
-				)
-			}
-		})
-	}
-}
@@ -5,6 +5,7 @@ import (
|
|||||||
"fmt"
|
"fmt"
|
||||||
"net/http"
|
"net/http"
|
||||||
|
|
||||||
|
"github.com/rs/zerolog/log"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"github.com/tailscale/squibble"
|
"github.com/tailscale/squibble"
|
||||||
)
|
)
|
||||||
@@ -16,22 +17,24 @@ func init() {
 var serveCmd = &cobra.Command{
 	Use:   "serve",
 	Short: "Launches the headscale server",
-	RunE: func(cmd *cobra.Command, args []string) error {
+	Args: func(cmd *cobra.Command, args []string) error {
+		return nil
+	},
+	Run: func(cmd *cobra.Command, args []string) {
 		app, err := newHeadscaleServerWithConfig()
 		if err != nil {
-			if squibbleErr, ok := errors.AsType[squibble.ValidationError](err); ok {
+			var squibbleErr squibble.ValidationError
+			if errors.As(err, &squibbleErr) {
 				fmt.Printf("SQLite schema failed to validate:\n")
 				fmt.Println(squibbleErr.Diff)
 			}
 
-			return fmt.Errorf("initializing: %w", err)
+			log.Fatal().Caller().Err(err).Msg("Error initializing")
 		}
 
 		err = app.Serve()
 		if err != nil && !errors.Is(err, http.ErrServerClosed) {
-			return fmt.Errorf("headscale ran into an error and had to shut down: %w", err)
+			log.Fatal().Caller().Err(err).Msg("Headscale ran into an error and had to shut down.")
 		}
-
-		return nil
 	},
 }
@@ -1,24 +1,17 @@
 package cli
 
 import (
-	"context"
 	"errors"
 	"fmt"
 	"net/url"
 	"strconv"
 
+	survey "github.com/AlecAivazis/survey/v2"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/juanfont/headscale/hscontrol/util"
-	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
 	"github.com/pterm/pterm"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-)
-
-// CLI user errors.
-var (
-	errFlagRequired       = errors.New("--name or --identifier flag is required")
-	errMultipleUsersMatch = errors.New("multiple users match query, specify an ID")
+	"google.golang.org/grpc/status"
 )
 
 func usernameAndIDFlag(cmd *cobra.Command) {
@@ -27,21 +20,20 @@ func usernameAndIDFlag(cmd *cobra.Command) {
 }
 
 // usernameAndIDFromFlag returns the username and ID from the flags of the command.
-func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string, error) {
+// If both are empty, it will exit the program with an error.
+func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string) {
 	username, _ := cmd.Flags().GetString("name")
 
 	identifier, _ := cmd.Flags().GetInt64("identifier")
 	if username == "" && identifier < 0 {
-		return 0, "", errFlagRequired
+		err := errors.New("--name or --identifier flag is required")
+		ErrorOutput(
+			err,
+			"Cannot rename user: "+status.Convert(err).Message(),
+			"",
+		)
 	}
 
-	// Normalise unset/negative identifiers to 0 so the uint64
-	// conversion does not produce a bogus large value.
-	if identifier < 0 {
-		identifier = 0
-	}
-
-	return uint64(identifier), username, nil //nolint:gosec // identifier is clamped to >= 0 above
+	return uint64(identifier), username
 }
 
 func init() {
@@ -58,13 +50,15 @@ func init() {
 	userCmd.AddCommand(renameUserCmd)
 	usernameAndIDFlag(renameUserCmd)
 	renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
-	mustMarkRequired(renameUserCmd, "new-name")
+	renameNodeCmd.MarkFlagRequired("new-name")
 }
 
+var errMissingParameter = errors.New("missing parameters")
+
 var userCmd = &cobra.Command{
 	Use:     "users",
 	Short:   "Manage the users of Headscale",
-	Aliases: []string{"user"},
+	Aliases: []string{"user", "namespace", "namespaces", "ns"},
 }
 
 var createUserCmd = &cobra.Command{
@@ -78,10 +72,16 @@ var createUserCmd = &cobra.Command{
 
 		return nil
 	},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
 		userName := args[0]
 
-		log.Trace().Interface(zf.Client, client).Msg("obtained gRPC client")
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		log.Trace().Interface("client", client).Msg("Obtained gRPC client")
 
 		request := &v1.CreateUserRequest{Name: userName}
 
@@ -94,73 +94,114 @@ var createUserCmd = &cobra.Command{
 		}
 
 		if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" {
-			if _, err := url.Parse(pictureURL); err != nil { //nolint:noinlineerr
-				return fmt.Errorf("invalid picture URL: %w", err)
+			if _, err := url.Parse(pictureURL); err != nil {
+				ErrorOutput(
+					err,
+					fmt.Sprintf(
+						"Invalid Picture URL: %s",
+						err,
+					),
+					output,
+				)
 			}
 
 			request.PictureUrl = pictureURL
 		}
 
-		log.Trace().Interface(zf.Request, request).Msg("sending CreateUser request")
+		log.Trace().Interface("request", request).Msg("Sending CreateUser request")
 
 		response, err := client.CreateUser(ctx, request)
 		if err != nil {
-			return fmt.Errorf("creating user: %w", err)
+			ErrorOutput(
+				err,
+				"Cannot create user: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
-		return printOutput(cmd, response.GetUser(), "User created")
-	}),
+		SuccessOutput(response.GetUser(), "User created", output)
+	},
 }
 
 var destroyUserCmd = &cobra.Command{
 	Use:     "destroy --identifier ID or --name NAME",
 	Short:   "Destroys a user",
 	Aliases: []string{"delete"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, username, err := usernameAndIDFromFlag(cmd)
-		if err != nil {
-			return err
-		}
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 
+		id, username := usernameAndIDFromFlag(cmd)
 		request := &v1.ListUsersRequest{
 			Name: username,
 			Id:   id,
 		}
 
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
 		users, err := client.ListUsers(ctx, request)
 		if err != nil {
-			return fmt.Errorf("listing users: %w", err)
+			ErrorOutput(
+				err,
+				"Error: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
 		if len(users.GetUsers()) != 1 {
-			return errMultipleUsersMatch
+			err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
+			ErrorOutput(
+				err,
+				"Error: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
 		user := users.GetUsers()[0]
 
-		if !confirmAction(cmd, fmt.Sprintf(
-			"Do you want to remove the user %q (%d) and any associated preauthkeys?",
-			user.GetName(), user.GetId(),
-		)) {
-			return printOutput(cmd, map[string]string{"Result": "User not destroyed"}, "User not destroyed")
-		}
-
-		deleteRequest := &v1.DeleteUserRequest{Id: user.GetId()}
-
-		response, err := client.DeleteUser(ctx, deleteRequest)
-		if err != nil {
-			return fmt.Errorf("destroying user: %w", err)
-		}
-
-		return printOutput(cmd, response, "User destroyed")
-	}),
+		confirm := false
+		force, _ := cmd.Flags().GetBool("force")
+		if !force {
+			prompt := &survey.Confirm{
+				Message: fmt.Sprintf(
+					"Do you want to remove the user %q (%d) and any associated preauthkeys?",
+					user.GetName(), user.GetId(),
+				),
+			}
+			err := survey.AskOne(prompt, &confirm)
+			if err != nil {
+				return
+			}
+		}
+
+		if confirm || force {
+			request := &v1.DeleteUserRequest{Id: user.GetId()}
+
+			response, err := client.DeleteUser(ctx, request)
+			if err != nil {
+				ErrorOutput(
+					err,
+					"Cannot destroy user: "+status.Convert(err).Message(),
					output,
+				)
+			}
+			SuccessOutput(response, "User destroyed", output)
+		} else {
+			SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
+		}
+	},
 }
 
 var listUsersCmd = &cobra.Command{
 	Use:     "list",
 	Short:   "List all the users",
 	Aliases: []string{"ls", "show"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
 		request := &v1.ListUsersRequest{}
 
 		id, _ := cmd.Flags().GetInt64("identifier")
@@ -179,39 +220,53 @@ var listUsersCmd = &cobra.Command{
 
 		response, err := client.ListUsers(ctx, request)
 		if err != nil {
-			return fmt.Errorf("listing users: %w", err)
+			ErrorOutput(
+				err,
+				"Cannot get users: "+status.Convert(err).Message(),
+				output,
+			)
+		}
+
+		if output != "" {
+			SuccessOutput(response.GetUsers(), "", output)
 		}
 
-		return printListOutput(cmd, response.GetUsers(), func() error {
-			tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
-			for _, user := range response.GetUsers() {
-				tableData = append(
-					tableData,
-					[]string{
-						strconv.FormatUint(user.GetId(), util.Base10),
-						user.GetDisplayName(),
-						user.GetName(),
-						user.GetEmail(),
-						user.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
-					},
-				)
-			}
-
-			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		})
-	}),
+		tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
+		for _, user := range response.GetUsers() {
+			tableData = append(
+				tableData,
+				[]string{
+					strconv.FormatUint(user.GetId(), 10),
+					user.GetDisplayName(),
+					user.GetName(),
+					user.GetEmail(),
+					user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
+				},
+			)
+		}
+		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		if err != nil {
+			ErrorOutput(
+				err,
+				fmt.Sprintf("Failed to render pterm table: %s", err),
+				output,
+			)
+		}
+	},
 }
 
 var renameUserCmd = &cobra.Command{
 	Use:     "rename",
 	Short:   "Renames a user",
 	Aliases: []string{"mv"},
-	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
-		id, username, err := usernameAndIDFromFlag(cmd)
-		if err != nil {
-			return err
-		}
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
 
+		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
+		defer cancel()
+		defer conn.Close()
+
+		id, username := usernameAndIDFromFlag(cmd)
 		listReq := &v1.ListUsersRequest{
 			Name: username,
 			Id:   id,
@@ -219,11 +274,20 @@ var renameUserCmd = &cobra.Command{
 
 		users, err := client.ListUsers(ctx, listReq)
 		if err != nil {
-			return fmt.Errorf("listing users: %w", err)
+			ErrorOutput(
+				err,
+				"Error: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
 		if len(users.GetUsers()) != 1 {
-			return errMultipleUsersMatch
+			err := errors.New("Unable to determine user to delete, query returned multiple users, use ID")
+			ErrorOutput(
+				err,
+				"Error: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
 		newName, _ := cmd.Flags().GetString("new-name")
@@ -235,9 +299,13 @@ var renameUserCmd = &cobra.Command{
 
 		response, err := client.RenameUser(ctx, renameReq)
 		if err != nil {
-			return fmt.Errorf("renaming user: %w", err)
+			ErrorOutput(
+				err,
+				"Cannot rename user: "+status.Convert(err).Message(),
+				output,
+			)
 		}
 
-		return printOutput(cmd, response.GetUser(), "User renamed")
-	}),
+		SuccessOutput(response.GetUser(), "User renamed", output)
+	},
 }
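One behavioural detail in the `usernameAndIDFromFlag` hunk: the removed side clamps a negative `--identifier` to 0 before converting to `uint64`, while the added side converts the flag value directly. The clamp matters because converting a negative `int64` wraps around to an enormous unsigned value. A minimal standard-library sketch (`clampID` is a hypothetical name, not headscale API):

```go
package main

import "fmt"

// clampID mirrors the guard in the removed usernameAndIDFromFlag: the
// --identifier flag defaults to a negative sentinel, and converting
// that straight to uint64 would wrap around to a huge bogus ID.
func clampID(identifier int64) uint64 {
	if identifier < 0 {
		identifier = 0
	}
	return uint64(identifier)
}

func main() {
	n := int64(-1)
	fmt.Println(uint64(n)) // wraps to 18446744073709551615
	fmt.Println(clampID(-1))
	fmt.Println(clampID(42))
}
```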
@@ -4,52 +4,25 @@ import (
 	"context"
 	"crypto/tls"
 	"encoding/json"
-	"errors"
 	"fmt"
 	"os"
-	"time"
 
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol"
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/juanfont/headscale/hscontrol/util"
-	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
-	"github.com/prometheus/common/model"
 	"github.com/rs/zerolog/log"
-	"github.com/spf13/cobra"
 	"google.golang.org/grpc"
 	"google.golang.org/grpc/credentials"
 	"google.golang.org/grpc/credentials/insecure"
-	"google.golang.org/protobuf/types/known/timestamppb"
 	"gopkg.in/yaml.v3"
 )
 
 const (
 	HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
 	SocketWritePermissions  = 0o666
-
-	outputFormatJSON     = "json"
-	outputFormatJSONLine = "json-line"
-	outputFormatYAML     = "yaml"
 )
 
-var (
-	errAPIKeyNotSet     = errors.New("HEADSCALE_CLI_API_KEY environment variable needs to be set")
-	errMissingParameter = errors.New("missing parameters")
-)
-
-// mustMarkRequired marks the named flags as required on cmd, panicking
-// if any name does not match a registered flag. This is only called
-// from init() where a failure indicates a programming error.
-func mustMarkRequired(cmd *cobra.Command, names ...string) {
-	for _, n := range names {
-		err := cmd.MarkFlagRequired(n)
-		if err != nil {
-			panic(fmt.Sprintf("marking flag %q required on %q: %v", n, cmd.Name(), err))
-		}
-	}
-}
-
 func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
 	cfg, err := types.LoadServerConfig()
 	if err != nil {
@@ -67,28 +40,14 @@ func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
 	return app, nil
 }
 
-// grpcRunE wraps a cobra RunE func, injecting a ready gRPC client and
-// context. Connection lifecycle is managed by the wrapper — callers
-// never see the underlying conn or cancel func.
-func grpcRunE(
-	fn func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error,
-) func(*cobra.Command, []string) error {
-	return func(cmd *cobra.Command, args []string) error {
-		ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
-		if err != nil {
-			return fmt.Errorf("connecting to headscale: %w", err)
-		}
-		defer cancel()
-		defer conn.Close()
-
-		return fn(ctx, client, cmd, args)
-	}
-}
-
-func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc, error) {
+func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
 	cfg, err := types.LoadCLIConfig()
 	if err != nil {
-		return nil, nil, nil, nil, fmt.Errorf("loading configuration: %w", err)
+		log.Fatal().
+			Err(err).
+			Caller().
+			Msgf("Failed to load configuration")
+		os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
 	}
 
 	log.Debug().
@@ -98,7 +57,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
 	ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
 
 	grpcOptions := []grpc.DialOption{
-		grpc.WithBlock(), //nolint:staticcheck // SA1019: deprecated but supported in 1.x
+		grpc.WithBlock(),
 	}
 
 	address := cfg.CLI.Address
@@ -112,23 +71,17 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
 		address = cfg.UnixSocket
 
 		// Try to give the user better feedback if we cannot write to the headscale
-		// socket. Note: os.OpenFile on a Unix domain socket returns ENXIO on
-		// Linux which is expected — only permission errors are actionable here.
-		// The actual gRPC connection uses net.Dial which handles sockets properly.
-		socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
+		// socket.
+		socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint
 		if err != nil {
 			if os.IsPermission(err) {
-				cancel()
-
-				return nil, nil, nil, nil, fmt.Errorf(
-					"unable to read/write to headscale socket %q, do you have the correct permissions? %w",
-					cfg.UnixSocket,
-					err,
-				)
+				log.Fatal().
+					Err(err).
+					Str("socket", cfg.UnixSocket).
+					Msgf("Unable to read/write to headscale socket, do you have the correct permissions?")
 			}
-		} else {
-			socket.Close()
 		}
+		socket.Close()
 
 		grpcOptions = append(
 			grpcOptions,
@@ -139,11 +92,8 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
 		// If we are not connecting to a local server, require an API key for authentication
 		apiKey := cfg.CLI.APIKey
 		if apiKey == "" {
-			cancel()
-
-			return nil, nil, nil, nil, errAPIKeyNotSet
+			log.Fatal().Caller().Msgf("HEADSCALE_CLI_API_KEY environment variable needs to be set.")
 		}
 
 		grpcOptions = append(grpcOptions,
 			grpc.WithPerRPCCredentials(tokenAuth{
 				token: apiKey,
@@ -168,136 +118,64 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
 		}
 	}
 
-	log.Trace().Caller().Str(zf.Address, address).Msg("connecting via gRPC")
-
-	conn, err := grpc.DialContext(ctx, address, grpcOptions...) //nolint:staticcheck // SA1019: deprecated but supported in 1.x
+	log.Trace().Caller().Str("address", address).Msg("Connecting via gRPC")
+	conn, err := grpc.DialContext(ctx, address, grpcOptions...)
 	if err != nil {
-		cancel()
-
-		return nil, nil, nil, nil, fmt.Errorf("connecting to %s: %w", address, err)
+		log.Fatal().Caller().Err(err).Msgf("Could not connect: %v", err)
+		os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
 	}
 
 	client := v1.NewHeadscaleServiceClient(conn)
 
-	return ctx, client, conn, cancel, nil
+	return ctx, client, conn, cancel
 }
 
-// formatOutput serialises result into the requested format. For the
-// default (empty) format the human-readable override string is returned.
-func formatOutput(result any, override string, outputFormat string) (string, error) {
+func output(result interface{}, override string, outputFormat string) string {
+	var jsonBytes []byte
+	var err error
 	switch outputFormat {
-	case outputFormatJSON:
-		b, err := json.MarshalIndent(result, "", "\t")
+	case "json":
+		jsonBytes, err = json.MarshalIndent(result, "", "\t")
 		if err != nil {
-			return "", fmt.Errorf("marshalling JSON output: %w", err)
+			log.Fatal().Err(err).Msg("failed to unmarshal output")
 		}
-
-		return string(b), nil
-	case outputFormatJSONLine:
-		b, err := json.Marshal(result)
+	case "json-line":
+		jsonBytes, err = json.Marshal(result)
 		if err != nil {
-			return "", fmt.Errorf("marshalling JSON-line output: %w", err)
+			log.Fatal().Err(err).Msg("failed to unmarshal output")
 		}
-
-		return string(b), nil
-	case outputFormatYAML:
-		b, err := yaml.Marshal(result)
+	case "yaml":
		jsonBytes, err = yaml.Marshal(result)
 		if err != nil {
-			return "", fmt.Errorf("marshalling YAML output: %w", err)
+			log.Fatal().Err(err).Msg("failed to unmarshal output")
 		}
-
-		return string(b), nil
 	default:
-		return override, nil
+		// nolint
+		return override
 	}
-}
-
-// printOutput formats result and writes it to stdout. It reads the --output
-// flag from cmd to decide the serialisation format.
-func printOutput(cmd *cobra.Command, result any, override string) error {
-	format, _ := cmd.Flags().GetString("output")
-
-	out, err := formatOutput(result, override, format)
-	if err != nil {
-		return err
-	}
 
-	fmt.Println(out)
-
-	return nil
+	return string(jsonBytes)
 }
 
-// expirationFromFlag parses the --expiration flag as a Prometheus-style
-// duration (e.g. "90d", "1h") and returns an absolute timestamp.
-func expirationFromFlag(cmd *cobra.Command) (*timestamppb.Timestamp, error) {
-	durationStr, _ := cmd.Flags().GetString("expiration")
-
-	duration, err := model.ParseDuration(durationStr)
-	if err != nil {
-		return nil, fmt.Errorf("parsing duration: %w", err)
-	}
-
-	return timestamppb.New(time.Now().UTC().Add(time.Duration(duration))), nil
+// SuccessOutput prints the result to stdout and exits with status code 0.
+func SuccessOutput(result interface{}, override string, outputFormat string) {
+	fmt.Println(output(result, override, outputFormat))
+	os.Exit(0)
 }
 
-// confirmAction returns true when the user confirms a prompt, or when
-// --force is set. Callers decide what to do when it returns false.
-func confirmAction(cmd *cobra.Command, prompt string) bool {
-	force, _ := cmd.Flags().GetBool("force")
-	if force {
-		return true
-	}
-
-	return util.YesNo(prompt)
-}
-
-// printListOutput checks the --output flag: when a machine-readable format is
-// requested it serialises data as JSON/YAML; otherwise it calls renderTable
-// to produce the human-readable pterm table.
-func printListOutput(
-	cmd *cobra.Command,
-	data any,
-	renderTable func() error,
-) error {
-	format, _ := cmd.Flags().GetString("output")
-	if format != "" {
-		return printOutput(cmd, data, "")
-	}
-
-	return renderTable()
-}
-
-// printError writes err to stderr, formatting it as JSON/YAML when the
-// --output flag requests machine-readable output. Used exclusively by
-// Execute() so that every error surfaces in the format the caller asked for.
-func printError(err error, outputFormat string) {
+// ErrorOutput prints an error message to stderr and exits with status code 1.
+func ErrorOutput(errResult error, override string, outputFormat string) {
 	type errOutput struct {
 		Error string `json:"error"`
 	}
 
-	e := errOutput{Error: err.Error()}
-
-	var formatted []byte
-
-	switch outputFormat {
-	case outputFormatJSON:
-		formatted, _ = json.MarshalIndent(e, "", "\t") //nolint:errchkjson // errOutput contains only a string field
-	case outputFormatJSONLine:
-		formatted, _ = json.Marshal(e) //nolint:errchkjson // errOutput contains only a string field
-	case outputFormatYAML:
-		formatted, _ = yaml.Marshal(e)
-	default:
-		fmt.Fprintf(os.Stderr, "Error: %s\n", err)
-
-		return
-	}
-
-	fmt.Fprintf(os.Stderr, "%s\n", formatted)
+	fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errResult.Error()}, override, outputFormat))
+	os.Exit(1)
 }
 
-func hasMachineOutputFlag() bool {
+func HasMachineOutputFlag() bool {
 	for _, arg := range os.Args {
-		if arg == outputFormatJSON || arg == outputFormatJSONLine || arg == outputFormatYAML {
+		if arg == "json" || arg == "json-line" || arg == "yaml" {
 			return true
 		}
 	}
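The `output`/`formatOutput` helper in the hunk above dispatches on the `--output` format string. A trimmed, standard-library-only sketch of the same dispatch (the real helper also supports "yaml" via gopkg.in/yaml.v3, omitted here; the signature follows the removed `formatOutput` side and is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatOutput sketches the CLI serialisation switch: "json"
// pretty-prints with tab indentation, "json-line" emits a single line,
// and the default (empty) format falls back to the human-readable
// override string.
func formatOutput(result any, override, outputFormat string) (string, error) {
	switch outputFormat {
	case "json":
		b, err := json.MarshalIndent(result, "", "\t")
		if err != nil {
			return "", fmt.Errorf("marshalling JSON output: %w", err)
		}
		return string(b), nil
	case "json-line":
		b, err := json.Marshal(result)
		if err != nil {
			return "", fmt.Errorf("marshalling JSON-line output: %w", err)
		}
		return string(b), nil
	default:
		return override, nil
	}
}

func main() {
	user := map[string]string{"name": "alice"}

	out, _ := formatOutput(user, "User created", "json-line")
	fmt.Println(out) // {"name":"alice"}

	out, _ = formatOutput(user, "User created", "")
	fmt.Println(out) // User created
}
```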
@@ -7,16 +7,17 @@ import (
 
 func init() {
 	rootCmd.AddCommand(versionCmd)
-	versionCmd.Flags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
 }
 
 var versionCmd = &cobra.Command{
 	Use:   "version",
 	Short: "Print the version.",
 	Long:  "The version of headscale.",
-	RunE: func(cmd *cobra.Command, args []string) error {
-		info := types.GetVersionInfo()
-
-		return printOutput(cmd, info, info.String())
+	Run: func(cmd *cobra.Command, args []string) {
+		output, _ := cmd.Flags().GetString("output")
+		SuccessOutput(map[string]string{
+			"version": types.Version,
+			"commit":  types.GitCommitHash,
+		}, types.Version, output)
 	},
 }
@@ -12,7 +12,6 @@ import (
 
 func main() {
 	var colors bool
-
 	switch l := termcolor.SupportLevel(os.Stderr); l {
 	case termcolor.Level16M:
 		colors = true
|||||||
@@ -9,15 +9,34 @@ import (
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/juanfont/headscale/hscontrol/util"
 	"github.com/spf13/viper"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
+	"gopkg.in/check.v1"
 )
 
-func TestConfigFileLoading(t *testing.T) {
-	tmpDir := t.TempDir()
+func Test(t *testing.T) {
+	check.TestingT(t)
+}
+
+var _ = check.Suite(&Suite{})
+
+type Suite struct{}
+
+func (s *Suite) SetUpSuite(c *check.C) {
+}
+
+func (s *Suite) TearDownSuite(c *check.C) {
+}
+
+func (*Suite) TestConfigFileLoading(c *check.C) {
+	tmpDir, err := os.MkdirTemp("", "headscale")
+	if err != nil {
+		c.Fatal(err)
+	}
+	defer os.RemoveAll(tmpDir)
 
 	path, err := os.Getwd()
-	require.NoError(t, err)
+	if err != nil {
+		c.Fatal(err)
+	}
 
 	cfgFile := filepath.Join(tmpDir, "config.yaml")
 
@@ -26,52 +45,70 @@ func TestConfigFileLoading(t *testing.T) {
 		filepath.Clean(path+"/../../config-example.yaml"),
 		cfgFile,
 	)
-	require.NoError(t, err)
+	if err != nil {
+		c.Fatal(err)
+	}
 
 	// Load example config, it should load without validation errors
 	err = types.LoadConfig(cfgFile, true)
-	require.NoError(t, err)
+	c.Assert(err, check.IsNil)
 
 	// Test that config file was interpreted correctly
-	assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
-	assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
-	assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
-	assert.Equal(t, "sqlite", viper.GetString("database.type"))
-	assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
-	assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
-	assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
-	assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
-	assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
-	assert.False(t, viper.GetBool("logtail.enabled"))
+	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
+	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
+	c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
+	c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
+	c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
+	c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
+	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
+	c.Assert(
+		util.GetFileMode("unix_socket_permission"),
+		check.Equals,
+		fs.FileMode(0o770),
+	)
+	c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
 }
 
-func TestConfigLoading(t *testing.T) {
-	tmpDir := t.TempDir()
+func (*Suite) TestConfigLoading(c *check.C) {
+	tmpDir, err := os.MkdirTemp("", "headscale")
+	if err != nil {
+		c.Fatal(err)
+	}
+	defer os.RemoveAll(tmpDir)
 
 	path, err := os.Getwd()
-	require.NoError(t, err)
+	if err != nil {
+		c.Fatal(err)
+	}
 
 	// Symlink the example config file
 	err = os.Symlink(
 		filepath.Clean(path+"/../../config-example.yaml"),
 		filepath.Join(tmpDir, "config.yaml"),
 	)
-	require.NoError(t, err)
+	if err != nil {
+		c.Fatal(err)
+	}
 
 	// Load example config, it should load without validation errors
 	err = types.LoadConfig(tmpDir, false)
-	require.NoError(t, err)
+	c.Assert(err, check.IsNil)
 
 	// Test that config file was interpreted correctly
-	assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
-	assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
-	assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
-	assert.Equal(t, "sqlite", viper.GetString("database.type"))
-	assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
-	assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
-	assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
-	assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
-	assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
-	assert.False(t, viper.GetBool("logtail.enabled"))
-	assert.False(t, viper.GetBool("randomize_client_port"))
+	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
+	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
+	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
+	c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
+	c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
+	c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
+	c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
+	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
+	c.Assert(
+		util.GetFileMode("unix_socket_permission"),
+		check.Equals,
+		fs.FileMode(0o770),
+	)
+	c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
+	c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false)
 }
--- a/cmd/hi/README.md
+++ /dev/null
@@ -1,262 +0,0 @@
-# hi — Headscale Integration test runner
-
-`hi` wraps Docker container orchestration around the tests in
-[`../../integration`](../../integration) and extracts debugging artefacts
-(logs, database snapshots, MapResponse protocol captures) for post-mortem
-analysis.
-
-**Read this file in full before running any `hi` command.** The test
-runner has sharp edges — wrong flags produce stale containers, lost
-artefacts, or hung CI.
-
-For test-authoring patterns (scenario setup, `EventuallyWithT`,
-`IntegrationSkip`, helper variants), read
-[`../../integration/README.md`](../../integration/README.md).
-
-## Quick Start
-
-```bash
-# Verify system requirements (Docker, Go, disk space, images)
-go run ./cmd/hi doctor
-
-# Run a single test (the default flags are tuned for development)
-go run ./cmd/hi run "TestPingAllByIP"
-
-# Run a database-heavy test against PostgreSQL
-go run ./cmd/hi run "TestExpireNode" --postgres
-
-# Pattern matching
-go run ./cmd/hi run "TestSubnet*"
-```
-
-Run `doctor` before the first `run` in any new environment. Tests
-generate ~100 MB of logs per run in `control_logs/`; `doctor` verifies
-there is enough space and that the required Docker images are available.
-
-## Commands
-
-| Command            | Purpose                                              |
-| ------------------ | ---------------------------------------------------- |
-| `run [pattern]`    | Execute the test(s) matching `pattern`               |
-| `doctor`           | Verify system requirements                           |
-| `clean networks`   | Prune unused Docker networks                         |
-| `clean images`     | Clean old test images                                |
-| `clean containers` | Kill **all** test containers (dangerous — see below) |
-| `clean cache`      | Clean Go module cache volume                         |
-| `clean all`        | Run all cleanup operations                           |
-
-## Flags
-
-Defaults are tuned for single-test development runs. Review before
-changing.
-
-| Flag                | Default        | Purpose                                                                     |
-| ------------------- | -------------- | --------------------------------------------------------------------------- |
-| `--timeout`         | `120m`         | Total test timeout. Use the built-in flag — never wrap with bash `timeout`. |
-| `--postgres`        | `false`        | Use PostgreSQL instead of SQLite                                            |
-| `--failfast`        | `true`         | Stop on first test failure                                                  |
-| `--go-version`      | auto           | Detected from `go.mod` (currently 1.26.1)                                   |
-| `--clean-before`    | `true`         | Clean stale (stopped/exited) containers before starting                     |
-| `--clean-after`     | `true`         | Clean this run's containers after completion                                |
-| `--keep-on-failure` | `false`        | Preserve containers for manual inspection on failure                        |
-| `--logs-dir`        | `control_logs` | Where to save run artefacts                                                 |
-| `--verbose`         | `false`        | Verbose output                                                              |
-| `--stats`           | `false`        | Collect container resource-usage stats                                      |
-| `--hs-memory-limit` | `0`            | Fail if any headscale container exceeds N MB (0 = disabled)                 |
-| `--ts-memory-limit` | `0`            | Fail if any tailscale container exceeds N MB                                |
-
-### Timeout guidance
-
-The default `120m` is generous for a single test. If you must tune it,
-these are realistic floors by category:
-
-| Test type                 | Minimum     | Examples                              |
-| ------------------------- | ----------- | ------------------------------------- |
-| Basic functionality / CLI | 900s (15m)  | `TestPingAllByIP`, `TestCLI*`         |
-| Route / ACL               | 1200s (20m) | `TestSubnet*`, `TestACL*`             |
-| HA / failover             | 1800s (30m) | `TestHASubnetRouter*`                 |
-| Long-running              | 2100s (35m) | `TestNodeOnlineStatus` (~12 min body) |
-| Full suite                | 45m         | `go test ./integration -timeout 45m`  |
-
-**Never** use the shell `timeout` command around `hi`. It kills the
-process mid-cleanup and leaves stale containers:
-
-```bash
-timeout 300 go run ./cmd/hi run "TestName"     # WRONG — orphaned containers
-go run ./cmd/hi run "TestName" --timeout=900s  # correct
-```
-
-## Concurrent Execution
-
-Multiple `hi run` invocations can run simultaneously on the same Docker
-daemon. Each invocation gets a unique **Run ID** (format
-`YYYYMMDD-HHMMSS-6charhash`, e.g. `20260409-104215-mdjtzx`).
-
-- **Container names** include the short run ID: `ts-mdjtzx-1-74-fgdyls`
-- **Docker labels**: `hi.run-id={runID}` on every container
-- **Port allocation**: dynamic — kernel assigns free ports, no conflicts
-- **Cleanup isolation**: each run cleans only its own containers
-- **Log directories**: `control_logs/{runID}/`
-
-```bash
-# Start three tests in parallel — each gets its own run ID
-go run ./cmd/hi run "TestPingAllByIP" &
-go run ./cmd/hi run "TestACLAllowUserDst" &
-go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
-```
-
-### Safety rules for concurrent runs
-
-- ✅ Your run cleans only containers labelled with its own `hi.run-id`
-- ✅ `--clean-before` removes only stopped/exited containers
-- ❌ **Never** run `docker rm -f $(docker ps -q --filter name=hs-)` —
-  this destroys other agents' live test sessions
-- ❌ **Never** run `docker system prune -f` while any tests are running
-- ❌ **Never** run `hi clean containers` / `hi clean all` while other
-  tests are running — both kill all test containers on the daemon
-
-To identify your own containers:
-
-```bash
-docker ps --filter "label=hi.run-id=20260409-104215-mdjtzx"
-```
-
-The run ID appears at the top of the `hi run` output — copy it from
-there rather than trying to reconstruct it.
-
-## Artefacts
-
-Every run saves debugging artefacts under `control_logs/{runID}/`:
-
-```
-control_logs/20260409-104215-mdjtzx/
-├── hs-<test>-<hash>.stderr.log    # headscale server errors
-├── hs-<test>-<hash>.stdout.log    # headscale server output
-├── hs-<test>-<hash>.db            # database snapshot (SQLite)
-├── hs-<test>-<hash>_metrics.txt   # Prometheus metrics dump
-├── hs-<test>-<hash>-mapresponses/ # MapResponse protocol captures
-├── ts-<client>-<hash>.stderr.log  # tailscale client errors
-├── ts-<client>-<hash>.stdout.log  # tailscale client output
-└── ts-<client>-<hash>_status.json # client network-status dump
-```
-
-Artefacts persist after cleanup. Old runs accumulate fast — delete
-unwanted directories to reclaim disk.
-
-## Debugging workflow
-
-When a test fails, read the artefacts **in this order**:
-
-1. **`hs-*.stderr.log`** — headscale server errors, panics, policy
-   evaluation failures. Most issues originate server-side.
-
-   ```bash
-   grep -E "ERROR|panic|FATAL" control_logs/*/hs-*.stderr.log
-   ```
-
-2. **`ts-*.stderr.log`** — authentication failures, connectivity issues,
-   DNS resolution problems on the client side.
-
-3. **MapResponse JSON** in `hs-*-mapresponses/` — protocol-level
-   debugging for network map generation, peer visibility, route
-   distribution, policy evaluation results.
-
-   ```bash
-   ls control_logs/*/hs-*-mapresponses/
-   jq '.Peers[] | {Name, Tags, PrimaryRoutes}' \
-     control_logs/*/hs-*-mapresponses/001.json
-   ```
-
-4. **`*_status.json`** — client peer-connectivity state.
-
-5. **`hs-*.db`** — SQLite snapshot for post-mortem consistency checks.
-
-   ```bash
-   sqlite3 control_logs/<runID>/hs-*.db
-   sqlite> .tables
-   sqlite> .schema nodes
-   sqlite> SELECT id, hostname, user_id, tags FROM nodes WHERE hostname LIKE '%problematic%';
-   ```
-
-6. **`*_metrics.txt`** — Prometheus dumps for latency, NodeStore
-   operation timing, database query performance, memory usage.
-
-## Heuristic: infrastructure vs code
-
-**Before blaming Docker, disk, or network: read `hs-*.stderr.log` in
-full.** In practice, well over 99% of failures are code bugs (policy
-evaluation, NodeStore sync, route approval) rather than infrastructure.
-
-Actual infrastructure failures have signature error messages:
-
-| Signature                                                       | Cause                     | Fix                                                           |
-| --------------------------------------------------------------- | ------------------------- | ------------------------------------------------------------- |
-| `failed to resolve "hs-...": no DNS fallback candidates remain` | Docker DNS                | Reset Docker networking                                       |
-| `container creation timeout`, no progress >2 min                | Resource exhaustion       | `docker system prune -f` (when no other tests running), retry |
-| OOM kills, slow Docker daemon                                   | Too many concurrent tests | Reduce concurrency, wait for completion                       |
-| `no space left on device`                                       | Disk full                 | Delete old `control_logs/`                                    |
-
-If you don't see a signature error, **assume it's a code regression** —
-do not retry hoping the flake goes away.
-
-## Common failure patterns (code bugs)
-
-### Route advertisement timing
-
-Test asserts route state before the client has finished propagating its
-Hostinfo update. Symptom: `nodes[0].GetAvailableRoutes()` empty when
-the test expects a route.
-
-- **Wrong fix**: `time.Sleep(5 * time.Second)` — fragile and slow.
-- **Right fix**: wrap the assertion in `EventuallyWithT`. See
-  [`../../integration/README.md`](../../integration/README.md).
-
-### NodeStore sync issues
-
-Route changes not reflected in the NodeStore snapshot. Symptom: route
-advertisements in logs but no tracking updates in subsequent reads.
-
-The sync point is `State.UpdateNodeFromMapRequest()` in
-`hscontrol/state/state.go`. If you added a new kind of client state
-update, make sure it lands here.
-
-### HA failover: routes disappearing on disconnect
-
-`TestHASubnetRouterFailover` fails because approved routes vanish when
-a subnet router goes offline. **This is a bug, not expected behaviour.**
-Route approval must not be coupled to client connectivity — routes
-stay approved; only the primary-route selection is affected by
-connectivity.
-
-### Policy evaluation race
-
-Symptom: tests that change policy and immediately assert peer visibility
-fail intermittently. Policy changes trigger async recomputation.
-
-- See recent fixes in `git log -- hscontrol/state/` for examples (e.g.
-  the `PolicyChange` trigger on every Connect/Disconnect).
-
-### SQLite vs PostgreSQL timing differences
-
-Some race conditions only surface on one backend. If a test is flaky,
-try the other backend with `--postgres`:
-
-```bash
-go run ./cmd/hi run "TestName" --postgres --verbose
-```
-
-PostgreSQL generally has more consistent timing; SQLite can expose
-races during rapid writes.
-
-## Keeping containers for inspection
-
-If you need to inspect a failed test's state manually:
-
-```bash
-go run ./cmd/hi run "TestName" --keep-on-failure
-# containers survive — inspect them
-docker exec -it ts-<runID>-<...> /bin/sh
-docker logs hs-<runID>-<...>
-# clean up manually when done
-go run ./cmd/hi clean all   # only when no other tests are running
-```
@@ -3,13 +3,9 @@ package main
 import (
 	"context"
 	"fmt"
-	"log"
-	"os"
-	"path/filepath"
 	"strings"
 	"time"
 
-	"github.com/cenkalti/backoff/v5"
 	"github.com/docker/docker/api/types/container"
 	"github.com/docker/docker/api/types/filters"
 	"github.com/docker/docker/api/types/image"
@@ -18,46 +14,30 @@ import (
 )
 
 // cleanupBeforeTest performs cleanup operations before running tests.
-// Only removes stale (stopped/exited) test containers to avoid interfering with concurrent test runs.
 func cleanupBeforeTest(ctx context.Context) error {
-	err := cleanupStaleTestContainers(ctx)
-	if err != nil {
-		return fmt.Errorf("cleaning stale test containers: %w", err)
+	if err := killTestContainers(ctx); err != nil {
+		return fmt.Errorf("failed to kill test containers: %w", err)
 	}
 
-	if err := pruneDockerNetworks(ctx); err != nil { //nolint:noinlineerr
-		return fmt.Errorf("pruning networks: %w", err)
+	if err := pruneDockerNetworks(ctx); err != nil {
+		return fmt.Errorf("failed to prune networks: %w", err)
 	}
 
	return nil
 }
 
-// cleanupAfterTest removes the test container and all associated integration test containers for the run.
-func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runID string) error {
-	// Remove the main test container
-	err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
+// cleanupAfterTest removes the test container after completion.
+func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID string) error {
+	return cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
 		Force: true,
 	})
-	if err != nil {
-		return fmt.Errorf("removing test container: %w", err)
-	}
-
-	// Clean up integration test containers for this run only
-	if runID != "" {
-		err := killTestContainersByRunID(ctx, runID)
-		if err != nil {
-			return fmt.Errorf("cleaning up containers for run %s: %w", runID, err)
-		}
-	}
-
-	return nil
 }
 
 // killTestContainers terminates and removes all test containers.
 func killTestContainers(ctx context.Context) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
@@ -65,14 +45,12 @@ func killTestContainers(ctx context.Context) error {
 		All: true,
 	})
 	if err != nil {
-		return fmt.Errorf("listing containers: %w", err)
+		return fmt.Errorf("failed to list containers: %w", err)
 	}
 
 	removed := 0
 
 	for _, cont := range containers {
 		shouldRemove := false
 
 		for _, name := range cont.Names {
 			if strings.Contains(name, "headscale-test-suite") ||
 				strings.Contains(name, "hs-") ||
@@ -105,135 +83,43 @@ func killTestContainers(ctx context.Context) error {
 	return nil
 }
 
-// killTestContainersByRunID terminates and removes all test containers for a specific run ID.
-// This function filters containers by the hi.run-id label to only affect containers
-// belonging to the specified test run, leaving other concurrent test runs untouched.
-func killTestContainersByRunID(ctx context.Context, runID string) error {
-	cli, err := createDockerClient(ctx)
-	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
-	}
-	defer cli.Close()
-
-	// Filter containers by hi.run-id label
-	containers, err := cli.ContainerList(ctx, container.ListOptions{
-		All: true,
-		Filters: filters.NewArgs(
-			filters.Arg("label", "hi.run-id="+runID),
-		),
-	})
-	if err != nil {
-		return fmt.Errorf("listing containers for run %s: %w", runID, err)
-	}
-
-	removed := 0
-
-	for _, cont := range containers {
-		// Kill the container if it's running
-		if cont.State == "running" {
-			_ = cli.ContainerKill(ctx, cont.ID, "KILL")
-		}
-
-		// Remove the container with retry logic
-		if removeContainerWithRetry(ctx, cli, cont.ID) {
-			removed++
-		}
-	}
-
-	if removed > 0 {
-		fmt.Printf("Removed %d containers for run ID %s\n", removed, runID)
-	}
-
-	return nil
-}
-
-// cleanupStaleTestContainers removes stopped/exited test containers without affecting running tests.
-// This is useful for cleaning up leftover containers from previous crashed or interrupted test runs
-// without interfering with currently running concurrent tests.
-func cleanupStaleTestContainers(ctx context.Context) error {
-	cli, err := createDockerClient(ctx)
-	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
-	}
-	defer cli.Close()
-
-	// Only get stopped/exited containers
-	containers, err := cli.ContainerList(ctx, container.ListOptions{
-		All: true,
-		Filters: filters.NewArgs(
-			filters.Arg("status", "exited"),
-			filters.Arg("status", "dead"),
-		),
-	})
-	if err != nil {
-		return fmt.Errorf("listing stopped containers: %w", err)
-	}
-
-	removed := 0
-
-	for _, cont := range containers {
-		// Only remove containers that look like test containers
-		shouldRemove := false
-
-		for _, name := range cont.Names {
-			if strings.Contains(name, "headscale-test-suite") ||
-				strings.Contains(name, "hs-") ||
-				strings.Contains(name, "ts-") ||
-				strings.Contains(name, "derp-") {
-				shouldRemove = true
-				break
-			}
-		}
-
-		if shouldRemove {
-			if removeContainerWithRetry(ctx, cli, cont.ID) {
-				removed++
-			}
-		}
-	}
-
-	if removed > 0 {
-		fmt.Printf("Removed %d stale test containers\n", removed)
-	}
-
-	return nil
-}
-
-const (
-	containerRemoveInitialInterval = 100 * time.Millisecond
-	containerRemoveMaxElapsedTime  = 2 * time.Second
-)
-
 // removeContainerWithRetry attempts to remove a container with exponential backoff retry logic.
 func removeContainerWithRetry(ctx context.Context, cli *client.Client, containerID string) bool {
-	expBackoff := backoff.NewExponentialBackOff()
-	expBackoff.InitialInterval = containerRemoveInitialInterval
+	maxRetries := 3
+	baseDelay := 100 * time.Millisecond
 
-	_, err := backoff.Retry(ctx, func() (struct{}, error) {
+	for attempt := range maxRetries {
 		err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
 			Force: true,
 		})
-		if err != nil {
-			return struct{}{}, err
+		if err == nil {
+			return true
 		}
 
-		return struct{}{}, nil
-	}, backoff.WithBackOff(expBackoff), backoff.WithMaxElapsedTime(containerRemoveMaxElapsedTime))
+		// If this is the last attempt, don't wait
+		if attempt == maxRetries-1 {
+			break
+		}
 
-	return err == nil
+		// Wait with exponential backoff
+		delay := baseDelay * time.Duration(1<<attempt)
+		time.Sleep(delay)
+	}
+
+	return false
 }
 
 // pruneDockerNetworks removes unused Docker networks.
 func pruneDockerNetworks(ctx context.Context) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
 	report, err := cli.NetworksPrune(ctx, filters.Args{})
 	if err != nil {
-		return fmt.Errorf("pruning networks: %w", err)
+		return fmt.Errorf("failed to prune networks: %w", err)
 	}
 
 	if len(report.NetworksDeleted) > 0 {
@@ -247,9 +133,9 @@ func pruneDockerNetworks(ctx context.Context) error {
 
 // cleanOldImages removes test-related and old dangling Docker images.
 func cleanOldImages(ctx context.Context) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
@@ -257,14 +143,12 @@ func cleanOldImages(ctx context.Context) error {
 		All: true,
 	})
 	if err != nil {
-		return fmt.Errorf("listing images: %w", err)
+		return fmt.Errorf("failed to list images: %w", err)
 	}
 
 	removed := 0
 
 	for _, img := range images {
 		shouldRemove := false
 
 		for _, tag := range img.RepoTags {
 			if strings.Contains(tag, "hs-") ||
 				strings.Contains(tag, "headscale-integration") ||
@@ -299,19 +183,18 @@ func cleanOldImages(ctx context.Context) error {
 
 // cleanCacheVolume removes the Docker volume used for Go module cache.
 func cleanCacheVolume(ctx context.Context) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
 	volumeName := "hs-integration-go-cache"
 
 	err = cli.VolumeRemove(ctx, volumeName, true)
 	if err != nil {
-		if errdefs.IsNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
+		if errdefs.IsNotFound(err) {
 			fmt.Printf("Go module cache volume not found: %s\n", volumeName)
-		} else if errdefs.IsConflict(err) { //nolint:staticcheck // SA1019: deprecated but functional
+		} else if errdefs.IsConflict(err) {
 			fmt.Printf("Go module cache volume is in use and cannot be removed: %s\n", volumeName)
 		} else {
 			fmt.Printf("Failed to remove Go module cache volume %s: %v\n", volumeName, err)
@@ -322,110 +205,3 @@ func cleanCacheVolume(ctx context.Context) error {
 
 	return nil
 }
-
-// cleanupSuccessfulTestArtifacts removes artifacts from successful test runs to save disk space.
-// This function removes large artifacts that are mainly useful for debugging failures:
-//   - Database dumps (.db files)
-//   - Profile data (pprof directories)
-//   - MapResponse data (mapresponses directories)
-//   - Prometheus metrics files
-//
-// It preserves:
-//   - Log files (.log) which are small and useful for verification.
-func cleanupSuccessfulTestArtifacts(logsDir string, verbose bool) error {
-	entries, err := os.ReadDir(logsDir)
-	if err != nil {
-		return fmt.Errorf("reading logs directory: %w", err)
-	}
-
-	var (
-		removedFiles, removedDirs int
-		totalSize                 int64
-	)
-
-	for _, entry := range entries {
-		name := entry.Name()
-		fullPath := filepath.Join(logsDir, name)
-
-		if entry.IsDir() {
-			// Remove pprof and mapresponses directories (typically large)
-			// These directories contain artifacts from all containers in the test run
-			if name == "pprof" || name == "mapresponses" {
-				size, sizeErr := getDirSize(fullPath)
-				if sizeErr == nil {
-					totalSize += size
-				}
-
-				err := os.RemoveAll(fullPath)
-				if err != nil {
-					if verbose {
-						log.Printf("Warning: failed to remove directory %s: %v", name, err)
-					}
-				} else {
-					removedDirs++
-
-					if verbose {
-						log.Printf("Removed directory: %s/", name)
-					}
-				}
-			}
-		} else {
-			// Only process test-related files (headscale and tailscale)
-			if !strings.HasPrefix(name, "hs-") && !strings.HasPrefix(name, "ts-") {
-				continue
-			}
-
-			// Remove database, metrics, and status files, but keep logs
-			shouldRemove := strings.HasSuffix(name, ".db") ||
-				strings.HasSuffix(name, "_metrics.txt") ||
-				strings.HasSuffix(name, "_status.json")
-
-			if shouldRemove {
-				info, infoErr := entry.Info()
-				if infoErr == nil {
-					totalSize += info.Size()
-				}
-
-				err := os.Remove(fullPath)
-				if err != nil {
-					if verbose {
-						log.Printf("Warning: failed to remove file %s: %v", name, err)
-					}
-				} else {
-					removedFiles++
-
-					if verbose {
-						log.Printf("Removed file: %s", name)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
if removedFiles > 0 || removedDirs > 0 {
|
|
||||||
const bytesPerMB = 1024 * 1024
|
|
||||||
log.Printf("Cleaned up %d files and %d directories (freed ~%.2f MB)",
|
|
||||||
removedFiles, removedDirs, float64(totalSize)/bytesPerMB)
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// getDirSize calculates the total size of a directory.
|
|
||||||
func getDirSize(path string) (int64, error) {
|
|
||||||
var size int64
|
|
||||||
|
|
||||||
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
if !info.IsDir() {
|
|
||||||
size += info.Size()
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
|
|
||||||
return size, err
|
|
||||||
}
|
|
||||||
|
|||||||
299  cmd/hi/docker.go

@@ -22,22 +22,17 @@ import (
 	"github.com/juanfont/headscale/integration/dockertestutil"
 )
 
-const defaultDirPerm = 0o755
-
 var (
 	ErrTestFailed              = errors.New("test failed")
 	ErrUnexpectedContainerWait = errors.New("unexpected end of container wait")
 	ErrNoDockerContext         = errors.New("no docker context found")
-	ErrMemoryLimitViolations   = errors.New("container(s) exceeded memory limits")
 )
 
 // runTestContainer executes integration tests in a Docker container.
-//
-//nolint:gocyclo // complex test orchestration function
 func runTestContainer(ctx context.Context, config *RunConfig) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
@@ -53,21 +48,19 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
 
 	absLogsDir, err := filepath.Abs(logsDir)
 	if err != nil {
-		return fmt.Errorf("getting absolute path for logs directory: %w", err)
+		return fmt.Errorf("failed to get absolute path for logs directory: %w", err)
 	}
 
 	const dirPerm = 0o755
-	if err := os.MkdirAll(absLogsDir, dirPerm); err != nil { //nolint:noinlineerr
-		return fmt.Errorf("creating logs directory: %w", err)
+	if err := os.MkdirAll(absLogsDir, dirPerm); err != nil {
+		return fmt.Errorf("failed to create logs directory: %w", err)
 	}
 
 	if config.CleanBefore {
 		if config.Verbose {
 			log.Printf("Running pre-test cleanup...")
 		}
-
-		err := cleanupBeforeTest(ctx)
-		if err != nil && config.Verbose {
+		if err := cleanupBeforeTest(ctx); err != nil && config.Verbose {
 			log.Printf("Warning: pre-test cleanup failed: %v", err)
 		}
 	}
@@ -78,40 +71,34 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
 	}
 
 	imageName := "golang:" + config.GoVersion
-	if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil { //nolint:noinlineerr
-		return fmt.Errorf("ensuring image availability: %w", err)
+	if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil {
+		return fmt.Errorf("failed to ensure image availability: %w", err)
 	}
 
 	resp, err := createGoTestContainer(ctx, cli, config, containerName, absLogsDir, goTestCmd)
 	if err != nil {
-		return fmt.Errorf("creating container: %w", err)
+		return fmt.Errorf("failed to create container: %w", err)
 	}
 
 	if config.Verbose {
 		log.Printf("Created container: %s", resp.ID)
 	}
 
-	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil { //nolint:noinlineerr
-		return fmt.Errorf("starting container: %w", err)
+	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
+		return fmt.Errorf("failed to start container: %w", err)
 	}
 
 	log.Printf("Starting test: %s", config.TestPattern)
-	log.Printf("Run ID: %s", runID)
-	log.Printf("Monitor with: docker logs -f %s", containerName)
-	log.Printf("Logs directory: %s", logsDir)
 
 	// Start stats collection for container resource monitoring (if enabled)
 	var statsCollector *StatsCollector
 
 	if config.Stats {
 		var err error
-
-		statsCollector, err = NewStatsCollector(ctx)
+		statsCollector, err = NewStatsCollector()
 		if err != nil {
 			if config.Verbose {
 				log.Printf("Warning: failed to create stats collector: %v", err)
 			}
 
 			statsCollector = nil
 		}
 
@@ -120,8 +107,7 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
 
 		// Start stats collection immediately - no need for complex retry logic
 		// The new implementation monitors Docker events and will catch containers as they start
-		err := statsCollector.StartCollection(ctx, runID, config.Verbose)
-		if err != nil {
+		if err := statsCollector.StartCollection(ctx, runID, config.Verbose); err != nil {
 			if config.Verbose {
 				log.Printf("Warning: failed to start stats collection: %v", err)
 			}
@@ -133,13 +119,12 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
 	exitCode, err := streamAndWait(ctx, cli, resp.ID)
 
 	// Ensure all containers have finished and logs are flushed before extracting artifacts
-	waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose)
-	if waitErr != nil && config.Verbose {
+	if waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose); waitErr != nil && config.Verbose {
 		log.Printf("Warning: failed to wait for container finalization: %v", waitErr)
 	}
 
 	// Extract artifacts from test containers before cleanup
-	if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose { //nolint:noinlineerr
+	if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose {
 		log.Printf("Warning: failed to extract artifacts from containers: %v", err)
 	}
 
@@ -152,44 +137,26 @@ func runTestContainer(ctx context.Context, config *RunConfig) error {
 	if len(violations) > 0 {
 		log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
 		log.Printf("=================================")
 
 		for _, violation := range violations {
 			log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
 				violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
 		}
-
-		return fmt.Errorf("test failed: %d %w", len(violations), ErrMemoryLimitViolations)
+		return fmt.Errorf("test failed: %d container(s) exceeded memory limits", len(violations))
 		}
 	}
 
 	shouldCleanup := config.CleanAfter && (!config.KeepOnFailure || exitCode == 0)
 	if shouldCleanup {
 		if config.Verbose {
-			log.Printf("Running post-test cleanup for run %s...", runID)
+			log.Printf("Running post-test cleanup...")
 		}
-
-		cleanErr := cleanupAfterTest(ctx, cli, resp.ID, runID)
-
-		if cleanErr != nil && config.Verbose {
+		if cleanErr := cleanupAfterTest(ctx, cli, resp.ID); cleanErr != nil && config.Verbose {
 			log.Printf("Warning: post-test cleanup failed: %v", cleanErr)
 		}
-
-		// Clean up artifacts from successful tests to save disk space in CI
-		if exitCode == 0 {
-			if config.Verbose {
-				log.Printf("Test succeeded, cleaning up artifacts to save disk space...")
-			}
-
-			cleanErr := cleanupSuccessfulTestArtifacts(logsDir, config.Verbose)
-
-			if cleanErr != nil && config.Verbose {
-				log.Printf("Warning: artifact cleanup failed: %v", cleanErr)
-			}
-		}
 	}
 
 	if err != nil {
-		return fmt.Errorf("executing test: %w", err)
+		return fmt.Errorf("test execution failed: %w", err)
 	}
 
 	if exitCode != 0 {
@@ -223,7 +190,7 @@ func buildGoTestCommand(config *RunConfig) []string {
 func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunConfig, containerName, logsDir string, goTestCmd []string) (container.CreateResponse, error) {
 	pwd, err := os.Getwd()
 	if err != nil {
-		return container.CreateResponse{}, fmt.Errorf("getting working directory: %w", err)
+		return container.CreateResponse{}, fmt.Errorf("failed to get working directory: %w", err)
 	}
 
 	projectRoot := findProjectRoot(pwd)
@@ -234,28 +201,6 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
 		fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
 		"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
 	}
 
-	// Pass through CI environment variable for CI detection
-	if ci := os.Getenv("CI"); ci != "" {
-		env = append(env, "CI="+ci)
-	}
-
-	// Pass through all HEADSCALE_INTEGRATION_* environment variables
-	for _, e := range os.Environ() {
-		if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
-			// Skip the ones we already set explicitly
-			if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
-				strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
-				continue
-			}
-
-			env = append(env, e)
-		}
-	}
-
-	// Set GOCACHE to a known location (used by both bind mount and volume cases)
-	env = append(env, "GOCACHE=/cache/go-build")
-
 	containerConfig := &container.Config{
 		Image: "golang:" + config.GoVersion,
 		Cmd:   goTestCmd,
@@ -275,43 +220,20 @@ func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunC
 		log.Printf("Using Docker socket: %s", dockerSocketPath)
 	}
 
-	binds := []string{
+	hostConfig := &container.HostConfig{
+		AutoRemove: false, // We'll remove manually for better control
+		Binds: []string{
 		fmt.Sprintf("%s:%s", projectRoot, projectRoot),
 		dockerSocketPath + ":/var/run/docker.sock",
 		logsDir + ":/tmp/control",
-	}
-
-	// Use bind mounts for Go cache if provided via environment variables,
-	// otherwise fall back to Docker volumes for local development
-	var mounts []mount.Mount
-
-	goCache := os.Getenv("HEADSCALE_INTEGRATION_GO_CACHE")
-	goBuildCache := os.Getenv("HEADSCALE_INTEGRATION_GO_BUILD_CACHE")
-
-	if goCache != "" {
-		binds = append(binds, goCache+":/go")
-	} else {
-		mounts = append(mounts, mount.Mount{
+		},
+		Mounts: []mount.Mount{
+			{
 			Type:   mount.TypeVolume,
 			Source: "hs-integration-go-cache",
 			Target: "/go",
-		})
-	}
-
-	if goBuildCache != "" {
-		binds = append(binds, goBuildCache+":/cache/go-build")
-	} else {
-		mounts = append(mounts, mount.Mount{
-			Type:   mount.TypeVolume,
-			Source: "hs-integration-go-build-cache",
-			Target: "/cache/go-build",
-		})
-	}
-
-	hostConfig := &container.HostConfig{
-		AutoRemove: false, // We'll remove manually for better control
-		Binds:      binds,
-		Mounts:     mounts,
-	}
+			},
+		},
+	}
 
 	return cli.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, containerName)
@@ -325,7 +247,7 @@ func streamAndWait(ctx context.Context, cli *client.Client, containerID string)
 		Follow:     true,
 	})
 	if err != nil {
-		return -1, fmt.Errorf("getting container logs: %w", err)
+		return -1, fmt.Errorf("failed to get container logs: %w", err)
 	}
 	defer out.Close()
 
@@ -337,7 +259,7 @@ func streamAndWait(ctx context.Context, cli *client.Client, containerID string)
 	select {
 	case err := <-errCh:
 		if err != nil {
-			return -1, fmt.Errorf("waiting for container: %w", err)
+			return -1, fmt.Errorf("error waiting for container: %w", err)
 		}
 	case status := <-statusCh:
 		return int(status.StatusCode), nil
@@ -351,7 +273,7 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
 	// First, get all related test containers
 	containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
 	if err != nil {
-		return fmt.Errorf("listing containers: %w", err)
+		return fmt.Errorf("failed to list containers: %w", err)
 	}
 
 	testContainers := getCurrentTestContainers(containers, testContainerID, verbose)
@@ -360,7 +282,6 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
 	maxWaitTime := 10 * time.Second
 	checkInterval := 500 * time.Millisecond
 	timeout := time.After(maxWaitTime)
-
 	ticker := time.NewTicker(checkInterval)
 	defer ticker.Stop()
 
@@ -370,7 +291,6 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
 			if verbose {
 				log.Printf("Timeout waiting for container finalization, proceeding with artifact extraction")
 			}
-
 			return nil
 		case <-ticker.C:
 			allFinalized := true
@@ -381,14 +301,12 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
 					if verbose {
 						log.Printf("Warning: failed to inspect container %s: %v", testCont.name, err)
 					}
-
 					continue
 				}
 
 				// Check if container is in a final state
 				if !isContainerFinalized(inspect.State) {
 					allFinalized = false
-
 					if verbose {
 						log.Printf("Container %s still finalizing (state: %s)", testCont.name, inspect.State.Status)
 					}
@@ -401,7 +319,6 @@ func waitForContainerFinalization(ctx context.Context, cli *client.Client, testC
 			if verbose {
 				log.Printf("All test containers finalized, ready for artifact extraction")
 			}
-
 			return nil
 		}
 	}
@@ -418,15 +335,13 @@ func isContainerFinalized(state *container.State) bool {
 func findProjectRoot(startPath string) string {
 	current := startPath
 	for {
-		if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil { //nolint:noinlineerr
+		if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil {
 			return current
 		}
-
 		parent := filepath.Dir(current)
 		if parent == current {
 			return startPath
 		}
-
 		current = parent
 	}
 }
@@ -436,37 +351,34 @@ func boolToInt(b bool) int {
 	if b {
 		return 1
 	}
 
 	return 0
 }
 
 // DockerContext represents Docker context information.
 type DockerContext struct {
 	Name      string         `json:"Name"`
-	Metadata  map[string]any `json:"Metadata"`
-	Endpoints map[string]any `json:"Endpoints"`
+	Metadata  map[string]interface{} `json:"Metadata"`
+	Endpoints map[string]interface{} `json:"Endpoints"`
 	Current   bool           `json:"Current"`
 }
 
 // createDockerClient creates a Docker client with context detection.
-func createDockerClient(ctx context.Context) (*client.Client, error) {
-	contextInfo, err := getCurrentDockerContext(ctx)
+func createDockerClient() (*client.Client, error) {
+	contextInfo, err := getCurrentDockerContext()
 	if err != nil {
 		return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	}
 
 	var clientOpts []client.Opt
 
 	clientOpts = append(clientOpts, client.WithAPIVersionNegotiation())
 
 	if contextInfo != nil {
 		if endpoints, ok := contextInfo.Endpoints["docker"]; ok {
-			if endpointMap, ok := endpoints.(map[string]any); ok {
+			if endpointMap, ok := endpoints.(map[string]interface{}); ok {
 				if host, ok := endpointMap["Host"].(string); ok {
 					if runConfig.Verbose {
 						log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host)
 					}
 
 					clientOpts = append(clientOpts, client.WithHost(host))
 				}
 			}
@@ -481,17 +393,16 @@ func createDockerClient(ctx context.Context) (*client.Client, error) {
 }
 
 // getCurrentDockerContext retrieves the current Docker context information.
-func getCurrentDockerContext(ctx context.Context) (*DockerContext, error) {
-	cmd := exec.CommandContext(ctx, "docker", "context", "inspect")
+func getCurrentDockerContext() (*DockerContext, error) {
+	cmd := exec.Command("docker", "context", "inspect")
 
 	output, err := cmd.Output()
 	if err != nil {
-		return nil, fmt.Errorf("getting docker context: %w", err)
+		return nil, fmt.Errorf("failed to get docker context: %w", err)
 	}
 
 	var contexts []DockerContext
-	if err := json.Unmarshal(output, &contexts); err != nil { //nolint:noinlineerr
-		return nil, fmt.Errorf("parsing docker context: %w", err)
+	if err := json.Unmarshal(output, &contexts); err != nil {
+		return nil, fmt.Errorf("failed to parse docker context: %w", err)
 	}
 
 	if len(contexts) > 0 {
@@ -510,13 +421,12 @@ func getDockerSocketPath() string {
 
 // checkImageAvailableLocally checks if the specified Docker image is available locally.
 func checkImageAvailableLocally(ctx context.Context, cli *client.Client, imageName string) (bool, error) {
-	_, _, err := cli.ImageInspectWithRaw(ctx, imageName) //nolint:staticcheck // SA1019: deprecated but functional
+	_, _, err := cli.ImageInspectWithRaw(ctx, imageName)
 	if err != nil {
-		if client.IsErrNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
+		if client.IsErrNotFound(err) {
 			return false, nil
 		}
-
-		return false, fmt.Errorf("inspecting image %s: %w", imageName, err)
+		return false, fmt.Errorf("failed to inspect image %s: %w", imageName, err)
 	}
 
 	return true, nil
@@ -527,14 +437,13 @@ func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName str
 	// First check if image is available locally
 	available, err := checkImageAvailableLocally(ctx, cli, imageName)
 	if err != nil {
-		return fmt.Errorf("checking local image availability: %w", err)
+		return fmt.Errorf("failed to check local image availability: %w", err)
 	}
 
 	if available {
 		if verbose {
 			log.Printf("Image %s is available locally", imageName)
 		}
 
 		return nil
 	}
 
@@ -545,21 +454,20 @@ func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName str
 
 	reader, err := cli.ImagePull(ctx, imageName, image.PullOptions{})
 	if err != nil {
-		return fmt.Errorf("pulling image %s: %w", imageName, err)
+		return fmt.Errorf("failed to pull image %s: %w", imageName, err)
 	}
 	defer reader.Close()
 
 	if verbose {
 		_, err = io.Copy(os.Stdout, reader)
 		if err != nil {
-			return fmt.Errorf("reading pull output: %w", err)
+			return fmt.Errorf("failed to read pull output: %w", err)
 		}
 	} else {
 		_, err = io.Copy(io.Discard, reader)
 		if err != nil {
-			return fmt.Errorf("reading pull output: %w", err)
+			return fmt.Errorf("failed to read pull output: %w", err)
 		}
 
 		log.Printf("Image %s pulled successfully", imageName)
 	}
@@ -574,11 +482,9 @@ func listControlFiles(logsDir string) {
 		return
 	}
 
-	var (
-		logFiles  []string
-		dataFiles []string
-		dataDirs  []string
-	)
+	var logFiles []string
+	var dataFiles []string
+	var dataDirs []string
 
 	for _, entry := range entries {
 		name := entry.Name()
@@ -607,7 +513,6 @@ func listControlFiles(logsDir string) {
 
 	if len(logFiles) > 0 {
 		log.Printf("Headscale logs:")
-
 		for _, file := range logFiles {
 			log.Printf("  %s", file)
 		}
@@ -615,11 +520,9 @@ func listControlFiles(logsDir string) {
 
 	if len(dataFiles) > 0 || len(dataDirs) > 0 {
 		log.Printf("Headscale data:")
-
 		for _, file := range dataFiles {
 			log.Printf("  %s", file)
 		}
-
 		for _, dir := range dataDirs {
 			log.Printf("  %s/", dir)
 		}
@@ -628,27 +531,25 @@ func listControlFiles(logsDir string) {
 
 // extractArtifactsFromContainers collects container logs and files from the specific test run.
 func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDir string, verbose bool) error {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
-		return fmt.Errorf("creating Docker client: %w", err)
+		return fmt.Errorf("failed to create Docker client: %w", err)
 	}
 	defer cli.Close()
 
 	// List all containers
 	containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
 	if err != nil {
-		return fmt.Errorf("listing containers: %w", err)
+		return fmt.Errorf("failed to list containers: %w", err)
 	}
 
 	// Get containers from the specific test run
 	currentTestContainers := getCurrentTestContainers(containers, testContainerID, verbose)
 
 	extractedCount := 0
 
 	for _, cont := range currentTestContainers {
 		// Extract container logs and tar files
-		err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose)
-		if err != nil {
+		if err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose); err != nil {
 			if verbose {
 				log.Printf("Warning: failed to extract artifacts from container %s (%s): %v", cont.name, cont.ID[:12], err)
 			}
@@ -656,7 +557,6 @@ func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDi
 			if verbose {
 				log.Printf("Extracted artifacts from container %s (%s)", cont.name, cont.ID[:12])
 			}
-
 			extractedCount++
 		}
 	}
@@ -680,13 +580,11 @@ func getCurrentTestContainers(containers []container.Summary, testContainerID st
 
 	// Find the test container to get its run ID label
 	var runID string
-
 	for _, cont := range containers {
 		if cont.ID == testContainerID {
 			if cont.Labels != nil {
 				runID = cont.Labels["hi.run-id"]
|
runID = cont.Labels["hi.run-id"]
|
||||||
}
|
}
|
||||||
|
|
||||||
break
|
break
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -727,21 +625,18 @@ func getCurrentTestContainers(containers []container.Summary, testContainerID st
|
|||||||
// extractContainerArtifacts saves logs and tar files from a container.
|
// extractContainerArtifacts saves logs and tar files from a container.
|
||||||
func extractContainerArtifacts(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
|
func extractContainerArtifacts(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
|
||||||
// Ensure the logs directory exists
|
// Ensure the logs directory exists
|
||||||
err := os.MkdirAll(logsDir, defaultDirPerm)
|
if err := os.MkdirAll(logsDir, 0o755); err != nil {
|
||||||
if err != nil {
|
return fmt.Errorf("failed to create logs directory: %w", err)
|
||||||
return fmt.Errorf("creating logs directory: %w", err)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Extract container logs
|
// Extract container logs
|
||||||
err = extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose)
|
if err := extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose); err != nil {
|
||||||
if err != nil {
|
return fmt.Errorf("failed to extract logs: %w", err)
|
||||||
return fmt.Errorf("extracting logs: %w", err)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// Extract tar files for headscale containers only
|
// Extract tar files for headscale containers only
|
||||||
if strings.HasPrefix(containerName, "hs-") {
|
if strings.HasPrefix(containerName, "hs-") {
|
||||||
err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose)
|
if err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose); err != nil {
|
||||||
if err != nil {
|
|
||||||
if verbose {
|
if verbose {
|
||||||
log.Printf("Warning: failed to extract files from %s: %v", containerName, err)
|
log.Printf("Warning: failed to extract files from %s: %v", containerName, err)
|
||||||
}
|
}
|
||||||
@@ -763,7 +658,7 @@ func extractContainerLogs(ctx context.Context, cli *client.Client, containerID,
|
|||||||
Tail: "all",
|
Tail: "all",
|
||||||
})
|
})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("getting container logs: %w", err)
|
return fmt.Errorf("failed to get container logs: %w", err)
|
||||||
}
|
}
|
||||||
defer logReader.Close()
|
defer logReader.Close()
|
||||||
|
|
||||||
@@ -777,17 +672,17 @@ func extractContainerLogs(ctx context.Context, cli *client.Client, containerID,
|
|||||||
// Demultiplex the Docker logs stream to separate stdout and stderr
|
// Demultiplex the Docker logs stream to separate stdout and stderr
|
||||||
_, err = stdcopy.StdCopy(&stdoutBuf, &stderrBuf, logReader)
|
_, err = stdcopy.StdCopy(&stdoutBuf, &stderrBuf, logReader)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("demultiplexing container logs: %w", err)
|
return fmt.Errorf("failed to demultiplex container logs: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Write stdout logs
|
// Write stdout logs
|
||||||
if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
|
if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil {
|
||||||
return fmt.Errorf("writing stdout log: %w", err)
|
return fmt.Errorf("failed to write stdout log: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Write stderr logs
|
// Write stderr logs
|
||||||
if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
|
if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil {
|
||||||
return fmt.Errorf("writing stderr log: %w", err)
|
return fmt.Errorf("failed to write stderr log: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
if verbose {
|
if verbose {
|
||||||
@@ -805,3 +700,63 @@ func extractContainerFiles(ctx context.Context, cli *client.Client, containerID,
|
|||||||
// This function is kept for potential future use or other file types
|
// This function is kept for potential future use or other file types
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// logExtractionError logs extraction errors with appropriate level based on error type.
|
||||||
|
func logExtractionError(artifactType, containerName string, err error, verbose bool) {
|
||||||
|
if errors.Is(err, ErrFileNotFoundInTar) {
|
||||||
|
// File not found is expected and only logged in verbose mode
|
||||||
|
if verbose {
|
||||||
|
log.Printf("No %s found in container %s", artifactType, containerName)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
// Other errors are actual failures and should be logged as warnings
|
||||||
|
log.Printf("Warning: failed to extract %s from %s: %v", artifactType, containerName, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractSingleFile copies a single file from a container.
|
||||||
|
func extractSingleFile(ctx context.Context, cli *client.Client, containerID, sourcePath, fileName, logsDir string, verbose bool) error {
|
||||||
|
tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err)
|
||||||
|
}
|
||||||
|
defer tarReader.Close()
|
||||||
|
|
||||||
|
// Extract the single file from the tar
|
||||||
|
filePath := filepath.Join(logsDir, fileName)
|
||||||
|
if err := extractFileFromTar(tarReader, filepath.Base(sourcePath), filePath); err != nil {
|
||||||
|
return fmt.Errorf("failed to extract file from tar: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Extracted %s from %s", fileName, containerID[:12])
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractDirectory copies a directory from a container and extracts its contents.
|
||||||
|
func extractDirectory(ctx context.Context, cli *client.Client, containerID, sourcePath, dirName, logsDir string, verbose bool) error {
|
||||||
|
tarReader, _, err := cli.CopyFromContainer(ctx, containerID, sourcePath)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("failed to copy %s from container: %w", sourcePath, err)
|
||||||
|
}
|
||||||
|
defer tarReader.Close()
|
||||||
|
|
||||||
|
// Create target directory
|
||||||
|
targetDir := filepath.Join(logsDir, dirName)
|
||||||
|
if err := os.MkdirAll(targetDir, 0o755); err != nil {
|
||||||
|
return fmt.Errorf("failed to create directory %s: %w", targetDir, err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract the directory from the tar
|
||||||
|
if err := extractDirectoryFromTar(tarReader, targetDir); err != nil {
|
||||||
|
return fmt.Errorf("failed to extract directory from tar: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Extracted %s/ from %s", dirName, containerID[:12])
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|||||||
@@ -38,13 +38,13 @@ func runDoctorCheck(ctx context.Context) error {
 	}
 
 	// Check 3: Go installation
-	results = append(results, checkGoInstallation(ctx))
+	results = append(results, checkGoInstallation())
 
 	// Check 4: Git repository
-	results = append(results, checkGitRepository(ctx))
+	results = append(results, checkGitRepository())
 
 	// Check 5: Required files
-	results = append(results, checkRequiredFiles(ctx))
+	results = append(results, checkRequiredFiles())
 
 	// Display results
 	displayDoctorResults(results)
@@ -86,7 +86,7 @@ func checkDockerBinary() DoctorResult {
 
 // checkDockerDaemon verifies Docker daemon is running and accessible.
 func checkDockerDaemon(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Daemon",
@@ -124,8 +124,8 @@ func checkDockerDaemon(ctx context.Context) DoctorResult {
 }
 
 // checkDockerContext verifies Docker context configuration.
-func checkDockerContext(ctx context.Context) DoctorResult {
-	contextInfo, err := getCurrentDockerContext(ctx)
+func checkDockerContext(_ context.Context) DoctorResult {
+	contextInfo, err := getCurrentDockerContext()
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Context",
@@ -155,7 +155,7 @@ func checkDockerContext(ctx context.Context) DoctorResult {
 
 // checkDockerSocket verifies Docker socket accessibility.
 func checkDockerSocket(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
 		return DoctorResult{
 			Name: "Docker Socket",
@@ -192,7 +192,7 @@ func checkDockerSocket(ctx context.Context) DoctorResult {
 
 // checkGolangImage verifies the golang Docker image is available locally or can be pulled.
 func checkGolangImage(ctx context.Context) DoctorResult {
-	cli, err := createDockerClient(ctx)
+	cli, err := createDockerClient()
 	if err != nil {
 		return DoctorResult{
 			Name: "Golang Image",
@@ -251,7 +251,7 @@ func checkGolangImage(ctx context.Context) DoctorResult {
 }
 
 // checkGoInstallation verifies Go is installed and working.
-func checkGoInstallation(ctx context.Context) DoctorResult {
+func checkGoInstallation() DoctorResult {
 	_, err := exec.LookPath("go")
 	if err != nil {
 		return DoctorResult{
@@ -265,8 +265,7 @@ func checkGoInstallation(ctx context.Context) DoctorResult {
 		}
 	}
 
-	cmd := exec.CommandContext(ctx, "go", "version")
+	cmd := exec.Command("go", "version")
 
 	output, err := cmd.Output()
 	if err != nil {
 		return DoctorResult{
@@ -286,9 +285,8 @@ func checkGoInstallation(ctx context.Context) DoctorResult {
 }
 
 // checkGitRepository verifies we're in a git repository.
-func checkGitRepository(ctx context.Context) DoctorResult {
-	cmd := exec.CommandContext(ctx, "git", "rev-parse", "--git-dir")
+func checkGitRepository() DoctorResult {
+	cmd := exec.Command("git", "rev-parse", "--git-dir")
 
 	err := cmd.Run()
 	if err != nil {
 		return DoctorResult{
@@ -310,7 +308,7 @@ func checkGitRepository(ctx context.Context) DoctorResult {
 }
 
 // checkRequiredFiles verifies required files exist.
-func checkRequiredFiles(ctx context.Context) DoctorResult {
+func checkRequiredFiles() DoctorResult {
 	requiredFiles := []string{
 		"go.mod",
 		"integration/",
@@ -318,12 +316,9 @@ func checkRequiredFiles(ctx context.Context) DoctorResult {
 	}
 
 	var missingFiles []string
 
 	for _, file := range requiredFiles {
-		cmd := exec.CommandContext(ctx, "test", "-e", file)
-
-		err := cmd.Run()
-		if err != nil {
+		cmd := exec.Command("test", "-e", file)
+		if err := cmd.Run(); err != nil {
 			missingFiles = append(missingFiles, file)
 		}
 	}
@@ -355,7 +350,6 @@ func displayDoctorResults(results []DoctorResult) {
 
 	for _, result := range results {
 		var icon string
 
 		switch result.Status {
		case "PASS":
 			icon = "✅"
@@ -79,18 +79,13 @@ func main() {
 }
 
 func cleanAll(ctx context.Context) error {
-	err := killTestContainers(ctx)
-	if err != nil {
+	if err := killTestContainers(ctx); err != nil {
 		return err
 	}
-
-	err = pruneDockerNetworks(ctx)
-	if err != nil {
+	if err := pruneDockerNetworks(ctx); err != nil {
 		return err
 	}
-
-	err = cleanOldImages(ctx)
-	if err != nil {
+	if err := cleanOldImages(ctx); err != nil {
 		return err
 	}
 
@@ -19,7 +19,7 @@ type RunConfig struct {
 	FailFast      bool   `flag:"failfast,default=true,Stop on first test failure"`
 	UsePostgres   bool   `flag:"postgres,default=false,Use PostgreSQL instead of SQLite"`
 	GoVersion     string `flag:"go-version,Go version to use (auto-detected from go.mod)"`
-	CleanBefore   bool   `flag:"clean-before,default=true,Clean stale resources before test"`
+	CleanBefore   bool   `flag:"clean-before,default=true,Clean resources before test"`
 	CleanAfter    bool   `flag:"clean-after,default=true,Clean resources after test"`
 	KeepOnFailure bool   `flag:"keep-on-failure,default=false,Keep containers on test failure"`
 	LogsDir       string `flag:"logs-dir,default=control_logs,Control logs directory"`
@@ -48,9 +48,7 @@ func runIntegrationTest(env *command.Env) error {
 	if runConfig.Verbose {
 		log.Printf("Running pre-flight system checks...")
 	}
-
-	err := runDoctorCheck(env.Context())
-	if err != nil {
+	if err := runDoctorCheck(env.Context()); err != nil {
 		return fmt.Errorf("pre-flight checks failed: %w", err)
 	}
 
@@ -68,15 +66,15 @@ func runIntegrationTest(env *command.Env) error {
 func detectGoVersion() string {
 	goModPath := filepath.Join("..", "..", "go.mod")
 
-	if _, err := os.Stat("go.mod"); err == nil { //nolint:noinlineerr
+	if _, err := os.Stat("go.mod"); err == nil {
 		goModPath = "go.mod"
-	} else if _, err := os.Stat("../../go.mod"); err == nil { //nolint:noinlineerr
+	} else if _, err := os.Stat("../../go.mod"); err == nil {
 		goModPath = "../../go.mod"
 	}
 
 	content, err := os.ReadFile(goModPath)
 	if err != nil {
-		return "1.26.1"
+		return "1.24"
 	}
 
 	lines := splitLines(string(content))
@@ -91,15 +89,13 @@ func detectGoVersion() string {
 		}
 	}
 
-	return "1.26.1"
+	return "1.24"
 }
 
 // splitLines splits a string into lines without using strings.Split.
 func splitLines(s string) []string {
-	var (
-		lines   []string
-		current string
-	)
+	var lines []string
+	var current string
 
 	for _, char := range s {
 		if char == '\n' {
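`detectGoVersion` above falls back to a hard-coded version and scans go.mod line by line via the `splitLines` helper. The sketch below mirrors that helper and adds a hypothetical `goDirective` parser for the `go X.Y` line — the scanning loop itself sits between the hunks and is not shown in the diff, so its exact logic is an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// splitLines splits a string into lines by scanning runes, mirroring the
// helper in the diff (which deliberately avoids strings.Split).
func splitLines(s string) []string {
	var lines []string
	var current string

	for _, char := range s {
		if char == '\n' {
			lines = append(lines, current)
			current = ""
		} else {
			current += string(char)
		}
	}
	// Keep a trailing line that has no final newline.
	if current != "" {
		lines = append(lines, current)
	}

	return lines
}

// goDirective scans go.mod content for a "go X.Y" directive and returns the
// fallback otherwise. This parsing step is an assumption about what
// detectGoVersion does between the hunks shown above.
func goDirective(content, fallback string) string {
	for _, line := range splitLines(content) {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "go ") {
			return strings.TrimSpace(strings.TrimPrefix(line, "go "))
		}
	}
	return fallback
}

func main() {
	gomod := "module example.com/hi\n\ngo 1.24\n"
	fmt.Println(goDirective(gomod, "1.24"))
	// → 1.24
}
```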
109
cmd/hi/stats.go
@@ -3,7 +3,6 @@ package main
|
|||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
"errors"
|
|
||||||
"fmt"
|
"fmt"
|
||||||
"log"
|
"log"
|
||||||
"sort"
|
"sort"
|
||||||
@@ -18,10 +17,7 @@ import (
|
|||||||
"github.com/docker/docker/client"
|
"github.com/docker/docker/client"
|
||||||
)
|
)
|
||||||
|
|
||||||
// ErrStatsCollectionAlreadyStarted is returned when trying to start stats collection that is already running.
|
// ContainerStats represents statistics for a single container
|
||||||
var ErrStatsCollectionAlreadyStarted = errors.New("stats collection already started")
|
|
||||||
|
|
||||||
// ContainerStats represents statistics for a single container.
|
|
||||||
type ContainerStats struct {
|
type ContainerStats struct {
|
||||||
ContainerID string
|
ContainerID string
|
||||||
ContainerName string
|
ContainerName string
|
||||||
@@ -29,14 +25,14 @@ type ContainerStats struct {
|
|||||||
mutex sync.RWMutex
|
mutex sync.RWMutex
|
||||||
}
|
}
|
||||||
|
|
||||||
// StatsSample represents a single stats measurement.
|
// StatsSample represents a single stats measurement
|
||||||
type StatsSample struct {
|
type StatsSample struct {
|
||||||
Timestamp time.Time
|
Timestamp time.Time
|
||||||
CPUUsage float64 // CPU usage percentage
|
CPUUsage float64 // CPU usage percentage
|
||||||
MemoryMB float64 // Memory usage in MB
|
MemoryMB float64 // Memory usage in MB
|
||||||
}
|
}
|
||||||
|
|
||||||
// StatsCollector manages collection of container statistics.
|
// StatsCollector manages collection of container statistics
|
||||||
type StatsCollector struct {
|
type StatsCollector struct {
|
||||||
client *client.Client
|
client *client.Client
|
||||||
containers map[string]*ContainerStats
|
containers map[string]*ContainerStats
|
||||||
@@ -46,11 +42,11 @@ type StatsCollector struct {
|
|||||||
collectionStarted bool
|
collectionStarted bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewStatsCollector creates a new stats collector instance.
|
// NewStatsCollector creates a new stats collector instance
|
||||||
func NewStatsCollector(ctx context.Context) (*StatsCollector, error) {
|
func NewStatsCollector() (*StatsCollector, error) {
|
||||||
cli, err := createDockerClient(ctx)
|
cli, err := createDockerClient()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf("creating Docker client: %w", err)
|
return nil, fmt.Errorf("failed to create Docker client: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return &StatsCollector{
|
return &StatsCollector{
|
||||||
@@ -60,25 +56,23 @@ func NewStatsCollector(ctx context.Context) (*StatsCollector, error) {
|
|||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID.
|
// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID
|
||||||
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
|
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
|
||||||
sc.mutex.Lock()
|
sc.mutex.Lock()
|
||||||
defer sc.mutex.Unlock()
|
defer sc.mutex.Unlock()
|
||||||
|
|
||||||
if sc.collectionStarted {
|
if sc.collectionStarted {
|
||||||
return ErrStatsCollectionAlreadyStarted
|
return fmt.Errorf("stats collection already started")
|
||||||
}
|
}
|
||||||
|
|
||||||
sc.collectionStarted = true
|
sc.collectionStarted = true
|
||||||
|
|
||||||
// Start monitoring existing containers
|
// Start monitoring existing containers
|
||||||
sc.wg.Add(1)
|
sc.wg.Add(1)
|
||||||
|
|
||||||
go sc.monitorExistingContainers(ctx, runID, verbose)
|
go sc.monitorExistingContainers(ctx, runID, verbose)
|
||||||
|
|
||||||
// Start Docker events monitoring for new containers
|
// Start Docker events monitoring for new containers
|
||||||
sc.wg.Add(1)
|
sc.wg.Add(1)
|
||||||
|
|
||||||
go sc.monitorDockerEvents(ctx, runID, verbose)
|
go sc.monitorDockerEvents(ctx, runID, verbose)
|
||||||
|
|
||||||
if verbose {
|
if verbose {
|
||||||
@@ -88,16 +82,14 @@ func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, ver
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// StopCollection stops all stats collection.
|
// StopCollection stops all stats collection
|
||||||
func (sc *StatsCollector) StopCollection() {
|
func (sc *StatsCollector) StopCollection() {
|
||||||
// Check if already stopped without holding lock
|
// Check if already stopped without holding lock
|
||||||
sc.mutex.RLock()
|
sc.mutex.RLock()
|
||||||
|
|
||||||
if !sc.collectionStarted {
|
if !sc.collectionStarted {
|
||||||
sc.mutex.RUnlock()
|
sc.mutex.RUnlock()
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
sc.mutex.RUnlock()
|
sc.mutex.RUnlock()
|
||||||
|
|
||||||
// Signal stop to all goroutines
|
// Signal stop to all goroutines
|
||||||
@@ -112,7 +104,7 @@ func (sc *StatsCollector) StopCollection() {
|
|||||||
sc.mutex.Unlock()
|
sc.mutex.Unlock()
|
||||||
}
|
}
|
||||||
|
|
||||||
// monitorExistingContainers checks for existing containers that match our criteria.
|
// monitorExistingContainers checks for existing containers that match our criteria
|
||||||
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
|
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
|
||||||
defer sc.wg.Done()
|
defer sc.wg.Done()
|
||||||
|
|
||||||
@@ -121,7 +113,6 @@ func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID s
|
|||||||
if verbose {
|
if verbose {
|
||||||
log.Printf("Failed to list existing containers: %v", err)
|
log.Printf("Failed to list existing containers: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -132,7 +123,7 @@ func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID s
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// monitorDockerEvents listens for container start events and begins monitoring relevant containers.
|
// monitorDockerEvents listens for container start events and begins monitoring relevant containers
|
||||||
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
|
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
|
||||||
defer sc.wg.Done()
|
defer sc.wg.Done()
|
||||||
|
|
||||||
@@ -155,13 +146,13 @@ func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string,
|
|||||||
case event := <-events:
|
case event := <-events:
|
||||||
if event.Type == "container" && event.Action == "start" {
|
if event.Type == "container" && event.Action == "start" {
|
||||||
// Get container details
|
// Get container details
|
||||||
containerInfo, err := sc.client.ContainerInspect(ctx, event.ID) //nolint:staticcheck // SA1019: use Actor.ID
|
containerInfo, err := sc.client.ContainerInspect(ctx, event.ID)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
continue
|
continue
|
||||||
}
|
}
|
||||||
|
|
||||||
// Convert to types.Container format for consistency
|
// Convert to types.Container format for consistency
|
||||||
cont := types.Container{ //nolint:staticcheck // SA1019: use container.Summary
|
cont := types.Container{
|
||||||
ID: containerInfo.ID,
|
ID: containerInfo.ID,
|
||||||
Names: []string{containerInfo.Name},
|
Names: []string{containerInfo.Name},
|
||||||
Labels: containerInfo.Config.Labels,
|
Labels: containerInfo.Config.Labels,
|
||||||
@@ -175,14 +166,13 @@ func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string,
|
|||||||
if verbose {
|
if verbose {
|
||||||
log.Printf("Error in Docker events stream: %v", err)
|
log.Printf("Error in Docker events stream: %v", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return
|
return
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
// shouldMonitorContainer determines if a container should be monitored.
|
// shouldMonitorContainer determines if a container should be monitored
|
||||||
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool { //nolint:staticcheck // SA1019: use container.Summary
|
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool {
|
||||||
// Check if it has the correct run ID label
|
// Check if it has the correct run ID label
|
||||||
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
|
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
|
||||||
return false
|
return false
|
||||||
@@ -199,7 +189,7 @@ func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID str
|
|||||||
return false
|
return false
|
||||||
}
|
}
|
||||||
|
|
||||||
// startStatsForContainer begins stats collection for a specific container.
|
// startStatsForContainer begins stats collection for a specific container
|
||||||
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
|
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
|
||||||
containerName = strings.TrimPrefix(containerName, "/")
|
containerName = strings.TrimPrefix(containerName, "/")
|
||||||
|
|
||||||
@@ -222,11 +212,10 @@ func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerI
|
|||||||
}
|
}
|
||||||
|
|
||||||
sc.wg.Add(1)
|
sc.wg.Add(1)
|
||||||
|
|
||||||
go sc.collectStatsForContainer(ctx, containerID, verbose)
|
go sc.collectStatsForContainer(ctx, containerID, verbose)
|
||||||
}
|
}
|
||||||
|
|
||||||
// collectStatsForContainer collects stats for a specific container using Docker API streaming.
|
// collectStatsForContainer collects stats for a specific container using Docker API streaming
|
||||||
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
|
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
|
||||||
defer sc.wg.Done()
|
defer sc.wg.Done()
|
||||||
|
|
||||||
@@ -236,14 +225,12 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
|
|||||||
if verbose {
|
if verbose {
|
||||||
			log.Printf("Failed to get stats stream for container %s: %v", containerID[:12], err)
		}

		return
	}
	defer statsResponse.Body.Close()

	decoder := json.NewDecoder(statsResponse.Body)

	var prevStats *container.Stats //nolint:staticcheck // SA1019: use StatsResponse

	for {
		select {
@@ -252,15 +239,12 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
		case <-ctx.Done():
			return
		default:
			var stats container.Stats //nolint:staticcheck // SA1019: use StatsResponse
			if err := decoder.Decode(&stats); err != nil {
				// EOF is expected when container stops or stream ends
				if err.Error() != "EOF" && verbose {
					log.Printf("Failed to decode stats for container %s: %v", containerID[:12], err)
				}

				return
			}

@@ -276,10 +260,8 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
			// Store the sample (skip first sample since CPU calculation needs previous stats)
			if prevStats != nil {
				// Get container stats reference without holding the main mutex
				var (
					containerStats *ContainerStats
					exists         bool
				)

				sc.mutex.RLock()
				containerStats, exists = sc.containers[containerID]
@@ -302,8 +284,8 @@ func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containe
	}
}

// calculateCPUPercent calculates CPU usage percentage from Docker stats.
func calculateCPUPercent(prevStats, stats *container.Stats) float64 { //nolint:staticcheck // SA1019: use StatsResponse
	// CPU calculation based on Docker's implementation
	cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
	systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)
@@ -315,14 +297,12 @@ func calculateCPUPercent(prevStats, stats *container.Stats) float64 { //nolint:s
			// Fallback: if PercpuUsage is not available, assume 1 CPU
			numCPUs = 1.0
		}

		return (cpuDelta / systemDelta) * numCPUs * 100.0
	}

	return 0.0
}

// ContainerStatsSummary represents summary statistics for a container.
type ContainerStatsSummary struct {
	ContainerName string
	SampleCount   int
@@ -330,30 +310,28 @@ type ContainerStatsSummary struct {
	Memory StatsSummary
}

// MemoryViolation represents a container that exceeded the memory limit.
type MemoryViolation struct {
	ContainerName string
	MaxMemoryMB   float64
	LimitMB       float64
}

// StatsSummary represents min, max, and average for a metric.
type StatsSummary struct {
	Min     float64
	Max     float64
	Average float64
}

// GetSummary returns a summary of collected statistics.
func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
	// Take snapshot of container references without holding main lock long
	sc.mutex.RLock()

	containerRefs := make([]*ContainerStats, 0, len(sc.containers))
	for _, containerStats := range sc.containers {
		containerRefs = append(containerRefs, containerStats)
	}

	sc.mutex.RUnlock()

	summaries := make([]ContainerStatsSummary, 0, len(containerRefs))
@@ -397,36 +375,34 @@ func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
	return summaries
}

// calculateStatsSummary calculates min, max, and average for a slice of values.
func calculateStatsSummary(values []float64) StatsSummary {
	if len(values) == 0 {
		return StatsSummary{}
	}

	minVal := values[0]
	maxVal := values[0]
	sum := 0.0

	for _, value := range values {
		if value < minVal {
			minVal = value
		}

		if value > maxVal {
			maxVal = value
		}

		sum += value
	}

	return StatsSummary{
		Min:     minVal,
		Max:     maxVal,
		Average: sum / float64(len(values)),
	}
}

// PrintSummary prints the statistics summary to the console.
func (sc *StatsCollector) PrintSummary() {
	summaries := sc.GetSummary()

@@ -448,14 +424,13 @@ func (sc *StatsCollector) PrintSummary() {
	}
}

// CheckMemoryLimits checks if any containers exceeded their memory limits.
func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
	if hsLimitMB <= 0 && tsLimitMB <= 0 {
		return nil
	}

	summaries := sc.GetSummary()

	var violations []MemoryViolation

	for _, summary := range summaries {
@@ -480,13 +455,13 @@ func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []Memo
	return violations
}

// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations if any.
func (sc *StatsCollector) PrintSummaryAndCheckLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
	sc.PrintSummary()
	return sc.CheckMemoryLimits(hsLimitMB, tsLimitMB)
}

// Close closes the stats collector and cleans up resources.
func (sc *StatsCollector) Close() error {
	sc.StopCollection()
	return sc.client.Close()

cmd/hi/tar_utils.go (new file, 100 lines)
@@ -0,0 +1,100 @@
package main

import (
	"archive/tar"
	"errors"
	"fmt"
	"io"
	"os"
	"path/filepath"
	"strings"
)

// ErrFileNotFoundInTar indicates a file was not found in the tar archive.
var ErrFileNotFoundInTar = errors.New("file not found in tar")

// extractFileFromTar extracts a single file from a tar reader.
func extractFileFromTar(tarReader io.Reader, fileName, outputPath string) error {
	tr := tar.NewReader(tarReader)

	for {
		header, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return fmt.Errorf("failed to read tar header: %w", err)
		}

		// Check if this is the file we're looking for
		if filepath.Base(header.Name) == fileName {
			if header.Typeflag == tar.TypeReg {
				// Create the output file
				outFile, err := os.Create(outputPath)
				if err != nil {
					return fmt.Errorf("failed to create output file: %w", err)
				}
				defer outFile.Close()

				// Copy file contents
				if _, err := io.Copy(outFile, tr); err != nil {
					return fmt.Errorf("failed to copy file contents: %w", err)
				}

				return nil
			}
		}
	}

	return fmt.Errorf("%w: %s", ErrFileNotFoundInTar, fileName)
}

// extractDirectoryFromTar extracts all files from a tar reader to a target directory.
func extractDirectoryFromTar(tarReader io.Reader, targetDir string) error {
	tr := tar.NewReader(tarReader)

	for {
		header, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return fmt.Errorf("failed to read tar header: %w", err)
		}

		// Clean the path to prevent directory traversal
		cleanName := filepath.Clean(header.Name)
		if strings.Contains(cleanName, "..") {
			continue // Skip potentially dangerous paths
		}

		targetPath := filepath.Join(targetDir, filepath.Base(cleanName))

		switch header.Typeflag {
		case tar.TypeDir:
			// Create directory
			if err := os.MkdirAll(targetPath, os.FileMode(header.Mode)); err != nil {
				return fmt.Errorf("failed to create directory %s: %w", targetPath, err)
			}
		case tar.TypeReg:
			// Create file
			outFile, err := os.Create(targetPath)
			if err != nil {
				return fmt.Errorf("failed to create file %s: %w", targetPath, err)
			}

			if _, err := io.Copy(outFile, tr); err != nil {
				outFile.Close()
				return fmt.Errorf("failed to copy file contents: %w", err)
			}
			outFile.Close()

			// Set file permissions
			if err := os.Chmod(targetPath, os.FileMode(header.Mode)); err != nil {
				return fmt.Errorf("failed to set file permissions: %w", err)
			}
		}
	}

	return nil
}
@@ -1,66 +0,0 @@
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"

	"github.com/creachadair/command"
	"github.com/creachadair/flax"
	"github.com/juanfont/headscale/hscontrol/mapper"
	"github.com/juanfont/headscale/integration/integrationutil"
)

type MapConfig struct {
	Directory string `flag:"directory,Directory to read map responses from"`
}

var (
	mapConfig            MapConfig
	errDirectoryRequired = errors.New("directory is required")
)

func main() {
	root := command.C{
		Name: "mapresponses",
		Help: "MapResponses is a tool to map and compare map responses from a directory",
		Commands: []*command.C{
			{
				Name:     "online",
				Help:     "",
				Usage:    "run [test-pattern] [flags]",
				SetFlags: command.Flags(flax.MustBind, &mapConfig),
				Run:      runOnline,
			},
			command.HelpCommand(nil),
		},
	}

	env := root.NewEnv(nil).MergeFlags(true)
	command.RunOrFail(env, os.Args[1:])
}

// runOnline reads map responses from a directory and prints the expected online map.
func runOnline(env *command.Env) error {
	if mapConfig.Directory == "" {
		return errDirectoryRequired
	}

	resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
	if err != nil {
		return fmt.Errorf("reading map responses from directory: %w", err)
	}

	expected := integrationutil.BuildExpectedOnlineMap(resps)

	out, err := json.MarshalIndent(expected, "", "  ")
	if err != nil {
		return fmt.Errorf("marshaling expected online map: %w", err)
	}

	os.Stderr.Write(out)
	os.Stderr.Write([]byte("\n"))

	return nil
}
@@ -20,7 +20,6 @@ listen_addr: 127.0.0.1:8080

# Address to listen to /metrics and /debug, you may want
# to keep this endpoint private to your internal network.
# Use an empty value to disable the metrics listener.
metrics_listen_addr: 127.0.0.1:9090

# Address to listen for gRPC.
@@ -50,29 +49,18 @@ noise:
# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address,
# and the associated prefix length, delimited by a slash.
#
# WARNING: These prefixes MUST be subsets of the standard Tailscale ranges:
# - IPv4: 100.64.0.0/10 (CGNAT range)
# - IPv6: fd7a:115c:a1e0::/48 (Tailscale ULA range)
#
# Using a SUBSET of these ranges is supported and useful if you want to
# limit IP allocation to a smaller block (e.g., 100.64.0.0/24).
#
# Using ranges OUTSIDE of CGNAT/ULA is NOT supported and will cause
# undefined behaviour. The Tailscale client has hard-coded assumptions
# about these ranges and will break in subtle, hard-to-debug ways.
#
# See:
# IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48

# Strategy used for allocation of IPs to nodes, available options:
|
# Strategy used for allocation of IPs to nodes, available options:
|
||||||
# - sequential (default): assigns the next free IP from the previous given
|
# - sequential (default): assigns the next free IP from the previous given IP.
|
||||||
# IP. A best-effort approach is used and Headscale might leave holes in the
|
|
||||||
# IP range or fill up existing holes in the IP range.
|
|
||||||
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
|
# - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
|
||||||
allocation: sequential
|
allocation: sequential
|
||||||
|
|
||||||
@@ -117,7 +105,7 @@ derp:
|
|||||||
|
|
||||||
# For better connection stability (especially when using an Exit-Node and DNS is not working),
|
# For better connection stability (especially when using an Exit-Node and DNS is not working),
|
||||||
# it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
|
# it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
|
||||||
ipv4: 198.51.100.1
|
ipv4: 1.2.3.4
|
||||||
ipv6: 2001:db8::1
|
ipv6: 2001:db8::1
|
||||||
|
|
||||||
# List of externally available DERP maps encoded in JSON
|
# List of externally available DERP maps encoded in JSON
|
||||||
@@ -140,30 +128,13 @@ derp:
|
|||||||
auto_update_enabled: true
|
auto_update_enabled: true
|
||||||
|
|
||||||
# How often should we check for DERP updates?
|
# How often should we check for DERP updates?
|
||||||
update_frequency: 3h
|
update_frequency: 24h
|
||||||
|
|
||||||
# Disables the automatic check for headscale updates on startup
|
# Disables the automatic check for headscale updates on startup
|
||||||
disable_check_updates: false
|
disable_check_updates: false
|
||||||
|
|
||||||
# Node lifecycle configuration.
|
# Time before an inactive ephemeral node is deleted?
|
||||||
node:
|
ephemeral_node_inactivity_timeout: 30m
|
||||||
# Default key expiry for non-tagged nodes, regardless of registration method
|
|
||||||
# (auth key, CLI, web auth). Tagged nodes are exempt and never expire.
|
|
||||||
#
|
|
||||||
# This is the base default. OIDC can override this via oidc.expiry.
|
|
||||||
# If a client explicitly requests a specific expiry, the client value is used.
|
|
||||||
#
|
|
||||||
# Setting the value to "0" means no default expiry (nodes never expire unless
|
|
||||||
# explicitly expired via `headscale nodes expire`).
|
|
||||||
#
|
|
||||||
# Tailscale SaaS uses 180d; set to a positive duration to match that behaviour.
|
|
||||||
#
|
|
||||||
# Default: 0 (no default expiry)
|
|
||||||
expiry: 0
|
|
||||||
|
|
||||||
ephemeral:
|
|
||||||
# Time before an inactive ephemeral node is deleted.
|
|
||||||
inactivity_timeout: 30m
|
|
||||||
|
|
||||||
database:
|
database:
|
||||||
# Database type. Available options: sqlite, postgres
|
# Database type. Available options: sqlite, postgres
|
||||||
@@ -304,9 +275,9 @@ dns:
|
|||||||
# `hostname.base_domain` (e.g., _myhost.example.com_).
|
# `hostname.base_domain` (e.g., _myhost.example.com_).
|
||||||
base_domain: example.com
|
base_domain: example.com
|
||||||
|
|
||||||
# Whether to use the local DNS settings of a node or override the local DNS
|
# Whether to use the local DNS settings of a node (default) or override the
|
||||||
# settings (default) and force the use of Headscale's DNS configuration.
|
# local DNS settings and force the use of Headscale's DNS configuration.
|
||||||
override_local_dns: true
|
override_local_dns: false
|
||||||
|
|
||||||
# List of DNS servers to expose to clients.
|
# List of DNS servers to expose to clients.
|
||||||
nameservers:
|
nameservers:
|
||||||
@@ -322,7 +293,8 @@ dns:
|
|||||||
|
|
||||||
# Split DNS (see https://tailscale.com/kb/1054/dns/),
|
# Split DNS (see https://tailscale.com/kb/1054/dns/),
|
||||||
# a map of domains and which DNS server to use for each.
|
# a map of domains and which DNS server to use for each.
|
||||||
split: {}
|
split:
|
||||||
|
{}
|
||||||
# foo.bar.com:
|
# foo.bar.com:
|
||||||
# - 1.1.1.1
|
# - 1.1.1.1
|
||||||
# darp.headscale.net:
|
# darp.headscale.net:
|
||||||
@@ -372,11 +344,15 @@ unix_socket_permission: "0770"
|
|||||||
# # `LoadCredential` straightforward:
|
# # `LoadCredential` straightforward:
|
||||||
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
|
# client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
|
||||||
#
|
#
|
||||||
|
# # The amount of time a node is authenticated with OpenID until it expires
|
||||||
|
# # and needs to reauthenticate.
|
||||||
|
# # Setting the value to "0" will mean no expiry.
|
||||||
|
# expiry: 180d
|
||||||
|
#
|
||||||
# # Use the expiry from the token received from OpenID when the user logged
|
# # Use the expiry from the token received from OpenID when the user logged
|
||||||
# # in. This will typically lead to frequent need to reauthenticate and should
|
# # in. This will typically lead to frequent need to reauthenticate and should
|
||||||
# # only be enabled if you know what you are doing.
|
# # only be enabled if you know what you are doing.
|
||||||
# # Note: enabling this will cause `node.expiry` to be ignored for
|
# # Note: enabling this will cause `oidc.expiry` to be ignored.
|
||||||
# # OIDC-authenticated nodes.
|
|
||||||
# use_expiry_from_token: false
|
# use_expiry_from_token: false
|
||||||
#
|
#
|
||||||
# # The OIDC scopes to use, defaults to "openid", "profile" and "email".
|
# # The OIDC scopes to use, defaults to "openid", "profile" and "email".
|
||||||
@@ -384,12 +360,6 @@ unix_socket_permission: "0770"
|
|||||||
# # required "openid" scope.
|
# # required "openid" scope.
|
||||||
# scope: ["openid", "profile", "email"]
|
# scope: ["openid", "profile", "email"]
|
||||||
#
|
#
|
||||||
# # Only verified email addresses are synchronized to the user profile by
|
|
||||||
# # default. Unverified emails may be allowed in case an identity provider
|
|
||||||
# # does not send the "email_verified: true" claim or email verification is
|
|
||||||
# # not required.
|
|
||||||
# email_verified_required: true
|
|
||||||
#
|
|
||||||
# # Provide custom key/value pairs which get sent to the identity provider's
|
# # Provide custom key/value pairs which get sent to the identity provider's
|
||||||
# # authorization endpoint.
|
# # authorization endpoint.
|
||||||
# extra_params:
|
# extra_params:
|
||||||
@@ -422,13 +392,11 @@ unix_socket_permission: "0770"
|
|||||||
# method: S256
|
# method: S256
|
||||||
|
|
||||||
# Logtail configuration
|
# Logtail configuration
|
||||||
# Logtail is Tailscales logging and auditing infrastructure, it allows the
|
# Logtail is Tailscales logging and auditing infrastructure, it allows the control panel
|
||||||
# control panel to instruct tailscale nodes to log their activity to a remote
|
# to instruct tailscale nodes to log their activity to a remote server.
|
||||||
# server. To disable logging on the client side, please refer to:
|
|
||||||
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
|
|
||||||
logtail:
|
logtail:
|
||||||
# Enable logtail for tailscale nodes of this Headscale instance.
|
# Enable logtail for this headscales clients.
|
||||||
# As there is currently no support for overriding the log server in Headscale, this is
|
# As there is currently no support for overriding the log server in headscale, this is
|
||||||
# disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
|
# disabled by default. Enabling this will make your clients send logs to Tailscale Inc.
|
||||||
enabled: false
|
enabled: false
|
||||||
|
|
||||||
@@ -436,28 +404,3 @@ logtail:
|
|||||||
# default static port 41641. This option is intended as a workaround for some buggy
|
# default static port 41641. This option is intended as a workaround for some buggy
|
||||||
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
|
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
|
||||||
randomize_client_port: false
|
randomize_client_port: false
|
||||||
|
|
||||||
# Taildrop configuration
|
|
||||||
# Taildrop is the file sharing feature of Tailscale, allowing nodes to send files to each other.
|
|
||||||
# https://tailscale.com/kb/1106/taildrop/
|
|
||||||
taildrop:
|
|
||||||
# Enable or disable Taildrop for all nodes.
|
|
||||||
# When enabled, nodes can send files to other nodes owned by the same user.
|
|
||||||
# Tagged devices and cross-user transfers are not permitted by Tailscale clients.
|
|
||||||
enabled: true
|
|
||||||
# Advanced performance tuning parameters.
|
|
||||||
# The defaults are carefully chosen and should rarely need adjustment.
|
|
||||||
# Only modify these if you have identified a specific performance issue.
|
|
||||||
#
|
|
||||||
# tuning:
|
|
||||||
# # Maximum number of pending registration entries in the auth cache.
|
|
||||||
# # Oldest entries are evicted when the cap is reached.
|
|
||||||
# #
|
|
||||||
# # register_cache_max_entries: 1024
|
|
||||||
#
|
|
||||||
# # NodeStore write batching configuration.
|
|
||||||
# # The NodeStore batches write operations before rebuilding peer relationships,
|
|
||||||
# # which is computationally expensive. Batching reduces rebuild frequency.
|
|
||||||
# #
|
|
||||||
# # node_store_batch_size: 100
|
|
||||||
# # node_store_batch_timeout: 500ms
|
|
||||||
|
|||||||
@@ -1,6 +1,5 @@
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
  1: null # Disable DERP region with ID 1
  900:
    regionid: 900
    regioncode: custom
@@ -8,9 +7,9 @@ regions:
    nodes:
      - name: 900a
        regionid: 900
        hostname: myderp.example.com
        ipv4: 198.51.100.1
        ipv6: 2001:db8::1
        stunport: 0
        stunonly: false
        derpport: 0

@@ -1,3 +1,3 @@
{%
  include-markdown "../../CONTRIBUTING.md"
%}

@@ -24,12 +24,9 @@ We are more than happy to exchange emails, or to have dedicated calls before a P
|
|||||||
|
|
||||||
## When/Why is Feature X going to be implemented?
|
## When/Why is Feature X going to be implemented?
|
||||||
|
|
||||||
We use [GitHub Milestones to plan for upcoming Headscale releases](https://github.com/juanfont/headscale/milestones).
|
We don't know. We might be working on it. If you're interested in contributing, please post a feature request about it.
|
||||||
Have a look at [our current plan](https://github.com/juanfont/headscale/milestones) to get an idea when a specific
|
|
||||||
feature is about to be implemented. The release plan is subject to change at any time.
|
|
||||||
|
|
||||||
If you're interested in contributing, please post a feature request about it. Please be aware that there are a number of
|
Please be aware that there are a number of reasons why we might not accept specific contributions:
|
||||||
reasons why we might not accept specific contributions:
|
|
||||||
|
|
||||||
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
|
- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
|
||||||
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
|
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
|
||||||
@@ -47,15 +44,6 @@ For convenience, we also [build container images with headscale](../setup/instal
|
|||||||
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
|
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
|
||||||
we have a "docker-issues" channel where you can ask for Docker-specific help to the community.
|
we have a "docker-issues" channel where you can ask for Docker-specific help to the community.
|
||||||
|
|
||||||
## What is the recommended update path? Can I skip multiple versions while updating?
|
|
||||||
|
|
||||||
Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
|
|
||||||
installation. Its required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without
|
|
||||||
skipping minor versions in between. You should always pick the latest available patch release.
|
|
||||||
|
|
||||||
Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version specific
|
|
||||||
upgrade instructions and breaking changes.
|
|
||||||
|
|
||||||
## Scaling / How many clients does Headscale support?
|
## Scaling / How many clients does Headscale support?
|
||||||
|
|
||||||
It depends. As often stated, Headscale is not enterprise software and our focus
|
It depends. As often stated, Headscale is not enterprise software and our focus
|
||||||
@@ -76,7 +64,7 @@ of Headscale:
|
|||||||
- they rarely "move" (change their endpoints)
|
- they rarely "move" (change their endpoints)
|
||||||
- new nodes are added rarely
|
- new nodes are added rarely
|
||||||
|
|
||||||
1. An environment with 80 laptops/phones (end user devices)
|
2. An environment with 80 laptops/phones (end user devices)
|
||||||
|
|
||||||
- nodes move often, e.g. switching from home to office
|
- nodes move often, e.g. switching from home to office

@@ -142,72 +130,7 @@ connect back to the administrator's node. Why do all nodes see the administrator

`tailscale status`?

This is essentially how Tailscale works. If traffic is allowed to flow in one direction, then both nodes see each other
in their output of `tailscale status`. Traffic is still filtered according to the ACL, with the exception of
`tailscale ping` which is always allowed in either direction.

See also <https://tailscale.com/kb/1087/device-visibility>.

## My policy is stored in the database and Headscale refuses to start due to an invalid policy. How can I recover?

Headscale checks if the policy is valid during startup and refuses to start if it detects an error. The error message
indicates which part of the policy is invalid. Follow these steps to fix your policy:

- Dump the policy to a file: `headscale policy get --bypass-grpc-and-access-database-directly > policy.json`
- Edit and fix up `policy.json`. Use the command `headscale policy check --file policy.json` to validate the policy.
- Load the modified policy: `headscale policy set --bypass-grpc-and-access-database-directly --file policy.json`
- Start Headscale as usual.

!!! warning "Full server configuration required"

    The above commands to get/set the policy require a complete server configuration file including database settings. A
    minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use
    `headscale -c /path/to/config.yaml` to specify the path to an alternative configuration file.
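To get the server running again quickly, it can also help to load a minimal, known-valid policy first and restore the real rules afterwards. As described in the [ACL documentation](../ref/acls.md), an allow-all policy is valid; a sketch of such a `policy.json`, written as huJSON:

```json
// policy.json: minimal allow-all policy; tighten again once Headscale starts.
{
  "acls": [
    { "action": "accept", "src": ["*"], "dst": ["*:*"] }
  ]
}
```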
## How can I migrate back to the recommended IP prefixes?

Tailscale only supports the IP prefixes `100.64.0.0/10` and `fd7a:115c:a1e0::/48` or smaller subnets thereof. The
following steps can be used to migrate from unsupported IP prefixes back to the supported and recommended ones.

!!! warning "Backup and test in a demo environment required"

    The commands below update the IP addresses of all nodes in your tailnet and this might have a severe impact in your
    specific environment. At a minimum:

    - [Create a backup of your database](../setup/upgrade.md#backup)
    - Test the commands below in a representative demo environment. This allows you to catch subsequent connectivity
      errors early and see how the tailnet behaves in your specific environment.

- Stop Headscale
- Restore the default prefixes in the [configuration file](../ref/configuration.md):

    ```yaml
    prefixes:
      v4: 100.64.0.0/10
      v6: fd7a:115c:a1e0::/48
    ```

- Update the `nodes.ipv4` and `nodes.ipv6` columns in the database and assign each node a unique IPv4 and IPv6 address.
  The following SQL statement assigns IP addresses based on the node ID:

    ```sql
    UPDATE nodes
    SET ipv4=concat('100.64.', id/256, '.', id%256),
        ipv6=concat('fd7a:115c:a1e0::', format('%x', id));
    ```

- Update the [policy](../ref/acls.md) to reflect the IP address changes (if any)
- Start Headscale

Nodes should reconnect within a few seconds and pick up their newly assigned IP addresses.
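The ID-based address scheme used by the SQL statement above can be sanity-checked offline. This short Python sketch (an illustration, not part of Headscale) computes the same IPv4/IPv6 pair for a given node ID; note that the SQL's `id/256` is integer division, matching Python's `//`:

```python
def migrated_addresses(node_id: int) -> tuple[str, str]:
    """Mirror the SQL mapping: 100.64.x.y derived from the node ID and a hex IPv6 suffix.

    The IPv4 scheme stays inside 100.64.0.0/10 only for node IDs below 65536,
    the same limit that applies to the SQL statement above.
    """
    ipv4 = f"100.64.{node_id // 256}.{node_id % 256}"
    ipv6 = f"fd7a:115c:a1e0::{node_id:x}"
    return ipv4, ipv6

# Example: node 300 gets 100.64.1.44 and fd7a:115c:a1e0::12c
print(migrated_addresses(300))
```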
## How can I avoid sending logs to Tailscale Inc?

A Tailscale client [collects logs about its operation and connection attempts with other
clients](https://tailscale.com/kb/1011/log-mesh-traffic#client-logs) and sends them to a central log service operated by
Tailscale Inc.

Headscale, by default, instructs clients to disable log submission to the central log service. This configuration is
applied by a client once it has successfully connected with Headscale. See the configuration option `logtail.enabled` in
the [configuration file](../ref/configuration.md) for details.

Alternatively, logging can also be disabled on the client side. This is independent of Headscale and opting out of
client logging disables log submission early during client startup. The configuration is operating system specific and
is usually achieved by setting the environment variable `TS_NO_LOGS_NO_SUPPORT=true` or by passing the flag
`--no-logs-no-support` to `tailscaled`. See
<https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging> for details.
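On systems where `tailscaled` runs under systemd, the environment variable can be set persistently with a drop-in override. This is a sketch: the unit name `tailscaled.service` and the drop-in path are common defaults, verify them for your distribution.

```ini
# /etc/systemd/system/tailscaled.service.d/no-logs.conf
# Opt out of client log submission before tailscaled starts.
[Service]
Environment=TS_NO_LOGS_NO_SUPPORT=true
```

Afterwards run `systemctl daemon-reload` and restart `tailscaled` for the override to take effect.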
@@ -5,31 +5,30 @@ to provide self-hosters and hobbyists with an open-source server they can use fo

provides an overview of Headscale's features and compatibility with the Tailscale control server:

- [x] Full "base" support of Tailscale's features
- [x] [Node registration](../ref/registration.md)
    - [x] [Web authentication](../ref/registration.md#web-authentication)
    - [x] [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
- [x] [DNS](../ref/dns.md)
    - [x] [MagicDNS](https://tailscale.com/kb/1081/magicdns)
    - [x] [Global and restricted nameservers (split DNS)](https://tailscale.com/kb/1054/dns#nameservers)
    - [x] [search domains](https://tailscale.com/kb/1054/dns#search-domains)
    - [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records)
- [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop)
- [x] [Tags](../ref/tags.md)
- [x] [Routes](../ref/routes.md)
    - [x] [Subnet routers](../ref/routes.md#subnet-router)
    - [x] [Exit nodes](../ref/routes.md#exit-node)
- [x] Dual stack (IPv4 and IPv6)
- [x] Ephemeral nodes
- [x] Embedded [DERP server](../ref/derp.md)
- [x] Access control lists ([GitHub label "policy"](https://github.com/juanfont/headscale/labels/policy%20%F0%9F%93%9D))
    - [x] ACL management via API
    - [x] Some [Autogroups](https://tailscale.com/kb/1396/targets#autogroups), currently: `autogroup:internet`,
      `autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`, `autogroup:self`
    - [x] [Auto approvers](https://tailscale.com/kb/1337/acl-syntax#auto-approvers) for [subnet
      routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
      nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)
    - [x] [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh)
- [x] [Node registration using Single-Sign-On (OpenID Connect)](../ref/oidc.md) ([GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC))
    - [x] Basic registration
    - [x] Update user profile from identity provider
    - [ ] OIDC groups cannot be used in ACLs
135
docs/ref/acls.md
@@ -9,38 +9,9 @@ When using ACL's the User borders are no longer applied. All machines

regardless of their user, have the ability to communicate with other hosts as
long as the ACLs permit this exchange.

## ACL Setup

To enable and configure ACLs in Headscale, you need to specify the path to your ACL policy file in the `policy.path` key in `config.yaml`.

Your ACL policy file must be formatted using [huJSON](https://github.com/tailscale/hujson).

Info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).

Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.

## Simple Examples

- [**Allow All**](https://tailscale.com/kb/1192/acl-samples#allow-all-default-acl): If you define an ACL file but completely omit the `"acls"` field from its content, Headscale will default to an "allow all" policy. This means all devices connected to your tailnet will be able to communicate freely with each other.

    ```json
    {}
    ```

- [**Deny All**](https://tailscale.com/kb/1192/acl-samples#deny-all): To prevent all communication within your tailnet, you can include an empty array for the `"acls"` field in your policy file.

    ```json
    {
        "acls": []
    }
    ```
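Because the policy file is huJSON rather than strict JSON, it may contain `//` comments and trailing commas. The following Python sketch (an illustration, not part of Headscale; it handles only line comments and simple trailing commas, not every huJSON feature) shows how such a file can be normalized for tooling that expects plain JSON:

```python
import json
import re

def hujson_to_dict(text: str) -> dict:
    """Naively strip // line comments and trailing commas, then parse as JSON.

    A simplification: it does not handle /* */ comments or "//" sequences
    inside strings, which full huJSON parsers do.
    """
    no_comments = re.sub(r"^\s*//.*$", "", text, flags=re.MULTILINE)
    no_trailing = re.sub(r",(\s*[}\]])", r"\1", no_comments)
    return json.loads(no_trailing)

policy = hujson_to_dict("""
{
  // deny all traffic
  "acls": [],
}
""")
print(policy)  # {'acls': []}
```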
## Complex Example

Let's build a more complex example use case for a small business (it may be the place where
ACLs are the most useful).

We have a small company with a boss, an admin, two developers and an intern.

@@ -65,7 +36,11 @@ servers.

- billing.internal
- router.internal

![ACL implementation example](../images/headscale-acl-network.png)

When [registering the servers](../usage/getting-started.md#register-a-node) we
will need to add the flag `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user

@@ -74,6 +49,14 @@ tags to a server they can register, the check of the tags is done on headscale

server and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.

Here are the ACLs to implement the same permissions as above:

```json title="acl.json"
@@ -194,95 +177,13 @@ Here are the ACL's to implement the same permissions as above:

            "dst": ["tag:dev-app-servers:80,443"]
        },

        // Allow users to access their own devices using autogroup:self (see below for more details about performance impact)
        {
            "action": "accept",
            "src": ["autogroup:member"],
            "dst": ["autogroup:self:*"]
        }
    ]
}
```
## Autogroups

Headscale supports several autogroups that automatically include users, destinations, or devices with specific properties. Autogroups provide a convenient way to write ACL rules without manually listing individual users or devices.

### `autogroup:internet`

Allows access to the internet through [exit nodes](routes.md#exit-node). Can only be used in ACL destinations.

```json
{
    "action": "accept",
    "src": ["group:users"],
    "dst": ["autogroup:internet:*"]
}
```

### `autogroup:member`

Includes all [personal (untagged) devices](registration.md#identity-model).

```json
{
    "action": "accept",
    "src": ["autogroup:member"],
    "dst": ["tag:prod-app-servers:80,443"]
}
```

### `autogroup:tagged`

Includes all devices that [have at least one tag](registration.md#identity-model).

```json
{
    "action": "accept",
    "src": ["autogroup:tagged"],
    "dst": ["tag:monitoring:9090"]
}
```

### `autogroup:self`

!!! warning "The current implementation of `autogroup:self` is inefficient"

Includes devices where the same user is authenticated on both the source and destination. Does not include tagged devices. Can only be used in ACL destinations.

```json
{
    "action": "accept",
    "src": ["autogroup:member"],
    "dst": ["autogroup:self:*"]
}
```

*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*

If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.

```json
{
    "acls": [
        // The following rules allow internal users to communicate with their
        // own nodes in case autogroup:self is causing performance issues.
        { "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
        { "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
        { "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
        { "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
        { "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
    ]
}
```

### `autogroup:nonroot`

Used in Tailscale SSH rules to allow access to any user except root. Can only be used in the `users` field of SSH rules.

```json
{
    "action": "accept",
    "src": ["autogroup:member"],
    "dst": ["autogroup:self"],
    "users": ["autogroup:nonroot"]
}
```
129
docs/ref/api.md
@@ -1,129 +0,0 @@

# API

Headscale provides a [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which may be used to integrate a [web
interface](integration/web-ui.md), [remote control Headscale](#setup-remote-control) or provide a base for custom
integration and tooling.

Both interfaces require a valid API key before use. To create an API key, log into your Headscale server and generate
one with the default expiration of 90 days:

```shell
headscale apikeys create
```

Copy the output of the command and save it for later. Please note that you can not retrieve an API key again. If the API
key is lost, expire the old one, and create a new one.

To list the API keys currently associated with the server:

```shell
headscale apikeys list
```

and to expire an API key:

```shell
headscale apikeys expire --prefix <PREFIX>
```

## REST API

- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1`
- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger`
- Headscale Version: `/version`, e.g. `https://headscale.example.com/version`
- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer <API_KEY>` header.

Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your
Headscale server at `/swagger` for details.

=== "Get details for all users"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      https://headscale.example.com/api/v1/user
    ```

=== "Get details for user 'bob'"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      https://headscale.example.com/api/v1/user?name=bob
    ```

=== "Register a node"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      --json '{"user": "<USER>", "authId": "<AUTH_ID>"}' \
      https://headscale.example.com/api/v1/auth/register
    ```
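The same authenticated requests can of course be issued from code. A minimal Python sketch using only the standard library; the endpoint and `Authorization` header follow the curl examples above, and the server name and API key are placeholders:

```python
import urllib.request

def build_request(base_url: str, api_key: str, path: str) -> urllib.request.Request:
    """Build an authenticated request for the Headscale REST API."""
    return urllib.request.Request(
        f"{base_url}/api/v1/{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_request("https://headscale.example.com", "<API_KEY>", "user")
print(req.full_url)                     # https://headscale.example.com/api/v1/user
print(req.get_header("Authorization"))  # Bearer <API_KEY>
# To actually send it: urllib.request.urlopen(req).read()
```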
## gRPC

The gRPC interface can be used to control a Headscale instance from a remote machine with the `headscale` binary.

### Prerequisites

- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A Headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An [API key](#api) to authenticate with the Headscale server.

### Setup remote control

1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
   sure to use the same version as on the server.

1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`

1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale`

1. [Create an API key](#api) on the Headscale server.

1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or
   via environment variables:

    === "Minimal YAML configuration file"

        ```yaml title="config.yaml"
        cli:
          address: <HEADSCALE_ADDRESS>:<PORT>
          api_key: <API_KEY>
        ```

    === "Environment variables"

        ```shell
        export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
        export HEADSCALE_CLI_API_KEY="<API_KEY>"
        ```

    This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
    connecting to the local instance.

1. Test the connection by listing all nodes:

    ```shell
    headscale nodes list
    ```

    You should now be able to see a list of your nodes from your workstation, and you can
    now control the Headscale server from your workstation.

### Behind a proxy

It's possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as Headscale.

While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).

### Troubleshooting

- Make sure you have the _same_ Headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
    - Add your self-signed certificate to the trust store of your OS _or_
    - Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
      `HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend disabling certificate validation.
@@ -17,8 +17,8 @@

=== "View on GitHub"

    - Development version: <https://github.com/juanfont/headscale/blob/main/config-example.yaml>
    - Version {{ headscale.version }}: <https://github.com/juanfont/headscale/blob/v{{ headscale.version }}/config-example.yaml>

=== "Download with `wget`"
@@ -64,9 +64,6 @@ Headscale provides a metrics and debug endpoint. It allows to introspect differe

Keep the metrics and debug endpoint private to your internal network and don't expose it to the Internet.

The metrics and debug interface can be disabled completely by setting `metrics_listen_addr: null` in the
[configuration file](./configuration.md).

Query metrics via <http://localhost:9090/metrics> and get an overview of available debug information via
<http://localhost:9090/debug/>. Metrics may be queried from outside localhost but the debug interface is subject to
additional protection despite listening on all interfaces.
174
docs/ref/derp.md
@@ -1,174 +0,0 @@
|
|||||||
# DERP
|
|
||||||
|
|
||||||
A [DERP (Designated Encrypted Relay for Packets) server](https://tailscale.com/kb/1232/derp-servers) is mainly used to
|
|
||||||
relay traffic between two nodes in case a direct connection can't be established. Headscale provides an embedded DERP
|
|
||||||
server to ensure seamless connectivity between nodes.
|
|
||||||
|
|
||||||
## Configuration
|
|
||||||
|
|
||||||
DERP related settings are configured within the `derp` section of the [configuration file](./configuration.md). The
|
|
||||||
following sections only use a few of the available settings, check the [example configuration](./configuration.md) for
|
|
||||||
all available configuration options.
|
|
||||||
|
|
||||||
### Enable embedded DERP
|
|
||||||
|
|
||||||
Headscale ships with an embedded DERP server which allows to run your own self-hosted DERP server easily. The embedded
|
|
||||||
DERP server is disabled by default and needs to be enabled. In addition, you should configure the public IPv4 and public
|
|
||||||
IPv6 address of your Headscale server for improved connection stability:
|
|
||||||
|
|
||||||
```yaml title="config.yaml" hl_lines="3-5"
|
|
||||||
derp:
|
|
||||||
server:
|
|
||||||
enabled: true
|
|
||||||
ipv4: 198.51.100.1
|
|
||||||
ipv6: 2001:db8::1
|
|
||||||
```
|
|
||||||
|
|
||||||
Keep in mind that [additional ports are needed to run a DERP server](../setup/requirements.md#ports-in-use). Besides
|
|
||||||
relaying traffic, it also uses STUN (udp/3478) to help clients discover their public IP addresses and perform NAT
|
|
||||||
traversal. [Check DERP server connectivity](#check-derp-server-connectivity) to see if everything works.
|
|
||||||
|
|
||||||
### Remove Tailscale's DERP servers
|
|
||||||
|
|
||||||
Once enabled, Headscale's embedded DERP is added to the list of free-to-use [DERP
|
|
||||||
servers](https://tailscale.com/kb/1232/derp-servers) offered by Tailscale Inc. To only use Headscale's embedded DERP
|
|
||||||
server, disable the loading of the default DERP map:
|
|
||||||
|
|
||||||
```yaml title="config.yaml" hl_lines="6"
|
|
||||||
derp:
|
|
||||||
server:
|
|
||||||
enabled: true
|
|
||||||
ipv4: 198.51.100.1
|
|
||||||
ipv6: 2001:db8::1
|
|
||||||
urls: []
|
|
||||||
```
|
|
||||||
|
|
||||||
!!! warning "Single point of failure"
|
|
||||||
|
|
||||||
Removing Tailscale's DERP servers means that there is now just a single DERP server available for clients. This is a
|
|
||||||
single point of failure and could hamper connectivity.
|
|
||||||
|
|
||||||
[Check DERP server connectivity](#check-derp-server-connectivity) with your embedded DERP server before removing
|
|
||||||
Tailscale's DERP servers.
|
|
||||||
|
|
||||||
### Customize DERP map
|
|
||||||
|
|
||||||
The DERP map offered to clients can be customized with a [dedicated YAML-configuration
|
|
||||||
file](https://github.com/juanfont/headscale/blob/main/derp-example.yaml). This allows to modify previously loaded DERP
|
|
||||||
maps fetched via URL or to offer your own, custom DERP servers to nodes.
|
|
||||||
|
|
||||||
=== "Remove specific DERP regions"
|
|
||||||
|
|
||||||
The free-to-use [DERP servers](https://tailscale.com/kb/1232/derp-servers) are organized into regions via a region
|
|
||||||
ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
|
|
||||||
`derp.yaml` disables the New York DERP region (which has the region ID 1):
|
|
||||||
|
|
||||||
```yaml title="derp.yaml"
|
|
||||||
regions:
|
|
||||||
1: null
|
|
||||||
```
|
|
||||||
|
|
||||||
Use the following configuration to serve the default DERP map (excluding New York) to nodes:
|
|
||||||
|
|
||||||
```yaml title="config.yaml" hl_lines="6 7"
|
|
||||||
derp:
|
|
||||||
server:
|
|
||||||
enabled: false
|
|
||||||
urls:
|
|
||||||
- https://controlplane.tailscale.com/derpmap/default
|
|
||||||
paths:
|
|
||||||
- /etc/headscale/derp.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
=== "Provide custom DERP servers"
|
|
||||||
|
|
||||||
The following sample `derp.yaml` references two custom regions (`custom-east` with ID 900 and `custom-west` with ID 901)
|
|
||||||
with one custom DERP server in each region. Each DERP server offers DERP relay via HTTPS on tcp/443, support for captive
|
|
||||||
portal checks via HTTP on tcp/80 and STUN on udp/3478. See the definitions of
|
|
||||||
[DERPMap](https://pkg.go.dev/tailscale.com/tailcfg#DERPMap),
|
|
||||||
[DERPRegion](https://pkg.go.dev/tailscale.com/tailcfg#DERPRegion) and
|
|
||||||
[DERPNode](https://pkg.go.dev/tailscale.com/tailcfg#DERPNode) for all available options.
|
|
||||||
|
|
||||||
```yaml title="derp.yaml"
|
|
||||||
regions:
|
|
||||||
900:
|
|
||||||
regionid: 900
|
|
||||||
regioncode: custom-east
|
|
||||||
regionname: My region (east)
|
|
||||||
nodes:
|
|
||||||
- name: 900a
|
|
||||||
regionid: 900
|
|
||||||
hostname: derp900a.example.com
|
|
||||||
ipv4: 198.51.100.1
|
|
||||||
ipv6: 2001:db8::1
|
|
||||||
canport80: true
|
|
||||||
901:
|
|
||||||
regionid: 901
|
|
||||||
regioncode: custom-west
|
|
||||||
regionname: My Region (west)
|
|
||||||
nodes:
|
|
||||||
- name: 901a
|
|
||||||
regionid: 901
|
|
||||||
hostname: derp901a.example.com
|
|
||||||
ipv4: 198.51.100.2
|
|
||||||
ipv6: 2001:db8::2
|
|
||||||
canport80: true
|
|
||||||
```
|
|
||||||
|
|
||||||
Use the following configuration to only serve the two DERP servers from the above `derp.yaml`:
|
|
||||||
|
|
||||||
```yaml title="config.yaml" hl_lines="5 6"
|
|
||||||
derp:
|
|
||||||
server:
|
|
||||||
enabled: false
|
|
||||||
urls: []
|
|
||||||
paths:
|
|
||||||
- /etc/headscale/derp.yaml
|
|
||||||
```
|
|
||||||
|
|
||||||
Independent of the custom DERP map, you may choose to [enable the embedded DERP server and have it automatically added
|
|
||||||
to the custom DERP map](#enable-embedded-derp).
|
|
||||||
|
|
||||||
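Since the embedded DERP server is automatically added to the custom DERP map when enabled, both features combine naturally. A sketch of such a `config.yaml`, reusing the values from the examples above:

```yaml
derp:
  server:
    enabled: true
    ipv4: 198.51.100.1
    ipv6: 2001:db8::1
  urls: []
  paths:
    - /etc/headscale/derp.yaml
```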
### Verify clients

Access to DERP servers can be restricted to nodes that are members of your tailnet. Relay access is denied for unknown
clients.

=== "Embedded DERP"

    Client verification is enabled by default.

    ```yaml title="config.yaml" hl_lines="3"
    derp:
      server:
        verify_clients: true
    ```

=== "3rd-party DERP"

    Tailscale's `derper` provides two parameters to configure client verification:

    - Use the `-verify-client-url` parameter of the `derper` and point it towards the `/verify` endpoint of your
      Headscale server (e.g. `https://headscale.example.com/verify`). The DERP server will query your Headscale
      instance as soon as a client connects with it to ask whether access should be allowed or denied. Access is
      allowed if Headscale knows about the connecting client and denied otherwise.
    - The parameter `-verify-client-url-fail-open` controls what should happen when the DERP server can't reach the
      Headscale instance. By default, it will allow access if Headscale is unreachable.

## Check DERP server connectivity

Any Tailscale client may be used to introspect the DERP map and to check for connectivity issues with DERP servers.

- Display DERP map: `tailscale debug derp-map`
- Check connectivity with the embedded DERP[^1]: `tailscale debug derp headscale`

Additional DERP related metrics and information are available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).

## Limitations

- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the
  `/generate_204` endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations, the main purpose is to assist in node connectivity.

[^1]: This assumes that the default region code of the [configuration file](./configuration.md) is used.

# DNS

Headscale supports [most DNS features](../about/features.md) from Tailscale. DNS related settings can be configured
within the `dns` section of the [configuration file](./configuration.md).

## Setting extra DNS records

!!! warning "Limitations"

    Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).

1. Configure extra DNS records using one of the available configuration options:
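
    For illustration, a JSON file referenced via `dns.extra_records_path` might contain (the hostname and IP address
    are made-up examples):

    ```json
    [
      {
        "name": "grafana.myvpn.example.com",
        "type": "A",
        "value": "100.64.0.3"
      }
    ]
    ```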

!!! tip "Good to know"

    - The `dns.extra_records_path` option in the [configuration file](./configuration.md) needs to reference the
      JSON file containing extra DNS records.
    - Be sure to "sort keys" and produce a stable output in case you generate the JSON file with a script.
      Headscale uses a checksum to detect changes to the file and a stable output avoids unnecessary processing.

1. Verify that DNS records are properly set using the DNS querying tool of your choice:
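
    As a sketch, a lookup for a hypothetical extra record (using Tailscale's MagicDNS resolver `100.100.100.100`)
    could look like:

    ```console
    dig +short grafana.myvpn.example.com @100.100.100.100
    ```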

The reverse proxy MUST be configured to support WebSockets to communicate with Tailscale clients.

WebSockets support is also required when using the Headscale [embedded DERP server](../derp.md). In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
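
A minimal sketch of the required WebSocket handling, assuming nginx as the reverse proxy and headscale listening on
`127.0.0.1:8080` (both assumptions; TLS certificate directives are omitted):

```nginx
server {
    server_name headscale.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
        # WebSockets require HTTP/1.1 and forwarding of the Upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```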

### Cloudflare

This page collects third-party tools, client libraries, and scripts related to headscale.

- [headscale-operator](https://github.com/infradohq/headscale-operator) - Headscale Kubernetes Operator
- [tailscale-manager](https://github.com/singlestore-labs/tailscale-manager) - Dynamically manage Tailscale route
  advertisements
- [headscalebacktosqlite](https://github.com/bigbozza/headscalebacktosqlite) - Migrate headscale from PostgreSQL back to
  SQLite
- [headscale-pf](https://github.com/YouSysAdmin/headscale-pf) - Populates user groups based on user groups in Jumpcloud
  or Authentik
- [headscale-client-go](https://github.com/hibare/headscale-client-go) - A Go client implementation for the Headscale
  HTTP API
- [headscale-zabbix](https://github.com/dblanque/headscale-zabbix) - A Zabbix monitoring template for the Headscale
  service
- [tailscale-exporter](https://github.com/adinhodovic/tailscale-exporter) - A Prometheus exporter for Headscale that
  provides network-level metrics using the Headscale API

Headscale doesn't provide a built-in web interface but users may pick one from the available options.

- [headscale-ui](https://github.com/gurucomputing/headscale-ui) - A web frontend for the headscale Tailscale-compatible
  coordination server
- [HeadscaleUi](https://github.com/simcu/headscale-ui) - A static headscale admin UI, no backend environment required
- [Headplane](https://github.com/tale/headplane) - An advanced Tailscale-inspired frontend for headscale
- [headscale-admin](https://github.com/GoodiesHQ/headscale-admin) - Headscale-Admin is meant to be a simple, modern web
  interface for headscale
- [ouroboros](https://github.com/yellowsink/ouroboros) - Ouroboros is designed for users to manage their own devices,
  rather than for admins
- [unraid-headscale-admin](https://github.com/ich777/unraid-headscale-admin) - A simple headscale admin UI for Unraid,
  it offers Local (`docker exec`) and API Mode
- [headscale-console](https://github.com/rickli-cloud/headscale-console) - WebAssembly-based client supporting SSH, VNC
  and RDP with optional self-service capabilities
- [headscale-piying](https://github.com/wszgrcy/headscale-piying) - A headscale web UI with support for visual ACL
  configuration
- [HeadControl](https://github.com/ahmadzip/HeadControl) - Minimal Headscale admin dashboard, built with Go and HTMX
- [Headscale Manager](https://github.com/hkdone/headscalemanager) - Headscale UI for Android

You can ask for support on our [Discord server](https://discord.gg/c84AZQhmpx) in the "web-interfaces" channel.

169 docs/ref/oidc.md

A basic configuration connects Headscale to an identity provider and typically requires:

=== "Identity provider"

    - Create a new confidential client (`Client ID`, `Client secret`)
    - Add Headscale's OIDC callback URL as valid redirect URL: `https://headscale.example.com/oidc/callback`
    - Configure additional parameters to improve user experience such as: name, description, logo, …
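
The corresponding Headscale side can be sketched like this (issuer, client ID, and secret are placeholder values
matching the examples used elsewhere on this page):

```yaml title="config.yaml"
oidc:
  issuer: "https://sso.example.com"
  client_id: "headscale"
  client_secret: "generated-secret"
```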

### Enable PKCE (recommended)

=== "Identity provider"

    - Enable PKCE for the headscale client
    - Set the PKCE challenge method to "S256"
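
On the Headscale side, a minimal sketch could be (assuming the `pkce` sub-section of the `oidc` configuration; verify
the exact key names against the config-example.yaml of your release):

```yaml
oidc:
  pkce:
    enabled: true
```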

### Authorize users with filters

If multiple filters are configured, a user needs to pass all of them.

=== "Allowed domains"

    - Check the email domain of each authenticating user against the list of allowed domains and only authorize users
      whose email domain matches `example.com`.
    - A verified email address is required [unless email verification is disabled](#control-email-verification).
    - Access allowed: `alice@example.com`
    - Access denied: `bob@example.net`

    ```yaml hl_lines="5-6"
    oidc:
      issuer: "https://sso.example.com"
      client_id: "headscale"
      client_secret: "generated-secret"
      allowed_domains:
        - "example.com"
    ```

=== "Allowed users/emails"

    - Check the email address of each authenticating user against the list of allowed email addresses and only authorize
      users whose email is part of the `allowed_users` list.
    - A verified email address is required [unless email verification is disabled](#control-email-verification).
    - Access allowed: `alice@example.com`, `bob@example.net`
    - Access denied: `mallory@example.net`

    ```yaml hl_lines="5-7"
    oidc:
      issuer: "https://sso.example.com"
      client_id: "headscale"
      client_secret: "generated-secret"
      allowed_users:
        - "alice@example.com"
        - "bob@example.net"
    ```

=== "Allowed groups"

    - Use the OIDC `groups` claim of each authenticating user to get their group membership and only authorize users
      which are members in at least one of the referenced groups.
    - Access allowed: users in the `headscale_users` group
    - Access denied: users without groups, users with other groups

    ```yaml hl_lines="5-7"
    oidc:
      issuer: "https://sso.example.com"
      client_id: "headscale"
      client_secret: "generated-secret"
      allowed_groups:
        - "headscale_users"
    ```

### Control email verification

Headscale uses the `email` claim from the identity provider to synchronize the email address to its user profile. By
default, a user's email address is only synchronized when the identity provider reports the email address as verified
via the `email_verified: true` claim.

Unverified emails may be allowed in case an identity provider does not send the `email_verified` claim or email
verification is not required. In that case, a user's email address is always synchronized to the user profile.

```yaml hl_lines="5"
oidc:
  issuer: "https://sso.example.com"
  client_id: "headscale"
  client_secret: "generated-secret"
  email_verified_required: false
```

### Customize node expiration

The node expiration is the amount of time a node is authenticated with OpenID Connect until it expires and needs to
reauthenticate. The default node expiration can be configured via the top-level `node.expiry` setting.

=== "Customize node expiration"

    ```yaml hl_lines="2"
    node:
      expiry: 30d # Use 0 to disable node expiration
    ```

    Please keep in mind that the Access Token is typically a short-lived token that expires within a few minutes. You
    will have to configure token expiration in your identity provider to avoid frequent re-authentication.

    ```yaml hl_lines="5"
    oidc:
      issuer: "https://sso.example.com"
      client_id: "headscale"
      client_secret: "generated-secret"
      use_expiry_from_token: true
    ```

!!! tip "Expire a node and force re-authentication"

    A node can be expired immediately via:

    ```console
    headscale node expire -i <NODE_ID>
    ```

You may refer to users in the Headscale policy via:

- Email address
- Username
- Provider identifier (this value is currently only available from the [API](api.md), database or directly from your
  identity provider)

!!! note "A user identifier in the policy must contain a single `@`"

    The Headscale policy requires a single `@` to reference a user. If the username or provider identifier doesn't
    already contain a single `@`, it needs to be appended at the end. For example: the Headscale username `ssmith` has
    to be written as `ssmith@` to be correctly identified as user within the policy.

    Ensure that the Headscale username itself does not end with `@`.

!!! warning "Email address or username might be updated by users"

    Depending on the identity provider and its configuration, users might be able to update their own username or
    email address. This might have unexpected
    consequences for Headscale where a policy might no longer work or a user might obtain more access by hijacking an
    existing username or email address.

!!! tip "How to use the provider identifier in the policy"

    The provider identifier uniquely identifies an OIDC user and a well-behaving identity provider guarantees that this
    value never changes for a particular user. It is usually an opaque and long string and its value is currently only
    available from the [API](api.md), database or directly from your identity provider.

    Use the [API](api.md) with the `/api/v1/user` endpoint to fetch the provider identifier (`providerId`). The value
    (be sure to append an `@` in case the provider identifier doesn't already contain an `@` somewhere) can be used
    directly to reference a user in the policy. To improve readability of the policy, one may use the `groups` section
    as an alias:

    ```json
    {
      "groups": {
        "group:alice": [
          "https://sso.example.com/oauth2/openid/59ac9125-c31b-46c5-814e-06242908cf57@"
        ]
      },
      "acls": [
        {
          "action": "accept",
          "src": ["group:alice"],
          "dst": ["*:*"]
        }
      ]
    }
    ```
## Supported OIDC claims

Headscale uses [the standard OIDC claims](https://openid.net/specs/openid-connect-core-1_0.html#StandardClaims) to
populate and update its local user profile on each login. OIDC claims are read from the ID Token and from the UserInfo
endpoint.

| Headscale profile | OIDC claim           | Notes / examples                                                                             |
| ----------------- | -------------------- | -------------------------------------------------------------------------------------------- |
| email address     | `email`              | Only verified emails are synchronized, unless `email_verified_required: false` is configured |
| display name      | `name`               | eg: `Sam Smith`                                                                              |
| username          | `preferred_username` | Depends on identity provider, eg: `ssmith`, `ssmith@idp.example.com`, `\\example.com\ssmith` |
| profile picture   | `picture`            | URL to a profile picture or avatar                                                           |

- The username must be at least two characters long.
- It must only contain letters, digits, hyphens, dots, underscores, and up to a single `@`.
- The username must start with a letter.

Please see the [GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC) for OIDC related issues.

The following identity providers are known to work:

### Authelia

Authelia is fully supported by Headscale.

#### Additional configuration to authorize users based on filters

Authelia (4.39.0 or newer) no longer provides standard OIDC claims such as `email` or `groups` via the ID Token. The
OIDC `email` and `groups` claims are used to [authorize users with filters](#authorize-users-with-filters). This extra
configuration step is **only** needed if you need to authorize access based on one of the following user properties:

- domain
- email address
- group membership

Please follow the instructions from Authelia's documentation on how to [Restore Functionality Prior to Claims
Parameter](https://www.authelia.com/integration/openid-connect/openid-connect-1.0-claims/#restore-functionality-prior-to-claims-parameter).
### Authentik

- Authentik is fully supported by Headscale.
- [Headscale does not support JSON Web Encryption](https://github.com/juanfont/headscale/issues/2446). Leave the field
  `Encryption Key` in the providers section unset.
- See Authentik's [Integrate with Headscale](https://integrations.goauthentik.io/networking/headscale/) documentation.

### Google OAuth

#### Steps

1. Go to [Google Console](https://console.cloud.google.com) and login or create an account if you don't have one.
1. Create a project (if you don't already have one).
1. On the left hand menu, go to `APIs and services` -> `Credentials`
1. Click `Create Credentials` -> `OAuth client ID`
1. Under `Application Type`, choose `Web Application`
1. For `Name`, enter whatever you like
1. Under `Authorised redirect URIs`, add Headscale's OIDC callback URL: `https://headscale.example.com/oidc/callback`
1. Click `Save` at the bottom of the form
1. Take note of the `Client ID` and `Client secret`, you can also download it for reference if you need it.
1. [Configure Headscale following the "Basic configuration" steps](#basic-configuration). The issuer URL for Google
   OAuth is: `https://accounts.google.com`.
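
A sketch of the resulting Headscale configuration, where the client ID and secret are the placeholders obtained in the
steps above:

```yaml
oidc:
  issuer: "https://accounts.google.com"
  client_id: "<CLIENT_ID>"
  client_secret: "<CLIENT_SECRET>"
```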

### Kanidm

- Kanidm is fully supported by Headscale.
- Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their full SPN, for
  example: `headscale_users@sso.example.com`.
- Kanidm sends the full SPN (`alice@sso.example.com`) as `preferred_username` by default. Headscale stores this value as
  username which might be confusing as the username and email fields now contain values that look like an email address.
  [Kanidm can be configured to send the short username as `preferred_username` attribute
  instead](https://kanidm.github.io/kanidm/stable/integrations/oauth2.html#short-names):

    ```console
    kanidm system oauth2 prefer-short-username <client name>
    ```

  Once configured, the short username in Headscale will be `alice` and can be referred to as `alice@` in the policy.
### Keycloak

Additional configuration is required if you need to [authorize access based on group
membership](#authorize-users-with-filters):

- Create a new client scope `groups` for OpenID Connect:
    - Configure a `Group Membership` mapper with name `groups` and the token claim name `groups`.
    - Add the mapper to at least the UserInfo endpoint.
- Configure the new client scope for your Headscale client:
    - Edit the Headscale client.
    - Search for the client scope `group`.
    - Add it with assigned type `Default`.
- [Configure the allowed groups in Headscale](#authorize-users-with-filters). How groups need to be specified depends on
  Keycloak's `Full group path` option:
    - `Full group path` is enabled: groups contain their full path, e.g. `/top/group1`
    - `Full group path` is disabled: only the name of the group is used, e.g. `group1`
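
For instance, with `Full group path` enabled, the allowed groups filter could be sketched as follows (the group name is
an assumption):

```yaml
oidc:
  allowed_groups:
    - "/headscale_users"
```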
### Microsoft Entra ID

The issuer URL for Entra ID is: `https://login.microsoftonline.com/<tenant-UUID>/v2.0`. The following parameters can be
useful:

- `domain_hint: example.com` to use your own domain
- `prompt: select_account` to force an account picker during login

When using Microsoft Entra ID together with the [allowed groups filter](#authorize-users-with-filters), configure the
Headscale OIDC scope without the `groups` claim, for example:

```yaml
oidc:
  scope: ["openid", "profile", "email"]
```

Groups for the [allowed groups filter](#authorize-users-with-filters) need to be specified with their group ID (UUID)
instead of the group name.
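
As an illustration (the UUID is a made-up placeholder for a real Entra ID group object ID):

```yaml
oidc:
  allowed_groups:
    - "a8b9c1d2-3e4f-5a6b-7c8d-9e0f1a2b3c4d"
```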
## Switching OIDC providers

Headscale only supports a single OIDC provider in its configuration, but it does store the provider identifier of each
user. When switching providers, this might lead to issues with existing users: all user details (name, email, groups)
might be identical with the new provider, but the identifier will differ. Headscale will be unable to create a new user
as the name and email will already be in use for the existing users.

At this time, you will need to manually update the `provider_identifier` column in the `users` table for each user with
the appropriate value for the new provider. The identifier is built from the `iss` and `sub` claims of the OIDC ID
token, for example `https://id.example.com/12340987`.
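
A sketch of such a manual update, assuming the default SQLite database path, a user named `alice`, and a made-up new
identifier (take a database backup first):

```console
sqlite3 /var/lib/headscale/db.sqlite \
  "UPDATE users SET provider_identifier = 'https://new-sso.example.com/12340987' WHERE name = 'alice';"
```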
|||||||
@@ -1,144 +0,0 @@
|
|||||||
# Registration methods

Headscale supports multiple ways to register a node. The preferred registration method depends on the identity of a node
and your use case.

## Identity model

Tailscale's identity model distinguishes between personal and tagged nodes:

- A personal node (or user-owned node) is owned by a human and typically refers to end-user devices such as laptops,
  workstations or mobile phones. End-user devices are managed by a single user.
- A tagged node (or service-based node or non-human node) provides services to the network. Common examples include web
  and database servers. Those nodes are typically managed by a team of users. Some additional restrictions apply to
  tagged nodes, e.g. a tagged node is not allowed to [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh) into a
  personal node.

Headscale implements Tailscale's identity model and distinguishes between personal and tagged nodes, where a personal
node is owned by a Headscale user and a tagged node is owned by a tag. Tagged devices are grouped under the special user
`tagged-devices`.

## Registration methods

There are two main ways to register new nodes: [web authentication](#web-authentication) and [registration with a
pre-authenticated key](#pre-authenticated-key). Both methods can be used to register personal and tagged nodes.

### Web authentication

Web authentication is the default method to register a new node. It's interactive: the client initiates the
registration and the Headscale administrator needs to approve the new node before it is allowed to join the network. A
node can be approved with:

- the Headscale CLI (described in this documentation)
- the [Headscale API](api.md)
- delegation to an identity provider via [OpenID Connect](oidc.md)

Web authentication relies on the presence of a Headscale user. Use the `headscale users` command to create a new
user[^1]:

```console
headscale users create <USER>
```

=== "Personal devices"

    Run `tailscale up` to log in your personal device:

    ```console
    tailscale up --login-server <YOUR_HEADSCALE_URL>
    ```

    Usually, a browser window with further instructions is opened. This page explains how to complete the registration
    on your Headscale server and it also prints the Auth ID required to approve the node:

    ```console
    headscale auth register --user <USER> --auth-id <AUTH_ID>
    ```

    Congratulations, the registration of your personal node is complete and it should be listed as "online" in the
    output of `headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.

=== "Tagged devices"

    Your Headscale user needs to be authorized to register tagged devices. This authorization is specified in the
    [`tagOwners`](https://tailscale.com/kb/1337/policy-syntax#tag-owners) section of the [ACL](acls.md). A simple
    example looks like this:

    ```json title="The user alice can register nodes tagged with tag:server"
    {
      "tagOwners": {
        "tag:server": ["alice@"]
      },
      // more rules
    }
    ```

    Run `tailscale up` and provide at least one tag to log in a tagged device:

    ```console
    tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags tag:<TAG>
    ```

    Usually, a browser window with further instructions is opened. This page explains how to complete the registration
    on your Headscale server and it also prints the Auth ID required to approve the node:

    ```console
    headscale auth register --user <USER> --auth-id <AUTH_ID>
    ```

    Headscale checks that `<USER>` is allowed to register a node with the specified tag(s) and then transfers ownership
    of the new node to the special user `tagged-devices`. The registration of a tagged node is complete and it should
    be listed as "online" in the output of `headscale nodes list`. The "User" column displays `tagged-devices` as the
    owner of the node. See the "Tags" column for the list of assigned tags.

### Pre-authenticated key

Registration with a pre-authenticated key (or auth key) is a non-interactive way to register a new node. The Headscale
administrator creates a pre-authenticated key upfront and this key can then be used to register a node
non-interactively. It's best suited for automation.

=== "Personal devices"

    A personal node is always assigned to a Headscale user. Use the `headscale users` command to create a new user[^1]:

    ```console
    headscale users create <USER>
    ```

    Use the `headscale users list` command to learn its `<USER_ID>` and create a new pre-authenticated key for your
    user:

    ```console
    headscale preauthkeys create --user <USER_ID>
    ```

    The above prints a pre-authenticated key with the default settings (can be used once and is valid for one hour).
    Use this auth key to register a node non-interactively:

    ```console
    tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
    ```

    Congratulations, the registration of your personal node is complete and it should be listed as "online" in the
    output of `headscale nodes list`. The "User" column displays `<USER>` as the owner of the node.

=== "Tagged devices"

    Create a new pre-authenticated key and provide at least one tag:

    ```console
    headscale preauthkeys create --tags tag:<TAG>
    ```

    The above prints a pre-authenticated key with the default settings (can be used once and is valid for one hour).
    Use this auth key to register a node non-interactively. You don't need to provide the `--advertise-tags` parameter
    as the tags are automatically read from the pre-authenticated key:

    ```console
    tailscale up --login-server <YOUR_HEADSCALE_URL> --authkey <YOUR_AUTH_KEY>
    ```

    The registration of a tagged node is complete and it should be listed as "online" in the output of
    `headscale nodes list`. The "User" column displays `tagged-devices` as the owner of the node. See the "Tags" column
    for the list of assigned tags.

[^1]: [Ensure that the Headscale username does not end with `@`.](oidc.md#reference-a-user-in-the-policy)
docs/ref/remote-cli.md (new file, 105 lines)
@@ -0,0 +1,105 @@
# Controlling headscale with remote CLI

This page shows how to control a headscale instance from a remote machine with the `headscale` command line binary.

## Prerequisites

- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An API key to authenticate with the headscale server.

## Create an API key

We need to create an API key to authenticate with the remote headscale server when using it from our workstation.

To create an API key, log into your headscale server and generate a key:

```shell
headscale apikeys create --expiration 90d
```

Copy the output of the command and save it for later. Please note that a key cannot be retrieved again; if the key is
lost, expire the old one and create a new key.

To list the keys currently associated with the server:

```shell
headscale apikeys list
```

and to expire a key:

```shell
headscale apikeys expire --prefix "<PREFIX>"
```

## Download and configure headscale

1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
   sure to use the same version as on the server.

1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`

1. Make `headscale` executable:

    ```shell
    chmod +x /usr/local/bin/headscale
    ```

1. Provide the connection parameters for the remote headscale server either via a minimal YAML configuration file or
   via environment variables:

    === "Minimal YAML configuration file"

        ```yaml title="config.yaml"
        cli:
          address: <HEADSCALE_ADDRESS>:<PORT>
          api_key: <API_KEY_FROM_PREVIOUS_STEP>
        ```

    === "Environment variables"

        ```shell
        export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
        export HEADSCALE_CLI_API_KEY="<API_KEY_FROM_PREVIOUS_STEP>"
        ```

    !!! bug

        Headscale currently requires at least an empty configuration file when environment variables are used to
        specify connection details. See [issue 2193](https://github.com/juanfont/headscale/issues/2193) for more
        information.

    This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
    connecting to the local instance.

1. Test the connection

    Let us run the headscale command to verify that we can connect by listing our nodes:

    ```shell
    headscale nodes list
    ```

    You should now be able to see a list of your nodes from your workstation, and you can now control the headscale
    server from your workstation.

## Behind a proxy

It is possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port
as headscale.

While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).

## Troubleshooting

- Make sure you have the _same_ headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
    - Add your self-signed certificate to the trust store of your OS _or_
    - Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
      `HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend disabling certificate
      validation.
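To inspect the certificate actually presented on the gRPC port, `openssl s_client` can be used; this is a sketch, with the hostname as a placeholder and the port assumed to be the default `50443`:

```shell
# Replace headscale.example.com with your server's hostname.
# -servername matters when the server selects certificates via SNI;
# </dev/null closes the connection immediately after the handshake.
openssl s_client -servername headscale.example.com \
  -connect headscale.example.com:50443 </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

The printed `notBefore`/`notAfter` dates and issuer tell you whether the workstation is seeing the certificate you expect.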
@@ -43,8 +43,7 @@ can be used.
 ```console
 $ headscale nodes list-routes
 ID | Hostname | Approved | Available                  | Serving (Primary)
-1  | myrouter |          | 10.0.0.0/8                 |
-   |          |          | 192.168.0.0/24             |
+1  | myrouter |          | 10.0.0.0/8, 192.168.0.0/24 |
 ```

 Approve all desired routes of a subnet router by specifying them as comma separated list:
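The approval command itself falls outside this hunk; as a hedged sketch only, it would look roughly like the following for the node shown above (subcommand and flag names vary between headscale versions, so check `headscale nodes --help`):

```shell
# Assumed syntax: approve both announced routes on the node with ID 1 in one go.
headscale nodes approve-routes --identifier 1 --routes 10.0.0.0/8,192.168.0.0/24
```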
@@ -59,8 +58,7 @@ The node `myrouter` can now route the IPv4 networks `10.0.0.0/8` and `192.168.0.
 ```console
 $ headscale nodes list-routes
 ID | Hostname | Approved                   | Available                  | Serving (Primary)
-1  | myrouter | 10.0.0.0/8                 | 10.0.0.0/8                 | 10.0.0.0/8
-   |          | 192.168.0.0/24             | 192.168.0.0/24             | 192.168.0.0/24
+1  | myrouter | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24 | 10.0.0.0/8, 192.168.0.0/24
 ```

 #### Use the subnet router
@@ -111,9 +109,9 @@ approval of routes served with a subnet router.
 
 The ACL snippet below defines the tag `tag:router` owned by the user `alice`. This tag is used for `routes` in the
 `autoApprovers` section. The IPv4 route `192.168.0.0/24` is automatically approved once announced by a subnet router
-that advertises the tag `tag:router`.
+owned by the user `alice` and that also advertises the tag `tag:router`.
 
-```json title="Subnet routers tagged with tag:router are automatically approved"
+```json title="Subnet routers owned by alice and tagged with tag:router are automatically approved"
 {
   "tagOwners": {
     "tag:router": ["alice@"]
@@ -171,8 +169,7 @@ available, but needs to be approved:
 ```console
 $ headscale nodes list-routes
 ID | Hostname | Approved | Available       | Serving (Primary)
-1  | myexit   |          | 0.0.0.0/0       |
-   |          |          | ::/0            |
+1  | myexit   |          | 0.0.0.0/0, ::/0 |
 ```

 For exit nodes, it is sufficient to approve either the IPv4 or IPv6 route. The other will be approved automatically.
@@ -187,8 +184,7 @@ The node `myexit` is now approved as exit node for the tailnet:
 ```console
 $ headscale nodes list-routes
 ID | Hostname | Approved        | Available       | Serving (Primary)
-1  | myexit   | 0.0.0.0/0       | 0.0.0.0/0       | 0.0.0.0/0
-   |          | ::/0            | ::/0            | ::/0
+1  | myexit   | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0 | 0.0.0.0/0, ::/0
 ```

 #### Use the exit node
@@ -220,39 +216,6 @@ nodes.
 }
 ```
 
-### Restrict access to exit nodes per user or group
-
-A user can use _any_ of the available exit nodes with `autogroup:internet`. Alternatively, the ACL snippet below assigns
-each user a specific exit node while hiding all other exit nodes. The user `alice` can only use exit node `exit1` while
-user `bob` can only use exit node `exit2`.
-
-```json title="Assign each user a dedicated exit node"
-{
-  "hosts": {
-    "exit1": "100.64.0.1/32",
-    "exit2": "100.64.0.2/32"
-  },
-  "acls": [
-    {
-      "action": "accept",
-      "src": ["alice@"],
-      "dst": ["exit1:*"]
-    },
-    {
-      "action": "accept",
-      "src": ["bob@"],
-      "dst": ["exit2:*"]
-    }
-  ]
-}
-```
-
-!!! warning
-
-    - The above implementation is Headscale specific and will likely be removed once [support for
-      `via`](https://github.com/juanfont/headscale/issues/2409) is available.
-    - Beware that a user can also connect to any port of the exit node itself.
-
 ### Automatically approve an exit node with auto approvers
 
 The initial setup of an exit node usually requires manual approval on the control server before it can be used by a node
@@ -260,9 +223,10 @@ in a tailnet. Headscale supports the `autoApprovers` section of an ACL to automatically approve an exit node as
 soon as it joins the tailnet.
 
 The ACL snippet below defines the tag `tag:exit` owned by the user `alice`. This tag is used for `exitNode` in the
-`autoApprovers` section. A new exit node that advertises the tag `tag:exit` is automatically approved:
+`autoApprovers` section. A new exit node which is owned by the user `alice` and that also advertises the tag `tag:exit`
+is automatically approved:
 
-```json title="Exit nodes tagged with tag:exit are automatically approved"
+```json title="Exit nodes owned by alice and tagged with tag:exit are automatically approved"
 {
   "tagOwners": {
     "tag:exit": ["alice@"]
@@ -1,54 +0,0 @@
# Tags

Headscale supports Tailscale tags. Please read [Tailscale's tag documentation](https://tailscale.com/kb/1068/tags) to
learn how tags work and how to use them.

Tags can be applied during [node registration](registration.md):

- using the `--advertise-tags` flag, see [web authentication for tagged devices](registration.md#__tabbed_1_2)
- using a tagged pre-authenticated key, see [how to create and use it](registration.md#__tabbed_2_2)

Administrators can manage tags with:

- the Headscale CLI
- the [Headscale API](api.md)

## Common operations

### Manage tags for a node

Run `headscale nodes list` to list the tags for a node.

Use the `headscale nodes tag` command to modify the tags for a node. At least one tag is required and multiple tags can
be provided as a comma separated list. The following command sets the tags `tag:server` and `tag:prod` on the node with
ID 1:

```console
headscale nodes tag -i 1 -t tag:server,tag:prod
```

### Convert from personal to tagged node

Use the `headscale nodes tag` command to convert a personal (user-owned) node to a tagged node:

```console
headscale nodes tag -i <NODE_ID> -t <TAG>
```

The node is now owned by the special user `tagged-devices` and has the specified tags assigned to it.

### Convert from tagged to personal node

A tagged node can be converted back to a personal (user-owned) node by re-authenticating:

```console
tailscale up --login-server <YOUR_HEADSCALE_URL> --advertise-tags= --force-reauth
```

Usually, a browser window with further instructions is opened. This page explains how to complete the registration on
your Headscale server and it also prints the Auth ID required to approve the node:

```console
headscale auth register --user <USER> --auth-id <AUTH_ID>
```

All previously assigned tags are removed and the node is now owned by the user specified in the above command.
@@ -50,7 +50,7 @@ Headscale uses [autocert](https://pkg.go.dev/golang.org/x/crypto/acme/autocert),
 If you want to validate that certificate renewal completed successfully, this can be done either manually, or through external monitoring software. Two examples of doing this manually:
 
 1. Open the URL for your headscale server in your browser of choice, and manually inspect the expiry date of the certificate you receive.
-1. Or, check remotely from CLI using `openssl`:
+2. Or, check remotely from CLI using `openssl`:
 
 ```console
 $ openssl s_client -servername [hostname] -connect [hostname]:443 | openssl x509 -noout -dates

@@ -1,6 +1,6 @@
 mike~=2.1
-mkdocs-include-markdown-plugin~=7.2
-mkdocs-macros-plugin~=1.5
-mkdocs-materialx[imaging]~=10.1
-mkdocs-minify-plugin~=0.8
+mkdocs-include-markdown-plugin~=7.1
+mkdocs-macros-plugin~=1.3
+mkdocs-material[imaging]~=9.5
+mkdocs-minify-plugin~=0.7
 mkdocs-redirects~=1.2
|||||||