Originally created by @nabeelshaikh7 on GitHub (May 27, 2022).
Feature request
Multiple server support
A way to have multiple server nodes, so that we can have a mesh and each user can connect to the nearest server.
I have multiple servers in different regions, so it would be better if I could have one node per region; latency would be low and the load would also be lower.
Thanks!!
@enoperm commented on GitHub (May 31, 2022):
I have not tried it, but I suppose if you have a central database, sharing it across control servers may be possible. The downside is, doing that bypasses any application-level locks, which may introduce race conditions around machine registration/IP address allocation.
But then again, since the control server should only provide information about a network, and not forward any traffic by itself, I am not sure it is worth optimizing them for latency. If my interpretation is correct, "latency" in this sense would be "how quickly would existing clients observe a new node joining the network".
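To make the race @enoperm mentions concrete, here is a minimal sketch (not headscale's actual allocator; the table and column names are made up) of how two control servers sharing one database can be kept from handing out the same IP: push the decision into a single transaction and let a UNIQUE constraint reject the loser, which then retries with the next candidate address.

```go
// Illustrative sketch only, not headscale code: once two control servers share
// one database, in-process locks no longer serialise registration, so the
// database itself has to reject duplicate address allocations, e.g. via a
// UNIQUE constraint on the ip column plus a retry loop in the caller.
package allocator

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
)

var ErrIPTaken = errors.New("ip already claimed by another server, retry with next candidate")

// AllocateIP tries to claim candidateIP for nodeKey in a single transaction.
// If another control server claims the same address concurrently, the UNIQUE
// constraint makes the INSERT fail here instead of silently creating a duplicate.
func AllocateIP(ctx context.Context, db *sql.DB, nodeKey, candidateIP string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	if _, err := tx.ExecContext(ctx,
		`INSERT INTO machines (node_key, ip) VALUES ($1, $2)`,
		nodeKey, candidateIP,
	); err != nil {
		return fmt.Errorf("%w: %v", ErrIPTaken, err)
	}
	return tx.Commit()
}
```

Without something like this at the database layer, any lock a single server holds only protects against races within that one process.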
@kradalby commented on GitHub (Jun 12, 2022):
In principle, the amount of traffic going between the control server (headscale) and the tailscale clients is not really affected by latency (unless it's so bad that it times out, but then I suspect you have other issues).
For DERP relays, lower latency could make sense, and you can host those separately from headscale.
@kradalby commented on GitHub (Jun 12, 2022):
There are scenarios where multiple servers would make sense and allowing them to connect would also make sense:
@enoperm commented on GitHub (Jun 12, 2022):
Though even if the database could be shared, one also needs to ensure the ACLs and the DERP map remain consistent across control servers.
@enoperm commented on GitHub (Jun 12, 2022):
This may sound crazy, but how about removing hard dependencies on exact config files and datastores/schemas, and letting users write their own behaviour in some glue/scripting language? As long as APIs are provided to them, they can decide what ACLs exist (for updates, just ask their script again); they'll know whether the rules they wish to give out are handcrafted, come from a config file or a database, or are generated on the fly. Same for nodes: instead of hardwiring the address allocation/node listing logic, call into their `machine_register` or `machine_enumerate` functions. This way they can share nodes, set up their own machine registration logic (which would allow any authentication machinery to be used, including external OIDC/SAML/Basic Auth/mTLS/Kerberos solutions, without the control server needing to care), and share users in any manner they wish. The upside is that the control server becomes a lot simpler and a lot more flexible. The downside is that scripting one's own DB access and the like is easier to screw up than relying on something shipped with the control server, and now the control server really needs to get the admin-facing API right. I think the former can be balanced out to a high degree by providing high-quality samples and docs, but it is still more work for the user.
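A rough sketch of what such a pluggable surface could look like, purely hypothetical and not an existing headscale API; the names mirror the `machine_register`/`machine_enumerate` idea above:

```go
// Hypothetical interface, not part of headscale: the control server calls out
// to operator-supplied code instead of owning the datastore, registration
// logic and policy source itself.
package hooks

import "context"

// Machine is the minimal node record the control server needs back from the
// operator's glue code.
type Machine struct {
	NodeKey  string
	Hostname string
	IPv4     string
	Tags     []string
}

// Backend is what an operator would implement (directly in Go, or as a bridge
// to a script in whatever glue language they prefer).
type Backend interface {
	// MachineRegister decides whether a node may join and which address it
	// gets; the operator is free to consult OIDC, SAML, mTLS, Kerberos, etc.
	MachineRegister(ctx context.Context, nodeKey, hostname string) (Machine, error)

	// MachineEnumerate returns the current node list, so several control
	// servers can share one source of truth.
	MachineEnumerate(ctx context.Context) ([]Machine, error)

	// Policy returns the ACL rules, regenerated on every call so updates are
	// just "ask the script again".
	Policy(ctx context.Context) ([]byte, error)
}
```

This would also cover the consistency concern above: if every control server asks the same backend for ACLs and the DERP map, there is only one source to keep consistent.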
@ciroiriarte commented on GitHub (Mar 24, 2023):
I definitely see the need for multisite/multiregion deployments for redundancy purposes (like nebula lighthouses).
One site dying shouldn't take down communication for the rest of them.
@0n1cOn3 commented on GitHub (Apr 13, 2023):
I would welcome it as well. Our admin stack is struggling with outages and the headscale VM also keeps crashing. High availability would be extremely desirable. We could have one headscale server in Canada and a second one in Switzerland, and if one of them goes down, for whatever reason, we can still continue to work. So far our master admin always has to fix the whole thing and work through a proxy that is not secured; we would like to prevent that. While headscale is down, clients that have restarted can't connect and work grinds to a halt.
@juanfont commented on GitHub (Apr 14, 2023):
Hi @0n1cOn3 :)
The main objective of Headscale is to provide a correct implementation of the Tailscale protocol & control server - for hobbyists and self-hosters. We might work on supporting HA setups in the future, but that's not a short-term goal.
For those kinds of requirements I would recommend the official Tailscale.com SaaS + Tailnet Lock.
Or send us a PR :) PRs are always welcome!
@0n1cOn3 commented on GitHub (Apr 14, 2023):
Hi @juanfont
Thanks for your answer.
Yes, we (n64.cc) do self-hosting and won't rely on other people's "computers".
Maybe I'm just asking too much 😂 Unfortunately I'm not able to program, otherwise I would very much like to implement this somehow and open a PR. But as a hobby system / cloud administrator, I'm mostly dependent on others who can program.
@kradalby commented on GitHub (May 10, 2023):
While we appreciate the suggestion, it is out of scope for this project and not something we will work on for now.
@ciroiriarte commented on GitHub (May 10, 2023):
@0n1cOn3 commented on GitHub (May 10, 2023):
Thanks for the answer @kradalby
Too bad, because HA for Tailscale would certainly be a groundbreaking possibility.
Unfortunately our community has the problem that the main server running Tailscale randomly goes down again and again, which is how this idea came about.
I would welcome it if this idea were perhaps implemented at some later time.
Thank you very much.
@0n1cOn3 commented on GitHub (May 10, 2023):
Maybe there would be the possibility, if Tailscale is down, for a standby client to temporarily take over the authentication task, at least for the clients that are already logged in.
I do see some challenges in addressing this, though.
@gucki commented on GitHub (Jun 26, 2023):
@0n1cOn3 What about using two VMs in different availability zones/datacenters, a floating/virtual IP for headscale, and a local Postgres master/slave setup? Use keepalived to control the failover.
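As a sketch of one moving part in that setup (assuming headscale exposes a health endpoint on localhost; the port and path below are placeholders to adjust to your config): keepalived decides which VM holds the floating IP via a check script, and that check can simply probe the local headscale.

```go
// Minimal sketch of a health probe for @gucki's keepalived setup: exit 0 while
// the local headscale answers, non-zero otherwise, so keepalived can move the
// floating IP to the other VM. The URL assumes headscale listening on
// 127.0.0.1:8080 with a /health endpoint; adjust both to your configuration.
package main

import (
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get("http://127.0.0.1:8080/health")
	if err != nil {
		os.Exit(1) // headscale not reachable: give up the VIP
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		os.Exit(1) // headscale up but unhealthy
	}
	os.Exit(0) // healthy: keep (or take over) the floating IP
}
```

The compiled binary (or an equivalent curl one-liner) would be referenced from a keepalived vrrp_script block and attached via track_script to the VRRP instance that owns the virtual IP, so the standby VM can take over the address once checks start failing.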
@rallisf1 commented on GitHub (Aug 26, 2023):
Floating IPs only work within the same network. Anyway, since clients use an FQDN to connect, all you really need is:
@krlc commented on GitHub (Oct 23, 2023):
Has anyone implemented what @rallisf1 has outlined? What's your experience? Any pitfalls?
@0n1cOn3 commented on GitHub (Oct 23, 2023):
Not yet. I have to speak with our main admin to test that.
@Capelinha commented on GitHub (Nov 21, 2023):
@0n1cOn3 Were you able to try this idea?
@0n1cOn3 commented on GitHub (Nov 26, 2023):
Not yet. Our main admin has to perform this setup.
@ser commented on GitHub (Oct 22, 2025):
https://gawsoft.com/blog/headscale-litefs-consul-replication-failover/
@unixfox commented on GitHub (Oct 22, 2025):
I wouldn't use litefs, it's in a "pause" state (https://community.fly.io/t/litefs-discontinued/23682) in favor of litestream: https://fly.io/blog/litestream-revamped/
But the idea is good.
Though I have personally implemented the same idea with Patroni instead: PostgreSQL + Consul.
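For reference, a hedged sketch of the client side of such a pair: with Patroni (or any primary/standby setup), the application only needs a connection string that ends up at the current primary. A libpq-style multi-host URL with target_session_attrs=read-write is one way to get that without extra tooling (a Consul DNS name for the leader works too). Host names and credentials below are placeholders, and the multi-host syntax should be verified against the driver version in use.

```go
// Hedged illustration: connect to whichever Postgres node is currently the
// primary in a two-node, Patroni-managed pair. The comma-separated host list
// and target_session_attrs=read-write are libpq-style options that pgx's
// connection-string parsing is intended to accept; pg1/pg2 and the
// credentials are placeholders.
package main

import (
	"context"
	"log"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()

	dsn := "postgres://headscale:secret@pg1.internal:5432,pg2.internal:5432/headscale" +
		"?target_session_attrs=read-write&connect_timeout=3"

	conn, err := pgx.Connect(ctx, dsn)
	if err != nil {
		log.Fatalf("no writable primary reachable: %v", err)
	}
	defer conn.Close(ctx)

	if err := conn.Ping(ctx); err != nil {
		log.Fatalf("primary unreachable: %v", err)
	}
	log.Println("connected to the current primary")
}
```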