Mirror of https://github.com/juanfont/headscale.git, synced 2026-01-11 20:00:28 +01:00

Support for WireGuard only peers #554
Open · opened 2025-12-29 02:19:56 +01:00 by adam · 51 comments
Originally created by @Nemo157 on GitHub (Sep 7, 2023).
Why
Tailscale just announced their support for integrated Mullvad exit nodes. Being able to configure a similar setup via Headscale and an independent Mullvad account (or other wireguard VPN provider) would be useful for those of us without a Tailscale account.
Description
I haven't looked deeply into the details, but it's my understanding that this is implemented via a "WireGuard only peer" feature, and then support in the Tailscale coordination server to synchronize these peers with Mullvad. I assume it would be possible for Headscale to allow manually configuring these peer types.
@sachiniyer commented on GitHub (Sep 8, 2023):
It also seems like mullvad publishes a script to connect to mullvad servers. The most interesting thing is that you basically link public keys with your account (which is why I think that there is a preconfiguration step to register devices in their announcement).
It seems like the simplest implementation could be to create an external script that calls `registerNodeCmd` on Mullvad endpoints (marking them as WireguardOnly), and then calls the Mullvad API with each of the node's public keys you want to link. I think the RegisterMachine, machine config, and node conversion would need to be changed.
(I also am really not an expert in this, so please take it with a grain of salt)
Edit: what needs to be changed
@ghost commented on GitHub (Sep 11, 2023):
I think this was intended by the issue author, but to reiterate it seems more useful to me to allow any generic WireGuard-only peer as an exit node, not just Mullvad servers. That way headscale doesn't have to be tied to one VPN provider like the Tailscale coordination server currently is.
@ghost commented on GitHub (Sep 11, 2023):
It might be possible to support this for any WireGuard server peer by accepting peer config files like those generated by Mullvad's WireGuard config file generator, as described in this guide, or by just asking the user to provide the generated fields we need at the CLI when adding the peer.
@Nemo157 commented on GitHub (Sep 11, 2023):
I don't think it's possible to import a generated config file, because that contains a randomized private key. The provider needs to support uploading the existing public key from the devices that will connect. That doesn't seem possible through Mullvad's website, it wants the private key specified so it can embed it in the generated config files, but it is possible through the API. I haven't used other wireguard based vpn services so I'm not sure if being able to upload existing keys is common.
@JadedHearth commented on GitHub (Sep 18, 2023):
Why would it be required to upload an existing public key?
@ghost commented on GitHub (Sep 18, 2023):
@WoodenMaxim From what I can tell, each Tailscale node only has a single private/public key pair that is generated when they are created, and then it uses that pair with every other node. So, when adding a non-Tailscale WireGuard endpoint like a Mullvad server, that other end needs to know (all of the) existing Tailscale nodes' public keys that are going to connect to it.
@noseshimself commented on GitHub (Sep 27, 2023):
How does adding a Wireguard-only exit node get the public key of the nodes intending to use it into that node's configuration? If there was an easy solution for this we would not need Tailscale...
@infogulch commented on GitHub (Sep 27, 2023):
How does mullvad do it?
@noseshimself commented on GitHub (Sep 28, 2023):
No. "How does Tailscale do it?" Obviously by being a kind of "reseller" and having an interface to provision the mullvad IAM that way. The more interesting question is how the tailscale client is selecting the "exit node" it wants to use.
I was already wondering about this in other settings. If there are multiple possible exit nodes for a destination or multiple Internet gateways how is the most appropriate node selected and how can I influence the choice?
@infogulch commented on GitHub (Sep 28, 2023):
https://tailscale.com/kb/1103/exit-nodes/?tab=linux
With that resolved, we still need to figure out how to get the wireguard public keys of the tailscale nodes with permission to access the exit node into the wireguard-only peer, and vice-versa.
Maybe it's as simple as "run a command to dump the full list of keys in a form that the wireguard-only peer can consume, and expect the admin to put that configuration onto the node (and keep it up to date) manually". That may not be very palatable, but the alternative is writing software to sync keys automatically in which case why not just run a full tailscale node?
Maybe the best solution would be to just add some example docs showing how you can execute this pattern with a regular tailscale node...
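The "dump the full list of keys" idea amounts to generating, for the plain WireGuard host, one peer stanza per tailnet node that is allowed to use it. A hypothetical sketch of what such generated output could look like (all keys and addresses are invented placeholders, not headscale output):

```
# /etc/wireguard/wg0.conf on the WireGuard-only exit node (illustrative)
[Interface]
PrivateKey = <exit node's own private key>
ListenPort = 51820

# One [Peer] per tailnet node; public keys exported from the coordination server
# and kept up to date by the admin or a script, as discussed above.
[Peer]
PublicKey = <public key of node "laptop">
AllowedIPs = 100.64.0.11/32

[Peer]
PublicKey = <public key of node "phone">
AllowedIPs = 100.64.0.12/32
```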
@sosnik commented on GitHub (Sep 28, 2023):
Mullvad themselves provide a script to generate vanilla wg configs instead of mullvad's native client. The client public key is communicated over mullvad's (sadly undocumented) API.
@infogulch commented on GitHub (Sep 28, 2023):
Very interesting, thanks for sharing sosnik.
https://github.com/mullvad/mullvad-wg.sh/blob/main/mullvad-wg.sh#L59
Roughly, the script writes `/etc/wireguard/$CODE.conf` with the aforementioned connection details so you can connect by running `wg-quick up $CODE`.

I also found this gist that loosely documents uploading and revoking keys which may also be useful for mullvad: https://gist.github.com/izzyleung/98bcc1c0ecf424c1896dac10a3a4a1f8
So mullvad relays are configured via simple https POSTs to their api (which great, I love that they use simple tech). This may be useful if we want to implement mullvad support into headscale.
That said, it doesn't help us much if we just have a random wireguard relay sitting on a vps and we want to add "Support for WireGuard only peers", as the title of this issue suggests. Honestly I'm not even sure how we would expect a plain WG exit node to work in theory: Either you configure it completely manually, exporting private keys from tailnet nodes and importing them into the WG node which is annoying; or just... you know, install tailscale on that node instead, tailscale was invented to solve that annoyance in the first place.
With that in mind, I think we should open a new issue / retitle this issue to "Support for mullvad exit nodes" (if desired), and add a recipe to the docs showing how to set up your own exit node by running a headscale node on a vps or something, because a "WireGuard only peer" is a non starter.
@Nemo157 commented on GitHub (Sep 28, 2023):
Just to clarify, you need to export public keys to configure the WG node (and import the WG nodes' public key into headscale).
I think there are still situations where this could be useful. One thought is that you want to connect devices to an organization managed WG node, where you don't have permission to install tailscale but you are able to provide your public keys to be configured on the node.
The other main reason I think to target just supporting "wireguard only peers" first is that they are the only thing that the tailscale protocol knows about. If they are supported by headscale then scripts can be written to configure them for whatever situation is needed, while if headscale instead only supports talking to the Mullvad API it blocks being able to configure for other situations. That doesn't mean headscale shouldn't support talking to Mullvad itself, but I think it should build on the general functionality of wireguard only peers.
@sosnik commented on GitHub (Sep 28, 2023):
I am with Nemo157 on this one.
At a bare minimum, headscale should support exporting a vanilla wireguard config (peer public keys and endpoints) for use with other wireguard clients.

Supporting only mullvad opens the door to "But why not Proton" / "Why not X" conversations.

One other thing to consider is that commercial VPN providers will limit you to X number of concurrent connections (I think mullvad's limit is 5?). If someone's tailnet (headnet?) has more than 5 devices, we don't want to give mullvad more than 5 public keys and run up against such limits by accident.
@almereyda commented on GitHub (Sep 28, 2023):
I strongly believe this would come with all the overhead involved in implementing a WireGuard management API, for which many examples exist.
Examples
Do we already know the API surface of the Tailscale coordination server, which needs to be mimicked by Headscale for supporting the client feature implemented in tailscale/tailscale#7821?
Practically speaking, it appears this use case will be much easier to achieve with a Tailscale network and a given WireGuard network residing in the same namespace, and routing being allowed between their subnets.
To continue to distinguish between (1) sole support for WireGuard peers and (2) a default route via an external WireGuard VPN:

a `::/0` route, and that is locally forwarded through the above subnet, given another `::/0` route would be inherited from there?

@infogulch commented on GitHub (Sep 28, 2023):
Thanks for the correction, please excuse my typo.
I'll relax my stance a bit here, it seems perfectly reasonable to allow headscale users to manually configure wireguard peers by exporting node public keys and importing remote endpoint public keys by some cli or api, and to expect VPN configuration scripts to be layered on top of this feature. Though, how it interacts with ACL and other tailscale features appears to present some non-trivial remaining challenges.
Wrt namespacing and routing, are routing "announcements" a thing in wg? The mullvad script explicitly sets config on the client to route ips over the interface with `AllowedIPs = 0.0.0.0/0, ::/0`. From that I'd guess the admin would have to set the route manually.

@sosnik commented on GitHub (Sep 29, 2023):
Not that I am aware of. wg-quick uses native methods (ip route) to define routes in the host, and no "announcements" per se are actually happening. But I don't think this is a problem: whoever sets this up is aware of the need for manual routing; and if you have access to the node, you (or your management script) can define the routes like it normally would for vanilla wg.
@github-actions[bot] commented on GitHub (Dec 28, 2023):
This issue is stale because it has been open for 90 days with no activity.
@Victor239 commented on GitHub (Dec 28, 2023):
Still relevant.
@noseshimself commented on GitHub (Dec 29, 2023):
But still without any idea about the implementation by those who want it. To summarize: Tailscale (and a few others) exist because there is no simple auto-configuration for Wireguard links in the basic protocol. You either tell us how to introduce the Tailnet to some arbitrary wireguard (exit-)node or we can just as well close this for good.
@Nemo157 commented on GitHub (Dec 29, 2023):
Tailscale already has the client-side feature for this, someone needs to investigate exactly how it is represented by the server and add it to the details provided by Headscale, there is no design work needed for the tailnet side of it. I'm pretty sure once the backend side of how to represent it is investigated the interface to configure that from the CLI will be relatively self-evident, so I'm not sure if there's any point in trying to design it externally. (I would have worked on this myself already, except I really dislike golang, maybe one day I'll eventually give up waiting for someone else and get over my aversion).
@smehrens commented on GitHub (Feb 7, 2024):
I think this feature would be very helpful in several scenarios. If you can:

I) import to headscale
a) a dataset of nodename, owner, publickey, wireguard ip address, external address ... for each wireguard only node
II) export
a) a list of datasets for the headscale nodes which should be able to connect to these nodes
b) an n*m matrix of which headscale node should be able to connect to which "wireguard" node
III) send a webhook if reconfiguration is needed

then the deployment tools should be able to do the rest.

Company rules could require the use of a specific deployment tool and automation process, so the headscale client may not be an option for some systems. Or company rules may not allow headscale installation without a time-consuming certification process for every new software version.

Some appliances do not have support for head/tailscale and don't allow installing third-party software, but allow deployment over ssh, api, ldap or whatever.

A second headscale server may import the exported list and use this for federation.

Of course all these scenarios don't fit the general idea of headscale, but ... maybe there's no other way ...
@unixfox commented on GitHub (Mar 11, 2024):
Does anyone have a mullvad account for testing? I want to check the traffic between the tailscale control plane and the tailscale client in order to understand how mullvad servers are served to the client.
If so, email me at github1545 [at] unixfox.eu

@sefidel commented on GitHub (Mar 11, 2024):
I think you need a Tailscale account with Mullvad add-on for that.
@github-actions[bot] commented on GitHub (Jun 10, 2024):
This issue is stale because it has been open for 90 days with no activity.
@aniqueta commented on GitHub (Jun 11, 2024):
Not stale.
@thedustinmiller commented on GitHub (Jul 29, 2024):
Hello, I went and bought a Mullvad subscription for Tailscale to investigate how it works.
The system is actually really simple: when you use a Mullvad exit node it looks exactly like a normal exit node. Here's what it looks like using a personal exit node vs Mullvad exit node.
and then in the tailscaled.state file it sets the ExitNodeID in the "profile-xxxx" like any other exit node, e.g. Chicago 007 is `"ExitNodeID": "n85Dw3BNhX11CNTRL"`.

When you go to mullvad.net they see your traffic as coming from the server corresponding to that hostname. Beyond that, it's opaque. No Mullvad wireguard configs are exposed; all of that happens within the Tailscale controlled exit node server.
It's a black box from there, although Tailscale provides clear documentation on what data they associate with accounts. Technically speaking, I don't think Mullvad has to do anything for this to work. They provide a CLI and readable API, for example the following is what happens when you login via cli
`curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"` where $PRIVATE_KEY is generated with `wg genkey` and stored in the wireguard config.

I obviously can't tell, but it seems all Tailscale would have to do is set up those exit nodes to chain the wireguard connections; the complex part would be key allocation/licensing.
TL;DR: Neither the Tailscale client nor orchestrator have anything to do with the Mullvad integration. They have specially configured exit nodes that likely do wireguard chaining.
@sosnik commented on GitHub (Jul 31, 2024):
Thanks for putting your money up to investigate, @thedustinmiller.
Still, I don't think that the tailscale control server has no role to play in managing the wg exit nodes, if for no other reason than that you can have an arbitrarily large number of exit nodes and you don't necessarily want to give them each a unique IP on your tailnet at the same time. Is the Chicago exit node always on `100.127.203.60`? Do other nodes use different IPs? Do they use consistent IPs or randomly-assigned ones?

As for connecting the tailscale traffic to wireguard exit traffic, it might be as simple as setting up a minimal container with just ts and wg installed and then:

And then you can switch out which wireguard config (VPN exit node) becomes `wg0` in the above picture. Granted, this is more of a hacky single-user setup, but it might work.

@sefidel commented on GitHub (Jul 31, 2024):
The control server manages them as "wireguard only" nodes, and Disco and DERP are disabled on such nodes.
@thedustinmiller commented on GitHub (Jul 31, 2024):
Here's a list of the exit nodes. The IPs appear consistent, so far at least.
Exit nodes
I am working on exactly what you mentioned with the iptables, I subscribe to normal Mullvad as well and am trying that with the vanilla Wireguard configs they generate.
But yeah I was totally wrong, the search sefidel provided included a test describing wireguard only functionality, I think. I'm guessing this comment means the server has to explicitly define those peers.
This is pretty far outside my usual work, so please let me know if I can provide any other info.
@trinity-geology-unstable commented on GitHub (Aug 3, 2024):
I've been experimenting with essentially this idea in my homelab recently and I've had some success.
I started with having Tailscale and Wireguard both running inside a Ubuntu VM (well actually a cheap rented virtual private server for the 1Gbps connection) which didn't work until I finally found the correct pre-up/down rules and got it going. Then I transferred the concept into a Debian docker image successfully. The only noticeable difference is slightly slower network speeds, 200Mbps inside docker vs 350Mbps on the VM host (i.e. download speed from the perspective of another node on the tailnet using the exit-node), presumably due to docker overhead.
I haven't yet put it together into a nice neat repo that can be shared widely. However I'm happy to share my configs here for anyone that's interested with a big health warning that I'm a noob and don't know what I'm doing so copy at your peril. Hope it helps!
This is the wireguard config file I've created. It's activated using wg-quick after tailscale has been installed and started successfully on the same system:
This is the dockerfile I use to create my docker image:
This is my entrypoint.sh file:
And finally my docker-compose to bring the image up:
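The four files referenced above did not survive the mirroring. Purely as an illustration of the shape such a setup takes (this is not the author's original; the `TS_EXTRA_ARGS` flags and `privileged` setting are taken from later comments in this thread, everything else is a placeholder):

```
# docker-compose.yml sketch: tailscale exit node chained into a WireGuard VPN
services:
  ts-wg-exit:
    build: .                 # Debian image with tailscale + wireguard-tools
    privileged: true         # a later comment suggests this can be avoided
    devices:
      - /dev/net/tun
    environment:
      - TS_EXTRA_ARGS=--login-server=https://headscale.example.com --authkey=<AUTHKEY> --hostname=<HOSTNAME> --advertise-exit-node=true --accept-routes=true --accept-dns=true
    volumes:
      - ./ts_wg_01.conf:/etc/wireguard/ts_wg_01.conf   # VPN provider's wg config
```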
@iikkart commented on GitHub (Oct 11, 2024):
Thank you @trinity-geology-unstable for your efforts! I tried your configs and setup, and found a few flaws (in case someone else wants to try it).

I think there were some naming inconsistencies in the suggested solution:
```
PostUp = wg set vpn1_cloud_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev vpn1_cloud_01 table 51820 # copied from wg-quick
```

I think it should be:

```
PostUp = wg set ts_wg_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev ts_wg_01 table 51820 # copied from wg-quick
```

The reason is mentioned in the comment:
In the Dockerfile I needed to also add openresolv, like this:
Also, at least in my case, I needed to modify docker-compose.yml and add an address of dns-server into this line:
`- TS_EXTRA_ARGS=--login-server=https://headscale.[MYDOMAIN] --authkey=[AUTHKEY] --hostname=[HOSTNAME] --advertise-exit-node=true --accept-routes=true --accept-dns=true --dns=x.x.x.x`. So adding `--dns=x.x.x.x` finally made my endpoint get a connection to the internet.

@github-actions[bot] commented on GitHub (Jan 10, 2025):
This issue is stale because it has been open for 90 days with no activity.
@FracKenA commented on GitHub (Jan 13, 2025):
Not stale.
@stratself commented on GitHub (Mar 27, 2025):
Hi everyone. Inspired by this thread and others, I went for a few days of tinkering and found myself with a workable Tailscale + WireGuard solution that can run as nonroot and also accepts MagicDNS. Essentially, it copies `wg-quick` commands, and routes everything via the tunnel except for WireGuard's endpoint itself.

As a small hack, I also holepunched my headscale's IP:port. With this, the node's non-WireGuard IP address is revealed to headscale's embedded DERP, and I was able to get direct connections (at least on IPv6 for now), which is pretty nice.
Please check out the repo (link again) and see if it helps you, and leave feedback when you run into any issues, thanks :)
P/s: For @trinity-geology-unstable's solution, I believe this can help running without `privileged: True`.

@unixfox commented on GitHub (Apr 29, 2025):
Reiterating my request: https://github.com/juanfont/headscale/issues/1545#issuecomment-1989439954
I don't have a tailscale account, and I don't have a tailscale account with a mullvad subscription. I'm only using mullvad separately.
Would someone be able to share his tailscale account connected to a mullvad subscription or sponsor me a separate tailscale account connected to a mullvad subscription?
I'm willing to investigate how this has been implemented and share some insights here through a PoC in order to implement this feature into headscale and ionscale. And possibly create a PR if the code base is not too complicated.
Hit me up on matrix (@unixfox:matrix.org) or by email (wgheadscale [AT] unixfox ..DOT.. EU)
@sqenixs commented on GitHub (May 4, 2025):
This is the only thing keeping me from using headscale right now. I hope there is enough interest to make it happen eventually.
@unixfox commented on GitHub (May 10, 2025):
Thank you to @sefidel for sharing access to his tailscale account.

I've made amazing progress on this feature.
1. Finding the peer list of wireguard only peers
Using `tailscale debug watch-ipn --show-private-key=true --initial=true` or `tailscale debug netmap --show-private-key` I was able to dump the whole initial peers list given by the tailscale control plane. Here is the file (it's big): https://gist.github.com/unixfox/ce911b4622bc37de07b0c8d0ff0e911a (and raw: 22ac36497c/log.json).

All confidential info has been replaced by dummy data that looks like real data.
Here is an abstract:
It's actually not complicated, the important details are:
- `Endpoints` are the wireguard endpoints, ending with the standard 51820 UDP port.
- `SelfNodeV4MasqAddrForThisPeer`: the IPv4 wireguard address to use for the wireguard tunnel to work. It seems to be unique across the whole tailnet.
- `nodekey:xxx`: the wireguard public key of the peer.
- `"IsWireGuardOnly": true`: the parameter to tell the tailscale client that it's a wireguard server only.
- `"IsJailed": true` is probably important too.
- `"ExitNodeDNSResolvers"`: it's the `DNS=` parameter in wireguard config.

I'm not sure whether tagging `mullvad-exit-node` and specifying `Location` is important.

It seems like you can't use the same mullvad exit node from two machines in the same tailnet. Only one works at a time. Headscale could follow the same principle in order to ease the wireguard server peer implementation.
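Put together, a wireguard-only peer entry in the netmap presumably looks something like the following fragment. The field names come from the dump discussed in this comment; every value here is invented for illustration:

```
{
  "Name": "nz-akl-wg-301.mullvad.ts.net.",
  "Key": "nodekey:<64 hex chars>",
  "Endpoints": ["<server-ip>:51820"],
  "IsWireGuardOnly": true,
  "IsJailed": true,
  "SelfNodeV4MasqAddrForThisPeer": "<unique IPv4 masq address>",
  "ExitNodeDNSResolvers": [{"Addr": "<resolver-ip>"}]
}
```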
2. Testing that the wireguard only peers work
In order to test that a wireguard only peer works, I extracted the private key from the logs, from the line `"PrivateKey": "privkey:XX"`.

Since I was too lazy to code a tool for converting a tailscale private key to a wireguard private key (same for the public key), I used Cursor AI, and it gave me a nice command: `tailscale debug nodekey-to-wg XXXX`. The source code is here: https://gist.github.com/unixfox/f21291add9e7ee35750417d5db3cfe0d

I created a new wireguard config with it:
And that worked perfectly (it's a peer in New Zealand, and I'm in europe):
3. Implement in headscale (or ionscale)
Anybody now has the capability to replicate such a feature in headscale or ionscale using the debug data. I talk about ionscale because I'm a user of ionscale, but ionscale shares the same goal as headscale.

Creating a wireguard server that will work with a tailscale client is a little tricky to do: you have to gather the details with `tailscale debug netmap`, and you have to do that for all the machines in your tailnet. A bit cumbersome.

In a sense, for such a feature to be usable by the general public, one has to implement it in headscale (or ionscale), and create a custom wireguard server that connects to the API of headscale or ionscale.
I'll try to see if I can experiment something in ionscale, like some kind of PoC. Which could certainly help more experienced golang developers to implement the feature properly.
Hopefully this will help with the progress of this feature!
@orelvis15 commented on GitHub (May 15, 2025):
Hello, good afternoon. I have a question. Excuse my ignorance. Is it possible to export the configuration for a WireGuard VPN using Headscale? To use the WireGuard mobile app instead of the Tailscale one?
@Matchlighter commented on GitHub (Jun 11, 2025):
@orelvis15
If you didn't find an answer to this, no. In networking, especially in overlay/VPN networks, there are two "planes" - a "control-plane" and a "data-plane". The data-plane handles communication of actual packets and data. The control-plane handles key-exchange, host-introduction, ACLs, (to some extent) routing, etc. WireGuard itself is just the data-plane - if you're using the WireGuard app, you are the control-plane - you handle key exchange, configuring each host with a list of peers, etc. Tailscale is mostly a control-plane that configures/manages the WG data-plane.
It's technically possible that you could extract the list of hosts and keys from the Tailscale client and use them with a plain WG client, but this wouldn't be full featured (ACLs for one wouldn't work correctly) and it wouldn't work long term as Tailscale does other things too (like cycling client keys, handling addition/removal of hosts, handling roaming hosts, etc).
@lk-vila commented on GitHub (Jul 15, 2025):
This feature would be awesome. Being able to use mullvad exit nodes is the only thing keeping me from migrating to headscale right now.
One thing about the tailscale implementation that could be improved though is the blocking of "intermediary nodes" (exit nodes using an exit node). As each mullvad account is limited to 5 devices, one could use something like this to protect low priority devices that have low bandwidth usage by sharing a connection.
@iridated commented on GitHub (Oct 19, 2025):
Hey I think this feature would be an extremely welcome addition to headscale, and I am going to have a go at implementing it. I noticed the maintainers ask for a design document for each feature, so I wrote up some of my thoughts freeform below. Please let me know if there is some standard format for this instead.
Use case
This feature allows nodes on a tailnet to connect to wireguard nodes which do not run a full Tailscale client. This increases interoperability with nodes which we want to connect to but don't have full control over, for example commercial VPN providers. With this feature implemented, a user can funnel their traffic through a privacy-enhancing proxy without having to disconnect from their personal tailnet.
An alternative to this would be running two VPN connections simultaneously, which comes with networking headaches which can be sidestepped by using tailscale. Further, this is just not possible on mobile devices.
Implementation
The aim is to implement the minimal features necessary in headscale to allow the use case above, and leave the actual configuration of wireguard-only nodes to external scripts.
The feature would add a CLI command to introduce wireguard-only peers to the tailnet, e.g.
`headscale nodes register-wg-only`, accepting parameters `--name`, `--public-key`, `--known-nodes`, `--allowed-ips`, `--endpoints`, `--self-ip{v4/v6}-masq-addr` and `--exit-node-dns-resolvers`. Headscale would then handle distributing the required connection details to the nodes specified by `--known-nodes`.
Remember that since the wg-only node is external to the tailnet, there is no easy way of dynamically updating the list of peers it knows about. Instead, we must specify up-front which nodes it is going to accept connections from. Because this must be configured manually by the server administrator, we exempt WireGuard-only peers from ACLs. Note that even if we didn't have an exemption here, there isn't much we can do to enforce ACLs. For inbound connections from wg-only peers to Tailscale nodes, we follow Tailscale's Mullvad implementation and set `IsJailed = true`, which signals clients to block incoming connections. For outbound connections, any access control must be done on the wg-only host and headscale has no control over this - the only thing we can do is distribute WireGuard connection details only to the known nodes specified by the administrator.
Several features proposed in this issue are desirable but out of scope for the initial implementation. First, the feature should not be tied to any specific provider (e.g. Mullvad), instead allowing the user to configure any external peer supporting WireGuard. Secondly, we hardcode all details of the node when adding it to the network. An extension could implement an interface to update these dynamically without having to delete and recreate the peer.
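To make the proposal concrete, a registration under this scheme might look something like the following. Note that this command and its flags are only proposed here and do not exist in headscale today, and all values are placeholders:

```
headscale nodes register-wg-only \
  --name mullvad-exit \
  --public-key "<wireguard public key>" \
  --known-nodes 1,2 \
  --allowed-ips "0.0.0.0/0,::/0" \
  --endpoints "198.51.100.7:51820" \
  --self-ipv4-masq-addr 10.64.0.2 \
  --self-ipv6-masq-addr "fc00:bbbb::2" \
  --exit-node-dns-resolvers 10.64.0.1
```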
I think there are broadly two ways we could integrate wireguard-only nodes in the existing code.
1. Extend the `Node` type with an `IsWireguardOnly` marker and perform special handling in all functions operating on nodes.
2. Add a separate `WireguardOnlyNode` type (stored in a separate database table) and keep wg-only peer functionality parallel to normal nodes.

I am strongly in favour of the second option. It allows us to slowly introduce wg-only peer functionality without accidentally introducing incorrect behaviour in other functions dealing with nodes. Further, since headscale doesn't control these external peers, the logic surface of our code that wireguard-only peers touch is much smaller than for normal nodes (e.g. no ACL logic, no dynamic IP changes, no DERP etc).
I have already started a proof of concept implementation at https://git.sr.ht/~iridated/headscale/log/wireguard-only-peers and intend to continue working on the code here until it is ready to merge.
Maintenance
I plan to use this feature in conjunction with a Mullvad VPN and I'm happy to contribute to keeping it around in headscale. Since the difficult networking work has already been done by the Tailscale team in client applications, implementing this feature should not introduce significant complexity in the control plane.
@kradalby commented on GitHub (Oct 19, 2025):
I believe it would be more desirable to have this as a JSON blob on the current node object to fit with the current patterns we use.
Alternatively, since they might actually differ quite a bit, no disco keys, no hostinfo(?), having a separate WireGuardNode as a whole might be the right move.
I think in general there are good ideas here, considering that this might require configuration per node, an alternative to have these nodes in the database could be to have them be a declarative configuration that is watched from the server.
Pros, easy to update, con, file update, not api.
@stratself commented on GitHub (Oct 20, 2025):
Hi all, I have a small question
Why would it be not possible to dynamically update known nodes? I'd like to use a special ACL tag e.g.
`tag:vpn-exit-node` in order to facilitate connections, and I guess this would render such a configuration untenable (?).
Maybe the wireguard-only nodes can be recreated every time their peers are updated? Or maybe have them configured to allow access from all nodes in the tailnet, but limit their visibility via ACLs?
Thanks for any answers
@iridated commented on GitHub (Oct 22, 2025):
It comes down to the distinction between the data plane and the control plane described by Matchlighter. WireGuard-only peers are part of the data plane, so we can communicate with them, but not part of the control plane, so we cannot dynamically update their configuration.
Your second solution of configuring the WireGuard-only peer to accept connections from all tailnet peers and using ACLs purely to limit visibility is possible. However, I am somewhat hesitant to allow this functionality because ACLs are supposed to provide strong access control, whereas this would only be security by obscurity. Could you describe your use case in more detail? I'd like to understand what kind of dynamic updates you're looking to achieve.
The more I thought about this, the more I'm convinced that having them as a separate type is the right move. As you've said, they differ from normal nodes in many ways, and it would be sad to introduce special cases and potential bugs in so many parts of the code.
I think that is sensible as well, but having a proper command line API will make it easier for other software to integrate with this feature, which I think is pretty important considering that headscale will not do much configuration here on its own.
I've got an alpha version of the implementation now and I'm currently testing it out on my network. It seems to be stable and working correctly so far, but the presentation of these nodes in the Android client is still a bit janky. Full disclosure: most of the code was written by Claude. I have reviewed it and made some changes, but I don't normally use Go so there could still be bugs hiding.
If anybody else would like to try, the easiest way is to get this container image, which you can load with `docker load` - when running the container you need to make sure to override the user to 0 or it will fail with permission errors. I also made an interactive script to help add Mullvad nodes to the network.
@stratself commented on GitHub (Oct 24, 2025):
I do agree it's kinda unsafe, though I'd like to see it as an option.
In my opinion, it'd be convenient (and consistent with Tailscale) to use nodeAttrs to determine connectivity to the WG peer. I may've been confused between NodeAttrs and ACLs as different methods for controlling distribution, but if there is a way to change `--known-nodes` at runtime via such a policy change, I'd highly appreciate that.
My next concern is: could I use my own tailnet addresses in `--exit-node-dns-resolvers`? The use case is to resolve domains using my own DNS forwarder (Pi-hole, AdGuard Home etc). Hope this is possible since Tailscale normally teleports DNS to the exit node too.
@kradalby commented on GitHub (Oct 24, 2025):
I want to write up some things about how I understand this will end up working, but please keep in mind that I am mostly speculating because I have not looked into the implementation too much. It is mostly to manage expectations on how "normal" WireGuard nodes can work together with Tailscale clients.
a. This means that when a Tailscale client talks to a Tailscale client, the packets flow through more systems (NAT traversal (magicsock), ACL and WireGuard).
b. When a Tailscale client talks to a WireGuard client, the packet flows "directly" to a WireGuard receiver. So you can't use ACL and NAT traversal. I want to point out that it is not a "business" thing by Tailscale to do that; it is because you need those things at both ends.
@iridated point here is quite important to how useful this is in many cases:
Even if a WireGuard client can be added dynamically to your Tailscale client, you will still have to "manually" add it to the WireGuard side of the equation for it to be able to set up the connection.
If you have one node that is to use WireGuard, then this is fairly manageable, but if you have 100s, you have to find a way to automate it, and this is outside of what Headscale can do (of course, we can write API endpoints to generate these configurations, but I suspect they will quickly be all or nothing and you have to accept them on your WireGuard server).
In the world of Tailscale's usage (Mullvad), I suspect that the control server updates Mullvad via their API, which is the "control plane".
As for ACL, I am not sure if this can work at all, for the reason in 3. above. I suspect the Tailscale client may bypass all filters for WireGuard nodes, and then there is nothing we really can do. I think it is important to remember that WireGuard clients are implemented, at least currently, for the sole purpose of being exit nodes (Mullvad), meaning they don't really have any need for filtering.
The client is Open Source, and this can be inspected, so happy to be corrected here.
So ultimately, I think we can get it in, but I suggest people should have the base expectation that WireGuard peers are, "fully accessible" and "manually configured".
@iridated commented on GitHub (Oct 30, 2025):
+1, I think this is the right way to think about this feature. If the external WireGuard node exposes some additional configuration API, it should in principle be possible to update it dynamically by reading node state from headscale. But WireGuard itself provides no such capabilities, so we would need a separate implementation for each service providing such an API. I think such integrations should be kept outside the core headscale project.
This is a good question, I am not sure how Tailscale clients handle this. I will try to test this at some point, but to be clear there isn't much headscale can do to alter this behavior. We simply send these IP addresses to the client which handles them as it wishes.
An alternative could be to avoid setting `--exit-node-dns-resolvers` at all; then it should default to the same DNS you normally use. Again though, I'm not sure how this interacts with exit nodes outside of your tailnet.
I have been daily driving the current implementation for about a week now and I'm happy with how it performs. However, I have come to the conclusion that the current representation of wg-only nodes is suboptimal. Right now, each node is a struct containing a single `SelfIPv4MasqAddr` and `SelfIPv6MasqAddr` and an array of `KnownNodeIDs`. However, most of the time each tailnet node should really be using a different masquerade address. The current workaround is duplicating the wireguard-only node, each copy with a different singleton set of `KnownNodeIDs` - this feels suboptimal. A simple solution would be to attach an array of `(known_node_id, masq_ipv4, masq_ipv6)` to each wireguard-only peer. With this naive implementation, it would be stored as a single database field, making it potentially hard to query.
The essence here is the need to separate the configuration that is inherent to the external WireGuard peer (and hence should be the same for all tailnet nodes, e.g. pubkey, endpoints) from the configuration inherent to the connection between the wg-only peer and a specific tailnet node (which will differ for each node, e.g. masquerade addr).
One way to do this would be to make the db table `wireguard_only_peers` contain only information inherent to a wg-only peer, and have a separate database table `wireguard_only_peer_links` which describes how specific tailnet peers communicate with wg-only nodes. This somewhat complicates the initial configuration but better represents how we should think about connections to wg-only peers.
Before I start implementing this, I am looking for some feedback on which of these approaches I should try, or whether anybody has even better suggestions.
@stratself commented on GitHub (Oct 31, 2025):
Hi,
As for exit nodes DNS, I believe it'd be nice to advertise the address as-typed, and then we'll see how Tailscale clients react.
I couldn't comment on the database design, but I wanna ask about a few more things if that's okay:
Sometimes, I'd like to add/switch machines using a certain wg-peer, and internally this means altering `--known-nodes`. Could there be a command to modify (update) an existing wg-peer too, instead of destroying and recreating it?
In general, tailnet node keys can be rotated over time, right? Then would an external API script to send in the new pubkeys be sufficient? Or does the control plane need to intervene in anything?
I'd like to try scripting this setup with Proton, once this feature is complete. If possible, please expose these routes via API as such scripts should be executed client-side.
Thank you
@kradalby commented on GitHub (Nov 11, 2025):
I think there is a slippery slope here where we will expose some API for you to read the node information, and you can automate configuring your WireGuard, but at that point, you have just implemented a less capable version of Tailscale.
The real usage I would think is services offering WireGuard nodes which you can configure and then you can write a service to do so (VPN providers, Mullvad). I do not think however that we want to integrate such services into Headscale, and it should be expected by people that they will have to run them separately (and that they would be community projects).
I have not looked into WireGuardPeers and I do not know too much about it yet, so there will be some questions.
Am I understanding it so:
We define a `WireGuardPeer`; this is essentially a "shadow" of a node "somewhere on the internet": we have its public IPs, its public key and some routes. Then for each of our Tailscale nodes, the `WireGuardPeer` needs to know the Tailscale node's public key, and a "sham" IP on which it can be contacted, to not bleed into the tailnet IP range?
If something like this is the case, I can imagine something like this:
- `WireGuardPeers` (`wireguard_peers`) - table defining the external node, how to reach it and talk to it. Common to all nodes. No users or tags involved (should we allow tagging?).
- `WireGuardPeersRelation` (dunno the name?) - a denormalised table linking together WireGuardPeer (ID), TailnetNode (ID) (and therefore the public node key), and the fields for the needed IP addresses.
@iridated commented on GitHub (Nov 12, 2025):
With the new connection-based model, this is pretty easy now. You just need to use `headscale nodes add-wg-connection` and `headscale nodes remove-wg-connection`.
I don't think the control plane would need to do anything special here - it should be sufficient to inform the external wireguard-only node that it should expect connections from a new public key. I personally don't use key rotation so I'm not sure how it works in headscale, but I would be surprised if there is any way to get notified when a key is rotated. However, for your use case it might be sufficient to poll `headscale node list --output=json` every few minutes and check manually whether the key has changed.
That's right.
Kind of. We still assign the WireGuard peer a proper unique IP within the tailnet range. I'm not sure if this is necessary, but it looks like that's what tailscale does so the clients might be relying on it somehow.
The IP addresses that we need to configure per-node here are for masquerading packets from our tailnet node to the wg-peer. Packets from our node going to the wg-only peer will be rewritten to have their source IP given by `ipvX-masquerade-addr`. In principle, the wireguard-only peer could understand our tailnet addresses and accept incoming traffic with the tailnet IP - there is no problem with "bleeding" into the tailnet range. However, this isn't feasible with a peer we cannot control, so we instead rewrite the source IP to whatever the peer actually expects.