Support for WireGuard only peers #554

Open
opened 2025-12-29 02:19:56 +01:00 by adam · 51 comments
Owner

Originally created by @Nemo157 on GitHub (Sep 7, 2023).

Why

Tailscale just [announced](https://tailscale.com/blog/mullvad-integration/) their support for integrated Mullvad exit nodes. Being able to configure a similar setup via Headscale and an independent Mullvad account (or other WireGuard VPN provider) would be useful for those of us without a Tailscale account.

Description

I haven't looked deeply into the details, but it's my understanding that this is [implemented via a "WireGuard only peer"](https://github.com/tailscale/tailscale/pull/7821) feature, plus support in the Tailscale coordination server to synchronize these peers with Mullvad. I assume it would be possible for Headscale to allow manually configuring these peer types.


@sachiniyer commented on GitHub (Sep 8, 2023):

It also seems like Mullvad [publishes a script](https://github.com/mullvad/mullvad-wg.sh/blob/main/mullvad-wg.sh) to connect to Mullvad servers. The most interesting thing is that you basically link public keys with your account (which is why I think there is a preconfiguration step to register devices in their announcement).

It seems like the simplest implementation could be to create an external script that calls [`registerNodeCmd`](https://github.com/juanfont/headscale/blob/7edc953d35f8c783b77a9409ab4208f50d5187f8/cmd/headscale/cli/nodes.go#L108) on Mullvad endpoints (marking them as WireguardOnly), and then calls the [Mullvad API](https://github.com/mullvad/mullvad-wg.sh/blob/ce91d41ae72b5166d87cb5de6956d22bc613af81/mullvad-wg.sh#L59) with each of the node's public keys you want to link.

I think [RegisterMachine](https://github.com/juanfont/headscale/blob/7edc953d35f8c783b77a9409ab4208f50d5187f8/hscontrol/db/machine.go#L413), the [machine config](https://github.com/juanfont/headscale/blob/7edc953d35f8c783b77a9409ab4208f50d5187f8/hscontrol/mapper/tail.go#L110), and the [node conversion](https://github.com/juanfont/headscale/blob/7edc953d35f8c783b77a9409ab4208f50d5187f8/cmd/headscale/cli/nodes.go#L108) would need to be changed.

(I am really not an expert in this, so please take it with a grain of salt.)

Edit: what needs to be changed
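A minimal sketch of that external-script idea, assuming a hypothetical `--wireguard-only` registration flag (headscale has no such flag today) and mirroring the API call from mullvad-wg.sh; the network calls are left as comments:

```shell
#!/bin/sh
# Sketch only: the headscale flag below is hypothetical; the Mullvad call
# mirrors the one in mullvad-wg.sh.

# Build the form body POSTed to https://api.mullvad.net/wg. In real use the
# pubkey must be URL-encoded (mullvad-wg.sh uses curl --data-urlencode).
build_mullvad_body() {
  account="$1"
  pubkey="$2"
  printf 'account=%s&pubkey=%s' "$account" "$pubkey"
}

# Per relay (commented out, since neither endpoint exists locally):
#   headscale nodes register --user vpn --key "$RELAY_KEY" --wireguard-only  # hypothetical flag
#   curl -sSL https://api.mullvad.net/wg -d "$(build_mullvad_body "$ACCOUNT" "$NODE_PUBKEY")"
```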


@ghost commented on GitHub (Sep 11, 2023):

I think this was intended by the issue author, but to reiterate it seems more useful to me to allow any generic WireGuard-only peer as an exit node, not just Mullvad servers. That way headscale doesn't have to be tied to one VPN provider like the Tailscale coordination server currently is.


@ghost commented on GitHub (Sep 11, 2023):

It might be possible to support this for any WireGuard server peer by accepting peer config files like those generated by [Mullvad's WireGuard config file generator](https://mullvad.net/en/account/wireguard-config?platform=linux), as described in [this guide](https://mullvad.net/en/help/easy-wireguard-mullvad-setup-linux/), or by just asking the user to provide the generated fields we need at the CLI when adding the peer.
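For reference, the generator emits a standard wg-quick file along these lines (all values here are placeholders); an importer would keep the `Address` and the `[Peer]` section, but would have to ignore the `PrivateKey`, which belongs to the throwaway key the generator produced:

```ini
[Interface]
# placeholder values throughout
PrivateKey = <generated-private-key>
Address = 10.64.0.2/32
DNS = 10.64.0.1

[Peer]
PublicKey = <relay-public-key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <relay-host>:51820
```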


@Nemo157 commented on GitHub (Sep 11, 2023):

I don't think it's possible to import a generated config file, because that contains a randomized private key. The provider needs to support uploading the existing public key from the devices that will connect. That doesn't seem possible through Mullvad's website (it wants the private key specified so it can embed it in the generated config files), but it is possible through the API. I haven't used other WireGuard-based VPN services, so I'm not sure if being able to upload existing keys is common.


@JadedHearth commented on GitHub (Sep 18, 2023):

Why would it be required to upload an existing public key?


@ghost commented on GitHub (Sep 18, 2023):

@WoodenMaxim From what I can tell, each Tailscale node only has a single private/public key pair that is generated when they are created, and then it uses that pair with every other node. So, when adding a non-Tailscale WireGuard endpoint like a Mullvad server, that other end needs to know (all of the) existing Tailscale nodes' public keys that are going to connect to it.


@noseshimself commented on GitHub (Sep 27, 2023):

How does adding a Wireguard-only exit node get the public key of the nodes intending to use it into that node's configuration? If there was an easy solution for this we would not need Tailscale...


@infogulch commented on GitHub (Sep 27, 2023):

How does mullvad do it?


@noseshimself commented on GitHub (Sep 28, 2023):

How does mullvad do it?

No. "How does Tailscale do it?" Obviously by being a kind of "reseller" and having an interface to provision the mullvad IAM that way. The more interesting question is how the tailscale client is selecting the "exit node" it wants to use.

I was already wondering about this in other settings. If there are multiple possible exit nodes for a destination or multiple Internet gateways how is the most appropriate node selected and how can I influence the choice?


@infogulch commented on GitHub (Sep 28, 2023):

https://tailscale.com/kb/1103/exit-nodes/?tab=linux

tailscale up --exit-node=<exit-node-ip>

With that resolved, we still need to figure out how to get the wireguard public keys of the tailscale nodes with permission to access the exit node into the wireguard-only peer, and vice-versa.

Maybe it's as simple as "run a command to dump the full list of keys in a form that the wireguard-only peer can consume, and expect the admin to put that configuration onto the node (and keep it up to date) manually". That may not be very palatable, but the alternative is writing software to sync keys automatically in which case why not just run a full tailscale node?

Maybe the best solution would be to just add some example docs showing how you can execute this pattern with a regular tailscale node...
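The "dump the full list of keys" step could be as small as a text filter. A sketch, where the three-column input format (name, public key, tailnet IP) and the idea of emitting `[Peer]` stanzas are assumptions of mine, not an existing headscale export:

```shell
#!/bin/sh
# Turn "name pubkey tailnet_ip" lines into [Peer] stanzas a plain WireGuard
# node can consume. The input format is assumed; headscale has no such
# export command today.
to_peer_stanzas() {
  while read -r name pubkey ip; do
    printf '# %s\n[Peer]\nPublicKey = %s\nAllowedIPs = %s/32\n\n' \
      "$name" "$pubkey" "$ip"
  done
}
```

The admin would still have to re-run this (and reload the peer) whenever the node list changes, which is exactly the manual upkeep described above.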


@sosnik commented on GitHub (Sep 28, 2023):

Mullvad themselves provide [a script](https://github.com/mullvad/mullvad-wg.sh/tree/main) to generate vanilla wg configs instead of using Mullvad's native client.

The client public key is communicated over Mullvad's (sadly undocumented) API.



@infogulch commented on GitHub (Sep 28, 2023):

Very interesting, thanks for sharing, sosnik.

https://github.com/mullvad/mullvad-wg.sh/blob/main/mullvad-wg.sh#L59

curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"

Roughly the script:

  • Reads your mullvad account number from stdin
  • Downloads a full listing of mullvad relays from the api, including the id/"CODE", location, public key, and endpoint of each.
  • Uploads your public key to the mullvad api using your account number (see code quoted above)
  • Creates a wg config file for each relay named /etc/wireguard/$CODE.conf with aforementioned connection details so you can connect by running wg-quick up $CODE
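The steps above can be condensed into a sketch; the network calls are left as comments (see the script itself for the exact relay-list URL), and only the file-naming helper is runnable:

```shell
#!/bin/sh
# Condensed sketch of mullvad-wg.sh's flow; not the script itself.
# 1. read -r ACCOUNT                            # account number from stdin
# 2. download the relay list from the Mullvad API (URL per the script)
# 3. curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" \
#        --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"  # link our key

# 4. one config file per relay, named after its CODE:
conf_path() { printf '/etc/wireguard/%s.conf' "$1"; }
# e.g. relay code "de4" -> /etc/wireguard/de4.conf, then: wg-quick up de4
```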

I also found this gist that loosely documents uploading and revoking keys which may also be useful for mullvad: https://gist.github.com/izzyleung/98bcc1c0ecf424c1896dac10a3a4a1f8


So mullvad relays are configured via simple https POSTs to their api (which great, I love that they use simple tech). This may be useful if we want to implement mullvad support into headscale.

That said, it doesn't help us much if we just have a random wireguard relay sitting on a vps and we want to add "Support for WireGuard only peers", as the title of this issue suggests. Honestly I'm not even sure how we would expect a plain WG exit node to work in theory: Either you configure it completely manually, exporting private keys from tailnet nodes and importing them into the WG node which is annoying; or just... you know, install tailscale on that node instead, tailscale was invented to solve that annoyance in the first place.

With that in mind, I think we should open a new issue / retitle this issue to "Support for mullvad exit nodes" (if desired), and add a recipe to the docs showing how to set up your own exit node by running a headscale node on a vps or something, because a "WireGuard only peer" is a non starter.


@Nemo157 commented on GitHub (Sep 28, 2023):

Either you configure it completely manually, exporting private keys from tailnet nodes and importing them into the WG node which is annoying

Just to clarify, you need to export public keys to configure the WG node (and import the WG nodes' public key into headscale).

I think there are still situations where this could be useful. One thought is that you want to connect devices to an organization managed WG node, where you don't have permission to install tailscale but you are able to provide your public keys to be configured on the node.

The other main reason I think to target just supporting "wireguard only peers" first is that they are the only thing that the tailscale protocol knows about. If they are supported by headscale then scripts can be written to configure them for whatever situation is needed, while if headscale instead only supports talking to the Mullvad API it blocks being able to configure for other situations. That doesn't mean headscale shouldn't support talking to Mullvad itself, but I think it should build on the general functionality of wireguard only peers.


@sosnik commented on GitHub (Sep 28, 2023):

I am with Nemo157 on this one. At a bare minimum, headscale should support exporting a vanilla WireGuard config (peer public keys and endpoints) for use with other WireGuard clients. Supporting only Mullvad opens the door to "But why not Proton" / "Why not X" conversations.

One other thing to consider is that commercial VPN providers will limit you to X number of concurrent connections (I think Mullvad's limit is 5?). If someone's tailnet (headnet?) has more than 5 devices, we don't want to give Mullvad more than 5 public keys and run up against such limits by accident.



@almereyda commented on GitHub (Sep 28, 2023):

I strongly believe this would come with all the overhead involved in implementing a WireGuard management API, for which [many examples](https://github.com/topics/wireguard) exist:

  • https://github.com/firezone/firezone
  • https://github.com/gravitl/netmaker
  • https://github.com/netbirdio/netbird
  • https://github.com/ngoduykhanh/wireguard-ui

Do we already know the API surface of the Tailscale coordination server, which needs to be mimicked by Headscale for supporting the client feature implemented in tailscale/tailscale#7821?


Practically speaking, it appears this use case will be much easier to achieve with a Tailscale network and a given WireGuard network residing in the same namespace, and routing being allowed between their subnets.

To continue to distinguish between (1) sole support for WireGuard peers and (2) a default route via an external WireGuard VPN:

  • Maybe a node can announce a route to the WG subnet, and that would be about it for "native WireGuard clients", for now?
  • For the VPN use case, maybe an exit node can announce the ::/0 route, and that is locally forwarded through the above subnet, given another ::/0 route would be inherited from there?

@infogulch commented on GitHub (Sep 28, 2023):

Just to clarify, you need to export public keys

Thanks for the correction, please excuse my typo.


I'll relax my stance a bit here, it seems perfectly reasonable to allow headscale users to manually configure wireguard peers by exporting node public keys and importing remote endpoint public keys by some cli or api, and to expect VPN configuration scripts to be layered on top of this feature. Though, how it interacts with ACL and other tailscale features appears to present some non-trivial remaining challenges.


Wrt namespacing and routing, are routing "announcements" a thing in wg? The mullvad script explicitly sets config on the client to route ips over the interface with AllowedIPs = 0.0.0.0/0, ::/0. From that I'd guess the admin would have to set the route manually.


@sosnik commented on GitHub (Sep 29, 2023):

Wrt namespacing and routing, are routing "announcements" a thing in wg? The mullvad script explicitly sets config on the client to route ips over the interface with AllowedIPs = 0.0.0.0/0, ::/0. From that I'd guess the admin would have to set the route manually.

Not that I am aware of. wg-quick uses native methods (ip route) to define routes on the host; no "announcements" per se are actually happening. But I don't think this is a problem:

  1. The people who will use vanilla WireGuard will be savvy enough to be aware of the need for manual routing; and
  2. WireGuard exit node config is going to be a per-node thing anyway, and if you have access to the node, you (or your management script) can define the routes as you normally would for vanilla wg.
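To make point 2 concrete, the "define the routes like normal" step is a couple of `ip route` calls. A simplified sketch, where the interface name `wg0` and the `DRY_RUN` wrapper are mine; real wg-quick additionally uses fwmark-based policy routing for `/0` routes to avoid a routing loop, which is omitted here:

```shell
#!/bin/sh
# Simplified manual routing for AllowedIPs = 0.0.0.0/0, ::/0. wg0 is an
# assumed interface name. DRY_RUN=1 prints the commands instead of running
# them (handy without root).
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

add_default_routes() {
  dev="$1"
  run ip route add default dev "$dev"
  run ip -6 route add default dev "$dev"
}
```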



@github-actions[bot] commented on GitHub (Dec 28, 2023):

This issue is stale because it has been open for 90 days with no activity.


@Victor239 commented on GitHub (Dec 28, 2023):

Still relevant.


@noseshimself commented on GitHub (Dec 29, 2023):

Still relevant.

But still without any idea about the implementation by those who want it. To summarize: Tailscale (and a few others) exist because there is no simple auto-configuration for Wireguard links in the basic protocol. You either tell us how to introduce the Tailnet to some arbitrary wireguard (exit-)node or we can just as well close this for good.


@Nemo157 commented on GitHub (Dec 29, 2023):

Tailscale already has the client-side feature for this, someone needs to investigate exactly how it is represented by the server and add it to the details provided by Headscale, there is no design work needed for the tailnet side of it. I'm pretty sure once the backend side of how to represent it is investigated the interface to configure that from the CLI will be relatively self-evident, so I'm not sure if there's any point in trying to design it externally. (I would have worked on this myself already, except I really dislike golang, maybe one day I'll eventually give up waiting for someone else and get over my aversion).

@smehrens commented on GitHub (Feb 7, 2024):

I think this feature would be very helpful in several scenarios:

  1. clients with ssh but without a headscale/tailscale client
  2. clients controlled by a deployment system like ansible, chef, puppet, intune, apple mdm
  3. clients controlled by a second headscale server
  4. nodes belonging to a different company.

If you can:

I) import to headscale
   a) a dataset of nodename, owner, publickey, wireguard ip address, external address ... for each wireguard-only node
II) export
   a) a list of datasets for the headscale nodes which should be able to connect to these nodes
   b) an n*m matrix of which headscale node should be able to connect to which "wireguard" node
III) send a webhook if reconfiguration is needed

then the deployment tools should be able to do the rest.

Company rules could require the use of a specific deployment tool and automation process, so the headscale client may be no option for some systems. Or company rules may not allow headscale installation without a time-consuming certification process for every new software version.

Some appliances do not support head/tailscale and don't allow installing third-party software, but allow deployment over ssh, api, ldap or whatever.

A second headscale server could import the exported list and use it for federation.

Of course all these scenarios don't fit the general idea of headscale, but maybe there is no other way ...

@unixfox commented on GitHub (Mar 11, 2024):

Does anyone have a mullvad account for testing? I want to check the traffic between the tailscale control plane and the tailscale client in order to understand how mullvad servers are served to the client.

If so, email me at github1545 [at] unixfox.eu

@sefidel commented on GitHub (Mar 11, 2024):

> @unixfox I want to check the traffic between the tailscale control plane and the tailscale client in order to understand how mullvad servers are served to the client.

I think you need a Tailscale account with Mullvad add-on for that.

@github-actions[bot] commented on GitHub (Jun 10, 2024):

This issue is stale because it has been open for 90 days with no activity.

@aniqueta commented on GitHub (Jun 11, 2024):

Not stale.

@thedustinmiller commented on GitHub (Jul 29, 2024):

Hello, I went and bought a Mullvad subscription for Tailscale to investigate how it works.

The system is actually really simple: when you use a Mullvad exit node it looks exactly like a normal exit node. Here's what it looks like using a personal exit node vs Mullvad exit node.

 IP                  HOSTNAME                           COUNTRY            CITY                   STATUS     
 100.xxx.xxx.xxx        personal.tailedxxxx.ts.net              -                  -                      -          
 100.127.203.60      us-chi-wg-007-1.mullvad.ts.net     USA                Chicago, IL            -          

and then in the tailscaled.state file it sets the `ExitNodeID` in the `"profile-xxxx"` entry like any other exit node, e.g. Chicago 007 is `"ExitNodeID": "n85Dw3BNhX11CNTRL",`

When you go to mullvad.net they see your traffic as coming from the server corresponding to that hostname. Beyond that, it's opaque. No Mullvad wireguard configs are exposed, all of that happens within the Tailscale controlled exit node server.

It's a black box from there, although Tailscale provides clear documentation on what data they associate with accounts. Technically speaking, I don't think Mullvad has to do anything for this to work. They provide a CLI and a readable API; for example, the following is what happens when you log in via the CLI:
`curl -sSL https://api.mullvad.net/wg -d account="$ACCOUNT" --data-urlencode pubkey="$(wg pubkey <<<"$PRIVATE_KEY")"`
where `$PRIVATE_KEY` is generated with `wg genkey` and stored in the wireguard config.

I obviously can't tell, but it seems all Tailscale would have to do is set up those exit nodes to chain the wireguard connections; the complex part would be key allocation/licensing.

TL;DR: Neither the Tailscale client nor orchestrator have anything to do with the Mullvad integration. They have specially configured exit nodes that likely do wireguard chaining.

@sosnik commented on GitHub (Jul 31, 2024):

Thanks for putting your money up to investigate, @thedustinmiller.

Still, I don't think that the tailscale control server has *no* role to play in managing the wg exit nodes, if for no other reason than that you can have an arbitrarily large number of exit nodes and you don't necessarily want to give them all a unique IP on your tailnet at the same time. Is the Chicago exit node always on `100.127.203.60`? Do other nodes use different IPs? Do they use consistent IPs or randomly-assigned ones?

As for connecting the tailscale traffic to wireguard exit traffic, it might be as simple as setting up a minimal container with just ts and wg installed and then:

```shell
iptables -A FORWARD -i tailscale0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o tailscale0 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```

And then you can switch out which wireguard config (VPN exit node) becomes wg0 in the above picture. Granted, this is more of a hacky single-user setup, but it might work.

@sefidel commented on GitHub (Jul 31, 2024):

The control server manages them as ["wireguard only" nodes](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/tailscale/tailscale%24+wireguardOnly&patternType=keyword&sm=0), and Disco and DERP are disabled on such nodes.

@thedustinmiller commented on GitHub (Jul 31, 2024):

Here's a list of the exit nodes. The IPs appear consistent, so far at least.

Exit nodes
IP                  HOSTNAME                           COUNTRY            CITY                   STATUS      
 100.91.198.95       al-tia-wg-001.mullvad.ts.net       Albania            Tirana                 -           
 100.65.216.68       au-adl-wg-301.mullvad.ts.net       Australia          Any                    -           
 100.65.216.68       au-adl-wg-301.mullvad.ts.net       Australia          Adelaide               -           
 100.70.240.117      au-bne-wg-301.mullvad.ts.net       Australia          Brisbane               -           
 100.117.126.96      au-mel-wg-301.mullvad.ts.net       Australia          Melbourne              -           
 100.88.22.25        au-per-wg-301.mullvad.ts.net       Australia          Perth                  -           
 100.100.169.122     au-syd-wg-001.mullvad.ts.net       Australia          Sydney                 -           
 100.79.65.118       at-vie-wg-001.mullvad.ts.net       Austria            Vienna                 -           
 100.120.7.76        be-bru-wg-101.mullvad.ts.net       Belgium            Brussels               -           
 100.66.247.50       br-sao-wg-201.mullvad.ts.net       Brazil             Sao Paulo              -           
 100.98.0.17         bg-sof-wg-001.mullvad.ts.net       Bulgaria           Sofia                  -           
 100.112.151.82      ca-mtr-wg-001.mullvad.ts.net       Canada             Any                    -           
 100.88.213.131      ca-yyc-wg-201.mullvad.ts.net       Canada             Calgary                -           
 100.112.151.82      ca-mtr-wg-001.mullvad.ts.net       Canada             Montreal               -           
 100.111.245.115     ca-tor-wg-001.mullvad.ts.net       Canada             Toronto                -           
 100.85.34.142       ca-van-wg-201.mullvad.ts.net       Canada             Vancouver              -           
 100.100.203.20      cl-scl-wg-001.mullvad.ts.net       Chile              Santiago               -           
 100.81.101.39       co-bog-wg-001.mullvad.ts.net       Colombia           Bogota                 -           
 100.99.12.129       hr-zag-wg-001.mullvad.ts.net       Croatia            Zagreb                 -           
 100.91.160.150      cz-prg-wg-101.mullvad.ts.net       Czech Republic     Prague                 -           
 100.66.11.120       dk-cph-wg-001.mullvad.ts.net       Denmark            Copenhagen             -           
 100.127.234.115     ee-tll-wg-001.mullvad.ts.net       Estonia            Tallinn                -           
 100.117.20.17       fi-hel-wg-001.mullvad.ts.net       Finland            Helsinki               -           
 100.122.231.14      fr-mrs-wg-001.mullvad.ts.net       France             Any                    -           
 100.114.187.107     fr-bod-wg-002.mullvad.ts.net       France             Bordeaux               -           
 100.122.231.14      fr-mrs-wg-001.mullvad.ts.net       France             Marseille              -           
 100.120.246.19      fr-par-wg-001.mullvad.ts.net       France             Paris                  -           
 100.108.29.93       de-fra-wg-009.mullvad.ts.net       Germany            Any                    -           
 100.123.112.113     de-ber-wg-001.mullvad.ts.net       Germany            Berlin                 -           
 100.78.4.15         de-dus-wg-001.mullvad.ts.net       Germany            Dusseldorf             -           
 100.108.29.93       de-fra-wg-009.mullvad.ts.net       Germany            Frankfurt              -           
 100.73.33.78        gr-ath-wg-101.mullvad.ts.net       Greece             Athens                 -           
 100.127.156.146     hk-hkg-wg-201.mullvad.ts.net       Hong Kong          Hong Kong              -           
 100.114.248.11      hu-bud-wg-101.mullvad.ts.net       Hungary            Budapest               -           
 100.119.231.71      id-jpu-wg-001.mullvad.ts.net       Indonesia          Jakarta                -           
 100.117.68.90       ie-dub-wg-101.mullvad.ts.net       Ireland            Dublin                 -           
 100.112.80.91       il-tlv-wg-101.mullvad.ts.net       Israel             Tel Aviv               -           
 100.113.57.90       it-mil-wg-001.mullvad.ts.net       Italy              Any                    -           
 100.113.57.90       it-mil-wg-001.mullvad.ts.net       Italy              Milan                  -           
 100.86.133.93       it-pmo-wg-001.mullvad.ts.net       Italy              Palermo                -           
 100.100.131.39      jp-tyo-wg-001.mullvad.ts.net       Japan              Any                    -           
 100.81.28.91        jp-osa-wg-001.mullvad.ts.net       Japan              Osaka                  -           
 100.100.131.39      jp-tyo-wg-001.mullvad.ts.net       Japan              Tokyo                  -           
 100.83.7.69         lv-rix-wg-001.mullvad.ts.net       Latvia             Riga                   -           
 100.109.204.162     mx-qro-wg-001.mullvad.ts.net       Mexico             Queretaro              -           
 100.73.97.117       nl-ams-wg-007.mullvad.ts.net       Netherlands        Amsterdam              -           
 100.123.7.85        nz-akl-wg-301.mullvad.ts.net       New Zealand        Auckland               -           
 100.118.155.102     no-svg-wg-001.mullvad.ts.net       Norway             Any                    -           
 100.87.93.9         no-osl-wg-001.mullvad.ts.net       Norway             Oslo                   -           
 100.118.155.102     no-svg-wg-001.mullvad.ts.net       Norway             Stavanger              -           
 100.103.142.113     pl-waw-wg-101.mullvad.ts.net       Poland             Warsaw                 -           
 100.81.170.137      pt-lis-wg-201.mullvad.ts.net       Portugal           Lisbon                 -           
 100.120.181.133     ro-buh-wg-001.mullvad.ts.net       Romania            Bucharest              -           
 100.108.32.59       rs-beg-wg-101.mullvad.ts.net       Serbia             Belgrade               -           
 100.110.81.102      sg-sin-wg-001.mullvad.ts.net       Singapore          Singapore              -           
 100.112.134.92      sk-bts-wg-001.mullvad.ts.net       Slovakia           Bratislava             -           
 100.97.155.16       si-lju-wg-001.mullvad.ts.net       Slovenia           Ljubljana              -           
 100.120.39.100      za-jnb-wg-001.mullvad.ts.net       South Africa       Johannesburg           -           
 100.123.96.89       es-bcn-wg-001.mullvad.ts.net       Spain              Any                    -           
 100.123.96.89       es-bcn-wg-001.mullvad.ts.net       Spain              Barcelona              -           
 100.107.199.105     es-mad-wg-101.mullvad.ts.net       Spain              Madrid                 -           
 100.80.70.77        es-vlc-wg-001.mullvad.ts.net       Spain              Valencia               -           
 100.120.166.95      se-got-wg-001.mullvad.ts.net       Sweden             Any                    -           
 100.120.166.95      se-got-wg-001.mullvad.ts.net       Sweden             Gothenburg             -           
 100.66.72.110       se-mma-wg-001.mullvad.ts.net       Sweden             Malmö                  -           
 100.127.8.7         se-sto-wg-001.mullvad.ts.net       Sweden             Stockholm              -           
 100.90.55.149       ch-zrh-wg-001.mullvad.ts.net       Switzerland        Zurich                 -           
 100.68.203.31       th-bkk-wg-001.mullvad.ts.net       Thailand           Bangkok                -           
 100.83.81.112       tr-ist-wg-001.mullvad.ts.net       Turkey             Istanbul               -           
 100.80.218.70       gb-glw-wg-001.mullvad.ts.net       UK                 Any                    -           
 100.80.218.70       gb-glw-wg-001.mullvad.ts.net       UK                 Glasgow                -           
 100.68.61.55        gb-lon-wg-001.mullvad.ts.net       UK                 London                 -           
 100.104.54.116      gb-mnc-wg-001.mullvad.ts.net       UK                 Manchester             -           
 100.127.203.60      us-chi-wg-007-1.mullvad.ts.net     USA                Any                    -           
 100.121.87.35       us-qas-wg-001.mullvad.ts.net       USA                Ashburn, VA            -           
 100.126.111.61      us-atl-wg-001.mullvad.ts.net       USA                Atlanta, GA            -           
 100.85.243.129      us-bos-wg-001.mullvad.ts.net       USA                Boston, MA             -           
 100.127.203.60      us-chi-wg-007-1.mullvad.ts.net     USA                Chicago, IL            -           
 100.70.74.122       us-dal-wg-001.mullvad.ts.net       USA                Dallas, TX             -           
 100.99.135.32       us-den-wg-101.mullvad.ts.net       USA                Denver, CO             -           
 100.100.52.113      us-det-wg-001.mullvad.ts.net       USA                Detroit, MI            -           
 100.82.151.77       us-hou-wg-001.mullvad.ts.net       USA                Houston, TX            -           
 100.95.200.53       us-lax-wg-101.mullvad.ts.net       USA                Los Angeles, CA        -           
 100.106.242.19      us-txc-wg-001.mullvad.ts.net       USA                McAllen, TX            -           
 100.84.251.68       us-mia-wg-002.mullvad.ts.net       USA                Miami, FL              -           
 100.82.221.88       us-nyc-wg-301.mullvad.ts.net       USA                New York, NY           -           
 100.64.17.108       us-phx-wg-103.mullvad.ts.net       USA                Phoenix, AZ            -           
 100.125.49.122      us-rag-wg-101.mullvad.ts.net       USA                Raleigh, NC            -           
 100.76.23.113       us-slc-wg-101.mullvad.ts.net       USA                Salt Lake City, UT     -           
 100.91.152.42       us-sjc-wg-001.mullvad.ts.net       USA                San Jose, CA           -           
 100.96.176.46       us-sea-wg-001.mullvad.ts.net       USA                Seattle, WA            -           
 100.95.30.22        us-uyk-wg-102.mullvad.ts.net       USA                Secaucus, NJ           -           
 100.93.242.75       ua-iev-wg-001.mullvad.ts.net       Ukraine            Kyiv                   -  

I am working on exactly what you mentioned with the iptables; I subscribe to normal Mullvad as well and am trying that with the vanilla WireGuard configs they generate.

But yeah, I was totally wrong. The search sefidel provided included a [test](https://github.com/tailscale/tailscale/blob/655b4f8fc5e14a0faa13a218d34757a546a4b5da/wgengine/magicsock/magicsock_test.go#L2377) describing the wireguard-only functionality, I think. I'm guessing this comment means the server has to explicitly define those peers.

// IsWireGuardOnly indicates that this is a non-Tailscale WireGuard peer, it
// is not expected to speak Disco or DERP, and it must have Endpoints in
// order to be reachable.

This is pretty far outside my usual work, so please let me know if I can provide any other info.

@trinity-geology-unstable commented on GitHub (Aug 3, 2024):

> Thanks for putting your money up to investigate, @thedustinmiller.
>
> Still, I don't think that the tailscale control server has *no* role to play in managing the wg exit nodes if for no other reason than that you can have an arbitrarily-large amount of exit nodes and you don't necessarily want to give them a unique IP on your tailnet at the same time. Is the Chicago exit node always on `100.127.203.60`? Do other nodes use different IPs? Do they use consistent IPs or randomly-assigned ones?
>
> As for connecting the tailscale traffic to wireguard exit traffic, it might be as simple as setting up a minimal container with just ts and wg installed and then:
>
> ```shell
> iptables -A FORWARD -i tailscale0 -o wg0 -j ACCEPT
> iptables -A FORWARD -i wg0 -o tailscale0 -m state --state ESTABLISHED,RELATED -j ACCEPT
> iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
> ```
>
> And then you can switch out which wireguard config (VPN exit node) becomes `wg0` in the above picture. Granted, this is more of a hacky single-user setup, but it might work.

I've been experimenting with essentially this idea in my homelab recently and I've had some success.

I started with having Tailscale and Wireguard both running inside a Ubuntu VM (well actually a cheap rented virtual private server for the 1Gbps connection) which didn't work until I finally found the correct pre-up/down rules and got it going. Then I transferred the concept into a Debian docker image successfully. The only noticeable difference is slightly slower network speeds, 200Mbps inside docker vs 350Mbps on the VM host (i.e. download speed from the perspective of another node on the tailnet using the exit-node), presumably due to docker overhead.

I haven't yet put it together into a nice neat repo that can be shared widely. However I'm happy to share my configs here for anyone that's interested with a big health warning that I'm a noob and don't know what I'm doing so copy at your peril. Hope it helps!

This is the wireguard config file I've created. It's activated using wg-quick after tailscale has been installed and started successfully on the same system:

```ini
[Interface]
Address = [REDACTED]
PrivateKey = [REDACTED]
Table = off # stops wg-quick from auto-generating routing tables
MTU = 1380 # seems to give better network speed when running inside a docker container

PostUp = wg set vpn1_cloud_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev vpn1_cloud_01 table 51820 # copied from wg-quick
PostUp = ip -4 rule add not fwmark 51820 table 51820 pref 32765 # important - I believe this sets the preference of the wireguard tunnel to be higher (i.e. lower priority) than tailscale, allowing both to co-exist
PostUp = ip -4 rule add table main suppress_prefixlength 0 # copied from wg-quick
PostUp = sysctl -q net.ipv4.conf.all.src_valid_mark=1 # copied from wg-quick
PreDown = ip -4 rule del table 51820 # copied from wg-quick
PreDown = ip -4 rule del table main suppress_prefixlength 0 # copied from wg-quick

[Peer]
PublicKey = [REDACTED]
AllowedIPs = 0.0.0.0/0
Endpoint = [REDACTED]:51820
PersistentKeepalive = 25
```

This is the Dockerfile I use to create my docker image:

```dockerfile
# Base image
FROM debian:bullseye-slim

# Install necessary packages
RUN apt-get update && apt-get install -y \
        curl \
        iproute2 \
        iptables \
        wireguard-tools \
        bash \
        procps \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install Tailscale
RUN curl -fsSL https://tailscale.com/install.sh | sh

# Copy and set the entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```

This is my entrypoint.sh file:

```bash
#!/bin/bash

# Check if necessary environment variables are set
: "${TS_EXTRA_ARGS:?Environment variable TS_EXTRA_ARGS must be set}"
: "${WG_CONFIG:?Environment variable WG_CONFIG must be set}"

# Start Tailscale in the background
echo "Starting tailscaled"
tailscaled &
TAILSCALE_PID=$!

# Function to check if tailscaled is running
check_tailscaled() {
    pgrep tailscaled > /dev/null
}

# Wait for tailscaled to start up
echo "Waiting for tailscaled to initialize..."
for i in {1..10}; do
    if check_tailscaled; then
        echo "tailscaled is running"
        break
    fi
    sleep 2
done

# Check again if tailscaled is running after waiting
if ! check_tailscaled; then
    echo "tailscaled failed to start."
    tail -f /dev/null  # Keeps the container running for debugging
fi

# Authenticate with Tailscale
echo "Running Tailscale up"
tailscale up $TS_EXTRA_ARGS  # intentionally unquoted so the args are word-split

# Check if Tailscale is connected
if ! tailscale status | grep -q "Tailscale is stopped"; then
    echo "Tailscale is connected"
    # Ensure WireGuard is up and running
    if [ -f "$WG_CONFIG" ]; then
        echo "Starting WireGuard"
        wg-quick up "$WG_CONFIG"
    else
        echo "WireGuard configuration file not found!"
    fi
    tail -f /dev/null  # Keeps the container running for debugging
else
    echo "Tailscale failed to connect."
    tailscale status  # Optionally: print Tailscale status for further debugging
    tail -f /dev/null  # Keeps the container running for debugging
fi
```

And finally my docker-compose to bring the image up:

```yaml
  ts_wg_01:
    image: ts_wg_01
    container_name: ts_wg_01
    privileged: true
    cap_add:
      - net_admin
      - sys_module
    volumes:
      - /mnt/docker/appdata/ts_wg/state:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
      - /etc/wireguard/ts_wg_01.conf:/etc/wireguard/wg0.conf
    environment:
      - TS_EXTRA_ARGS=--login-server=https://headscale.[MYDOMAIN] --authkey=[AUTHKEY] --hostname=[HOSTNAME] --advertise-exit-node=true --accept-routes=true --accept-dns=true
      - WG_CONFIG=/etc/wireguard/wg0.conf
```

@iikkart commented on GitHub (Oct 11, 2024):

Thank you @trinity-geology-unstable for your efforts! I tried your configs and setup and found a few flaws, in case someone else wants to try it.

I think there were some naming inconsistencies in the suggested solution.

```ini
PostUp = wg set vpn1_cloud_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev vpn1_cloud_01 table 51820 # copied from wg-quick
```

I think it should be:

```ini
PostUp = wg set ts_wg_01 fwmark 51820 # copied from wg-quick. Interface name should match config file name
PostUp = ip -4 route add 0.0.0.0/0 dev ts_wg_01 table 51820 # copied from wg-quick
```

The reason is mentioned in the comment:

"Interface name should match config file name"
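This matters because `wg-quick` names the interface after the config file's basename (minus the `.conf` extension), so the `PostUp`/`PreDown` lines must reference that same name. A small Python illustration of the rule, using the paths from this thread (`wg_interface_name` is just a demonstration helper, not part of any tool):

```python
from pathlib import Path

# wg-quick derives the interface name from the config file's basename
# (without the .conf extension); PostUp/PreDown commands must use it.
def wg_interface_name(config_path: str) -> str:
    return Path(config_path).stem

print(wg_interface_name("/etc/wireguard/ts_wg_01.conf"))       # ts_wg_01
print(wg_interface_name("/etc/wireguard/vpn1_cloud_01.conf"))  # vpn1_cloud_01
```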

In the Dockerfile I also needed to add openresolv, like this:

```dockerfile
RUN apt-get update && apt-get install -y \
        curl \
        iproute2 \
        iptables \
        wireguard-tools \
        bash \
        procps \
        openresolv \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```

Also, at least in my case, I needed to modify docker-compose.yml and add the address of a DNS server to this line: `- TS_EXTRA_ARGS=--login-server=https://headscale.[MYDOMAIN] --authkey=[AUTHKEY] --hostname=[HOSTNAME] --advertise-exit-node=true --accept-routes=true --accept-dns=true --dns=x.x.x.x`. Adding `--dns=x.x.x.x` finally allowed my endpoint to connect to the internet.

@github-actions[bot] commented on GitHub (Jan 10, 2025):

This issue is stale because it has been open for 90 days with no activity.

@FracKenA commented on GitHub (Jan 13, 2025):

Not stale.

@stratself commented on GitHub (Mar 27, 2025):

Hi everyone. Inspired by this thread and others, I went for a few days of tinkering and found myself with a workable Tailscale + WireGuard solution (https://github.com/skedastically/tswg) that can run as nonroot and also accepts MagicDNS. Essentially, it copies `wg-quick` commands and routes everything via the tunnel **except for WireGuard's endpoint itself**.

As a small hack, I also holepunched my headscale's IP:port. With this, the node's non-WireGuard IP address is revealed to headscale's embedded DERP, and I was able to get direct connections (at least on IPv6 for now), which is pretty nice.

Please check out the repo and see if it helps you, and leave feedback when you run into any issues, thanks :)


P/s: For @trinity-geology-unstable's solution, I believe this can help running without `privileged: true`: https://forums.docker.com/t/sysctl-error-setting-key-net-ipv4-conf-all-src-valid-mark-read-only-file-system/92567/11

@unixfox commented on GitHub (Apr 29, 2025):

Reiterating my request: https://github.com/juanfont/headscale/issues/1545#issuecomment-1989439954

I don't have a tailscale account, and I don't have a tailscale account with a mullvad subscription. I'm only using mullvad separately.

Would someone be able to share their tailscale account connected to a mullvad subscription, or sponsor me a separate tailscale account connected to a mullvad subscription?

I'm willing to investigate how this has been implemented and share some insights here through a PoC in order to implement this feature into headscale and ionscale. And possibly create a PR if the code base is not too complicated.

Hit me up on matrix (@unixfox:matrix.org) or by email (wgheadscale [AT] unixfox ..DOT.. EU)

@sqenixs commented on GitHub (May 4, 2025):

This is the only thing keeping me from using headscale right now. I hope there is enough interest to make it happen eventually.

@unixfox commented on GitHub (May 10, 2025):

Thank you to @sefidel for sharing access to his tailscale account.

I got amazing progress on this feature.

1. Finding the peer list of wireguard only peers

Using `tailscale debug watch-ipn --show-private-key=true --initial=true` or `tailscale debug netmap --show-private-key` I was able to dump the whole initial peer list given by the tailscale control plane. Here is the file (it's big): https://gist.github.com/unixfox/ce911b4622bc37de07b0c8d0ff0e911a (raw: https://gist.github.com/unixfox/ce911b4622bc37de07b0c8d0ff0e911a/raw/22ac36497cb42b8a6f578786af86774efa33aed7/log.json).
All confidential info has been replaced with dummy data that looks like real data.

Here is an excerpt:

```json
{
	"ID": 2491202446799032,
	"StableID": "ooraef7Ire11CNTRL",
	"Name": "nz-akl-wg-302.mullvad.ts.net.",
	"User": 2491202446799032,
	"Key": "nodekey:f345865a014ff6adde535e8cb8b248481d5fcc0bb62ccda17ac7219a89154955",
	"KeyExpiry": "0001-01-01T00:00:00Z",
	"Machine": "mkey:0000000000000000000000000000000000000000000000000000000000000000",
	"DiscoKey": "discokey:0000000000000000000000000000000000000000000000000000000000000000",
	"Addresses": [
		"100.85.73.163/32",
		"fd7a:115c:a1e0:ab12:4843:cd96:6255:49a3/128"
	],
	"AllowedIPs": [
		"100.85.73.163/32",
		"fd7a:115c:a1e0:ab12:4843:cd96:6255:49a3/128",
		"0.0.0.0/0",
		"::/0"
	],
	"Endpoints": [
		"103.75.11.66:51820",
		"[2404:f780:5:dec::c02f]:51820"
	],
	"Hostinfo": {
		"Hostname": "nz-akl-wg-302",
		"Location": {
			"Country": "New Zealand",
			"CountryCode": "NZ",
			"City": "Auckland",
			"CityCode": "AKL",
			"Latitude": -36.848461,
			"Longitude": 174.763336,
			"Priority": 100
		}
	},
	"Created": "2023-10-03T09:46:44.301340661Z",
	"Tags": [
		"tag:mullvad-exit-node"
	],
	"LastSeen": "2023-10-03T09:46:44.1Z",
	"Online": true,
	"CapMap": {
		"suggest-exit-node": null
	},
	"ComputedName": "nz-akl-wg-302.mullvad.ts.net",
	"ComputedNameWithHost": "nz-akl-wg-302.mullvad.ts.net (nz-akl-wg-302)",
	"SelfNodeV4MasqAddrForThisPeer": "10.67.92.192",
	"SelfNodeV6MasqAddrForThisPeer": "fc00:bbbb:bbbb:bb01::4:5cbf",
	"IsWireGuardOnly": true,
	"IsJailed": true,
	"ExitNodeDNSResolvers": [
		{
			"Addr": "194.242.2.2"
		}
	]
},
```

It's actually not complicated; the important details are:

- `Endpoints` are the wireguard endpoints, using the standard 51820 UDP port.
- `SelfNodeV4MasqAddrForThisPeer`: the IPv4 wireguard address to use for the wireguard tunnel to work. It seems to be unique across the whole tailnet.
- `nodekey:xxx`: the wireguard public key of the peer.
- `"IsWireGuardOnly": true`: the parameter that tells the tailscale client that it's a wireguard-only server.
- `"IsJailed": true` is probably important too.
- `"ExitNodeDNSResolvers"`: the equivalent of the `DNS=` parameter in a wireguard config.

I'm not sure whether tagging `mullvad-exit-node` and specifying `Location` are important.

It seems like you can't use the same mullvad exit node from two machines in the same tailnet; only one works at a time. Headscale could follow the same principle to ease the wireguard server peer implementation.

2. Testing that the wireguard only peers work

In order to test that a wireguard-only peer works, I extracted the private key from the logs, found in the line `"PrivateKey": "privkey:XX"`.

Since I didn't feel like coding a tool for converting a tailscale private key to a wireguard private key (same for the public key), I used Cursor AI, and it gave me a nice command: `tailscale debug nodekey-to-wg XXXX`. The source code is here: https://gist.github.com/unixfox/f21291add9e7ee35750417d5db3cfe0d
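If, as that gist suggests, a tailscale nodekey is simply the raw 32-byte curve25519 key hex-encoded while WireGuard expects the same bytes base64-encoded, the conversion itself is a one-liner. A Python sketch under that assumption (`nodekey_to_wg` is a hypothetical helper, not an official tool):

```python
import base64

def nodekey_to_wg(nodekey: str) -> str:
    """Hex-encoded tailscale nodekey -> base64 WireGuard key.
    Assumes both encode the same raw 32-byte curve25519 key."""
    raw = bytes.fromhex(nodekey.removeprefix("nodekey:"))
    assert len(raw) == 32, "a WireGuard key is 32 bytes"
    return base64.b64encode(raw).decode()

# Public key of the example peer from the netmap excerpt above:
print(nodekey_to_wg(
    "nodekey:f345865a014ff6adde535e8cb8b248481d5fcc0bb62ccda17ac7219a89154955"
))
```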

I created a new wireguard config with it:

```ini
[Interface]
PrivateKey = XXXXXXXXXXXXXXXXXXXXXX=
Address = 10.67.92.192/32

[Peer]
PublicKey = XXXXXXXXXXXXXXXXXXXXXX=
AllowedIPs = 8.8.8.8/32
Endpoint = 103.75.11.66:51820
PersistentKeepalive = 15
```

And that worked perfectly (it's a peer in New Zealand, and I'm in Europe):

```
LANG=C ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=340 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=340 ms
```
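This manual procedure (pick out the netmap fields, write a plain WireGuard config) is mechanical enough to sketch in a few lines. A hedged illustration only: `render_wg_config` is a hypothetical helper, the keys below are placeholders, and the address/endpoint values come from the netmap excerpt above.

```python
# Hypothetical helper: map the netmap fields used above
# (SelfNodeV4MasqAddrForThisPeer, Endpoints[0], the converted keys)
# onto a plain wg-quick style configuration, mirroring the manual test.
def render_wg_config(private_key: str, address: str, peer_pubkey: str,
                     endpoint: str, allowed_ips: str = "0.0.0.0/0") -> str:
    return (
        "[Interface]\n"
        f"PrivateKey = {private_key}\n"
        f"Address = {address}/32\n"
        "\n"
        "[Peer]\n"
        f"PublicKey = {peer_pubkey}\n"
        f"AllowedIPs = {allowed_ips}\n"
        f"Endpoint = {endpoint}\n"
        "PersistentKeepalive = 15\n"
    )

# Placeholder keys; real values would come from the netmap dump.
print(render_wg_config("PRIVATE_KEY_PLACEHOLDER=", "10.67.92.192",
                       "PUBLIC_KEY_PLACEHOLDER=", "103.75.11.66:51820",
                       allowed_ips="8.8.8.8/32"))
```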

3. Implement in headscale (or ionscale)

Anybody now has the capability to replicate such a feature in headscale or ionscale using the debug data. I mention ionscale (https://github.com/jsiebens/ionscale) because I'm an ionscale user, but it shares the same goal as headscale.

For creating a wireguard server that will work with a tailscale client, it's a little tricky. Two options:

- Manually create a wireguard server that accepts multiple peers, using the nodekeys found via `tailscale debug netmap`. You have to do that for all the machines in your tailnet. A bit cumbersome.
- Create a custom wireguard server that communicates with the API of your headscale (or ionscale) and adds all the peers via the API, maybe using some library that allows creating a wireguard server programmatically.

In a sense, for such a feature to be usable by the general public, one has to implement it in headscale (or ionscale) and create a custom wireguard server that connects to the headscale or ionscale API.
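The first option can be sketched as a small script. `wg_peer_commands` is a hypothetical helper (not an existing tool); `wg set <iface> peer <key> allowed-ips <cidr>` is the standard wg(8) syntax for adding a peer, and the keys/addresses here are placeholders.

```python
# Hypothetical helper for the manual approach above: given each tailnet
# machine's WireGuard public key (base64, e.g. converted from the nodekey
# found via `tailscale debug netmap`) and the masquerade address assigned
# to it, emit the `wg set` commands to run on the WireGuard-only server.
def wg_peer_commands(ifname: str, peers: list[dict]) -> list[str]:
    return [
        f"wg set {ifname} peer {p['pubkey']} allowed-ips {p['masq_ip']}/32"
        for p in peers
    ]

# Placeholder key (44-char base64); real keys come from the tailnet nodes.
placeholder_key = "A" * 43 + "="
for cmd in wg_peer_commands("wg0", [{"pubkey": placeholder_key,
                                     "masq_ip": "10.67.92.192"}]):
    print(cmd)
```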

I'll try to see if I can experiment with something in ionscale, like some kind of PoC, which could help more experienced golang developers implement the feature properly.


Hopefully this will help with the progress of this feature!

@orelvis15 commented on GitHub (May 15, 2025):

Hello, good afternoon. I have a question. Excuse my ignorance. Is it possible to export the configuration for a WireGuard VPN using Headscale? To use the WireGuard mobile app instead of the Tailscale one?

@Matchlighter commented on GitHub (Jun 11, 2025):

@orelvis15

> Hello, good afternoon. I have a question. Excuse my ignorance. Is it possible to export the configuration for a WireGuard VPN using Headscale? To use the WireGuard mobile app instead of the Tailscale one?

If you didn't find an answer to this, no. In networking, especially in overlay/VPN networks, there are two "planes" - a "control-plane" and a "data-plane". The data-plane handles communication of actual packets and data. The control-plane handles key-exchange, host-introduction, ACLs, (to some extent) routing, etc. WireGuard itself is just the data-plane - if you're using the WireGuard app, you are the control-plane - you handle key exchange, configuring each host with a list of peers, etc. Tailscale is mostly a control-plane that configures/manages the WG data-plane.

It's technically possible that you could extract the list of hosts and keys from the Tailscale client and use them with a plain WG client, but this wouldn't be full featured (ACLs for one wouldn't work correctly) and it wouldn't work long term as Tailscale does other things too (like cycling client keys, handling addition/removal of hosts, handling roaming hosts, etc).

@lk-vila commented on GitHub (Jul 15, 2025):

This feature would be awesome. Being able to use mullvad exit nodes is the only thing keeping me from migrating to headscale right now.

One thing about the tailscale implementation that could be improved, though, is the blocking of "intermediary nodes" (exit nodes using an exit node). As each mullvad account is limited to 5 devices, one could use something like this to protect low-priority devices with low bandwidth usage by sharing a connection.

@iridated commented on GitHub (Oct 19, 2025):

Hey, I think this feature would be an extremely welcome addition to headscale, and I am going to have a go at implementing it. I noticed the maintainers ask for a design document for each feature, so I wrote up some of my thoughts freeform below. Please let me know if there is a standard format for this instead.

Use case

This feature allows nodes on a tailnet to connect to wireguard nodes which do not run a full Tailscale client. This increases interoperability with nodes which we want to connect to but don't have full control over, for example commercial VPN providers. With this feature implemented, a user can funnel their traffic through a privacy-enhancing proxy without having to disconnect from their personal tailnet.
An alternative to this would be running two VPN connections simultaneously, which comes with networking headaches that can be sidestepped by using Tailscale. Further, this is simply not possible on mobile devices.

Implementation

The aim is to implement the minimal features necessary in headscale to allow the use case above, and leave the actual configuration of wireguard-only nodes to external scripts.
The feature would add a CLI command to introduce wireguard-only peers to the tailnet, e.g. headscale nodes register-wg-only accepting parameters --name, --public-key, --known-nodes, --allowed-ips, --endpoints, --self-ip{v4/v6}-masq-addr, --exit-node-dns-resolvers.
Headscale would then handle distributing the required connection details to the nodes specified by --known-nodes. Remember that, since the wg-only node is external to the tailnet, there is no easy way to dynamically update the list of peers it knows about. Instead, we must specify up-front which nodes it is going to accept connections from.
Because this must be configured manually by the server administrator, we exempt wireguard-only peers from ACLs. Note that even if we didn't have an exemption here, there isn't much we can do to enforce ACLs. For inbound connections from wg-only peers to tailscale nodes, we follow Tailscale's Mullvad implementation and set IsJailed = true, which signals clients to block incoming connections. For outbound connections, any access control must be done on the wg-only host, and headscale doesn't have control over this; the only thing we can do is distribute wireguard connection details only to the known nodes specified by the administrator.
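To make the proposed command concrete, a hypothetical invocation might look like this (the flag names come from the proposal above; every value below is made up for illustration):

```shell
# Hypothetical invocation of the proposed CLI; all values are placeholders.
headscale nodes register-wg-only \
  --name mullvad-se-sto \
  --public-key '<wg-peer-public-key>' \
  --known-nodes 1,2 \
  --allowed-ips 0.0.0.0/0,::/0 \
  --endpoints 192.0.2.10:51820 \
  --exit-node-dns-resolvers 192.0.2.53
```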
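To illustrate the IsJailed mechanics, here is a minimal, self-contained Go sketch. peerNode is a simplified stand-in for the handful of relevant tailcfg.Node fields, and buildWGOnlyPeer is a made-up helper, not headscale code:

```go
package main

import "fmt"

// peerNode mirrors the two relevant flags on Tailscale's tailcfg.Node
// (field names taken from that package; this local struct is only an
// illustration, not the real type).
type peerNode struct {
	Name            string
	IsWireGuardOnly bool // peer speaks plain WireGuard, runs no Tailscale client
	IsJailed        bool // clients drop inbound connections from this peer
}

// buildWGOnlyPeer sketches how a control server could mark a manually
// registered WireGuard-only peer before putting it in a netmap response.
func buildWGOnlyPeer(name string) peerNode {
	return peerNode{
		Name:            name,
		IsWireGuardOnly: true,
		// Follow Tailscale's Mullvad handling: jail the peer so tailnet
		// nodes refuse connections initiated from it.
		IsJailed: true,
	}
}

func main() {
	p := buildWGOnlyPeer("mullvad-se-sto-001")
	fmt.Println(p.IsWireGuardOnly, p.IsJailed) // prints: true true
}
```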

Several features proposed in this issue are desirable but out of scope for the initial implementation. First, it should not be tied to any specific provider (e.g. Mullvad), instead allowing the user to configure any external peer supporting wireguard. Secondly, we hardcode all details of the node when adding it to the network. An extension could implement an interface to update these dynamically without having to delete and recreate the peer.

I think there are broadly two ways we could integrate wireguard-only nodes in the existing code.

  1. Modify the Node type to have some IsWireguardOnly marker and perform special handling in all functions operating on nodes.
  2. Create a new WireguardOnlyNode type (stored in a separate database table) and keep wg-only peer functionality parallel to normal nodes.

I am strongly in favour of the second option. It allows us to slowly introduce wg-only peer functionality without accidentally introducing incorrect behaviour in other functions dealing with nodes. Further, since headscale doesn't control these external peers, the logic surface of our code that wireguard-only peers deal with is much smaller than for normal nodes (e.g. no ACL logic, no dynamic IP changes, no DERP, etc).

I have already started a proof of concept implementation at https://git.sr.ht/~iridated/headscale/log/wireguard-only-peers and intend to continue working on the code here until it is ready to merge.

Maintenance

I plan to use this feature in conjunction with a Mullvad VPN and I'm happy to contribute to keeping it around in headscale. Since the difficult networking work has already been done by the Tailscale team in client applications, implementing this feature should not introduce significant complexity in the control plane.


@kradalby commented on GitHub (Oct 19, 2025):

Modify the Node type to have some IsWireguardOnly marker and perform special handling in all functions operating on nodes.
Create a new WireguardOnlyNode type (stored in a separate database table) and keep wg-only peer functionality parallel to normal nodes.

I believe it would be more desirable to have this as a JSON blob on the current node object to fit with the current patterns we use.
Alternatively, since they might actually differ quite a bit, no disco keys, no hostinfo(?), having a separate WireGuardNode as a whole might be the right move.

I think in general there are good ideas here. Considering that this might require configuration per node, an alternative to having these nodes in the database could be to have them as a declarative configuration that is watched by the server.
Pro: easy to update. Con: file updates, not an API.


@stratself commented on GitHub (Oct 20, 2025):

Hi all, I have a small question

Headscale would then handle distributing the required connection details to the nodes specified by --known-nodes. Remember since the wg-only node is external to the tailnet, there is no easy way of dynamically updating the list of peers it knows about.

Why would it not be possible to dynamically update known nodes? I'd like to use a special ACL tag e.g. tag:vpn-exit-node in order to facilitate connections, and I guess this would render such a configuration untenable (?).

Maybe the wireguard-only nodes can be recreated every time their peers are updated? Or maybe have them configured to allow access from all nodes in the tailnet, but limit their visibility via ACLs?

Thanks for any answers


@iridated commented on GitHub (Oct 22, 2025):

Why would it not be possible to dynamically update known nodes? I'd like to use a special ACL tag e.g. tag:vpn-exit-node in order to facilitate connections, and I guess this would render such a configuration untenable (?).

Maybe the wireguard-only nodes can be recreated every time their peers are updated? Or maybe have them configured to allow access from all nodes in the tailnet, but limit their visibility via ACLs?

It comes down to the distinction between the data plane and the control plane described by Matchlighter. WireGuard-only peers are part of the data plane, so we can communicate with them, but not part of the control plane, so we cannot dynamically update their configuration.

Your second solution of configuring the WireGuard-only peer to accept connections from all tailnet peers and using ACLs purely to limit visibility is possible. However, I am somewhat hesitant to allow this functionality because ACLs are supposed to provide strong access control, whereas this would only be security by obscurity. Could you describe your use case in more detail? I'd like to understand what kind of dynamic updates you're looking to achieve.

I believe it would be more desirable to have this as a JSON blob on the current node object to fit with the current patterns we use.
Alternatively, since they might actually differ quite a bit, no disco keys, no hostinfo(?), having a separate WireGuardNode as a whole might be the right move.

The more I thought about this, the more I'm convinced that having them as a separate type is the right move. As you've said, they differ from normal nodes in many ways, and it would be sad to introduce special cases and potential bugs in so many parts of the code.

I think in general there are good ideas here, considering that this might require configuration per node, an alternative to have these nodes in the database could be to have them be a declarative configuration that is watched from the server.
Pros, easy to update, con, file update, not api.

I think that is sensible as well, but having a proper command line API will make it easier for other software to integrate with this feature, which I think is pretty important considering that headscale will not do much configuration here on its own.


I've got an alpha version of the implementation now and I'm currently testing it out on my network. It seems to be stable and working correctly so far, but the presentation of these nodes in the Android client is still a bit janky. Full disclosure: most of the code was written by Claude. I have reviewed it and made some changes, but I don't normally use Go so there could still be bugs hiding.

If anybody else would like to try, the easiest way is to get this container image which you can load with docker load - when running the container you need to make sure to override the user to 0 or it will fail with permission errors. I also made an interactive script to help add Mullvad nodes to the network.


@stratself commented on GitHub (Oct 24, 2025):

Your second solution of configuring the WireGuard-only peer to accept connections from all tailnet peers and using ACLs purely to limit visibility is possible. However, I am somewhat hesitant to allow this functionality because ACLs are supposed to provide strong access control, whereas this would only be security by obscurity. Could you describe your use case in more detail? I'd like to understand what kind of dynamic updates you're looking to achieve.

I do agree it's kinda unsafe, though I'd like to see it as an option.

In my opinion, it'd be convenient (and consistent with Tailscale) to use nodeAttrs to determine connectivity to the WG peer. I may've been confused between NodeAttrs and ACLs as different methods for controlling distributions, but if there is a way to change --known-nodes at runtime via such a policy change, I'd highly appreciate that.


My next concern: could I use my own tailnet addresses in --exit-node-dns-resolvers? The use case is to resolve domains using my own DNS forwarder (Pi-hole, AdGuard Home, etc). Hope this is possible, since Tailscale normally teleports DNS to the exit node too.


@kradalby commented on GitHub (Oct 24, 2025):

I want to write up some things about how I understand this will end up working, but please keep in mind that I am mostly speculating because I have not looked into the implementation too much. It is mostly to manage expectations about how "normal" WireGuard nodes can work together with Tailscale clients.

  1. WireGuard requires each client node to know the other's public key
  2. WireGuard requires at least one of the nodes to be publicly reachable on the internet
  3. Tailscale uses WireGuard, but the access control is implemented "after" their WireGuard packet machinery
    a. This means that when a Tailscale client talks to a Tailscale client, the packets flow through more systems (NAT traversal (magicsock), ACL and WireGuard)
    b. When a Tailscale client talks to a WireGuard client, the packets flow "directly" to a WireGuard receiver, so you can't use ACLs and NAT traversal. I want to point out that this is not a "business" decision by Tailscale; it is because you need those things at both ends.
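Point 1 above, spelled out in plain WireGuard terms: the wg-only node's own configuration has to list every tailnet node that may connect. The fragment below uses standard wg-quick syntax, but all keys and addresses are placeholders:

```
# Sketch of /etc/wireguard/wg0.conf on the WireGuard-only node
# (all keys and addresses are placeholders).
[Interface]
PrivateKey = <wg-only-node-private-key>
ListenPort = 51820

# One [Peer] block per tailnet node allowed to connect. Because this node is
# outside the control plane, adding or removing a tailnet node means editing
# this file by hand (or via the provider's own API).
[Peer]
PublicKey = <tailnet-node-public-key>
AllowedIPs = 100.64.0.2/32
```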

@iridated's point here is quite important to how useful this is in many cases:

WireGuard-only peers are part of the data plane, so we can communicate with them, but not part of the control plane, so we cannot dynamically update their configuration.

Even if a WireGuard client can be added dynamically to your Tailscale client, you will still have to "manually" add it to the WireGuard side of the equation for it to be able to set up the connection.
If you have one node that is to use WireGuard, then this is fairly manageable, but if you have hundreds, you have to find a way to automate it, and that is outside of what Headscale can do (of course, we can write API endpoints to generate these configurations, but I suspect they will quickly be all-or-nothing and you will have to accept them on your WireGuard server).

In the world of Tailscale's usage (Mullvad), I suspect that the control server updates Mullvad via their API, which is the "control plane".

As for ACLs, I am not sure if this can work at all, for the reason in 3. above. I suspect the Tailscale client may bypass all filters for WireGuard nodes, and then there is nothing we can really do. I think it is important to remember that WireGuard clients are implemented, at least currently, for the sole purpose of being exit nodes (Mullvad), meaning they don't really have any need for filtering.
The client is Open Source, and this can be inspected, so happy to be corrected here.

So ultimately, I think we can get it in, but I suggest people should have the base expectation that WireGuard peers are "fully accessible" and "manually configured".


@iridated commented on GitHub (Oct 30, 2025):

So ultimately, I think we can get it in, but I suggest people should have the base expectation that WireGuard peers are, "fully accessible" and "manually configured".

+1, I think this is the right way to think about this feature. If the external WireGuard node exposes some additional configuration API, it should in principle be possible to update it dynamically by reading node state from headscale. But WireGuard itself provides no such capability, so we would need a separate implementation for each service providing such an API. I think such integrations should be kept outside the core headscale project.

My next concern is that, could I use my own Tailnet addresses in --exit-node-dns-resolvers? The use case is to resolve domains using my own DNS forwarder (Pi-hole, AdGuard Home etc). Hope this is possible since Tailscale normally teleports DNS to the exit node too.

This is a good question, I am not sure how Tailscale clients handle this. I will try to test this at some point, but to be clear there isn't much headscale can do to alter this behavior. We simply send these IP addresses to the client which handles them as it wishes.
An alternative could be to avoid setting --exit-node-dns-resolvers at all; then it should default to the same DNS you normally use. Again, though, I'm not sure how this interacts with exit nodes outside of your tailnet.


I have been daily driving the current implementation for about a week now and I'm happy with how it performs. However, I have come to the conclusion that the current representation of wg-only nodes is suboptimal. Right now, each node is a struct

type WireGuardOnlyPeer struct {
	ID               uint64                        `gorm:"primary_key"`
	Name             string                        `gorm:"unique;not null"`
	UserID           uint
	User             User                          `gorm:"constraint:OnDelete:CASCADE;"`
	PublicKey        key.NodePublic                `gorm:"serializer:text;not null"`
	KnownNodeIDs     []uint64                      `gorm:"serializer:json;not null"`
	AllowedIPs       []netip.Prefix                `gorm:"serializer:json;not null"`
	Endpoints        []netip.AddrPort              `gorm:"serializer:json;not null"`
	SelfIPv4MasqAddr *netip.Addr                   `gorm:"serializer:text"`
	SelfIPv6MasqAddr *netip.Addr                   `gorm:"serializer:text"`
	IPv4             *netip.Addr                   `gorm:"column:ipv4;serializer:text"`
	IPv6             *netip.Addr                   `gorm:"column:ipv6;serializer:text"`
	ExtraConfig      *WireGuardOnlyPeerExtraConfig `gorm:"serializer:json"`
	CreatedAt        time.Time
	UpdatedAt        time.Time
	DeletedAt        *time.Time
}

It contains a single SelfIPv4MasqAddr and SelfIPv6MasqAddr and an array of KnownNodeIDs. However, most of the time each tailnet node should really be using a different masquerade address. The current workaround is to duplicate the wireguard-only node, each copy with a different singleton set of KnownNodeIDs, which feels suboptimal. A simple solution would be to attach an array of (known_node_id, masq_ipv4, masq_ipv6) tuples to each wireguard-only peer. With this naive implementation, it would be stored as a single database field, making it potentially hard to query.

The essence here is the need to separate the configuration that is inherent to the external wireguard peer (and hence should be the same for all tailnet nodes, e.g. pubkey, endpoints) from the configuration inherent to the connection between the wg-only peer and a specific tailnet node (which will differ for each node, e.g. masquerade addr).
One way to do this would be to make the db table wireguard_only_peers only contain information inherent to a wg-only peer, and have a separate database table wireguard_only_peer_links which describes how specific tailnet peers communicate with wg-only nodes. This somewhat complicates the initial configuration but better represents how we should think about connections to wg-only peers.

Before I start implementing this, I am looking for feedback on which of these approaches I should try, or whether anybody has even better suggestions.

@iridated commented on GitHub (Oct 30, 2025):

> So ultimately, I think we can get it in, but I suggest people should have the base expectation that WireGuard peers are "fully accessible" and "manually configured".

+1, I think this is the right way to think about this feature. If the external WireGuard node exposes some additional configuration API, it should in principle be possible to update it dynamically by reading node state from headscale. But WireGuard itself provides no such capabilities, so we would need a separate implementation for each service providing such an API. I think such integrations should be kept outside the core headscale project.

> My next concern is that, could I use my own Tailnet addresses in --exit-node-dns-resolvers? The use case is to resolve domains using my own DNS forwarder (Pi-hole, AdGuard Home etc). Hope this is possible since Tailscale normally teleports DNS to the exit node too.

This is a good question; I am not sure how Tailscale clients handle this. I will try to test this at some point, but to be clear, there isn't much headscale can do to alter this behavior. We simply send these IP addresses to the client, which handles them as it wishes. An alternative could be to avoid setting `--exit-node-dns-resolvers` at all; then it should default to the same DNS you normally use. Again though, I'm not sure how this interacts with exit nodes outside of your tailnet.

---

I have been daily driving the current implementation for about a week now and I'm happy with how it performs. However, I have come to the conclusion that the current representation of wg-only nodes is suboptimal. Right now, each node is a struct:

```go
type WireGuardOnlyPeer struct {
	ID               uint64                        `gorm:"primary_key"`
	Name             string                        `gorm:"unique;not null"`
	UserID           uint
	User             User                          `gorm:"constraint:OnDelete:CASCADE;"`
	PublicKey        key.NodePublic                `gorm:"serializer:text;not null"`
	KnownNodeIDs     []uint64                      `gorm:"serializer:json;not null"`
	AllowedIPs       []netip.Prefix                `gorm:"serializer:json;not null"`
	Endpoints        []netip.AddrPort              `gorm:"serializer:json;not null"`
	SelfIPv4MasqAddr *netip.Addr                   `gorm:"serializer:text"`
	SelfIPv6MasqAddr *netip.Addr                   `gorm:"serializer:text"`
	IPv4             *netip.Addr                   `gorm:"column:ipv4;serializer:text"`
	IPv6             *netip.Addr                   `gorm:"column:ipv6;serializer:text"`
	ExtraConfig      *WireGuardOnlyPeerExtraConfig `gorm:"serializer:json"`
	CreatedAt        time.Time
	UpdatedAt        time.Time
	DeletedAt        *time.Time
}
```

It contains a single `SelfIPv4MasqAddr` and `SelfIPv6MasqAddr` and an array of `KnownNodeIDs`. However, most of the time each tailnet node should really be using a different masquerade address. The current workaround is to duplicate the wireguard-only node, each copy with a different singleton set of `KnownNodeIDs`, which feels suboptimal. A simple solution would be to attach an array of `(known_node_id, masq_ipv4, masq_ipv6)` tuples to each wireguard-only peer. With this naive implementation, the array would be stored as a single database field, making it potentially hard to query.

The essence here is the need to separate the configuration that is inherent to the external wireguard peer (and hence should be the same for all tailnet nodes, e.g. pubkey, endpoints) from the configuration inherent to the connection between the wg-only peer and a specific tailnet node (which will differ for each node, e.g. masquerade addr).

One way to do this would be to make the db table `wireguard_only_peers` contain only information inherent to a wg-only peer, and have a separate database table `wireguard_only_peer_links` which describes how specific tailnet peers communicate with wg-only nodes. This somewhat complicates the initial configuration but better represents how we should think about connections to wg-only peers. Before I start implementing this, I am looking for some feedback on which of these approaches I should try, or whether anybody has even better suggestions.

@stratself commented on GitHub (Oct 31, 2025):

Hi,

As for exit nodes DNS, I believe it'd be nice to advertise the address as-typed, and then we'll see how Tailscale clients react.


I couldn't comment on the database design, but I wanna ask about a few more things if that's okay:

  • Sometimes, I'd like to add/switch machines using a certain wg-peer, and internally this means altering `--known-nodes`. Could there be a command to modify (update) a current wg-peer too, instead of destroying and recreating it again?

  • In general, tailnet nodekeys can be rotated over time, right? Then would an external API script to send in the new pubkeys be sufficient? Or does the control plane need to intervene in anything?

I'd like to try scripting this setup with Proton, once this feature is complete. If possible, please expose these routes via API as such scripts should be executed client-side.

Thank you


@kradalby commented on GitHub (Nov 11, 2025):

> But WireGuard itself provides no such capabilities so we would need a separate implementation for each service providing such an API. I think such integrations should be kept outside the core headscale project.

I think there is a slippery slope here where we expose some API for you to read the node information, and you automate configuring your WireGuard, but at that point you have just implemented a less capable version of Tailscale.

The real use case, I would think, is services offering WireGuard nodes which you can configure, where you can then write a service to do the configuration (VPN providers, Mullvad). I do not think, however, that we want to integrate such services into Headscale; people should expect to have to run them separately (and that they would be community projects).

> It contains a single SelfIPv4MasqAddr and SelfIPv6MasqAddr and an array of KnownNodeIDs. However, most of the time each tailnet node should really be using a different masquerade address. The way to currently work around this is duplicating the wireguard-only node, each with a different singleton set of KnownNodeIDs - this feels suboptimal. A simple solution here would be to have an array of (known_node_id, masq_ipv4, masq_ipv6)

I have not looked into WireGuardPeers and I do not know too much about it yet, so there will be some questions.
Am I understanding it correctly?

  • We define a `WireGuardPeer`; this is essentially a "shadow" of a node "somewhere on the internet": we have its public IPs, its public key and some routes.

  • Then, for each of our Tailscale nodes, the `WireGuardPeer` needs to know the Tailscale node's public key, and a "sham" IP on which it can be contacted, so as not to bleed into the tailnet IP range?

If something like this is the case, I can imagine something like this:

  • `WireGuardPeers` (`wireguard_peers`): a table defining the external node, how to reach it and talk to it. Common to all nodes. No users or tags involved (should we allow tagging?).

  • `WireGuardPeersRelation` (dunno the name?): a denormalised table linking together the `WireGuardPeer` (ID), the `TailnetNode` (ID) (and therefore the public node key), and the fields for the needed IP addresses.

@iridated commented on GitHub (Nov 12, 2025):

> Sometimes, I'd like to add/switch machines using a certain wg-peer, and internally this means altering `--known-nodes`. Could there be a command to modify (update) a current wg-peer too, instead of destroying and recreating it again?

With the new connection-based model, this is pretty easy now. You just need to use `headscale nodes add-wg-connection` and `headscale nodes remove-wg-connection`.

> In general, tailnet nodekeys can be rotated over time, right? Then would an external API script to send in the new pubkeys be sufficient? Or does the control plane need to intervene in anything?

I don't think the control plane would need to do anything special here; it should be sufficient to inform the external wireguard-only node that it should expect connections from a new public key. I personally don't use key rotation, so I'm not sure how it works in headscale, but I would be surprised if there is any way to get notified when a key is rotated. However, for your use case it might be sufficient to poll `headscale node list --output=json` every few minutes and check manually whether the key has changed.


> Am I understanding it so:
>
> We define a WireGuardPeer, this is essentially a "shadow" of a node "somewhere on the internet", we have their public IPs, their public key and some routes.

That's right

> Then for each of our Tailscale nodes, the WireGuardPeer needs to know the Tailscale nodes public key, and a "sham" IP on which it can be contacted, to not bleed into the Tailnet IP range?

Kind of. We still assign the WireGuard peer a proper unique IP within the tailnet range. I'm not sure if this is necessary, but it looks like that's what tailscale does so the clients might be relying on it somehow.

The IP addresses that we need to configure per node here are for masquerading packets from our tailnet node to the wg-peer. Packets from our node going to the wg-only peer will be rewritten to have the source IP given by `ipvX-masquerade-addr`. In principle, the wireguard-only peer could understand our tailnet addresses and accept incoming traffic with the tailnet IP; there is no problem with "bleeding" into the tailnet range. However, this isn't feasible with a peer we cannot control, so we instead rewrite the source IP to whatever the peer actually expects.

Reference: starred/headscale#554