4via6 subnet router not working #575

Closed
opened 2025-12-29 02:20:41 +01:00 by adam · 5 comments

Originally created by @amitsingh21 on GitHub (Nov 11, 2023).

We are not able to get 4via6 subnet routing to work with headscale. We have two subnet ranges that conflict (both 10.0.0.0/16), and we are trying to use https://tailscale.com/kb/1201/4via6-subnets/. As the details below show, we can ping the Tailscale subnet router that advertises the IPv6 subnet route, but we do not see the MagicDNS entries needed to reach the IPv4 addresses behind that network:
headscale route list output
/ # headscale route list
ID | Machine | Prefix | Advertised | Enabled | Primary
19 | rpi-1 | fd7a:115c:a1e0:b1a:0:1:a00:0/112 | true | true | true

/ # headscale version
v0.22.3
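
For readers unfamiliar with the prefix format: a 4via6 route packs a site ID and the IPv4 subnet into the fd7a:115c:a1e0:b1a::/64 translation range, so the advertised fd7a:115c:a1e0:b1a:0:1:a00:0/112 above decodes as site 1 plus 10.0.0.0/16 (0a00:0000 is 10.0.0.0 in hex, and 112 = 96 + the /16). On recent Tailscale clients such a prefix can be generated with tailscale debug via; a minimal sketch, using site ID 1 as implied by the advertised prefix:

/ # tailscale debug via 1 10.0.0.0/16
fd7a:115c:a1e0:b1a:0:1:a00:0/112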
acl config:
{
  "action": "accept",
  "proto": "",
  "src": [
    "group:rio-internal-headscale-project-ck03ver40t6sj8n9m3dg",
  ],
  "dst": [
    "tag:rio-internal-headscale-project-ck03ver40t6sj8n9m3dg:*",
    "tag:rapyuta:*",
    "rpi1:*",
    "10.0.0.0/16:*"
  ]
},

"hosts": {
  "rpi1": "fd7a:115c:a1e0:b1a:0:1:a00:0/112",
},

Machine A, acting as the subnet router:

root@rpi-1:~# docker exec -it 150946f0a192 sh
/ # tailscale set --advertise-routes=fd7a:115c:a1e0:b1a:0:1:a00:0/112
/ # tailscale version
1.46.1
tailscale commit: b73e4ea37af1c6b7bbf83471ef5c691319a8a0e9
go version: go1.21rc3

/ # ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:80:D5:A1:8D
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

eth0 Link encap:Ethernet HWaddr DC:A6:32:FD:42:30
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:177943 errors:0 dropped:0 overruns:0 frame:0
TX packets:177943 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:17218113 (16.4 MiB) TX bytes:17218113 (16.4 MiB)

tailscale0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
inet addr:100.64.0.51 P-t-P:100.64.0.51 Mask:255.255.255.255
inet6 addr: fe80::87ce:bdad:9f41:2599/64 Scope:Link
UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1280 Metric:1
RX packets:487 errors:0 dropped:0 overruns:0 frame:0
TX packets:342 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:34632 (33.8 KiB) TX bytes:43968 (42.9 KiB)

wlan0 Link encap:Ethernet HWaddr DC:A6:32:FD:42:32
inet addr:10.0.113.133 Bcast:10.0.255.255 Mask:255.255.0.0
inet6 addr: fe80::b9de:3706:d44c:6bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:166284 errors:0 dropped:0 overruns:0 frame:0
TX packets:117956 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:23341143 (22.2 MiB) TX bytes:97468762 (92.9 MiB)

Report:
* UDP: true
* IPv4: yes, 49.205.138.58:8108
* IPv6: no, but OS has support
* MappingVariesByDestIP: false
* HairPinning: false
* PortMapping: UPnP, NAT-PMP
* CaptivePortal: false
* Nearest DERP: Bangalore
* DERP latency:
- blr: 14.9ms (Bangalore)
- sin: 56.8ms (Singapore)
- hkg: 80.9ms (Hong Kong)
- tok: 119.4ms (Tokyo)
- syd: 135.5ms (Sydney)
- par: 170.3ms (Paris)
- fra: 173.3ms (Frankfurt)
- ams: 179.6ms (Amsterdam)
- mad: 180.7ms (Madrid)
- lhr: 180.8ms (London)
- waw: 191.9ms (Warsaw)
- lax: 214ms (Los Angeles)
- sea: 219.7ms (Seattle)
- sfo: 222.9ms (San Francisco)
- ord: 234.3ms (Chicago)
- dfw: 241.7ms (Dallas)
- den: 249.4ms (Denver)
- mia: 251.9ms (Miami)
- dbi: 253.9ms (Dubai)
- tor: 254.2ms (Toronto)
- jnb: 269.4ms (Johannesburg)
- hnl: 281.2ms (Honolulu)
- nyc: 282.2ms (New York City)
- nai: 316.3ms (Nairobi)
- sao: 324ms (São Paulo)
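
The route list above already shows the route as Enabled, but two things are worth double-checking in a setup like this (a hedged checklist, not part of the original report): the route must be enabled on the headscale side, and every client that should reach 10.0.0.0/16 through the 4via6 prefix must accept advertised routes. Roughly, noting that the exact headscale subcommand and flag spellings vary between versions:

/ # headscale routes enable -r 19    # route ID 19 from the route list above
/ # tailscale set --accept-routes    # on each client that should use the route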

adam added the stale and bug labels 2025-12-29 02:20:41 +01:00
adam closed this issue 2025-12-29 02:20:41 +01:00

@n4zim commented on GitHub (Nov 14, 2023):

Several months ago we had exactly the same need, following the same Tailscale documentation for this feature. After digging a bit, I found this: https://github.com/juanfont/headscale/blob/main/docs/proposals/002-better-routing.md

So I'm not sure this is a bug; it may simply be a feature that is not yet implemented.

Our need was to connect several Kubernetes clusters, some of which share the same CIDR, which creates conflicts. We are waiting for the routing rework before doing more on this.
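
For what it's worth, the 4via6 scheme is designed for exactly this overlap: each cluster gets its own site ID, so identical CIDRs map to distinct IPv6 prefixes. A sketch, with site IDs 1 and 2 chosen arbitrarily here:

/ # tailscale debug via 1 10.0.0.0/16
fd7a:115c:a1e0:b1a:0:1:a00:0/112
/ # tailscale debug via 2 10.0.0.0/16
fd7a:115c:a1e0:b1a:0:2:a00:0/112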


@github-actions[bot] commented on GitHub (Feb 13, 2024):

This issue is stale because it has been open for 90 days with no activity.


@github-actions[bot] commented on GitHub (Feb 21, 2024):

This issue was closed because it has been inactive for 14 days since being marked as stale.


@madejackson commented on GitHub (Sep 2, 2024):

For anyone stumbling upon this, 4via6 works as expected, at least with v0.23.0-beta3.

(Originally I wrote that the only thing not yet working was the automatic MagicDNS entry: https://tailscale.com/kb/1201/4via6-subnets?q=4via#magicdns-name-for-the-ipv4-subnet-devices. Edit: this does actually work; I confirmed it on my own setup. See the comment below from @moserpjm.)

I suppose this makes sense, as 4via6 is not really a function headscale has to be aware of; it is handled completely in the Tailscale client itself. Headscale only controls the subnet routing of the corresponding IPv6 subnets.

proof: [screenshot: https://github.com/user-attachments/assets/005b4f73-d52c-4cd6-8c73-887811dfd183]
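
One way to verify the client-side translation end to end is to address a host behind the router by its 4via6-mapped IPv6 form. A sketch, deriving the address from the router's wlan0 address 10.0.113.133 in the original report (0a00:7185 in hex, under site 1):

/ # tailscale ping fd7a:115c:a1e0:b1a:0:1:a00:7185
/ # ping fd7a:115c:a1e0:b1a:0:1:a00:7185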


@moserpjm commented on GitHub (Sep 13, 2024):

MagicDNS works fine as well. After a little digging in the client's source, I now understand how it works: the domain name must end with tailscale.net or ts.net. So you just have to add something like foo.ts.net to the search domains in your headscale config.
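
In headscale 0.23's config layout that looks roughly like the excerpt below (a sketch; foo.ts.net is a placeholder, and older releases use a dns_config block instead). The 4via6 MagicDNS names then take the <IPv4-with-dashes>-via-<site-id> form, e.g. 10-0-113-133-via-1.foo.ts.net for the example above:

dns:
  magic_dns: true
  search_domains:
    - foo.ts.net   # must end in tailscale.net or ts.net for the client's 4via6 resolver to engage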

Proof: [screenshot: https://github.com/user-attachments/assets/c32e0493-0be3-4e5a-ab13-6674be2951cf]
