[Bug] SSH permission denied after DB updated from wal v24.0beta1 #885

Closed
opened 2025-12-29 02:25:12 +01:00 by adam · 28 comments
Owner

Originally created by @masterwishx on GitHub (Dec 16, 2024).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

After updating to v24.0beta1 yesterday, SSH worked fine.

But today, after the database file changed from WAL mode, I got Permission denied (tailscale).

No changes were made to the ACL file, and I can't see any changes in the database file.

Expected Behavior

SSH continues to work.

Steps To Reproduce

Update Headscale to v24.0beta1.

Environment

- OS: Ubuntu (Headscale in Docker)
- Headscale version: v24.0beta1
- Tailscale version: 1.78.1

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Anything else?

From `tailscale debug netmap`:

```
"SSHPolicy": {
		"rules": [
			{
				"principals": [
					{
						"userLogin": "masterwishx"
					}
				],
				"sshUsers": {
					"abc": "=",
					"root": "=",
					"ubuntu": "="
				},
				"action": {
					"accept": true,
					"allowAgentForwarding": true,
					"allowLocalPortForwarding": true
				}
			}
		]
	},
```
adam added the bug label 2025-12-29 02:25:12 +01:00
adam closed this issue 2025-12-29 02:25:12 +01:00

@masterwishx commented on GitHub (Dec 16, 2024):

The only thing is that I connected to the node by OIDC yesterday. Maybe the name changed here from `masterwishx`?

```
"UserProfiles": {
		"1": {
			"ID": 1,
			"LoginName": "masterwishx@mymail.com",
			"DisplayName": "DaRK AnGeL",       < ---------  here  ?
			"ProfilePicURL": "",
			"Roles": []
		},
```

In the ACL I have:

```
"groups": {
    "group:admin": ["masterwishx"],
...
```

@masterwishx commented on GitHub (Dec 16, 2024):

It seems I found the issue: I deleted `strip_email_domain: true` when updating, so the user changed to `masterwishx@mymail.com` instead of `masterwishx`. Trying to fix it ...
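The failure mode described above can be sketched with a toy matcher (a hypothetical simplification, not headscale's actual code): if ACL principals are compared verbatim against the OIDC login, the existing `group:admin` entry stops matching the moment the login changes from the bare username to the full email.

```python
# Toy sketch (hypothetical, NOT headscale's implementation):
# assume ACL group members are matched against the login string verbatim.
groups = {"group:admin": ["masterwishx"]}

def is_member(login: str, group: str) -> bool:
    """Exact-string membership check against the ACL group list."""
    return login in groups.get(group, [])

# Before the upgrade, the OIDC login was the bare username:
print(is_member("masterwishx", "group:admin"))             # True
# After dropping strip_email_domain, the login became the email,
# so the same ACL entry silently stops matching:
print(is_member("masterwishx@mymail.com", "group:admin"))  # False
```

Under that assumption, any rule whose `src` resolves through `group:admin` (including the SSH rule) would stop applying, which matches the observed Permission denied.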


@masterwishx commented on GitHub (Dec 16, 2024):

Even after changing the config:

```
strip_email_domain: true
map_legacy_users: true
```

and migrating again from the old v23.0 database to v24.0beta1, SSH is still broken.

```
"UserProfiles": {
		"1": {
			"ID": 1,
			"LoginName": "masterwishx@mymail.com",    < ---------  here  ?
			"DisplayName": "DaRK AnGeL",
			"ProfilePicURL": "",
			"Roles": []
		},
```

Users:

ID | Name | Username | Email | Created
1 | DaRK AnGeL | masterwishx | masterwishx@mymail.com | 2024-01-07 06:56:49


@kradalby commented on GitHub (Dec 16, 2024):

Just to understand: have you not been able to make it work, or did you make it work after the migration found the email correctly?


@masterwishx commented on GitHub (Dec 16, 2024):

> Just to understand: have you not been able to make it work, or did you make it work after the migration found the email correctly?

No, it's still not working.
It seems the email was migrated as the login instead of `preferred_username`.
I'm using Authentik.


@kradalby commented on GitHub (Dec 16, 2024):

> ID | Name | Username | Email | Created
> 1 | DaRK AnGeL | masterwishx | masterwishx@mymail.com | 2024-01-07 06:56:49

This looks like it has migrated correctly to me, so it might be that something is not able to resolve the SSH configuration back to a machine.

Do you have an ACL to share too? I will have to investigate.


@masterwishx commented on GitHub (Dec 16, 2024):

I will post it now, but you can see the login changed to the email:
`"LoginName": "masterwishx@mymail.com"`

It was `"masterwishx"` in v23.0, i.e. the same name used for admin in the ACL.


@masterwishx commented on GitHub (Dec 16, 2024):

So although the name of the user is `masterwishx`, the login in the debug output is the email.


@kradalby commented on GitHub (Dec 16, 2024):

Don't look at the `UserProfiles` in the status; it isn't relevant in this case. If you use OIDC, it should be the email.

Can you share your ACLs and try putting your email in place of your username in the ACL?


@masterwishx commented on GitHub (Dec 16, 2024):

I rolled back to v23.0, but I think this will work; I can check it later. I wanted the username as the login, though ...

```
{
  "groups": {
    "group:admin": ["masterwishx"],
    "group:family": ["user1", "user2", "user3"]
  },

  "tagOwners": {
    "tag:cloud-server": ["group:admin"],
    "tag:home-pc": ["group:admin", "group:family"],
    "tag:home-pc-vm": ["group:admin"],
    "tag:home-server": ["group:admin"],
    "tag:home-server-vm": ["group:admin"],
    "tag:home-mobile": ["group:admin", "group:family"],
    "tag:home-mobile-vm": ["group:admin", "group:family"]
  },

  "acls": [
    {
      // admin have access to all servers
      "action": "accept",
      "src": ["group:admin"],
      "dst": ["*:*"]
    },

    {
      // family have access to all home pcs,Speedtest Tracker
      "action": "accept",
      "src": ["group:family"],
      "dst": ["tag:home-pc:*", "tag:home-server:9443", "tag:home-server:8180"]
    }

    // We still have to allow internal users communications since nothing guarantees that each user have
    // their own users.
    //{ "action": "accept", "src": ["admin"], "dst": ["admin:*"] },
    //{ "action": "accept", "src": ["family"], "dst": ["family:*"] }
  ],

  "ssh": [
    {
      "action": "accept",
      //"src": ["tag:cloud-server", "tag:home-server", "tag:home-pc"],
      "src": ["group:admin"],
      "dst": ["tag:cloud-server", "tag:home-server"],
      "users": ["root", "ubuntu", "abc"]
    }
  ]
}
```
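One way to test kradalby's suggestion of using the email in place of the username (assuming principals really are matched against the post-migration login string) would be a temporary change to the group definition, e.g.:

```
"groups": {
  "group:admin": ["masterwishx@mymail.com"],   // email instead of username, for testing
  "group:family": ["user1", "user2", "user3"]
},
```

If SSH starts working with this change, that would confirm the login-to-principal mismatch as the cause.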


@kradalby commented on GitHub (Dec 16, 2024):

We will likely transition to using email over username in the ACL, but it should not have broken in this release, so I will investigate in a bit. It will be useful to know if email works, though.


@masterwishx commented on GitHub (Dec 16, 2024):

> If you use OIDC, it should be the email.

Do you mean it changed the login to the email, and this is by design?


@masterwishx commented on GitHub (Dec 16, 2024):

> It will be useful to know if email does work tho.

OK, I will test it later today and post here ...


@masterwishx commented on GitHub (Dec 16, 2024):

I understood that if I have in the config:

```
strip_email_domain: true
map_legacy_users: true
```

it should migrate with the username, not the email.


@kradalby commented on GitHub (Dec 16, 2024):

```
{
  "groups": {
    "group:admin": ["masterwishx"], // <--- Test if it works with emails here, instead of usernames.
    "group:family": ["user1", "user2", "user3"]
  },
}
```

> it should migrate with username not email

Everything is being migrated to email for OIDC; the username will also be filled if it is sent to us from the OIDC provider (Authentik in your case).


@masterwishx commented on GitHub (Dec 16, 2024):

So when I try the migration again, should I use:

```
strip_email_domain: true
map_legacy_users: true
```

@kradalby commented on GitHub (Dec 16, 2024):

Yes, `map_legacy_users: true`; `strip_email_domain` should be the same as you had it before the migration and should not be changed.
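As a sketch of that advice (assuming these keys sit under the `oidc` section as in the headscale example config, and that `true` was the pre-migration value in this setup):

```yaml
oidc:
  # Keep whatever value you ran with on v0.23; changing it alters
  # how logins are derived and can break the legacy-user mapping.
  strip_email_domain: true
  # Enable the one-time mapping of pre-v0.24 users during migration.
  map_legacy_users: true
```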


@kradalby commented on GitHub (Dec 17, 2024):

I've confirmed that a setup I have using Google OIDC works with the email (Google does not populate the username).
Other than that, I have not yet had time to investigate systems that do have usernames.


@kradalby commented on GitHub (Dec 17, 2024):

@masterwishx could you include the full output of `tailscale debug netmap` of:

  • A node that can SSH to another node
  • A node that can be SSHed to

So each side of the SSH essentially.


@masterwishx commented on GitHub (Dec 17, 2024):

> @masterwishx could you include the full output of `tailscale debug netmap` of:
>
> - A node that can SSH to another node
> - A node that can be SSHed to
>
> So each side of the SSH essentially.

I wanted to test the migration again, but somehow I can't update the container; I got a timeout, and:

From `tailscale status`:

```
# Health check:
#     - adding [-i tailscale0 -j MARK --set-mark 0x40000/0xff0000] in v6/filter/ts-forward: running [/usr/sbin/ip6tables -t filter -A ts-forward -i tailscale0 -j MARK --set-mark 0x40000/0xff0000 --wait]: exit status 2: ip6tables v1.8.4 (legacy): unknown option "--set-mark"
```

From `tailscale update`:

```
tailscale update
fetching latest tailscale version: Get "https://pkgs.tailscale.com/stable/?mode=json&os=linux": dial tcp: lookup pkgs.tailscale.com on 100.100.100.100:53: read udp 100.64.0.4:59745->100.100.100.100:53: i/o timeout
```


@masterwishx commented on GitHub (Dec 17, 2024):

> @masterwishx could you include the full output of `tailscale debug netmap` of:
>
> - A node that can SSH to another node
> - A node that can be SSHed to
>
> So each side of the SSH essentially.

"DNS": {
		"Resolvers": [
			{
				"Addr": "100.64.0.4"
			}
		],
		"Routes": {
			"0.e.1.a.c.5.1.1.a.7.d.f.ip6.arpa": [],
			"100.100.in-addr.arpa": [],
			"101.100.in-addr.arpa": [],
			"102.100.in-addr.arpa": [],
			"103.100.in-addr.arpa": [],
			"104.100.in-addr.arpa": [],
			"105.100.in-addr.arpa": [],
			"106.100.in-addr.arpa": [],
			"107.100.in-addr.arpa": [],
			"108.100.in-addr.arpa": [],
			"109.100.in-addr.arpa": [],
			"110.100.in-addr.arpa": [],
			"111.100.in-addr.arpa": [],
			"112.100.in-addr.arpa": [],
			"113.100.in-addr.arpa": [],
			"114.100.in-addr.arpa": [],
			"115.100.in-addr.arpa": [],
			"116.100.in-addr.arpa": [],
			"117.100.in-addr.arpa": [],
			"118.100.in-addr.arpa": [],
			"119.100.in-addr.arpa": [],
			"120.100.in-addr.arpa": [],
			"121.100.in-addr.arpa": [],
			"122.100.in-addr.arpa": [],
			"123.100.in-addr.arpa": [],
			"124.100.in-addr.arpa": [],
			"125.100.in-addr.arpa": [],
			"126.100.in-addr.arpa": [],
			"127.100.in-addr.arpa": [],
			"64.100.in-addr.arpa": [],
			"65.100.in-addr.arpa": [],
			"66.100.in-addr.arpa": [],
			"67.100.in-addr.arpa": [],
			"68.100.in-addr.arpa": [],
			"69.100.in-addr.arpa": [],
			"70.100.in-addr.arpa": [],
			"71.100.in-addr.arpa": [],
			"72.100.in-addr.arpa": [],
			"73.100.in-addr.arpa": [],
			"74.100.in-addr.arpa": [],
			"75.100.in-addr.arpa": [],
			"76.100.in-addr.arpa": [],
			"77.100.in-addr.arpa": [],
			"78.100.in-addr.arpa": [],
			"79.100.in-addr.arpa": [],
			"80.100.in-addr.arpa": [],
			"81.100.in-addr.arpa": [],
			"82.100.in-addr.arpa": [],
			"83.100.in-addr.arpa": [],
			"84.100.in-addr.arpa": [],
			"85.100.in-addr.arpa": [],
			"86.100.in-addr.arpa": [],
			"87.100.in-addr.arpa": [],
			"88.100.in-addr.arpa": [],
			"89.100.in-addr.arpa": [],
			"90.100.in-addr.arpa": [],
			"91.100.in-addr.arpa": [],
			"92.100.in-addr.arpa": [],
			"93.100.in-addr.arpa": [],
			"94.100.in-addr.arpa": [],
			"95.100.in-addr.arpa": [],
			"96.100.in-addr.arpa": [],
			"97.100.in-addr.arpa": [],
			"98.100.in-addr.arpa": [],
			"99.100.in-addr.arpa": []
		},
		"Domains": [
			"hs.mysite.com"
		],
		"Proxied": true
	},
	"PacketFilter": [
		{
			"IPProto": [
				6,
				17,
				1,
				58
			],
			"Srcs": [
				"100.64.0.1/32",
				"100.64.0.2/31",
				"100.64.0.4/30",
				"100.64.0.9/32",
				"100.64.0.10/32",
				"100.64.0.12/32",
				"fd7a:115c:a1e0::1/128",
				"fd7a:115c:a1e0::2/127",
				"fd7a:115c:a1e0::4/126",
				"fd7a:115c:a1e0::9/128",
				"fd7a:115c:a1e0::a/128",
				"fd7a:115c:a1e0::c/128"
			],
			"SrcCaps": null,
			"Dsts": [
				{
					"Net": "0.0.0.0/0",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"Net": "::/0",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				}
			],
			"Caps": []
		},
		{
			"IPProto": [
				6,
				17,
				1,
				58
			],
			"Srcs": [
				"100.64.0.8/32",
				"100.64.0.11/32",
				"100.64.0.13/32",
				"fd7a:115c:a1e0::8/128",
				"fd7a:115c:a1e0::b/128",
				"fd7a:115c:a1e0::d/128"
			],
			"SrcCaps": null,
			"Dsts": [
				{
					"Net": "100.64.0.2/32",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"Net": "100.64.0.13/32",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"Net": "fd7a:115c:a1e0::2/128",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"Net": "fd7a:115c:a1e0::d/128",
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"Net": "100.64.0.7/32",
					"Ports": {
						"First": 9443,
						"Last": 9443
					}
				},
				{
					"Net": "fd7a:115c:a1e0::7/128",
					"Ports": {
						"First": 9443,
						"Last": 9443
					}
				},
				{
					"Net": "100.64.0.7/32",
					"Ports": {
						"First": 8180,
						"Last": 8180
					}
				},
				{
					"Net": "fd7a:115c:a1e0::7/128",
					"Ports": {
						"First": 8180,
						"Last": 8180
					}
				}
			],
			"Caps": []
		}
	],
	"PacketFilterRules": [
		{
			"SrcIPs": [
				"100.64.0.1/32",
				"100.64.0.2/31",
				"100.64.0.4/30",
				"100.64.0.9/32",
				"100.64.0.10/32",
				"100.64.0.12/32",
				"fd7a:115c:a1e0::1/128",
				"fd7a:115c:a1e0::2/127",
				"fd7a:115c:a1e0::4/126",
				"fd7a:115c:a1e0::9/128",
				"fd7a:115c:a1e0::a/128",
				"fd7a:115c:a1e0::c/128"
			],
			"DstPorts": [
				{
					"IP": "0.0.0.0/0",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"IP": "::/0",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				}
			]
		},
		{
			"SrcIPs": [
				"100.64.0.8/32",
				"100.64.0.11/32",
				"100.64.0.13/32",
				"fd7a:115c:a1e0::8/128",
				"fd7a:115c:a1e0::b/128",
				"fd7a:115c:a1e0::d/128"
			],
			"DstPorts": [
				{
					"IP": "100.64.0.2/32",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"IP": "100.64.0.13/32",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"IP": "fd7a:115c:a1e0::2/128",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"IP": "fd7a:115c:a1e0::d/128",
					"Bits": null,
					"Ports": {
						"First": 0,
						"Last": 65535
					}
				},
				{
					"IP": "100.64.0.7/32",
					"Bits": null,
					"Ports": {
						"First": 9443,
						"Last": 9443
					}
				},
				{
					"IP": "fd7a:115c:a1e0::7/128",
					"Bits": null,
					"Ports": {
						"First": 9443,
						"Last": 9443
					}
				},
				{
					"IP": "100.64.0.7/32",
					"Bits": null,
					"Ports": {
						"First": 8180,
						"Last": 8180
					}
				},
				{
					"IP": "fd7a:115c:a1e0::7/128",
					"Bits": null,
					"Ports": {
						"First": 8180,
						"Last": 8180
					}
				}
			]
		}
	],

@masterwishx commented on GitHub (Dec 18, 2024):

> I've confirmed that a setup I have using Google OIDC works with the email (Google does not populate the username). Other than that I have not yet had time to investigate systems that have usernames.

If you mean this, we can select it in Authentik:

![image](https://github.com/user-attachments/assets/ce4cf57d-71e5-48d1-826c-1b0d59613d9b)

I currently have this issue:

```
# Health check:
#     - adding [-i tailscale0 -j MARK --set-mark 0x40000/0xff0000] in v6/filter/ts-forward: running [/usr/sbin/ip6tables -t filter -A ts-forward -i tailscale0 -j MARK --set-mark 0x40000/0xff0000 --wait]: exit status 2: ip6tables v1.8.4 (legacy): unknown option "--set-mark"
```

It seems related to https://github.com/tailscale/tailscale/issues/13863; I will try to fix that, then check the migration again ...


@kradalby commented on GitHub (Dec 18, 2024):

> So each side of the SSH essentially.

Can you please send the two full ones, one from each side, not a truncated one?


@masterwishx commented on GitHub (Dec 18, 2024):

> So each side of the SSH essentially.
>
> Can you please send the two full ones, one from each side, not a truncated one

I'm now on v23.0; what I sent was one I saved while on v24.0.
So do you need the two files from v24.0 or from v23.0?


@kradalby commented on GitHub (Dec 18, 2024):

No, I am looking for two debug outputs.

So `tailscale debug netmap`, from two different machines.


@kradalby commented on GitHub (Dec 18, 2024):

I think this should be resolved in #2309. If the tests pass, I'll get that in and do another beta.


@masterwishx commented on GitHub (Dec 18, 2024):

> No, I am looking for two debug outputs.
>
> So `tailscale debug netmap`, from two different machines.

Yes, I got it, but is it OK from the v23.0 version that I'm on now?


@masterwishx commented on GitHub (Dec 18, 2024):

> I think this should be resolved in #2309. If the tests pass, I'll get that in and do another beta.

Ohh, it seems you found the problem (missing tags for names ...). Sorry I wasn't able to help because of the kernel bug I got yesterday that I wrote about above ... my headscale/tailscale isn't working well, so I can't migrate until the fix :(

Reference: starred/headscale#885