[Bug] Adding a tag via CLI removes advertised tag #1030

Closed
opened 2025-12-29 02:27:49 +01:00 by adam · 4 comments
Owner

Originally created by @Murgeye on GitHub (May 19, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

Since v0.26, adding tags via the CLI seems to invalidate advertised tags, removing them completely from the tag list. This only happens after a new node advertising the same tag is added.

This can be reproduced by adding a node advertising a tag to headscale, then adding a forced tag to that same node. If you then register another node advertising the same tag, the first node loses its advertised tag (see Steps To Reproduce for details).

This effectively breaks all my ACLs, since several nodes have lost their advertised tags with the update. This might be due to another related issue, which, however, I cannot easily reproduce.

If I can do anything more to help debug this, let me know!

Expected Behavior

CLI-added tags and advertised tags should not influence each other.

Steps To Reproduce

  1. Configure two tags, e.g., tag:vm and tag:test
  2. On machine 1 run: sudo tailscale login --login-server=https://<...> --authkey=<..> --advertise-tags=tag:vm
  3. Run headscale nodes ls --tags on the headscale server
    Result, as expected:
ID | Hostname  | Name      | MachineKey | NodeKey | User   | IP addresses                  | Ephemeral | Last seen           | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
4  | test-vm-1 | test-vm-1 | [A5yrv]    | [XuMNd] | vms    | 100.64.0.2, fd7a:115c:a1e0::2 | false     | 2025-05-19 13:17:35 | N/A        | online    | no      |            |             | tag:vm

  4. On the server run docker compose exec headscale headscale nodes tag -i 4 -t "tag:test"
    Result, still as expected:
ID | Hostname  | Name      | MachineKey | NodeKey | User   | IP addresses                  | Ephemeral | Last seen           | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
4  | test-vm-1 | test-vm-1 | [A5yrv]    | [XuMNd] | vms    | 100.64.0.2, fd7a:115c:a1e0::2 | false     | 2025-05-19 13:17:35 | N/A        | online    | no      | tag:test   |             | tag:vm
  5. On the second machine run: sudo tailscale login --login-server=https://<...> --authkey=<..> --advertise-tags=tag:vm
    Result:
ID | Hostname  | Name      | MachineKey | NodeKey | User   | IP addresses                  | Ephemeral | Last seen           | Expiration | Connected | Expired | ForcedTags | InvalidTags | ValidTags
4  | test-vm-1 | test-vm-1 | [A5yrv]    | [XuMNd] | vms    | 100.64.0.2, fd7a:115c:a1e0::2 | false     | 2025-05-19 13:17:35 | N/A        | online    | no      | tag:test   |             |
5  | test-vm-2 | test-vm-2 | [fQQNE]    | [Blqu5] | vms    | 100.64.0.3, fd7a:115c:a1e0::3 | false     | 2025-05-19 13:19:20 | N/A        | online    | no      |            |             | tag:vm

See that tag:vm is missing from test-vm-1.
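
As a reference point for the expected behaviour, here is a minimal Go sketch of the invariant the repro relies on: forced tags and advertised (valid) tags are separate sets, and the CLI tag command should only ever touch the forced set. The `Node` struct and `AddForcedTag` below are illustrative stand-ins, not headscale's actual types.

```go
package main

import "fmt"

// Node is a minimal stand-in for a headscale node record; it only
// models the two tag columns from the `nodes ls --tags` output above.
type Node struct {
	Hostname   string
	ForcedTags []string // set via `headscale nodes tag`
	ValidTags  []string // advertised via --advertise-tags and approved by tagOwners
}

// AddForcedTag models the CLI operation. Per the expected behaviour in
// this report, it appends to ForcedTags and leaves ValidTags untouched.
func (n *Node) AddForcedTag(tag string) {
	for _, t := range n.ForcedTags {
		if t == tag {
			return // already present, nothing to do
		}
	}
	n.ForcedTags = append(n.ForcedTags, tag)
}

func main() {
	n := Node{Hostname: "test-vm-1", ValidTags: []string{"tag:vm"}}
	n.AddForcedTag("tag:test")
	fmt.Println(n.ForcedTags, n.ValidTags) // [tag:test] [tag:vm]
}
```

The bug reported here is equivalent to `ValidTags` coming out empty after step 5, even though only `ForcedTags` was ever written to.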

Environment

- OS: Ubuntu 24.04
- Headscale version: 0.26
- Tailscale version: 1.82.5

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Debug information

Policy:

{
  "groups": {
    "group:admins": ["fabian@"],
    "group:vms": ["vms@"]
  },
  "tagOwners": {
    "tag:vm": ["group:admins", "group:vms"],
    "tag:test": ["fabian@"],
    "tag:exit-node": ["group:admins"]
  },
  // Servers with tag:exit-node can advertise exit nodes without further approval
  "autoApprovers": {
    "exitNode": ["tag:exit-node"]
  },
  "acls": [
    // Allow admins full access to VMs
    {
      "action": "accept",
      "src": ["group:admins"],
      "dst": ["tag:vm:*"]
    }
  ]
}
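
To illustrate why losing the tag breaks this policy: the single accept rule's dst is tag:vm:*, so a node is reachable by group:admins only while it still carries tag:vm in one of its tag sets. A rough Go sketch of that membership check (hasTag is a hypothetical helper, not headscale's policy engine):

```go
package main

import "fmt"

// hasTag reports whether a node carries the given tag in either its
// forced or its valid (advertised) tag set. An ACL dst like "tag:vm:*"
// only covers nodes for which this is true. Illustrative only.
func hasTag(forced, valid []string, tag string) bool {
	all := append(append([]string{}, forced...), valid...)
	for _, t := range all {
		if t == tag {
			return true
		}
	}
	return false
}

func main() {
	// Before the second node registers: tag:vm is in test-vm-1's ValidTags.
	fmt.Println(hasTag([]string{"tag:test"}, []string{"tag:vm"}, "tag:vm")) // true
	// After the bug strips it, the "tag:vm:*" dst no longer matches test-vm-1.
	fmt.Println(hasTag([]string{"tag:test"}, nil, "tag:vm")) // false
}
```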

[headscale.log](https://github.com/user-attachments/files/20297516/headscale.log)

adam added the bug, no-stale-bot, tags, well described ❤️ labels 2025-12-29 02:27:49 +01:00
adam closed this issue 2025-12-29 02:27:49 +01:00

@pamart commented on GitHub (May 19, 2025):

I have the same issue with a non-Docker install: a fresh deployment on a dedicated Ubuntu 24.04 server, using the latest amd64 deb file.


@eyJhb commented on GitHub (Jun 6, 2025):

I looked into this and wrote a simple integration test to reproduce the error.
It is mostly for visual inspection and doesn't actually assert the expected state yet, but I ran out of time.

func TestNodesTags(t *testing.T) {
	zerolog.SetGlobalLevel(zerolog.TraceLevel)
	IntegrationSkip(t)
	t.Parallel()

	policy := &policyv2.Policy{
		TagOwners: policyv2.TagOwners{
			policyv2.Tag("tag:vm"): policyv2.Owners{usernameOwner("user1@test.no")},
		},
	}

	scenario, err := NewScenario(ScenarioSpec{})
	assertNoErr(t, err)
	defer scenario.ShutdownAssertNoPanics(t)

	err = scenario.CreateHeadscaleEnv(
		[]tsic.Option{},
		hsic.WithACLPolicy(policy),
	)
	assertNoErr(t, err)

	headscale, err := scenario.Headscale()
	assertNoErr(t, err)

	// create user
	u, err := scenario.CreateUser("user1")
	assertNoErr(t, err)

	// create preauthkey for spinning up nodes
	key, err := scenario.CreatePreAuthKey(u.GetId(), true, true)
	if err != nil {
		t.Fatalf("failed to create pre-auth key for user %s: %s", u.Name, err)
	}

	// create node with advertised tag `tag:vm`
	err = scenario.CreateTailscaleNodesInUser(u.Name,
		"all",
		1,
		tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
		tsic.WithTags([]string{"tag:vm"}))
	assertNoErr(t, err)

	err = scenario.RunTailscaleUp(u.Name, headscale.GetEndpoint(), key.GetKey())
	if err != nil {
		t.Fatalf("failed to run tailscale up for user %s: %s", u.Name, err)
	}

	// add a forced tag to node
	out, err := headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"tag",
			"-i", "1",
			"-t", "tag:lol",
			"--output", "json",
		},
	)
	assert.Nil(t, err)
	t.Logf("Output: %+v\n", out)

	// get nodes output
	out, err = headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"list",
			"--tags",
		},
	)
	fmt.Println(out)

	// create 2nd node, with advertised tag `tag:vm`
	err = scenario.CreateTailscaleNodesInUser(u.Name,
		"all",
		1,
		tsic.WithNetwork(scenario.networks[scenario.testDefaultNetwork]),
		tsic.WithTags([]string{"tag:vm"}))
	assertNoErr(t, err)

	err = scenario.RunTailscaleUp(u.Name, headscale.GetEndpoint(), key.GetKey())
	if err != nil {
		t.Fatalf("failed to run tailscale up for user %s: %s", u.Name, err)
	}

	// get nodes output
	out, err = headscale.Execute(
		[]string{
			"headscale",
			"nodes",
			"list",
			"--tags",
		},
	)
	fmt.Println(out)

	assertNoErr(t, errors.New("invalid"))
}

@github-actions[bot] commented on GitHub (Oct 17, 2025):

This issue is stale because it has been open for 90 days with no activity.


@kradalby commented on GitHub (Dec 17, 2025):

This behaviour should be consistent in v0.28.

Reference: starred/headscale#1030