Namespace borders not applied by default #351

Closed
opened 2025-12-29 01:27:29 +01:00 by adam · 7 comments
Owner

Originally created by @GrigoriyMikhalkin on GitHub (Oct 19, 2022).

As stated in documentation:

When using ACL's the Namespace borders are no longer applied.

This implies that the default behavior is to disallow communication between namespaces. Digging into the code, we find the comment:

// If ACLs rules are defined, filter visible host list with the ACLs
// else use the classic namespace scope

This was actually true previously, when peers were filtered by namespace.

The question is: should we assume the documentation is outdated (and fix it, providing an example of ACLs that achieve the same behavior), or should the stated behavior be reimplemented?

There's definitely demand for namespace borders (https://github.com/metal-stack/metal-roles/issues/105 and https://github.com/juanfont/headscale/issues/841).
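For reference, an ACL policy that reproduces the old namespace borders could be sketched roughly as follows. This is a minimal illustration, not a tested configuration; the namespace names `alice` and `bob` are hypothetical, and each namespace would need its own self-referencing rule:

```json
{
  "acls": [
    // Each namespace may only reach its own hosts (any port),
    // restoring the documented isolation between namespaces.
    { "action": "accept", "src": ["alice"], "dst": ["alice:*"] },
    { "action": "accept", "src": ["bob"],   "dst": ["bob:*"] }
  ]
}
```

Because ACL rules are allow-lists, omitting any cross-namespace rule is what enforces the border; adding more namespaces means adding one such rule per namespace.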

adam added the bug label 2025-12-29 01:27:29 +01:00
adam closed this issue 2025-12-29 01:27:29 +01:00
Author
Owner

@doebi commented on GitHub (Oct 27, 2022):

Coming across this issue as I created a second namespace just now, only to find out that from an ACL perspective it made no difference at all. I explicitly created the second namespace for a group of devices that have nothing to do with the first one.

I would love to see blocking between namespaces by default, as this is what most users would expect.

Another note: I was surprised to see that the new namespace uses the same IP pool. Is there a reasoning behind this that I am not seeing yet?
My thinking was that namespaces are completely distinct networks (tailnets).

Author
Owner

@razza-guhl commented on GitHub (Oct 28, 2022):

I notice the same behavior: devices in different namespaces can communicate with each other by default, i.e. without ACLs.

I am unsure if it is a bug, because this behavior also seems logical: without ACLs there are no traffic restrictions.

On the other hand, this behavior is misleading because it is documented differently. Users might get unwanted results.

Author
Owner

@madjam002 commented on GitHub (Nov 2, 2022):

Another note: I was surprised to see that the new namespace uses the same IP pool. Is there a reasoning behind this that I am not seeing yet? My thinking was that namespaces are completely distinct networks (tailnets).

Tailnets in vanilla Tailscale are not distinct networks; everyone actually shares the 100.64.0.0/10 space. This behaviour is mirrored in Headscale. In Tailscale, however, they isolate peers based on the tailnet, even though the address space is shared across all users.

If you're using ACL rules, I think it makes sense for Headscale not to get in the way; instead, you can define the boundaries for your use case in ACL rules.

Author
Owner

@darookee commented on GitHub (Apr 15, 2024):

Sorry to resurrect this issue, but I'm not sure whether this is marked as completed and working as intended now (and my assumptions are incorrect), or whether it's a bug that still exists.

The documentation states `When using ACL's the User borders are no longer applied. All machines whichever the User have the ability to communicate with other hosts as long as the ACL's permits this exchange.`. As I'm not using ACLs, I expect that devices/nodes/machines registered to another user/namespace cannot 'see' each other.

For example:

  • User1, Nodes Host1 and Host2
  • User2, Nodes Host3 and Host4

=> I would assume that Host1 and Host2 can communicate, while Host1 and Host3 cannot.

Is my interpretation of the documentation incorrect?

I'm currently using 0.22.3 (which was released on 2023-05-12, two days after this issue was closed, so I would assume that if it was a bug, it would be fixed in this version).

Author
Owner

@Hobby-Student commented on GitHub (May 19, 2024):

I'm currently testing 0.22.3 (Podman rootless) and expected the namespaces/users to be isolated. My tests gave the following results:

  • Using no ACL file --> all nodes can access each other across all namespaces/users
  • Using an ACL file with { "action": "accept", "src": ["namespace"], "dst": ["namespace:*"] } (replace namespace with the actual name) for every namespace/user --> expected behaviour of isolated namespaces/users

I started to tag all nodes to prevent accidental access between the namespaces/users. For now, this seems to be the best way for my use case.
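The tag-based approach mentioned above could be sketched as an ACL policy like the following. This is only an illustration under assumed names: the tags `tag:team-a`/`tag:team-b` and the users `alice`/`bob` are hypothetical, and nodes would need to be registered with those tags:

```json
{
  // Who is allowed to assign each tag to their nodes.
  "tagOwners": {
    "tag:team-a": ["alice"],
    "tag:team-b": ["bob"]
  },
  "acls": [
    // Traffic is only allowed within a tag, not across tags,
    // so mis-registering a node into the wrong user still
    // cannot grant it access to the other group's hosts.
    { "action": "accept", "src": ["tag:team-a"], "dst": ["tag:team-a:*"] },
    { "action": "accept", "src": ["tag:team-b"], "dst": ["tag:team-b:*"] }
  ]
}
```

Scoping rules to tags rather than users makes the isolation follow the node's declared role instead of the account it happens to be registered under.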

Off-topic:
Thank you very much for this great project. I started just a few days ago and it's amazing!

Author
Owner

@ohdearaugustin commented on GitHub (May 20, 2024):

Maybe you could test with a new alpha release whether this is still an issue there, as we won't fix anything in 0.23.3 anymore.

I can't say whether it should have been fixed in 0.22.3.

Author
Owner

@Hobby-Student commented on GitHub (May 20, 2024):

Maybe you could test with a new alpha release whether this is still an issue there, as we won't fix anything in 0.23.3 anymore.

I can't say whether it should have been fixed in 0.22.3.

I did try 0.23.0-alpha9 and ran (unknowingly) straight into the Postgres bug. While troubleshooting, I switched back to 0.22.3 (with SQLite in the end). It feels more reliable to stick with that than to use an alpha version. Using an ACL file is no problem for me, because I need one to reach my goal anyway. Just wanted to share my findings.

Reference: starred/headscale#351