[Bug] Docker - High CPU Usage - RATELIMIT lines in log #976

Closed
opened 2025-12-29 02:26:59 +01:00 by adam · 4 comments
Owner

Originally created by @plittlefield on GitHub (Mar 17, 2025).

Is this a support request?

  • This is not a support request

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

I thought I would post my issue on Tailscale just in case it is related.

I have been running Headscale for a long while with no issues, but when I recently updated the Tailscale Docker container running on the same machine to the latest image, CPU usage spiked and I now have to stop the Tailscale container.

https://github.com/tailscale/tailscale/issues/15339

I hope you don't mind me pointing it out!

Regards,

Paully

Expected Behavior

The Tailscale docker container on the same machine as Headscale runs fine.

Steps To Reproduce

With this Docker Compose file, run a Tailscale client on the same machine as Headscale:

services:
  tailscale:
    container_name: tailscale
    volumes:
      - /var/lib:/var/lib
      - /dev/net/tun:/dev/net/tun
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    privileged: true
    environment:
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_USERSPACE=false
      - TS_EXTRA_ARGS=--login-server=https://headscale.xxxxxxxxxxx --advertise-exit-node --advertise-routes=10.2.0.0/24
      - TS_HOSTNAME=xxxxxxxxxxxxxxxx
    image: tailscale/tailscale
    restart: unless-stopped
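
The Headscale side of the reproduction is not included above; a minimal companion service, sketched here under the assumption that the config lives in ./config on the host and that the image tag matches the Environment section below, could look roughly like:

  headscale:
    container_name: headscale
    image: headscale/headscale:0.23.0
    command: serve
    volumes:
      - ./config:/etc/headscale      # assumed host path for config.yaml
      - ./data:/var/lib/headscale    # assumed host path for state/database
    ports:
      - "8080:8080"                  # headscale's default listen port
    restart: unless-stopped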

Environment

- OS: Ubuntu Server 20.04
- Headscale version: headscale/headscale:0.23.0
- Tailscale version: tailscale/tailscale-1.80.3-tbd762b827

Runtime environment

  • Headscale is behind a (reverse) proxy
  • Headscale runs in a container

Debug information

The last few lines of the logs, which spew out every second:

2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc000b04f00, chan: 0xc000284a10 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc000b05080, chan: 0xc0002b05b0 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc0001ee600, chan: 0xc000264770 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc000b05080, chan: 0xc0002b05b0 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc000b05200, chan: 0xc0002b1960 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc0001ee600, chan: 0xc000264770 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc000b05200, chan: 0xc0002b1960 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc0001eea80, chan: 0xc0002aae00 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc000b05380, chan: 0xc0003944d0 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc0001eea80, chan: 0xc0002aae00 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has connected, mapSession: 0xc0001ee600, chan: 0xc0002aba40 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
2025-03-17T15:06:20Z INF home/runner/work/headscale/headscale/hscontrol/poll.go:705 > node has disconnected, mapSession: 0xc000b05380, chan: 0xc0003944d0 node=lightsail node.id=18 omitPeers=false readOnly=false stream=true
adam added the bug label 2025-12-29 02:26:59 +01:00
adam closed this issue 2025-12-29 02:26:59 +01:00
Author
Owner

@kradalby commented on GitHub (Mar 18, 2025):

> I thought I would post my issue on Tailscale just in case it is related.

Please do not do this; it is _almost_ always a Headscale issue, and we really try not to add load and noise to their issue tracker, to preserve our current, relatively good relationship.

> The last few lines of the logs which spout out every second

I would guess this indicates a bad reverse proxy and the connection keeps failing, or the long poll keeps dying.

Since this is the Tailscale client on the Docker machine, do you connect it directly and not over HTTPS? Because that will likely be the reason it keeps failing, due to how the Tailscale client will force HTTPS on reconnects to prevent rebinding attacks. This wasn't introduced that long ago, so it sounds plausible.

This is one of the reasons for the FAQ entry "Can I use headscale and tailscale on the same machine?" (https://headscale.net/stable/about/faq/#can-i-use-headscale-and-tailscale-on-the-same-machine): not because it should not work, but because there are a lot of known (and unknown) pitfalls.
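
For what it's worth, the documented nginx reverse-proxy setup keeps these long-lived connections alive by passing upgrade headers and disabling buffering; a trimmed sketch (hostname and upstream port are placeholders for this particular setup) looks roughly like:

map $http_upgrade $connection_upgrade {
    default      keep-alive;
    'websocket'  upgrade;
    ''           close;
}

server {
    listen 443 ssl http2;
    server_name headscale.example.com;        # placeholder hostname
    # ssl_certificate / ssl_certificate_key directives omitted for brevity

    location / {
        proxy_pass http://127.0.0.1:8080;     # assumes headscale listens on 8080
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $server_name;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect http:// https://;
        proxy_buffering off;
    }
}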

Author
Owner

@plittlefield commented on GitHub (Mar 18, 2025):

As it turns out, it was a Headscale issue :)

I had pinned the Headscale version in the Docker Compose file, and when I upgraded my Tailscale container it started failing against the older, pinned Headscale code.

So I changed my Headscale image tag to 'latest' and the issue has been resolved.

For good measure, I deleted the node using Headscale Admin, created an auth key and rejoined the Tailscale container on the same machine.

Job done.

So, it might be worth posting this after all!

Thanks for keeping up the great work :)
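
A rough command-line sketch of that recovery, assuming the containers are named headscale and tailscale as in the compose file above, with the user name and node ID as placeholders (exact flags vary between headscale versions):

# point the compose file at the latest image and recreate the container
#   image: headscale/headscale:latest
docker compose pull headscale && docker compose up -d headscale

# remove the stale node registration
docker exec headscale headscale nodes list
docker exec headscale headscale nodes delete --identifier <node-id>

# create a pre-auth key and re-join the Tailscale container with it
docker exec headscale headscale preauthkeys create --user <user> --expiration 1h
docker exec tailscale tailscale up \
  --login-server=https://headscale.xxxxxxxxxxx --authkey=<generated-key>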

Author
Owner

@nblock commented on GitHub (Mar 18, 2025):

> As it turns out, it was a Headscale issue :)

Please close the issue in the tailscale repository.

Author
Owner

@plittlefield commented on GitHub (Mar 18, 2025):

Done :)


Reference: starred/headscale#976