Mirror of https://github.com/juanfont/headscale.git
Headscale fails to activate clients with postgresql backend #314
Closed · opened 2025-12-29 01:26:37 +01:00 by adam · 25 comments
Labels: bug
Originally created by @ishanjain28 on GitHub (Aug 24, 2022).
Bug description
Tailscale clients authenticate successfully with headscale when headscale is configured to use postgres, but then get stuck in a loop and keep refreshing keys.
More specifically:
In the case of sqlite, registration completes normally.
In the case of postgres, timestamps come back as `0001-01-01 05:53:28+05:53:28`, and the client never sends a payload with `read_only=false`.
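That `0001-01-01` value is Go's zero time rendered with a local timezone offset, which is consistent with the UTC workaround and the timezone fix (#2093) discussed later in the thread. A minimal Go sketch of the suspected failure mode, assuming a zero `time.Time` is written to a timestamp column as local wall-clock text and read back without its offset:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Any non-UTC zone triggers the effect; Asia/Kolkata's year-1
	// offset (+05:53:28) matches the timestamp quoted above.
	loc, err := time.LoadLocation("Asia/Kolkata")
	if err != nil {
		panic(err)
	}

	zero := time.Time{}
	fmt.Println(zero.IsZero()) // true: the canonical "never expires" value

	// Render the zero time as local wall-clock text, the way a driver
	// might serialize it into a timestamp-without-time-zone column.
	rendered := zero.In(loc).Format("2006-01-02 15:04:05")
	fmt.Println(rendered) // 0001-01-01 05:53:28

	// Read it back with the offset discarded: the instant has shifted,
	// so zero-time checks (e.g. "key has no expiry") stop matching.
	roundTripped, _ := time.ParseInLocation("2006-01-02 15:04:05", rendered, time.UTC)
	fmt.Println(roundTripped.IsZero()) // false
}
```

Running the server in UTC makes the local rendering coincide with the zero instant, which is why the workaround mentioned at the end of the thread helps.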
To Reproduce
Try to register any tailscale client.
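For reference, switching the backend to postgres at the time of the report used config keys along these lines (values are placeholders; key names follow the 0.16-era example config and have since changed):

```yaml
# config.yaml — database section (0.16-era key names; placeholder values)
db_type: postgres
db_host: localhost
db_port: 5432
db_name: headscale
db_user: headscale
db_pass: secret
```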
Context info
@kradalby commented on GitHub (Sep 8, 2022):
Note for when this is tackled,
@QZAiXH commented on GitHub (Apr 3, 2023):
I seem to have encountered this problem: the client cannot join headscale with an authkey, and tailscale status keeps showing Logged out.
@QZAiXH commented on GitHub (Apr 3, 2023):
Can you guys tell me how to solve this problem? Maybe I can change the code accordingly. @kradalby @juanfont
@Yxnt commented on GitHub (May 29, 2023):
@QZAiXH
Maybe I can solve this problem, but I need more time to test this case.
#765 has already fixed the situation where the command line gets stuck when using `tailscale up` and `tailscale login`, but there are still more cases that need to be tested.
@github-actions[bot] commented on GitHub (Nov 26, 2023):
This issue is stale because it has been open for 180 days with no activity.
@github-actions[bot] commented on GitHub (Dec 10, 2023):
This issue was closed because it has been inactive for 14 days since being marked as stale.
@weikinhuang commented on GitHub (Dec 13, 2023):
I'm also encountering this issue; client status:
That's despite being connected and having an IP. I'm using a reusable key (for a router device).
@TnZzZHlp commented on GitHub (Dec 14, 2023):
I also encountered this problem.
Headscale version: v0.23.0-alpha2
I use nginx as a reverse proxy; I don't know whether the problem is caused by nginx.
@weikinhuang commented on GitHub (Dec 14, 2023):
Like the reporter, I find it works fine with sqlite. I was trying to move to postgres for HA, but encountered this issue and went back to sqlite.
@2kvasnikov commented on GitHub (Dec 21, 2023):
Same issue with an OpenWrt router as a tailscale client. I can register the client with sqlite.
@alexfornuto commented on GitHub (Apr 26, 2024):
Hate to leave a "bump" comment, but FYI this issue occurs on Postgres 15 as well.
@m1cr0man commented on GitHub (May 8, 2024):
Can confirm this happens on Postgres 16 also. I have collected logs from headscale whilst trying to join with a reusable preauth key (log level debug). The gist also contains the client status JSON output and the server node info JSON output after registration. I redacted/obfuscated some data.
Clientside interaction looks like so:
Can confirm that switching to SQLite resolves the issue. Perhaps it is a collation issue, wherein some comparison is returning different results depending on the engine? Happy to do more testing.
@sjansen1 commented on GitHub (Jun 10, 2024):
I always wondered why authkey was not working on my Headscale installation until I found this issue. For now, I authenticate with OpenID, move these nodes to a fake user that cannot log in, and set the expiration date in the database to a high value. For me, that's the only way to avoid expiration on server/subnet gateways.
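For anyone considering the same stopgap, a sketch of that database edit follows; the table and column names (`nodes`, `expiry`) are assumptions that vary between headscale versions, so verify them against your schema first:

```sql
-- Assumed schema: a nodes table with an expiry timestamp column.
-- Push a node's expiry far into the future so the key never lapses.
UPDATE nodes
SET expiry = '9999-12-31 00:00:00+00'
WHERE id = 42; -- placeholder node id
```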
@Cubea01 commented on GitHub (Jul 16, 2024):
This is still an issue, two years later.
@kradalby commented on GitHub (Aug 30, 2024):
As mentioned in #2087, it takes a lot more effort than initially anticipated to support multiple database engines, and Postgres is not really a priority, as the benefit to scaling something like headscale is marginal.
That does not mean we will never attempt to resolve it, it just means that we have a bunch of other things that we consider more important to bring headscale forward.
Often people create issues or bring up that you need Postgres to scale headscale beyond X nodes. While it is currently true that you might be able to have 10-20% more nodes with the current code, the main bottlenecks are in the headscale code and not dependent on the database.
I imagine that it will improve over time as we free up more time to work on making things more efficient, and if it comes to optimising around a database, we will do it for SQLite.
While we will work hard not to break postgresql or regress it, I would consider our support for it "best effort", and if you're looking to run Headscale in a more serious manner, I would choose SQLite.
@alexfornuto commented on GitHub (Aug 30, 2024):
@kradalby Everything you said makes a lot of sense, and I do not intend to argue any of your points, only to add another perspective.
My interest in using a db other than sqlite is not for "scaling" as much as fault tolerance. Consider a setup where HS is running in a single VM. If that VM is destroyed, the tailnet will suffer while it's recreated. With a version-controlled policy file and good monitoring coupled with a CI pipeline, this issue can be resolved in minutes.
Compare that scenario to a setup with two VMs running headscale, a primary and a backup, both connected to a managed database with its own redundancy. If the primary is destroyed, the IP address can be swapped to the backup VM, as just one of several options to swap traffic in less time.
@kradalby commented on GitHub (Aug 30, 2024):
@alexfornuto That's fair; I understand that people have different solutions for recovery and HA, and solutions they are more familiar with.
That said, for all the things mentioned, SQLite has excellent backup/streaming/cold copy solutions like litestream, which I use with my Headscale(s).
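For reference, a minimal litestream configuration of the kind described here might look like the following; the database path and replica URL are placeholders, not values from this thread:

```yaml
# litestream.yml — continuously replicate the headscale SQLite file to S3.
dbs:
  - path: /var/lib/headscale/db.sqlite # adjust to your headscale db path
    replicas:
      - url: s3://my-bucket/headscale # any litestream-supported replica target
```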
We are not removing it, but we are likely not investing in it. I think a sensible way to look at the "investing" or optimisation part is this: if we find a change that would benefit SQLite's performance, we will implement it and sacrifice postgres performance rather than maintain two solutions.
As a side note, we have also started to see an increase in special cases for migrating both databases, which is also eating into our dev time.
@m1cr0man commented on GitHub (Aug 30, 2024):
I agree with the sentiments and statements here but would like to highlight that using postgresql is not an option at all due to this issue. I personally tried to use psql because I already had it set up, but using sqlite instead was fine. The problem is that the documentation states that postgresql should work, and so I did waste a good amount of time trying to figure out why it did not work before giving up on it. At the very least, it may be worth amending the documentation to state that psql support is best effort and not as well tested as sqlite.
@kradalby commented on GitHub (Aug 30, 2024):
I updated the config with some notes in https://github.com/juanfont/headscale/pull/2091, but I agree, that is fair.
I will try to assess this issue next week and evaluate whether the work will result in a fix or in documentation of known limitations. I know people out there are running Postgres, so it is strange that not everyone runs into this; maybe they don't use preauthkeys.
@mpoindexter commented on GitHub (Aug 30, 2024):
I'll chime in as another postgres user: I understand that there are great options to backup/manage SQLite, but for anyone running on the major cloud providers (AWS, GCP, Azure, etc) they all have a managed DB solution that speaks postgresql protocol, so it's dramatically easier to set up a database with proper backup, etc. with anything that can use that. Deploying headscale as a stateless container, with external state in a DB is a really easy way to manage it, and it would be a shame to lose that. I understand that it's an increased burden of maintenance, and I'm happy to help with testing and fixes for postgres if it helps alleviate the problem a little.
@alexfornuto commented on GitHub (Aug 30, 2024):
This is my situation exactly. I'm already using managed databases for other self-hosted services. Litestream does look like a viable solution for sqlite, but it's also an additional burden in terms of having to learn, deploy, and maintain another system just for use by Headscale. Ultimately this disqualifies Headscale as a viable alternative for use in the network I administer professionally.
I completely understand that this is not a priority for the HS dev team, and it's not my intention to argue that point. I just want to make sure my POV is properly articulated.
@mpoindexter commented on GitHub (Aug 31, 2024):
PR #2093 should fix this issue. @kradalby feel free to mention me on any other postgresql related issues if you'd like me to take a look.
@alexfornuto commented on GitHub (Sep 3, 2024):
Many thanks @mpoindexter!
EDIT: P.S. Is there any chance of the fix being backported to the current stable release?
@mpoindexter commented on GitHub (Sep 3, 2024):
@alexfornuto I doubt it makes sense to backport, but I think just ensuring your headscale process runs in the UTC timezone should work as a workaround.
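Since the Go runtime honors the TZ environment variable, one way to apply that workaround, assuming headscale runs as a systemd service named headscale.service, is a drop-in override:

```ini
# /etc/systemd/system/headscale.service.d/utc.conf
[Service]
Environment=TZ=UTC
```

After `systemctl daemon-reload && systemctl restart headscale`, the process renders timestamps in UTC, sidestepping the offset mismatch shown in the sketch near the top of the thread.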
@alexfornuto commented on GitHub (Sep 3, 2024):
@mpoindexter I will likely do that if we decide to proceed with headscale in that environment.
FWIW, it would make sense to me to backport it since a production deployment requiring a more reliable db backend than sqlite would also likely require a stable release (my environment does), and v0.23 is still in beta.