Compare commits: `v0.24.0-be`...`update_fla` (722 commits)
722 commits, `b720568cf3` through `9313e5b058`; the per-commit author, message, and date columns of the listing were not captured.
Deleted file, 15 lines (a CodeRabbit review configuration, judging by the schema URL; the file name was not captured and YAML indentation is reconstructed):

```diff
@@ -1,15 +0,0 @@
-# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
-language: "en-GB"
-early_access: false
-reviews:
-  profile: "chill"
-  request_changes_workflow: false
-  high_level_summary: true
-  poem: true
-  review_status: true
-  collapse_walkthrough: false
-  auto_review:
-    enabled: true
-    drafts: true
-chat:
-  auto_reply: true
```
Changes to the ignore file (the entries and the `LICENSE` hunk context point to `.gitignore`):

```diff
@@ -17,3 +17,7 @@ LICENSE
 .vscode

 *.sock
+
+node_modules/
+package-lock.json
+package.json
```
`.editorconfig` — new file, 16 lines:

```diff
@@ -0,0 +1,16 @@
+root = true
+
+[*]
+charset = utf-8
+end_of_line = lf
+indent_size = 2
+indent_style = space
+insert_final_newline = true
+trim_trailing_whitespace = true
+max_line_length = 120
+
+[*.go]
+indent_style = tab
+
+[Makefile]
+indent_style = tab
```
`.github/ISSUE_TEMPLATE/bug_report.yaml` (vendored) — 43 changed lines (YAML indentation reconstructed):

```diff
@@ -6,14 +6,16 @@ body:
   - type: checkboxes
     attributes:
       label: Is this a support request?
-      description: This issue tracker is for bugs and feature requests only. If you need help, please use ask in our Discord community
+      description: This issue tracker is for bugs and feature requests only. If you need
+        help, please use ask in our Discord community
       options:
         - label: This is not a support request
           required: true
   - type: checkboxes
     attributes:
       label: Is there an existing issue for this?
-      description: Please search to see if an issue already exists for the bug you encountered.
+      description: Please search to see if an issue already exists for the bug you
+        encountered.
       options:
         - label: I have searched the existing issues
           required: true
@@ -44,10 +46,19 @@ body:
     attributes:
       label: Environment
       description: |
+        Please provide information about your environment.
+        If you are using a container, always provide the headscale version and not only the Docker image version.
+        Please do not put "latest".
+
+        Describe your "headscale network". Is there a lot of nodes, are the nodes all interconnected, are some subnet routers?
+
+        If you are experiencing a problem during an upgrade, please provide the versions of the old and new versions of Headscale and Tailscale.
+
       examples:
-        - **OS**: Ubuntu 20.04
-        - **Headscale version**: 0.22.3
-        - **Tailscale version**: 1.64.0
+        - **OS**: Ubuntu 24.04
+        - **Headscale version**: 0.24.3
+        - **Tailscale version**: 1.80.0
+        - **Number of nodes**: 20
       value: |
         - OS:
         - Headscale version:
@@ -65,19 +76,31 @@ body:
       required: false
   - type: textarea
     attributes:
-      label: Anything else?
+      label: Debug information
       description: |
-        Links? References? Anything that will give us more context about the issue you are encountering!
+        Please have a look at our [Debugging and troubleshooting
+        guide](https://headscale.net/development/ref/debug/) to learn about
+        common debugging techniques.
+
+        Links? References? Anything that will give us more context about the issue you are encountering.
+        If **any** of these are omitted we will likely close your issue, do **not** ignore them.
+
         - Client netmap dump (see below)
-        - ACL configuration
+        - Policy configuration
         - Headscale configuration
+        - Headscale log (with `trace` enabled)
+
         Dump the netmap of tailscale clients:
         `tailscale debug netmap > DESCRIPTIVE_NAME.json`
-        Please provide information describing the netmap, which client, which headscale version etc.
+
+        Dump the status of tailscale clients:
+        `tailscale status --json > DESCRIPTIVE_NAME.json`
+
+        Get the logs of a Tailscale client that is not working as expected.
+        `tailscale debug daemon-logs`
+
         Tip: You can attach images or log files by clicking this area to highlight it and then dragging files in.
+        **Ensure** you use formatting for files you attach.
+        Do **not** paste in long files.
     validations:
-      required: false
+      required: true
```
`.github/ISSUE_TEMPLATE/config.yml` (vendored) — 8 changed lines:

```diff
@@ -3,9 +3,9 @@ blank_issues_enabled: false

 # Contact links
 contact_links:
-  - name: "headscale usage documentation"
-    url: "https://github.com/juanfont/headscale/blob/main/docs"
-    about: "Find documentation about how to configure and run headscale."
   - name: "headscale Discord community"
-    url: "https://discord.gg/xGj2TuqyxY"
+    url: "https://discord.gg/c84AZQhmpx"
     about: "Please ask and answer questions about usage of headscale here."
+  - name: "headscale usage documentation"
+    url: "https://headscale.net/"
+    about: "Find documentation about how to configure and run headscale."
```
`.github/label-response/needs-more-info.md` (vendored) — new file, 80 lines:

````diff
@@ -0,0 +1,80 @@
+Thank you for taking the time to report this issue.
+
+To help us investigate and resolve this, we need more information. Please provide the following:
+
+> [!TIP]
+> Most issues turn out to be configuration errors rather than bugs. We encourage you to discuss your problem in our [Discord community](https://discord.gg/c84AZQhmpx) **before** opening an issue. The community can often help identify misconfigurations quickly, saving everyone time.
+
+## Required Information
+
+### Environment Details
+
+- **Headscale version**: (run `headscale version`)
+- **Tailscale client version**: (run `tailscale version`)
+- **Operating System**: (e.g., Ubuntu 24.04, macOS 14, Windows 11)
+- **Deployment method**: (binary, Docker, Kubernetes, etc.)
+- **Reverse proxy**: (if applicable: nginx, Traefik, Caddy, etc. - include configuration)
+
+### Debug Information
+
+Please follow our [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/) and provide:
+
+1. **Client netmap dump** (from affected Tailscale client):
+
+   ```bash
+   tailscale debug netmap > netmap.json
+   ```
+
+2. **Client status dump** (from affected Tailscale client):
+
+   ```bash
+   tailscale status --json > status.json
+   ```
+
+3. **Tailscale client logs** (if experiencing client issues):
+
+   ```bash
+   tailscale debug daemon-logs
+   ```
+
+> [!IMPORTANT]
+> We need logs from **multiple nodes** to understand the full picture:
+>
+> - The node(s) initiating connections
+> - The node(s) being connected to
+>
+> Without logs from both sides, we cannot diagnose connectivity issues.
+
+4. **Headscale server logs** with `log.level: trace` enabled
+
+5. **Headscale configuration** (with sensitive values redacted - see rules below)
+
+6. **ACL/Policy configuration** (if using ACLs)
+
+7. **Proxy/Docker configuration** (if applicable - nginx.conf, docker-compose.yml, Traefik config, etc.)
+
+## Formatting Requirements
+
+- **Attach long files** - Do not paste large logs or configurations inline. Use GitHub file attachments or GitHub Gists.
+- **Use proper Markdown** - Format code blocks, logs, and configurations with appropriate syntax highlighting.
+- **Structure your response** - Use the headings above to organize your information clearly.
+
+## Redaction Rules
+
+> [!CAUTION]
+> **Replace, do not remove.** Removing information makes debugging impossible.
+
+When redacting sensitive information:
+
+- ✅ **Replace consistently** - If you change `alice@company.com` to `user1@example.com`, use `user1@example.com` everywhere (logs, config, policy, etc.)
+- ✅ **Use meaningful placeholders** - `user1@example.com`, `bob@example.com`, `my-secret-key` are acceptable
+- ❌ **Never remove information** - Gaps in data prevent us from correlating events across logs
+- ❌ **Never redact IP addresses** - We need the actual IPs to trace network paths and identify issues
+
+**If redaction rules are not followed, we will be unable to debug the issue and will have to close it.**
+
+---
+
+**Note:** This issue will be automatically closed in 3 days if no additional information is provided. Once you reply with the requested information, the `needs-more-info` label will be removed automatically.
+
+If you need help gathering this information, please visit our [Discord community](https://discord.gg/c84AZQhmpx).
````
`.github/label-response/support-request.md` (vendored) — new file, 15 lines:

```diff
@@ -0,0 +1,15 @@
+Thank you for reaching out.
+
+This issue tracker is used for **bug reports and feature requests** only. Your question appears to be a support or configuration question rather than a bug report.
+
+For help with setup, configuration, or general questions, please visit our [Discord community](https://discord.gg/c84AZQhmpx) where the community and maintainers can assist you in real-time.
+
+**Before posting in Discord, please check:**
+
+- [Documentation](https://headscale.net/)
+- [FAQ](https://headscale.net/stable/faq/)
+- [Debugging and Troubleshooting Guide](https://headscale.net/stable/ref/debug/)
+
+If after troubleshooting you determine this is actually a bug, please open a new issue with the required debug information from the troubleshooting guide.
+
+This issue has been automatically closed.
```
`.github/workflows/build.yml` (vendored) — 39 changed lines (YAML indentation reconstructed):

```diff
@@ -5,8 +5,6 @@ on:
     branches:
       - main
   pull_request:
-    branches:
-      - main

 concurrency:
   group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
@@ -17,12 +15,12 @@ jobs:
     runs-on: ubuntu-latest
     permissions: write-all
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
         with:
           filters: |
             files:
@@ -31,10 +29,14 @@ jobs:
               - '**/*.go'
               - 'integration_test/'
               - 'config-example.yaml'
-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
         if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+            '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
       - name: Run nix build
         id: build
@@ -52,7 +54,7 @@ jobs:
           exit $BUILD_STATUS

       - name: Nix gosum diverging
-        uses: actions/github-script@v6
+        uses: actions/github-script@ed597411d8f924073f98dfc5c65a23a2325f34cd # v8.0.0
         if: failure() && steps.build.outcome == 'failure'
         with:
           github-token: ${{secrets.GITHUB_TOKEN}}
@@ -64,7 +66,7 @@ jobs:
             body: 'Nix build failed with wrong gosum, please update "vendorSha256" (${{ steps.build.outputs.OLD_HASH }}) for the "headscale" package in flake.nix with the new SHA: ${{ steps.build.outputs.NEW_HASH }}'
           })

-      - uses: actions/upload-artifact@v4
+      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         if: steps.changed-files.outputs.files == 'true'
         with:
           name: headscale-linux
@@ -74,22 +76,25 @@ jobs:
     strategy:
       matrix:
         env:
-          - "GOARCH=arm GOOS=linux GOARM=5"
-          - "GOARCH=arm GOOS=linux GOARM=6"
-          - "GOARCH=arm GOOS=linux GOARM=7"
           - "GOARCH=arm64 GOOS=linux"
-          - "GOARCH=386 GOOS=linux"
           - "GOARCH=amd64 GOOS=linux"
           - "GOARCH=arm64 GOOS=darwin"
           - "GOARCH=amd64 GOOS=darwin"
     steps:
-      - uses: actions/checkout@v4
-      - uses: DeterminateSystems/nix-installer-action@main
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix',
+            '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
       - name: Run go cross compile
-        run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale" ./cmd/headscale
-      - uses: actions/upload-artifact@v4
+        env:
+          CGO_ENABLED: 0
+        run: env ${{ matrix.env }} nix develop --command -- go build -o "headscale"
+          ./cmd/headscale
+      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
         with:
           name: "headscale-${{ matrix.env }}"
           path: "headscale"
```
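The `Get changed files` step in this workflow gates expensive Nix work on whether relevant paths changed in the push or pull request. The same idea can be exercised outside CI with plain git in a throwaway repository. This is a hedged sketch of the gating pattern, not the action's actual implementation; the repository contents and filter patterns below are invented for illustration:

```shell
# Minimal sketch of path-based change gating, in a scratch repo.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# Simulate a commit that touches a Go file.
echo 'package main' > main.go
git add main.go
git -c user.email=ci@example.com -c user.name=ci commit -q -m "add go file"

# Rough equivalent of the 'files' filter: did the last commit touch Go or Nix files?
if git diff --name-only HEAD~1 HEAD | grep -Eq '\.(go|nix)$'; then
  echo "files=true" > "$tmp/result"
else
  echo "files=false" > "$tmp/result"
fi
cat "$tmp/result"
```

In the workflow the resulting flag (`steps.changed-files.outputs.files`) is what the later `if:` conditions test before installing Nix and building.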
`.github/workflows/check-generated.yml` (vendored) — new file, 55 lines:

```diff
@@ -0,0 +1,55 @@
+name: Check Generated Files
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    branches:
+      - main
+
+concurrency:
+  group: ${{ github.workflow }}-$${{ github.head_ref || github.run_id }}
+  cancel-in-progress: true
+
+jobs:
+  check-generated:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        with:
+          fetch-depth: 2
+      - name: Get changed files
+        id: changed-files
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
+        with:
+          filters: |
+            files:
+              - '*.nix'
+              - 'go.*'
+              - '**/*.go'
+              - '**/*.proto'
+              - 'buf.gen.yaml'
+              - 'tools/**'
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+        if: steps.changed-files.outputs.files == 'true'
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
+      - name: Run make generate
+        if: steps.changed-files.outputs.files == 'true'
+        run: nix develop --command -- make generate
+
+      - name: Check for uncommitted changes
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          if ! git diff --exit-code; then
+            echo "❌ Generated files are not up to date!"
+            echo "Please run 'make generate' and commit the changes."
+            exit 1
+          else
+            echo "✅ All generated files are up to date."
+          fi
```
.github/workflows/check-tests.yaml (14 lines changed)
@@ -10,12 +10,12 @@ jobs:
   check-tests:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
         with:
           filters: |
             files:
@@ -24,15 +24,19 @@ jobs:
               - '**/*.go'
               - 'integration_test/'
               - 'config-example.yaml'
-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
         if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
       - name: Generate and check integration tests
         if: steps.changed-files.outputs.files == 'true'
         run: |
-          nix develop --command bash -c "cd cmd/gh-action-integration-generator/ && go generate"
+          nix develop --command bash -c "cd .github/workflows && go generate"
           git diff --exit-code .github/workflows/test-integration.yaml

       - name: Show missing tests
.github/workflows/container-main.yml (new file, +112)
@@ -0,0 +1,112 @@
+---
+name: Build (main)
+
+on:
+  push:
+    branches:
+      - main
+    paths:
+      - "*.nix"
+      - "go.*"
+      - "**/*.go"
+      - ".github/workflows/container-main.yml"
+  workflow_dispatch:
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.sha }}
+  cancel-in-progress: true
+
+jobs:
+  container:
+    if: github.repository == 'juanfont/headscale'
+    runs-on: ubuntu-latest
+    permissions:
+      packages: write
+      contents: read
+    steps:
+      - name: Checkout
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+
+      - name: Login to DockerHub
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+
+      - name: Login to GHCR
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
+        with:
+          registry: ghcr.io
+          username: ${{ github.repository_owner }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
+      - name: Set commit timestamp
+        run: echo "SOURCE_DATE_EPOCH=$(git log -1 --format=%ct)" >> $GITHUB_ENV
+
+      - name: Build and push to GHCR
+        env:
+          KO_DOCKER_REPO: ghcr.io/juanfont/headscale
+          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
+          CGO_ENABLED: "0"
+        run: |
+          nix develop --command -- ko build \
+            --bare \
+            --platform=linux/amd64,linux/arm64 \
+            --tags=main-${GITHUB_SHA::7} \
+            ./cmd/headscale
+
+      - name: Push to Docker Hub
+        env:
+          KO_DOCKER_REPO: headscale/headscale
+          KO_DEFAULTBASEIMAGE: gcr.io/distroless/base-debian13
+          CGO_ENABLED: "0"
+        run: |
+          nix develop --command -- ko build \
+            --bare \
+            --platform=linux/amd64,linux/arm64 \
+            --tags=main-${GITHUB_SHA::7} \
+            ./cmd/headscale
+
+  binaries:
+    if: github.repository == 'juanfont/headscale'
+    runs-on: ubuntu-latest
+    strategy:
+      matrix:
+        include:
+          - goos: linux
+            goarch: amd64
+          - goos: linux
+            goarch: arm64
+          - goos: darwin
+            goarch: amd64
+          - goos: darwin
+            goarch: arm64
+    steps:
+      - name: Checkout
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
+      - name: Build binary
+        env:
+          CGO_ENABLED: "0"
+          GOOS: ${{ matrix.goos }}
+          GOARCH: ${{ matrix.goarch }}
+        run: nix develop --command -- go build -o headscale ./cmd/headscale
+
+      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: headscale-${{ matrix.goos }}-${{ matrix.goarch }}
+          path: headscale
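Both `ko build` steps above tag the image as `main-${GITHUB_SHA::7}`, bash substring expansion that keeps only the first seven characters of the commit SHA. A minimal Go sketch of that tag computation (the function name `mainTag` is illustrative, not from the repo):

```go
package main

import "fmt"

// mainTag mirrors the shell expansion main-${GITHUB_SHA::7}:
// prefix "main-" plus the first 7 characters of the commit SHA.
func mainTag(sha string) string {
	if len(sha) < 7 {
		return "main-" + sha
	}
	return "main-" + sha[:7]
}

func main() {
	// Using a commit SHA from this compare view.
	fmt.Println(mainTag("b720568cf3")) // prints main-b720568
}
```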
.github/workflows/docs-deploy.yml (6 lines changed)
@@ -21,15 +21,15 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 0
       - name: Install python
-        uses: actions/setup-python@v5
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         with:
           python-version: 3.x
       - name: Setup cache
-        uses: actions/cache@v4
+        uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
         with:
           key: ${{ github.ref }}
           path: .cache
.github/workflows/docs-test.yml (6 lines changed)
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
       - name: Install python
-        uses: actions/setup-python@v5
+        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
         with:
           python-version: 3.x
       - name: Setup cache
-        uses: actions/cache@v4
+        uses: actions/cache@a7833574556fa59680c1b7cb190c1735db73ebf0 # v5.0.0
         with:
           key: ${{ github.ref }}
           path: .cache
.github/workflows/gh-action-integration-generator.go (new file, +144)
@@ -0,0 +1,144 @@
+package main
+
+//go:generate go run ./gh-action-integration-generator.go
+
+import (
+	"bytes"
+	"fmt"
+	"log"
+	"os/exec"
+	"strings"
+)
+
+// testsToSplit defines tests that should be split into multiple CI jobs.
+// Key is the test function name, value is a list of subtest prefixes.
+// Each prefix becomes a separate CI job as "TestName/prefix".
+//
+// Example: TestAutoApproveMultiNetwork has subtests like:
+// - TestAutoApproveMultiNetwork/authkey-tag-advertiseduringup-false-pol-database
+// - TestAutoApproveMultiNetwork/webauth-user-advertiseduringup-true-pol-file
+//
+// Splitting by approver type (tag, user, group) creates 6 CI jobs with 4 tests each:
+// - TestAutoApproveMultiNetwork/authkey-tag.* (4 tests)
+// - TestAutoApproveMultiNetwork/authkey-user.* (4 tests)
+// - TestAutoApproveMultiNetwork/authkey-group.* (4 tests)
+// - TestAutoApproveMultiNetwork/webauth-tag.* (4 tests)
+// - TestAutoApproveMultiNetwork/webauth-user.* (4 tests)
+// - TestAutoApproveMultiNetwork/webauth-group.* (4 tests)
+//
+// This reduces load per CI job (4 tests instead of 12) to avoid infrastructure
+// flakiness when running many sequential Docker-based integration tests.
+var testsToSplit = map[string][]string{
+	"TestAutoApproveMultiNetwork": {
+		"authkey-tag",
+		"authkey-user",
+		"authkey-group",
+		"webauth-tag",
+		"webauth-user",
+		"webauth-group",
+	},
+}
+
+// expandTests takes a list of test names and expands any that need splitting
+// into multiple subtest patterns.
+func expandTests(tests []string) []string {
+	var expanded []string
+	for _, test := range tests {
+		if prefixes, ok := testsToSplit[test]; ok {
+			// This test should be split into multiple jobs.
+			// We append ".*" to each prefix because the CI runner wraps patterns
+			// with ^...$ anchors. Without ".*", a pattern like "authkey$" wouldn't
+			// match "authkey-tag-advertiseduringup-false-pol-database".
+			for _, prefix := range prefixes {
+				expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
+			}
+		} else {
+			expanded = append(expanded, test)
+		}
+	}
+	return expanded
+}
+
+func findTests() []string {
+	rgBin, err := exec.LookPath("rg")
+	if err != nil {
+		log.Fatalf("failed to find rg (ripgrep) binary")
+	}
+
+	args := []string{
+		"--type", "go",
+		"--regexp", "func (Test.+)\\(.*",
+		"../../integration/",
+		"--replace", "$1",
+		"--sort", "path",
+		"--no-line-number",
+		"--no-filename",
+		"--no-heading",
+	}
+
+	cmd := exec.Command(rgBin, args...)
+	var out bytes.Buffer
+	cmd.Stdout = &out
+	err = cmd.Run()
+	if err != nil {
+		log.Fatalf("failed to run command: %s", err)
+	}
+
+	tests := strings.Split(strings.TrimSpace(out.String()), "\n")
+	return tests
+}
+
+func updateYAML(tests []string, jobName string, testPath string) {
+	testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", "))
+
+	yqCommand := fmt.Sprintf(
+		"yq eval '.jobs.%s.strategy.matrix.test = %s' %s -i",
+		jobName,
+		testsForYq,
+		testPath,
+	)
+	cmd := exec.Command("bash", "-c", yqCommand)
+
+	var stdout bytes.Buffer
+	var stderr bytes.Buffer
+	cmd.Stdout = &stdout
+	cmd.Stderr = &stderr
+	err := cmd.Run()
+	if err != nil {
+		log.Printf("stdout: %s", stdout.String())
+		log.Printf("stderr: %s", stderr.String())
+		log.Fatalf("failed to run yq command: %s", err)
+	}
+
+	fmt.Printf("YAML file (%s) job %s updated successfully\n", testPath, jobName)
+}
+
+func main() {
+	tests := findTests()
+
+	// Expand tests that should be split into multiple jobs
+	expandedTests := expandTests(tests)
+
+	quotedTests := make([]string, len(expandedTests))
+	for i, test := range expandedTests {
+		quotedTests[i] = fmt.Sprintf("\"%s\"", test)
+	}
+
+	// Define selected tests for PostgreSQL
+	postgresTestNames := []string{
+		"TestACLAllowUserDst",
+		"TestPingAllByIP",
+		"TestEphemeral2006DeletedTooQuickly",
+		"TestPingAllByIPManyUpDown",
+		"TestSubnetRouterMultiNetwork",
+	}
+
+	quotedPostgresTests := make([]string, len(postgresTestNames))
+	for i, test := range postgresTestNames {
+		quotedPostgresTests[i] = fmt.Sprintf("\"%s\"", test)
+	}
+
+	// Update both SQLite and PostgreSQL job matrices
+	updateYAML(quotedTests, "sqlite", "./test-integration.yaml")
+	updateYAML(quotedPostgresTests, "postgres", "./test-integration.yaml")
+}
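The split-and-passthrough behavior of the generator's `expandTests` can be seen in isolation. This standalone sketch copies the function from the new file above, with the split table trimmed to two prefixes for brevity:

```go
package main

import "fmt"

// Trimmed version of the generator's split table: only tests listed here
// are fanned out into one CI job per subtest prefix.
var testsToSplit = map[string][]string{
	"TestAutoApproveMultiNetwork": {"authkey-tag", "webauth-tag"},
}

// expandTests is copied from gh-action-integration-generator.go:
// splittable tests become "TestName/prefix.*" patterns, everything else
// passes through unchanged as a single job.
func expandTests(tests []string) []string {
	var expanded []string
	for _, test := range tests {
		if prefixes, ok := testsToSplit[test]; ok {
			for _, prefix := range prefixes {
				expanded = append(expanded, fmt.Sprintf("%s/%s.*", test, prefix))
			}
		} else {
			expanded = append(expanded, test)
		}
	}
	return expanded
}

func main() {
	for _, t := range expandTests([]string{"TestPingAllByIP", "TestAutoApproveMultiNetwork"}) {
		fmt.Println(t)
	}
	// prints:
	// TestPingAllByIP
	// TestAutoApproveMultiNetwork/authkey-tag.*
	// TestAutoApproveMultiNetwork/webauth-tag.*
}
```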
.github/workflows/gh-actions-updater.yaml (4 lines changed)
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           # [Required] Access token with `workflow` scope.
           token: ${{ secrets.WORKFLOW_SECRET }}

       - name: Run GitHub Actions Version Updater
-        uses: saadmk11/github-actions-version-updater@v0.8.1
+        uses: saadmk11/github-actions-version-updater@d8781caf11d11168579c8e5e94f62b068038f442 # v0.9.0
         with:
           # [Required] Access token with `workflow` scope.
           token: ${{ secrets.WORKFLOW_SECRET }}
.github/workflows/integration-test-template.yml (new file, +130)
@@ -0,0 +1,130 @@
+name: Integration Test Template
+
+on:
+  workflow_call:
+    inputs:
+      test:
+        required: true
+        type: string
+      postgres_flag:
+        required: false
+        type: string
+        default: ""
+      database_name:
+        required: true
+        type: string
+
+jobs:
+  test:
+    runs-on: ubuntu-24.04-arm
+    env:
+      # Github does not allow us to access secrets in pull requests,
+      # so this env var is used to check if we have the secret or not.
+      # If we have the secrets, meaning we are running on push in a fork,
+      # there might be secrets available for more debugging.
+      # If TS_OAUTH_CLIENT_ID and TS_OAUTH_SECRET is set, then the job
+      # will join a debug tailscale network, set up SSH and a tmux session.
+      # The SSH will be configured to use the SSH key of the Github user
+      # that triggered the build.
+      HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
+    steps:
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        with:
+          fetch-depth: 2
+      - name: Tailscale
+        if: ${{ env.HAS_TAILSCALE_SECRET }}
+        uses: tailscale/github-action@a392da0a182bba0e9613b6243ebd69529b1878aa # v4.1.0
+        with:
+          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
+          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
+          tags: tag:gh
+      - name: Setup SSH server for Actor
+        if: ${{ env.HAS_TAILSCALE_SECRET }}
+        uses: alexellis/setup-sshd-actor@master
+      - name: Download headscale image
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
+        with:
+          name: headscale-image
+          path: /tmp/artifacts
+      - name: Download tailscale HEAD image
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
+        with:
+          name: tailscale-head-image
+          path: /tmp/artifacts
+      - name: Download hi binary
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
+        with:
+          name: hi-binary
+          path: /tmp/artifacts
+      - name: Download Go cache
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
+        with:
+          name: go-cache
+          path: /tmp/artifacts
+      - name: Download postgres image
+        if: ${{ inputs.postgres_flag == '--postgres=1' }}
+        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
+        with:
+          name: postgres-image
+          path: /tmp/artifacts
+      - name: Pin Docker to v28 (avoid v29 breaking changes)
+        run: |
+          # Docker 29 breaks docker build via Go client libraries and
+          # docker load/save with certain tarball formats.
+          # Pin to Docker 28.x until our tooling is updated.
+          # https://github.com/actions/runner-images/issues/13474
+          sudo install -m 0755 -d /etc/apt/keyrings
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
+            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
+            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          sudo apt-get update -qq
+          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
+          sudo apt-get install -y --allow-downgrades \
+            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
+          sudo systemctl restart docker
+          docker version
+      - name: Load Docker images, Go cache, and prepare binary
+        run: |
+          gunzip -c /tmp/artifacts/headscale-image.tar.gz | docker load
+          gunzip -c /tmp/artifacts/tailscale-head-image.tar.gz | docker load
+          if [ -f /tmp/artifacts/postgres-image.tar.gz ]; then
+            gunzip -c /tmp/artifacts/postgres-image.tar.gz | docker load
+          fi
+          chmod +x /tmp/artifacts/hi
+          docker images
+          # Extract Go cache to host directories for bind mounting
+          mkdir -p /tmp/go-cache
+          tar -xzf /tmp/artifacts/go-cache.tar.gz -C /tmp/go-cache
+          ls -la /tmp/go-cache/ /tmp/go-cache/.cache/
+      - name: Run Integration Test
+        env:
+          HEADSCALE_INTEGRATION_HEADSCALE_IMAGE: headscale:${{ github.sha }}
+          HEADSCALE_INTEGRATION_TAILSCALE_IMAGE: tailscale-head:${{ github.sha }}
+          HEADSCALE_INTEGRATION_POSTGRES_IMAGE: ${{ inputs.postgres_flag == '--postgres=1' && format('postgres:{0}', github.sha) || '' }}
+          HEADSCALE_INTEGRATION_GO_CACHE: /tmp/go-cache/go
+          HEADSCALE_INTEGRATION_GO_BUILD_CACHE: /tmp/go-cache/.cache/go-build
+        run: /tmp/artifacts/hi run --stats --ts-memory-limit=300 --hs-memory-limit=1500 "^${{ inputs.test }}$" \
+          --timeout=120m \
+          ${{ inputs.postgres_flag }}
+      # Sanitize test name for artifact upload (replace invalid characters: " : < > | * ? \ / with -)
+      - name: Sanitize test name for artifacts
+        if: always()
+        id: sanitize
+        run: echo "name=${TEST_NAME//[\":<>|*?\\\/]/-}" >> $GITHUB_OUTPUT
+        env:
+          TEST_NAME: ${{ inputs.test }}
+      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        if: always()
+        with:
+          name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-logs
+          path: "control_logs/*/*.log"
+      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        if: always()
+        with:
+          name: ${{ inputs.database_name }}-${{ steps.sanitize.outputs.name }}-artifacts
+          path: control_logs/
+      - name: Setup a blocking tmux session
+        if: ${{ env.HAS_TAILSCALE_SECRET }}
+        uses: alexellis/block-with-tmux-action@master
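The "Sanitize test name for artifacts" step relies on bash parameter expansion: `${TEST_NAME//[\":<>|*?\\\/]/-}` replaces every character in the set `" : < > | * ? \ /` with `-`, since those are invalid in artifact names. A minimal Go sketch of the same substitution (the `sanitize` helper is illustrative, not part of the repo):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize mirrors the bash expansion ${TEST_NAME//[\":<>|*?\\\/]/-}:
// every character in the set ":<>|*?\/ plus the double quote becomes "-".
func sanitize(name string) string {
	const invalid = `":<>|*?\/`
	return strings.Map(func(r rune) rune {
		if strings.ContainsRune(invalid, r) {
			return '-'
		}
		return r
	}, name)
}

func main() {
	// A split subtest pattern contains both "/" and "*".
	fmt.Println(sanitize(`TestAutoApproveMultiNetwork/authkey-tag.*`))
	// prints TestAutoApproveMultiNetwork-authkey-tag.-
}
```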
.github/workflows/lint.yml (44 lines changed)
@@ -10,12 +10,12 @@ jobs:
   golangci-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
         with:
           filters: |
             files:
@@ -24,24 +24,33 @@ jobs:
               - '**/*.go'
               - 'integration_test/'
               - 'config-example.yaml'
-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
         if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
       - name: golangci-lint
         if: steps.changed-files.outputs.files == 'true'
-        run: nix develop --command -- golangci-lint run --new-from-rev=${{github.event.pull_request.base.sha}} --out-format=colored-line-number
+        run: nix develop --command -- golangci-lint run
+          --new-from-rev=${{github.event.pull_request.base.sha}}
+          --output.text.path=stdout
+          --output.text.print-linter-name
+          --output.text.print-issued-lines
+          --output.text.colors

   prettier-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 2
       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
         with:
           filters: |
             files:
@@ -55,21 +64,30 @@ jobs:
               - '**/*.css'
               - '**/*.scss'
               - '**/*.html'
-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
         if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
       - name: Prettify code
         if: steps.changed-files.outputs.files == 'true'
-        run: nix develop --command -- prettier --no-error-on-unmatched-pattern --ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}
+        run: nix develop --command -- prettier --no-error-on-unmatched-pattern
+          --ignore-unknown --check **/*.{ts,js,md,yaml,yml,sass,css,scss,html}

   proto-lint:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
       - name: Buf lint
         run: nix develop --command -- buf lint proto
.github/workflows/needs-more-info-comment.yml (new file, +28)
@@ -0,0 +1,28 @@
+name: Needs More Info - Post Comment
+
+on:
+  issues:
+    types: [labeled]
+
+jobs:
+  post-comment:
+    if: >-
+      github.event.label.name == 'needs-more-info' &&
+      github.repository == 'juanfont/headscale'
+    runs-on: ubuntu-latest
+    permissions:
+      issues: write
+      contents: read
+    steps:
+      - name: Checkout repository
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+        with:
+          sparse-checkout: .github/label-response/needs-more-info.md
+          sparse-checkout-cone-mode: false
+
+      - name: Post instruction comment
+        run: gh issue comment "$NUMBER" --body-file .github/label-response/needs-more-info.md
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GH_REPO: ${{ github.repository }}
+          NUMBER: ${{ github.event.issue.number }}
98
.github/workflows/needs-more-info-timer.yml
vendored
Normal file
@@ -0,0 +1,98 @@
|
|||||||
|
name: Needs More Info - Timer
|
||||||
|
|
||||||
|
on:
|
||||||
|
schedule:
|
||||||
|
- cron: "0 0 * * *" # Daily at midnight UTC
|
||||||
|
issue_comment:
|
||||||
|
types: [created]
|
||||||
|
workflow_dispatch:
|
||||||
|
|
||||||
|
jobs:
|
||||||
|
# When a non-bot user comments on a needs-more-info issue, remove the label.
|
||||||
|
remove-label-on-response:
|
||||||
|
if: >-
|
||||||
|
github.repository == 'juanfont/headscale' &&
|
||||||
|
github.event_name == 'issue_comment' &&
|
||||||
|
github.event.comment.user.type != 'Bot' &&
|
||||||
|
contains(github.event.issue.labels.*.name, 'needs-more-info')
|
||||||
|
runs-on: ubuntu-latest
|
||||||
|
permissions:
|
||||||
|
issues: write
|
||||||
|
steps:
|
||||||
|
- name: Remove needs-more-info label
|
||||||
|
run: gh issue edit "$NUMBER" --remove-label needs-more-info
|
||||||
|
env:
|
||||||
|
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
          NUMBER: ${{ github.event.issue.number }}

  # On schedule, close issues that have had no human response for 3 days.
  close-stale:
    if: >-
      github.repository == 'juanfont/headscale' &&
      github.event_name != 'issue_comment'
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - uses: hustcer/setup-nu@920172d92eb04671776f3ba69d605d3b09351c30 # v3.22
        with:
          version: "*"

      - name: Close stale needs-more-info issues
        shell: nu {0}
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
        run: |
          let issues = (gh issue list
            --repo $env.GH_REPO
            --label "needs-more-info"
            --state open
            --json number
            | from json)

          for issue in $issues {
            let number = $issue.number
            print $"Checking issue #($number)"

            # Find when needs-more-info was last added
            let events = (gh api $"repos/($env.GH_REPO)/issues/($number)/events"
              --paginate | from json | flatten)
            let label_event = ($events
              | where event == "labeled" and label.name == "needs-more-info"
              | last)
            let label_added_at = ($label_event.created_at | into datetime)

            # Check for non-bot comments after the label was added
            let comments = (gh api $"repos/($env.GH_REPO)/issues/($number)/comments"
              --paginate | from json | flatten)
            let human_responses = ($comments
              | where user.type != "Bot"
              | where { ($in.created_at | into datetime) > $label_added_at })

            if ($human_responses | length) > 0 {
              print $" Human responded, removing label"
              gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
              continue
            }

            # Check if 3 days have passed
            let elapsed = (date now) - $label_added_at
            if $elapsed < 3day {
              print $" Only ($elapsed | format duration day) elapsed, skipping"
              continue
            }

            print $" No response for ($elapsed | format duration day), closing"
            let message = [
              "This issue has been automatically closed because no additional information was provided within 3 days."
              ""
              "If you have the requested information, please open a new issue and include the debug information requested above."
              ""
              "Thank you for your understanding."
            ] | str join "\n"
            gh issue comment $number --repo $env.GH_REPO --body $message
            gh issue close $number --repo $env.GH_REPO --reason "not planned"
            gh issue edit $number --repo $env.GH_REPO --remove-label needs-more-info
          }
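The close-stale logic above hinges on one comparison: how many days have passed since the label was added. A minimal plain-shell sketch of that check (GNU `date` assumed; both timestamps are hypothetical samples — the workflow reads them from the GitHub events API instead):

```shell
#!/usr/bin/env bash
# Sketch of the 3-day staleness check, assuming GNU date.
label_added_at="2024-01-01T00:00:00Z"
now="2024-01-05T00:00:00Z"   # frozen "now" so the example is deterministic

elapsed_days=$(( ( $(date -d "$now" +%s) - $(date -d "$label_added_at" +%s) ) / 86400 ))

if [ "$elapsed_days" -lt 3 ]; then
  echo "only ${elapsed_days}d elapsed, skipping"
else
  echo "no response for ${elapsed_days}d, closing"   # prints this branch (4 days elapsed)
fi
```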
55 .github/workflows/nix-module-test.yml vendored Normal file
@@ -0,0 +1,55 @@
name: NixOS Module Tests

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  nix-module-check:
    runs-on: ubuntu-latest
    permissions:
      contents: read

    steps:
      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
        with:
          fetch-depth: 2

      - name: Get changed files
        id: changed-files
        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
        with:
          filters: |
            nix:
              - 'nix/**'
              - 'flake.nix'
              - 'flake.lock'
            go:
              - 'go.*'
              - '**/*.go'
              - 'cmd/**'
              - 'hscontrol/**'

      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'

      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
        with:
          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

      - name: Run NixOS module tests
        if: steps.changed-files.outputs.nix == 'true' || steps.changed-files.outputs.go == 'true'
        run: |
          echo "Running NixOS module integration test..."
          nix build .#checks.x86_64-linux.headscale -L
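The cache step above keys on `hashFiles('**/*.nix', '**/flake.lock')`: the key is derived from file contents, so it changes only when those files do. A shell approximation of that idea over two hypothetical files (using `sha256sum` in place of the Actions `hashFiles` expression):

```shell
#!/usr/bin/env bash
# Sketch: a content-derived cache key changes when the hashed files change.
tmp=$(mktemp -d)
printf 'flake v1\n' > "$tmp/flake.lock"
printf '{ }\n'      > "$tmp/default.nix"
key1=$(cat "$tmp/flake.lock" "$tmp/default.nix" | sha256sum | cut -c1-16)

printf 'flake v2\n' > "$tmp/flake.lock"   # simulate a flake update
key2=$(cat "$tmp/flake.lock" "$tmp/default.nix" | sha256sum | cut -c1-16)

[ "$key1" != "$key2" ] && echo "key changed"
```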
33 .github/workflows/release.yml vendored
@@ -13,25 +13,48 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v4
+        uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 0
+
+      - name: Pin Docker to v28 (avoid v29 breaking changes)
+        run: |
+          # Docker 29 breaks docker build via Go client libraries and
+          # docker load/save with certain tarball formats.
+          # Pin to Docker 28.x until our tooling is updated.
+          # https://github.com/actions/runner-images/issues/13474
+          sudo install -m 0755 -d /etc/apt/keyrings
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
+            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
+            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          sudo apt-get update -qq
+          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
+          sudo apt-get install -y --allow-downgrades \
+            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
+          sudo systemctl restart docker
+          docker version
+
       - name: Login to DockerHub
-        uses: docker/login-action@v3
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           username: ${{ secrets.DOCKERHUB_USERNAME }}
           password: ${{ secrets.DOCKERHUB_TOKEN }}

       - name: Login to GHCR
-        uses: docker/login-action@v3
+        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
         with:
           registry: ghcr.io
           username: ${{ github.repository_owner }}
           password: ${{ secrets.GITHUB_TOKEN }}

-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+
       - name: Run goreleaser
         run: nix develop --command -- goreleaser release --clean
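The Docker-pin step above selects a package version by filtering `apt-cache madison` output: madison prints `package | version | source` rows, newest first, so `grep '28\.5' | head -1 | awk '{print $3}'` picks the newest 28.5.x version string. A sketch over a hypothetical madison sample (not real repository data):

```shell
#!/usr/bin/env bash
# Sketch of the version selection in the "Pin Docker to v28" step.
# madison_output is a hypothetical sample of `apt-cache madison docker-ce`.
madison_output='docker-ce | 5:29.0.1-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:28.5.2-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages
docker-ce | 5:28.5.1-1~ubuntu.24.04~noble | https://download.docker.com/linux/ubuntu noble/stable amd64 Packages'

# awk splits on whitespace, so the version is field 3 ("pkg | version | src").
VERSION=$(printf '%s\n' "$madison_output" | grep '28\.5' | head -1 | awk '{print $3}')
echo "$VERSION"   # prints the newest 28.5.x row: 5:28.5.2-1~ubuntu.24.04~noble
```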
10 .github/workflows/stale.yml vendored
@@ -12,14 +12,16 @@ jobs:
     issues: write
     pull-requests: write
     steps:
-      - uses: actions/stale@v9
+      - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10.1.1
         with:
           days-before-issue-stale: 90
           days-before-issue-close: 7
           stale-issue-label: "stale"
-          stale-issue-message: "This issue is stale because it has been open for 90 days with no activity."
-          close-issue-message: "This issue was closed because it has been inactive for 14 days since being marked as stale."
+          stale-issue-message: "This issue is stale because it has been open for 90 days with no
+            activity."
+          close-issue-message: "This issue was closed because it has been inactive for 14 days
+            since being marked as stale."
           days-before-pr-stale: -1
           days-before-pr-close: -1
-          exempt-issue-labels: "no-stale-bot"
+          exempt-issue-labels: "no-stale-bot,needs-more-info"
           repo-token: ${{ secrets.GITHUB_TOKEN }}
30 .github/workflows/support-request.yml vendored Normal file
@@ -0,0 +1,30 @@
name: Support Request - Close Issue

on:
  issues:
    types: [labeled]

jobs:
  close-support-request:
    if: >-
      github.event.label.name == 'support-request' &&
      github.repository == 'juanfont/headscale'
    runs-on: ubuntu-latest
    permissions:
      issues: write
      contents: read
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
        with:
          sparse-checkout: .github/label-response/support-request.md
          sparse-checkout-cone-mode: false

      - name: Post comment and close issue
        run: |
          gh issue comment "$NUMBER" --body-file .github/label-response/support-request.md
          gh issue close "$NUMBER" --reason "not planned"
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          GH_REPO: ${{ github.repository }}
          NUMBER: ${{ github.event.issue.number }}
349 .github/workflows/test-integration.yaml vendored
@@ -1,4 +1,4 @@
-name: Integration Tests
+name: integration
 # To debug locally on a branch, and when needing secrets
 # change this to include `push` so the build is ran on
 # the main repository.
@@ -7,8 +7,154 @@ concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 jobs:
-  integration-test:
-    runs-on: ubuntu-latest
+  # build: Builds binaries and Docker images once, uploads as artifacts for reuse.
+  # build-postgres: Pulls postgres image separately to avoid Docker Hub rate limits.
+  # sqlite: Runs all integration tests with SQLite backend.
+  # postgres: Runs a subset of tests with PostgreSQL to verify database compatibility.
+  build:
+    runs-on: ubuntu-24.04-arm
+    outputs:
+      files-changed: ${{ steps.changed-files.outputs.files }}
+    steps:
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
+        with:
+          fetch-depth: 2
+      - name: Get changed files
+        id: changed-files
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
+        with:
+          filters: |
+            files:
+              - '*.nix'
+              - 'go.*'
+              - '**/*.go'
+              - 'integration/**'
+              - 'config-example.yaml'
+              - '.github/workflows/test-integration.yaml'
+              - '.github/workflows/integration-test-template.yml'
+              - 'Dockerfile.*'
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
+        if: steps.changed-files.outputs.files == 'true'
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
+        if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}
+      - name: Build binaries and warm Go cache
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          # Build all Go binaries in one nix shell to maximize cache reuse
+          nix develop --command -- bash -c '
+            go build -o hi ./cmd/hi
+            CGO_ENABLED=0 GOOS=linux go build -o headscale ./cmd/headscale
+            # Build integration test binary to warm the cache with all dependencies
+            go test -c ./integration -o /dev/null 2>/dev/null || true
+          '
+      - name: Upload hi binary
+        if: steps.changed-files.outputs.files == 'true'
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: hi-binary
+          path: hi
+          retention-days: 10
+      - name: Package Go cache
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          # Package Go module cache and build cache
+          tar -czf go-cache.tar.gz -C ~ go .cache/go-build
+      - name: Upload Go cache
+        if: steps.changed-files.outputs.files == 'true'
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: go-cache
+          path: go-cache.tar.gz
+          retention-days: 10
+      - name: Pin Docker to v28 (avoid v29 breaking changes)
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          # Docker 29 breaks docker build via Go client libraries and
+          # docker load/save with certain tarball formats.
+          # Pin to Docker 28.x until our tooling is updated.
+          # https://github.com/actions/runner-images/issues/13474
+          sudo install -m 0755 -d /etc/apt/keyrings
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
+            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
+            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          sudo apt-get update -qq
+          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
+          sudo apt-get install -y --allow-downgrades \
+            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
+          sudo systemctl restart docker
+          docker version
+      - name: Build headscale image
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          docker build \
+            --file Dockerfile.integration-ci \
+            --tag headscale:${{ github.sha }} \
+            .
+          docker save headscale:${{ github.sha }} | gzip > headscale-image.tar.gz
+      - name: Build tailscale HEAD image
+        if: steps.changed-files.outputs.files == 'true'
+        run: |
+          docker build \
+            --file Dockerfile.tailscale-HEAD \
+            --tag tailscale-head:${{ github.sha }} \
+            .
+          docker save tailscale-head:${{ github.sha }} | gzip > tailscale-head-image.tar.gz
+      - name: Upload headscale image
+        if: steps.changed-files.outputs.files == 'true'
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: headscale-image
+          path: headscale-image.tar.gz
+          retention-days: 10
+      - name: Upload tailscale HEAD image
+        if: steps.changed-files.outputs.files == 'true'
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: tailscale-head-image
+          path: tailscale-head-image.tar.gz
+          retention-days: 10
+  build-postgres:
+    runs-on: ubuntu-24.04-arm
+    needs: build
+    if: needs.build.outputs.files-changed == 'true'
+    steps:
+      - name: Pin Docker to v28 (avoid v29 breaking changes)
+        run: |
+          # Docker 29 breaks docker build via Go client libraries and
+          # docker load/save with certain tarball formats.
+          # Pin to Docker 28.x until our tooling is updated.
+          # https://github.com/actions/runner-images/issues/13474
+          sudo install -m 0755 -d /etc/apt/keyrings
+          curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
+            | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+          echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
+            https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
+            | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+          sudo apt-get update -qq
+          VERSION=$(apt-cache madison docker-ce | grep '28\.5' | head -1 | awk '{print $3}')
+          sudo apt-get install -y --allow-downgrades \
+            "docker-ce=${VERSION}" "docker-ce-cli=${VERSION}"
+          sudo systemctl restart docker
+          docker version
+      - name: Pull and save postgres image
+        run: |
+          docker pull postgres:latest
+          docker tag postgres:latest postgres:${{ github.sha }}
+          docker save postgres:${{ github.sha }} | gzip > postgres-image.tar.gz
+      - name: Upload postgres image
+        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
+        with:
+          name: postgres-image
+          path: postgres-image.tar.gz
+          retention-days: 10
+  sqlite:
+    needs: build
+    if: needs.build.outputs.files-changed == 'true'
     strategy:
       fail-fast: false
       matrix:
@@ -22,34 +168,59 @@ jobs:
           - TestACLNamedHostsCanReach
           - TestACLDevice1CanAccessDevice2
           - TestPolicyUpdateWhileRunningWithCLIInDatabase
+          - TestACLAutogroupMember
+          - TestACLAutogroupTagged
+          - TestACLAutogroupSelf
+          - TestACLPolicyPropagationOverTime
+          - TestACLTagPropagation
+          - TestACLTagPropagationPortSpecific
+          - TestACLGroupWithUnknownUser
+          - TestACLGroupAfterUserDeletion
+          - TestACLGroupDeletionExactReproduction
+          - TestACLDynamicUnknownUserAddition
+          - TestACLDynamicUnknownUserRemoval
+          - TestAPIAuthenticationBypass
+          - TestAPIAuthenticationBypassCurl
+          - TestGRPCAuthenticationBypass
+          - TestCLIWithConfigAuthenticationBypass
+          - TestAuthKeyLogoutAndReloginSameUser
+          - TestAuthKeyLogoutAndReloginNewUser
+          - TestAuthKeyLogoutAndReloginSameUserExpiredKey
+          - TestAuthKeyDeleteKey
+          - TestAuthKeyLogoutAndReloginRoutesPreserved
           - TestOIDCAuthenticationPingAll
           - TestOIDCExpireNodesBasedOnTokenExpiry
           - TestOIDC024UserCreation
+          - TestOIDCAuthenticationWithPKCE
+          - TestOIDCReloginSameNodeNewUser
+          - TestOIDCFollowUpUrl
+          - TestOIDCMultipleOpenedLoginUrls
+          - TestOIDCReloginSameNodeSameUser
+          - TestOIDCExpiryAfterRestart
+          - TestOIDCACLPolicyOnJoin
+          - TestOIDCReloginSameUserRoutesPreserved
           - TestAuthWebFlowAuthenticationPingAll
-          - TestAuthWebFlowLogoutAndRelogin
+          - TestAuthWebFlowLogoutAndReloginSameUser
+          - TestAuthWebFlowLogoutAndReloginNewUser
           - TestUserCommand
           - TestPreAuthKeyCommand
           - TestPreAuthKeyCommandWithoutExpiry
           - TestPreAuthKeyCommandReusableEphemeral
           - TestPreAuthKeyCorrectUserLoggedInCommand
+          - TestTaggedNodesCLIOutput
           - TestApiKeyCommand
-          - TestNodeTagCommand
-          - TestNodeAdvertiseTagCommand
           - TestNodeCommand
           - TestNodeExpireCommand
           - TestNodeRenameCommand
-          - TestNodeMoveCommand
           - TestPolicyCommand
           - TestPolicyBrokenConfigCommand
           - TestDERPVerifyEndpoint
           - TestResolveMagicDNS
           - TestResolveMagicDNSExtraRecordsPath
-          - TestValidateResolvConf
           - TestDERPServerScenario
           - TestDERPServerWebsocketScenario
           - TestPingAllByIP
           - TestPingAllByIPPublicDERP
-          - TestAuthKeyLogoutAndRelogin
           - TestEphemeral
           - TestEphemeralInAlternateTimezone
           - TestEphemeral2006DeletedTooQuickly
@@ -57,97 +228,95 @@ jobs:
           - TestTaildrop
           - TestUpdateHostnameFromClient
           - TestExpireNode
+          - TestSetNodeExpiryInFuture
+          - TestDisableNodeExpiry
           - TestNodeOnlineStatus
           - TestPingAllByIPManyUpDown
           - Test2118DeletingOnlineNodePanics
+          - TestGrantCapRelay
+          - TestGrantCapDrive
           - TestEnablingRoutes
           - TestHASubnetRouterFailover
-          - TestEnableDisableAutoApprovedRoute
-          - TestAutoApprovedSubRoute2068
           - TestSubnetRouteACL
+          - TestEnablingExitRoutes
+          - TestSubnetRouterMultiNetwork
+          - TestSubnetRouterMultiNetworkExitNode
+          - TestAutoApproveMultiNetwork/authkey-tag.*
+          - TestAutoApproveMultiNetwork/authkey-user.*
+          - TestAutoApproveMultiNetwork/authkey-group.*
+          - TestAutoApproveMultiNetwork/webauth-tag.*
+          - TestAutoApproveMultiNetwork/webauth-user.*
+          - TestAutoApproveMultiNetwork/webauth-group.*
+          - TestSubnetRouteACLFiltering
+          - TestGrantViaSubnetSteering
           - TestHeadscale
-          - TestCreateTailscale
           - TestTailscaleNodesJoiningHeadcale
           - TestSSHOneUserToAll
           - TestSSHMultipleUsersAllToAll
           - TestSSHNoSSHConfigured
           - TestSSHIsBlockedInACL
           - TestSSHUserOnlyIsolation
-      database: [postgres, sqlite]
-    env:
-      # Github does not allow us to access secrets in pull requests,
-      # so this env var is used to check if we have the secret or not.
-      # If we have the secrets, meaning we are running on push in a fork,
-      # there might be secrets available for more debugging.
-      # If TS_OAUTH_CLIENT_ID and TS_OAUTH_SECRET is set, then the job
-      # will join a debug tailscale network, set up SSH and a tmux session.
-      # The SSH will be configured to use the SSH key of the Github user
-      # that triggered the build.
-      HAS_TAILSCALE_SECRET: ${{ secrets.TS_OAUTH_CLIENT_ID }}
-    steps:
-      - uses: actions/checkout@v4
-        with:
-          fetch-depth: 2
-      - name: Get changed files
-        id: changed-files
-        uses: dorny/paths-filter@v3
-        with:
-          filters: |
-            files:
-              - '*.nix'
-              - 'go.*'
-              - '**/*.go'
-              - 'integration_test/'
-              - 'config-example.yaml'
-      - name: Tailscale
-        if: ${{ env.HAS_TAILSCALE_SECRET }}
-        uses: tailscale/github-action@v2
-        with:
-          oauth-client-id: ${{ secrets.TS_OAUTH_CLIENT_ID }}
-          oauth-secret: ${{ secrets.TS_OAUTH_SECRET }}
-          tags: tag:gh
-      - name: Setup SSH server for Actor
-        if: ${{ env.HAS_TAILSCALE_SECRET }}
-        uses: alexellis/setup-sshd-actor@master
-      - uses: DeterminateSystems/nix-installer-action@main
-        if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
-        if: steps.changed-files.outputs.files == 'true'
-      - uses: satackey/action-docker-layer-caching@main
-        if: steps.changed-files.outputs.files == 'true'
-        continue-on-error: true
-      - name: Run Integration Test
-        uses: Wandalen/wretry.action@master
-        if: steps.changed-files.outputs.files == 'true'
-        env:
-          USE_POSTGRES: ${{ matrix.database == 'postgres' && '1' || '0' }}
-        with:
-          attempt_limit: 5
-          command: |
-            nix develop --command -- docker run \
-              --tty --rm \
-              --volume ~/.cache/hs-integration-go:/go \
-              --name headscale-test-suite \
-              --volume $PWD:$PWD -w $PWD/integration \
-              --volume /var/run/docker.sock:/var/run/docker.sock \
-              --volume $PWD/control_logs:/tmp/control \
-              --env HEADSCALE_INTEGRATION_POSTGRES=${{env.USE_POSTGRES}} \
-              golang:1 \
-                go run gotest.tools/gotestsum@latest -- ./... \
-                  -failfast \
-                  -timeout 120m \
-                  -parallel 1 \
-                  -run "^${{ matrix.test }}$"
-      - uses: actions/upload-artifact@v4
-        if: always() && steps.changed-files.outputs.files == 'true'
-        with:
-          name: ${{ matrix.test }}-${{matrix.database}}-logs
-          path: "control_logs/*.log"
-      - uses: actions/upload-artifact@v4
-        if: always() && steps.changed-files.outputs.files == 'true'
-        with:
-          name: ${{ matrix.test }}-${{matrix.database}}-pprof
-          path: "control_logs/*.pprof.tar"
-      - name: Setup a blocking tmux session
-        if: ${{ env.HAS_TAILSCALE_SECRET }}
-        uses: alexellis/block-with-tmux-action@master
+          - TestSSHAutogroupSelf
+          - TestSSHOneUserToOneCheckModeCLI
+          - TestSSHOneUserToOneCheckModeOIDC
+          - TestSSHCheckModeUnapprovedTimeout
+          - TestSSHCheckModeCheckPeriodCLI
+          - TestSSHCheckModeAutoApprove
+          - TestSSHCheckModeNegativeCLI
+          - TestSSHLocalpart
+          - TestTagsAuthKeyWithTagRequestDifferentTag
+          - TestTagsAuthKeyWithTagNoAdvertiseFlag
+          - TestTagsAuthKeyWithTagCannotAddViaCLI
+          - TestTagsAuthKeyWithTagCannotChangeViaCLI
+          - TestTagsAuthKeyWithTagAdminOverrideReauthPreserves
+          - TestTagsAuthKeyWithTagCLICannotModifyAdminTags
+          - TestTagsAuthKeyWithoutTagCannotRequestTags
+          - TestTagsAuthKeyWithoutTagRegisterNoTags
+          - TestTagsAuthKeyWithoutTagCannotAddViaCLI
+          - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithReset
+          - TestTagsAuthKeyWithoutTagCLINoOpAfterAdminWithEmptyAdvertise
+          - TestTagsAuthKeyWithoutTagCLICannotReduceAdminMultiTag
+          - TestTagsUserLoginOwnedTagAtRegistration
+          - TestTagsUserLoginNonExistentTagAtRegistration
+          - TestTagsUserLoginUnownedTagAtRegistration
+          - TestTagsUserLoginAddTagViaCLIReauth
+          - TestTagsUserLoginRemoveTagViaCLIReauth
+          - TestTagsUserLoginCLINoOpAfterAdminAssignment
+          - TestTagsUserLoginCLICannotRemoveAdminTags
+          - TestTagsAuthKeyWithTagRequestNonExistentTag
+          - TestTagsAuthKeyWithTagRequestUnownedTag
+          - TestTagsAuthKeyWithoutTagRequestNonExistentTag
+          - TestTagsAuthKeyWithoutTagRequestUnownedTag
+          - TestTagsAdminAPICannotSetNonExistentTag
+          - TestTagsAdminAPICanSetUnownedTag
+          - TestTagsAdminAPICannotRemoveAllTags
+          - TestTagsIssue2978ReproTagReplacement
+          - TestTagsAdminAPICannotSetInvalidFormat
+          - TestTagsUserLoginReauthWithEmptyTagsRemovesAllTags
+          - TestTagsAuthKeyWithoutUserInheritsTags
+          - TestTagsAuthKeyWithoutUserRejectsAdvertisedTags
+          - TestTagsAuthKeyConvertToUserViaCLIRegister
+    uses: ./.github/workflows/integration-test-template.yml
+    secrets: inherit
+    with:
+      test: ${{ matrix.test }}
+      postgres_flag: "--postgres=0"
+      database_name: "sqlite"
+  postgres:
+    needs: [build, build-postgres]
+    if: needs.build.outputs.files-changed == 'true'
+    strategy:
+      fail-fast: false
+      matrix:
+        test:
+          - TestACLAllowUserDst
+          - TestPingAllByIP
+          - TestEphemeral2006DeletedTooQuickly
+          - TestPingAllByIPManyUpDown
+          - TestSubnetRouterMultiNetwork
+    uses: ./.github/workflows/integration-test-template.yml
+    secrets: inherit
+    with:
+      test: ${{ matrix.test }}
+      postgres_flag: "--postgres=1"
+      database_name: "postgres"
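The old workflow above ran each matrix entry with `-run "^${{ matrix.test }}$"`. The anchors matter: an unanchored test name also matches every test that shares it as a prefix. A grep sketch over hypothetical test names:

```shell
#!/usr/bin/env bash
# Why the -run pattern is anchored with ^ and $: prefix collisions.
tests='TestPingAllByIP
TestPingAllByIPPublicDERP
TestPingAllByIPManyUpDown'

unanchored=$(printf '%s\n' "$tests" | grep -c 'TestPingAllByIP')     # matches all 3
anchored=$(printf '%s\n' "$tests" | grep -c '^TestPingAllByIP$')     # matches exactly 1
echo "unanchored=$unanchored anchored=$anchored"   # prints: unanchored=3 anchored=1
```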
18 .github/workflows/test.yml vendored
@@ -11,13 +11,13 @@ jobs:
     runs-on: ubuntu-latest

     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
         with:
           fetch-depth: 2

       - name: Get changed files
         id: changed-files
-        uses: dorny/paths-filter@v3
+        uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
         with:
           filters: |
             files:
@@ -27,11 +27,21 @@ jobs:
               - 'integration_test/'
               - 'config-example.yaml'

-      - uses: DeterminateSystems/nix-installer-action@main
+      - uses: nixbuild/nix-quick-install-action@2c9db80fb984ceb1bcaa77cdda3fdf8cfba92035 # v34
         if: steps.changed-files.outputs.files == 'true'
-      - uses: DeterminateSystems/magic-nix-cache-action@main
+      - uses: nix-community/cache-nix-action@135667ec418502fa5a3598af6fb9eb733888ce6a # v6.1.3
         if: steps.changed-files.outputs.files == 'true'
+        with:
+          primary-key: nix-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/*.nix', '**/flake.lock') }}
+          restore-prefixes-first-match: nix-${{ runner.os }}-${{ runner.arch }}

       - name: Run tests
         if: steps.changed-files.outputs.files == 'true'
+        env:
+          # As of 2025-01-06, these env vars were not automatically
+          # set anymore, which breaks the initdb for postgres on
+          # some of the database migration tests.
+          LC_ALL: "en_US.UTF-8"
+          LC_CTYPE: "en_US.UTF-8"
         run: nix develop --command -- gotestsum
6 .github/workflows/update-flake.yml vendored
@@ -10,10 +10,10 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v4
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
       - name: Install Nix
-        uses: DeterminateSystems/nix-installer-action@main
+        uses: DeterminateSystems/nix-installer-action@21a544727d0c62386e78b4befe52d19ad12692e3 # v17
       - name: Update flake.lock
-        uses: DeterminateSystems/update-flake-lock@main
+        uses: DeterminateSystems/update-flake-lock@428c2b58a4b7414dabd372acb6a03dba1084d3ab # v25
         with:
           pr-title: "Update flake.lock"
11 .gitignore vendored
@@ -1,6 +1,10 @@
 ignored/
 tailscale/
 .vscode/
+.claude/
+logs/
+
+*.prof
+
 # Binaries for programs and plugins
 *.exe
@@ -20,11 +24,12 @@ vendor/

 dist/
 /headscale
-config.json
 config.yaml
 config*.yaml
+!config-example.yaml
 derp.yaml
 *.hujson
+!hscontrol/policy/v2/testdata/*/*.hujson
 *.key
 /db.sqlite
 *.sqlite3
@@ -46,3 +51,7 @@ integration_test/etc/config.dump.yaml
 /site

 __debug_bin
+
+node_modules/
+package-lock.json
+package.json
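The `.gitignore` change above pairs a broad exclude (`config*.yaml`) with a `!` re-include (`!config-example.yaml`); gitignore resolves them by last-matching-rule-wins. A pure-shell approximation (a `case` statement checks patterns in order, so the more specific re-include comes first):

```shell
#!/usr/bin/env bash
# Sketch of the exclude/re-include interplay in the .gitignore rules above.
is_ignored() {
  case "$1" in
    config-example.yaml) echo no ;;   # !config-example.yaml re-includes it
    config*.yaml)        echo yes ;;  # config*.yaml excludes the rest
    *)                   echo no ;;
  esac
}
is_ignored config.yaml           # prints: yes
is_ignored config-example.yaml   # prints: no
```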
.golangci.yaml (154 lines changed)
@@ -1,70 +1,108 @@
 ---
-run:
-  timeout: 10m
-  build-tags:
-    - ts2019
-
-issues:
-  skip-dirs:
-    - gen
+version: "2"
 linters:
-  enable-all: true
+  default: all
   disable:
+    - cyclop
     - depguard
+    - dupl
+    - exhaustruct
+    - funcorder
+    - funlen
     - gochecknoglobals
     - gochecknoinits
     - gocognit
-    - revive
-    - lll
-    - gofmt
-    - funlen
-    - tagliatelle
     - godox
-    - ireturn
-    - execinquery
-    - exhaustruct
-    - nolintlint
-    - musttag # causes issues with imported libs
-    - depguard
-    - exportloopref
-
-    # We should strive to enable these:
-    - wrapcheck
-    - dupl
-    - makezero
-    - maintidx
-
-    # Limits the methods of an interface to 10. We have more in integration tests
     - interfacebloat
+    - ireturn
+    - lll
+    - maintidx
+    - makezero
+    - mnd
+    - musttag
     - nestif
-    - wsl # might be incompatible with gofumpt
-    - testpackage
+    - nolintlint
     - paralleltest
+    - revive
+    - tagliatelle
+    - testpackage
+    - varnamelen
+    - wrapcheck
+    - wsl
+  settings:
+    forbidigo:
+      forbid:
+        # Forbid time.Sleep everywhere with context-appropriate alternatives
+        - pattern: 'time\.Sleep'
+          msg: >-
+            time.Sleep is forbidden.
+            In tests: use assert.EventuallyWithT for polling/waiting patterns.
+            In production code: use a backoff strategy (e.g., cenkalti/backoff) or proper synchronization primitives.
+        # Forbid inline string literals in zerolog field methods - use zf.* constants
+        - pattern: '\.(Str|Int|Int8|Int16|Int32|Int64|Uint|Uint8|Uint16|Uint32|Uint64|Float32|Float64|Bool|Dur|Time|TimeDiff|Strs|Ints|Uints|Floats|Bools|Any|Interface)\("[^"]+"'
+          msg: >-
+            Use zf.* constants for zerolog field names instead of string literals.
+            Import "github.com/juanfont/headscale/hscontrol/util/zlog/zf" and use
+            constants like zf.NodeID, zf.UserName, etc. Add new constants to
+            hscontrol/util/zlog/zf/fields.go if needed.
+        # Forbid ptr.To - use Go 1.26 new(expr) instead
+        - pattern: 'ptr\.To\('
+          msg: >-
+            ptr.To is forbidden. Use Go 1.26's new(expr) syntax instead.
+            Example: ptr.To(value) → new(value)
+        # Forbid tsaddr.SortPrefixes - use slices.SortFunc with netip.Prefix.Compare
+        - pattern: 'tsaddr\.SortPrefixes'
+          msg: >-
+            tsaddr.SortPrefixes is forbidden. Use Go 1.26's netip.Prefix.Compare instead.
+            Example: slices.SortFunc(prefixes, netip.Prefix.Compare)
+      analyze-types: true
+    gocritic:
+      disabled-checks:
+        - appendAssign
+        - ifElseChain
+    nlreturn:
+      block-size: 4
+    varnamelen:
+      ignore-names:
+        - err
+        - db
+        - id
+        - ip
+        - ok
+        - c
+        - tt
+        - tx
+        - rx
+        - sb
+        - wg
+        - pr
+        - p
+        - p2
+      ignore-type-assert-ok: true
+      ignore-map-index-ok: true
+  exclusions:
+    generated: lax
+    presets:
+      - comments
+      - common-false-positives
+      - legacy
+      - std-error-handling
+    paths:
+      - third_party$
+      - builtin$
+      - examples$
+      - gen
 
-linters-settings:
-  varnamelen:
-    ignore-type-assert-ok: true
-    ignore-map-index-ok: true
-    ignore-names:
-      - err
-      - db
-      - id
-      - ip
-      - ok
-      - c
-      - tt
-      - tx
-      - rx
-
-  gocritic:
-    disabled-checks:
-      - appendAssign
-      # TODO(kradalby): Remove this
-      - ifElseChain
-
-  nlreturn:
-    block-size: 4
+formatters:
+  enable:
+    - gci
+    - gofmt
+    - gofumpt
+    - goimports
+  exclusions:
+    generated: lax
+    paths:
+      - third_party$
+      - builtin$
+      - examples$
+      - gen

.goreleaser.yml (110 lines changed)
@@ -2,11 +2,16 @@
 version: 2
 before:
   hooks:
-    - go mod tidy -compat=1.22
+    - go mod tidy -compat=1.26
     - go mod vendor
 
 release:
   prerelease: auto
+  draft: true
+  header: |
+    ## Upgrade
+
+    Please follow the steps outlined in the [upgrade guide](https://headscale.net/stable/setup/upgrade/) to update your existing Headscale installation.
 
 builds:
   - id: headscale
@@ -18,23 +23,16 @@ builds:
       - darwin_amd64
       - darwin_arm64
       - freebsd_amd64
-      - linux_386
       - linux_amd64
       - linux_arm64
-      - linux_arm_5
-      - linux_arm_6
-      - linux_arm_7
     flags:
       - -mod=readonly
-    ldflags:
-      - -s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=v{{.Version}}
-    tags:
-      - ts2019
 
 archives:
   - id: golang-cross
     name_template: '{{ .ProjectName }}_{{ .Version }}_{{ .Os }}_{{ .Arch }}{{ with .Arm }}v{{ . }}{{ end }}{{ with .Mips }}_{{ . }}{{ end }}{{ if not (eq .Amd64 "v1") }}{{ .Amd64 }}{{ end }}'
-    format: binary
+    formats:
+      - binary
 
 source:
   enabled: true
@@ -53,15 +51,22 @@ nfpms:
   # List file contents: dpkg -c dist/headscale...deb
   # Package metadata: dpkg --info dist/headscale....deb
   #
-  - builds:
+  - ids:
       - headscale
     package_name: headscale
     priority: optional
     vendor: headscale
     maintainer: Kristoffer Dalby <kristoffer@dalby.cc>
     homepage: https://github.com/juanfont/headscale
-    license: BSD
+    description: |-
+      Open source implementation of the Tailscale control server.
+      Headscale aims to implement a self-hosted, open source alternative to the
+      Tailscale control server. Headscale's goal is to provide self-hosters and
+      hobbyists with an open-source server they can use for their projects and
+      labs. It implements a narrow scope, a single Tailscale network (tailnet),
+      suitable for a personal use, or a small open-source organisation.
     bindir: /usr/bin
+    section: net
     formats:
       - deb
     contents:
@@ -70,56 +75,39 @@ nfpms:
         type: config|noreplace
         file_info:
           mode: 0644
-      - src: ./docs/packaging/headscale.systemd.service
+      - src: ./packaging/systemd/headscale.service
        dst: /usr/lib/systemd/system/headscale.service
      - dst: /var/lib/headscale
        type: dir
-      - dst: /var/run/headscale
-        type: dir
+      - src: LICENSE
+        dst: /usr/share/doc/headscale/copyright
    scripts:
-      postinstall: ./docs/packaging/postinstall.sh
-      postremove: ./docs/packaging/postremove.sh
+      postinstall: ./packaging/deb/postinst
+      postremove: ./packaging/deb/postrm
+      preremove: ./packaging/deb/prerm
+    deb:
+      lintian_overrides:
+        - no-changelog # Our CHANGELOG.md uses a different formatting
+        - no-manual-page
+        - statically-linked-binary
 
 kos:
   - id: ghcr
-    repository: ghcr.io/juanfont/headscale
+    repositories:
+      - ghcr.io/juanfont/headscale
+      - headscale/headscale
+
     # bare tells KO to only use the repository
     # for tagging and naming the container.
     bare: true
-    base_image: gcr.io/distroless/base-debian12
+    base_image: gcr.io/distroless/base-debian13
     build: headscale
     main: ./cmd/headscale
     env:
       - CGO_ENABLED=0
     platforms:
       - linux/amd64
-      - linux/386
       - linux/arm64
-      - linux/arm/v7
-    tags:
-      - "{{ if not .Prerelease }}latest{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}{{ end }}"
-      - "{{ if not .Prerelease }}stable{{ else }}unstable{{ end }}"
-      - "{{ .Tag }}"
-      - '{{ trimprefix .Tag "v" }}'
-      - "sha-{{ .ShortCommit }}"
-
-  - id: dockerhub
-    build: headscale
-    base_image: gcr.io/distroless/base-debian12
-    repository: headscale/headscale
-    bare: true
-    platforms:
-      - linux/amd64
-      - linux/386
-      - linux/arm64
-      - linux/arm/v7
     tags:
       - "{{ if not .Prerelease }}latest{{ end }}"
       - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}{{ end }}"
@@ -132,43 +120,23 @@ kos:
       - "{{ .Tag }}"
       - '{{ trimprefix .Tag "v" }}'
       - "sha-{{ .ShortCommit }}"
+    creation_time: "{{.CommitTimestamp}}"
+    ko_data_creation_time: "{{.CommitTimestamp}}"
 
   - id: ghcr-debug
-    repository: ghcr.io/juanfont/headscale
+    repositories:
+      - ghcr.io/juanfont/headscale
+      - headscale/headscale
+
     bare: true
-    base_image: gcr.io/distroless/base-debian12:debug
+    base_image: gcr.io/distroless/base-debian13:debug
     build: headscale
     main: ./cmd/headscale
     env:
       - CGO_ENABLED=0
     platforms:
       - linux/amd64
-      - linux/386
       - linux/arm64
-      - linux/arm/v7
-    tags:
-      - "{{ if not .Prerelease }}latest-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}.{{ .Minor }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}v{{ .Major }}-debug{{ end }}"
-      - "{{ if not .Prerelease }}stable-debug{{ else }}unstable-debug{{ end }}"
-      - "{{ .Tag }}-debug"
-      - '{{ trimprefix .Tag "v" }}-debug'
-      - "sha-{{ .ShortCommit }}-debug"
-
-  - id: dockerhub-debug
-    build: headscale
-    base_image: gcr.io/distroless/base-debian12:debug
-    repository: headscale/headscale
-    bare: true
-    platforms:
-      - linux/amd64
-      - linux/386
-      - linux/arm64
-      - linux/arm/v7
     tags:
       - "{{ if not .Prerelease }}latest-debug{{ end }}"
       - "{{ if not .Prerelease }}{{ .Major }}.{{ .Minor }}.{{ .Patch }}-debug{{ end }}"

.mcp.json (new file, 34 lines)
@@ -0,0 +1,34 @@
{
  "mcpServers": {
    "claude-code-mcp": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@steipete/claude-code-mcp@latest"],
      "env": {}
    },
    "sequential-thinking": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"],
      "env": {}
    },
    "nixos": {
      "type": "stdio",
      "command": "uvx",
      "args": ["mcp-nixos"],
      "env": {}
    },
    "context7": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"],
      "env": {}
    },
    "git": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@cyanheads/git-mcp-server"],
      "env": {}
    }
  }
}
.mdformat.toml (new file, 2 lines)
@@ -0,0 +1,2 @@
[plugin.mkdocs]
align_semantic_breaks_in_lists = true
.pre-commit-config.yaml (new file, 62 lines)
@@ -0,0 +1,62 @@
# prek/pre-commit configuration for headscale
# See: https://prek.j178.dev/quickstart/
# See: https://prek.j178.dev/builtin/

# Global exclusions - ignore generated code
exclude: ^gen/

repos:
  # Built-in hooks from pre-commit/pre-commit-hooks
  # prek will use fast-path optimized versions automatically
  # See: https://prek.j178.dev/builtin/
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v6.0.0
    hooks:
      - id: check-added-large-files
      - id: check-case-conflict
      - id: check-executables-have-shebangs
      - id: check-json
      - id: check-merge-conflict
      - id: check-symlinks
      - id: check-toml
      - id: check-xml
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: fix-byte-order-marker
      - id: mixed-line-ending
      - id: trailing-whitespace

  # Local hooks for project-specific tooling
  - repo: local
    hooks:
      # nixpkgs-fmt for Nix files
      - id: nixpkgs-fmt
        name: nixpkgs-fmt
        entry: nixpkgs-fmt
        language: system
        files: \.nix$

      # Prettier for formatting
      - id: prettier
        name: prettier
        entry: prettier --write --list-different
        language: system
        exclude: ^docs/
        types_or: [javascript, jsx, ts, tsx, yaml, json, toml, html, css, scss, sass, markdown]

      # mdformat for docs
      - id: mdformat
        name: mdformat
        entry: mdformat
        language: system
        types_or: [markdown]
        files: ^docs/

      # golangci-lint for Go code quality
      - id: golangci-lint
        name: golangci-lint
        entry: nix develop --command -- golangci-lint run --new-from-rev=HEAD~1 --timeout=5m --fix
        language: system
        types: [go]
        pass_filenames: false

@@ -1,4 +1,2 @@
 .github/workflows/test-integration-v2*
-docs/about/features.md
-docs/ref/configuration.md
-docs/ref/remote-cli.md
+docs/

AGENTS.md (new file, 291 lines)
@@ -0,0 +1,291 @@
# AGENTS.md

Behavioural guidance for AI agents working in this repository. Reference
material for complex procedures lives next to the code — integration
testing is documented in [`cmd/hi/README.md`](cmd/hi/README.md) and
[`integration/README.md`](integration/README.md). Read those files
before running tests or writing new ones.

Headscale is an open-source implementation of the Tailscale control server
written in Go. It manages node registration, IP allocation, policy
enforcement, and DERP routing for self-hosted tailnets.

## Interaction Rules

These rules govern how you work in this repo. They are listed first
because they shape every other decision.

### Ask with comprehensive multiple-choice options

When you need to clarify intent, scope, or approach, use the
`AskUserQuestion` tool (or a numbered list fallback) and present the user
with a comprehensive set of options. Cover the likely branches explicitly
and include an "other — please describe" escape.

- Bad: _"How should I handle expired nodes?"_
- Good: _"How should expired nodes be handled? (a) Remain visible to peers
  but marked expired (current behaviour); (b) Hidden from peers entirely;
  (c) Hidden from peers but visible in admin API; (d) Other."_

This matters more than you think — open-ended questions waste a round
trip and often produce a misaligned answer.

### Read the documented procedure before running complex commands

Before invoking any `hi` command, integration test, generator, or
migration tool, read the referenced README in full —
`cmd/hi/README.md` for running tests, `integration/README.md` for
writing them. Never guess flags. If the procedure is not documented
anywhere, ask the user rather than inventing one.

### Map once, then act

Use `Glob` / `Grep` to understand file structure, then execute. Do not
re-explore the same area to "double-check" once you have a plan. Do not
re-read files you edited in this session — the harness tracks state for
you.

### Fail fast, report up

If a command fails twice with the same error, stop and report the exact
error to the user with context. Do not loop through variants or
"try one more thing". A repeated failure means your model of the problem
is wrong.

### Confirm scope for multi-file changes

Before touching more than three files, show the user which files will
change and why. Use plan mode (`ExitPlanMode`) for non-trivial work.

### Prefer editing existing files

Do not create new files unless strictly necessary. Do not generate helper
abstractions, wrapper utilities, or "just in case" configuration. Three
similar lines of code is better than a premature abstraction.

## Quick Start

```bash
# Enter the nix dev shell (Go 1.26.1, buf, golangci-lint, prek)
nix develop

# Full development workflow: fmt + lint + test + build
make dev

# Individual targets
make build    # build the headscale binary
make test     # go test ./...
make fmt      # format Go, docs, proto
make lint     # lint Go, proto
make generate # regenerate protobuf code (after changes to proto/)
make clean    # remove build artefacts

# Direct go test invocations
go test ./...
go test -race ./...

# Integration tests — read cmd/hi/README.md first
go run ./cmd/hi doctor
go run ./cmd/hi run "TestName"
```

Go 1.26.1 minimum (per `go.mod:3`). `nix develop` pins the exact toolchain
used in CI.

## Pre-Commit with prek

`prek` installs git hooks that run the same checks as CI.

```bash
nix develop
prek install         # one-time setup
prek run             # run hooks on staged files
prek run --all-files # run hooks on the full tree
```

Hooks cover: file hygiene (trailing whitespace, line endings, BOM),
syntax validation (JSON/YAML/TOML/XML), merge-conflict markers, private
key detection, nixpkgs-fmt, prettier, and `golangci-lint` via
`--new-from-rev=HEAD~1` (see `.pre-commit-config.yaml:59`). A manual
invocation with an `upstream/main` remote is equivalent:

```bash
golangci-lint run --new-from-rev=upstream/main --timeout=5m --fix
```

`git commit --no-verify` is acceptable only for WIP commits on feature
branches — never on `main`.

## Project Layout

```
headscale/
├── cmd/
│   ├── headscale/   # Main headscale server binary
│   └── hi/          # Integration test runner (see cmd/hi/README.md)
├── hscontrol/       # Core control plane
├── integration/     # End-to-end Docker-based tests (see integration/README.md)
├── proto/           # Protocol buffer definitions
├── gen/             # Generated code (buf output — do not edit)
├── docs/            # User and ACL reference documentation
└── packaging/       # Distribution packaging
```

### `hscontrol/` packages

- `app.go`, `handlers.go`, `grpcv1.go`, `noise.go`, `auth.go`, `oidc.go`,
  `poll.go`, `metrics.go`, `debug.go`, `tailsql.go`, `platform_config.go`
  — top-level server files
- `state/` — central coordinator (`state.go`) and the copy-on-write
  `NodeStore` (`node_store.go`). All cross-subsystem operations go
  through `State`.
- `db/` — GORM layer, migrations, schema. `node.go`, `users.go`,
  `api_key.go`, `preauth_keys.go`, `ip.go`, `policy.go`.
- `mapper/` — streaming batcher that distributes MapResponses to
  clients: `batcher.go`, `node_conn.go`, `builder.go`, `mapper.go`.
  Performance-critical.
- `policy/` — `policy/v2/` is **the** policy implementation. The
  top-level `policy.go` is thin wrappers. There is no v1 directory.
- `routes/`, `dns/`, `derp/`, `types/`, `util/`, `templates/`, `capver/`
  — routing, MagicDNS, relay, core types, helpers, client templates,
  capability versioning.
- `servertest/` — in-memory test harness for server-level tests that
  don't need Docker. Prefer this over `integration/` when possible.
- `assets/` — embedded UI assets.

### `cmd/hi/` files

`main.go`, `run.go`, `doctor.go`, `docker.go`, `cleanup.go`, `stats.go`,
`README.md`. **Read `cmd/hi/README.md` before running any `hi` command.**

## Architecture Essentials

- **`hscontrol/state/state.go`** is the central coordinator. Cross-cutting
  operations (node updates, policy evaluation, IP allocation) go through
  the `State` type, not directly to the database.
- **`NodeStore`** in `hscontrol/state/node_store.go` is a copy-on-write
  in-memory cache backed by `atomic.Pointer[Snapshot]`. Every read is a
  pointer load; writes rebuild a new snapshot and atomically swap. It is
  the hot path for `MapRequest` processing and peer visibility.
- **The map-request sync point** is
  `State.UpdateNodeFromMapRequest()` in
  `hscontrol/state/state.go:2351`. This is where Hostinfo changes,
  endpoint updates, and route advertisements land in the NodeStore.
- **Mapper subsystem** streams MapResponses via `batcher.go` and
  `node_conn.go`. Changes here affect all connected clients.
- **Node registration flow**: noise handshake (`noise.go`) → auth
  (`auth.go`) → state/DB persistence (`state/`, `db/`) → initial map
  (`mapper/`).
## Database Migration Rules

These rules are load-bearing — violating them corrupts production
databases. The `migrationsRequiringFKDisabled` map in
`hscontrol/db/db.go:962` is frozen as of 2025-07-02 (see the comment at
`db.go:989`). All new migrations must:

1. **Never reorder existing migrations.** Migration order is immutable
   once committed.
2. **Only add new migrations to the end** of the migrations array.
3. **Never disable foreign keys.** No new entries in
   `migrationsRequiringFKDisabled`.
4. **Use the migration ID format** `YYYYMMDDHHMM-short-description`
   (timestamp + descriptive suffix). Example: `202602201200-clear-tagged-node-user-id`.
5. **Never rename columns** that later migrations reference. Let
   `AutoMigrate` create a new column if needed.

## Tags-as-Identity

Headscale enforces **tags XOR user ownership**: every node is either
tagged (owned by tags) or user-owned (owned by a user namespace), never
both. This is a load-bearing architectural invariant.

- **Use `node.IsTagged()`** (`hscontrol/types/node.go:221`) to determine
  ownership, not `node.UserID().Valid()`. A tagged node may still have
  `UserID` set for "created by" tracking — `IsTagged()` is authoritative.
- `IsUserOwned()` (`node.go:227`) returns `!IsTagged()`.
- Tagged nodes are presented to Tailscale as the special
  `TaggedDevices` user (`hscontrol/types/users.go`, ID `2147455555`).
- `SetTags` validation is enforced by `validateNodeOwnership()` in
  `hscontrol/state/tags.go`.
- Examples and edge cases live in `hscontrol/types/node_tags_test.go`
  and `hscontrol/grpcv1_test.go` (`TestSetTags_*`).

**Don't do this**:

```go
if node.UserID().Valid() { /* assume user-owned */ }     // WRONG
if node.UserID().Valid() && !node.IsTagged() { /* ok */ } // correct
```

## Policy Engine

`hscontrol/policy/v2/policy.go` is the policy implementation. The
top-level `hscontrol/policy/policy.go` contains only wrapper functions
around v2. There is no v1 directory.

Key concepts an agent will encounter:

- **Autogroups**: `autogroup:self`, `autogroup:member`, `autogroup:internet`
- **Tag owners**: IP-based authorization for who can claim a tag
- **Route approvals**: auto-approval of subnet routes by policy
- **SSH policies**: SSH access control via grants
- **HuJSON** parsing for policy files

For usage examples, read `hscontrol/policy/v2/policy_test.go`. For ACL
reference documentation, see `docs/`.

## Integration Testing

**Before running any `hi` command, read `cmd/hi/README.md` in full.**
Guessing at `hi` flags leads to broken runs and stale containers.

Test-authoring patterns (`EventuallyWithT`, `IntegrationSkip`, helper
variants, scenario setup) are documented in `integration/README.md`.

Key reminders:

- Integration test functions **must** start with `IntegrationSkip(t)`.
- External calls (`client.Status`, `headscale.ListNodes`, etc.) belong
  inside `EventuallyWithT`; state-mutating commands (`tailscale set`)
  must not.
- Tests generate ~100 MB of logs per run under `control_logs/{runID}/`.
  Prune old runs if disk is tight.
- Flakes are almost always code, not infrastructure. Read `hs-*.stderr.log`
  before blaming Docker.

## Code Conventions

- **Commit messages** follow Go-style `package: imperative description`.
  Recent examples from `git log`:

  - `db: scope DestroyUser to only delete the target user's pre-auth keys`
  - `state: fix policy change race in UpdateNodeFromMapRequest`
  - `integration: fix ACL tests for address-family-specific resolve`

  Not Conventional Commits. No `feat:`/`chore:`/`docs:` prefixes.

- **Protobuf regeneration**: changes under `proto/` require
  `make generate` (which runs `buf generate`) and should land in a
  **separate commit** from the callers that use the regenerated types.
- **Formatting** is enforced by `golangci-lint` with `golines` (width 88)
  and `gofumpt`. Run `make fmt` or rely on the pre-commit hook.
- **Logging** uses `zerolog`. Prefer single-line chains
  (`log.Info().Str(...).Msg(...)`). For 4+ fields or conditional fields,
  build incrementally and **reassign** the event variable:
  `e = e.Str("k", v)`. Forgetting to reassign silently drops the field.
- **Tests**: prefer `hscontrol/servertest/` for server-level tests that
  don't need Docker — faster than full integration tests.

## Gotchas

- **Database**: SQLite for local dev, PostgreSQL for integration-heavy
  tests (`go run ./cmd/hi run "..." --postgres`). Some race conditions
  only surface on one backend.
- **NodeStore writes** rebuild a full snapshot. Measure before changing
  hot-path code.
- **`.claude/agents/` is deprecated.** Do not create new agent files
  there. Put behavioural guidance in this file and procedural guidance
  in the nearest README.
- **Do not edit `gen/`** — it is regenerated from `proto/` by
  `make generate`.
- **Proto changes + code changes should be two commits**, not one.
CHANGELOG.md (1,025 lines changed; diff collapsed)

```diff
@@ -1,6 +1,6 @@
 # For testing purposes only
 
-FROM golang:alpine AS build-env
+FROM golang:1.26.2-alpine AS build-env
 
 WORKDIR /go/src
 
@@ -12,7 +12,7 @@ WORKDIR /go/src/tailscale
 ARG TARGETARCH
 RUN GOARCH=$TARGETARCH go install -v ./cmd/derper
 
-FROM alpine:3.18
+FROM alpine:3.22
 RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
 
 COPY --from=build-env /go/bin/* /usr/local/bin/
```
```diff
@@ -2,25 +2,43 @@
 # and are in no way endorsed by Headscale's maintainers as an
 # official nor supported release or distribution.
 
-FROM docker.io/golang:1.23-bookworm
+FROM docker.io/golang:1.26.1-trixie AS builder
 ARG VERSION=dev
 ENV GOPATH /go
 WORKDIR /go/src/headscale
 
-RUN apt-get update \
-	&& apt-get install --no-install-recommends --yes less jq sqlite3 dnsutils \
-	&& rm -rf /var/lib/apt/lists/* \
-	&& apt-get clean
-RUN mkdir -p /var/run/headscale
+# Install delve debugger first - rarely changes, good cache candidate
+RUN go install github.com/go-delve/delve/cmd/dlv@latest
 
+# Download dependencies - only invalidated when go.mod/go.sum change
 COPY go.mod go.sum /go/src/headscale/
 RUN go mod download
 
+# Copy source and build - invalidated on any source change
 COPY . .
 
-RUN CGO_ENABLED=0 GOOS=linux go install -ldflags="-s -w -X github.com/juanfont/headscale/cmd/headscale/cli.Version=$VERSION" -a ./cmd/headscale && test -e /go/bin/headscale
+# Build debug binary with debug symbols for delve
+RUN CGO_ENABLED=0 GOOS=linux go build -gcflags="all=-N -l" -o /go/bin/headscale ./cmd/headscale
+
+# Runtime stage
+FROM debian:trixie-slim
+
+RUN apt-get --update install --no-install-recommends --yes \
+	bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
+	&& apt-get dist-clean
+
+RUN mkdir -p /var/run/headscale
+
+# Copy binaries from builder
+COPY --from=builder /go/bin/headscale /usr/local/bin/headscale
+COPY --from=builder /go/bin/dlv /usr/local/bin/dlv
+
+# Copy source code for delve source-level debugging
+COPY --from=builder /go/src/headscale /go/src/headscale
+
+WORKDIR /go/src/headscale
+
 # Need to reset the entrypoint or everything will run as a busybox script
 ENTRYPOINT []
-EXPOSE 8080/tcp
-CMD ["headscale"]
+EXPOSE 8080/tcp 40000/tcp
+CMD ["dlv", "--listen=0.0.0.0:40000", "--headless=true", "--api-version=2", "--accept-multiclient", "exec", "/usr/local/bin/headscale", "--"]
```
Dockerfile.integration-ci (new file, 17 lines)

```diff
@@ -0,0 +1,17 @@
+# Minimal CI image - expects pre-built headscale binary in build context
+# For local development with delve debugging, use Dockerfile.integration instead
+
+FROM debian:trixie-slim
+
+RUN apt-get --update install --no-install-recommends --yes \
+	bash ca-certificates curl dnsutils findutils iproute2 jq less procps python3 sqlite3 \
+	&& apt-get dist-clean
+
+RUN mkdir -p /var/run/headscale
+
+# Copy pre-built headscale binary from build context
+COPY headscale /usr/local/bin/headscale
+
+ENTRYPOINT []
+EXPOSE 8080/tcp
+CMD ["/usr/local/bin/headscale"]
```
```diff
@@ -4,7 +4,7 @@
 # This Dockerfile is more or less lifted from tailscale/tailscale
 # to ensure a similar build process when testing the HEAD of tailscale.
 
-FROM golang:1.23-alpine AS build-env
+FROM golang:1.26.2-alpine AS build-env
 
 WORKDIR /go/src
 
@@ -36,8 +36,10 @@ RUN GOARCH=$TARGETARCH go install -tags="${BUILD_TAGS}" -ldflags="\
 	-X tailscale.com/version.gitCommitStamp=$VERSION_GIT_HASH" \
 	-v ./cmd/tailscale ./cmd/tailscaled ./cmd/containerboot
 
-FROM alpine:3.18
-RUN apk add --no-cache ca-certificates iptables iproute2 ip6tables curl
+FROM alpine:3.22
+# Upstream: ca-certificates ip6tables iptables iproute2
+# Tests: curl python3 (traceroute via BusyBox)
+RUN apk add --no-cache ca-certificates curl ip6tables iptables iproute2 python3
 
 COPY --from=build-env /go/bin/* /usr/local/bin/
 # For compat with the previous run.sh, although ideally you should be
```
Makefile (169 lines changed)

```diff
@@ -1,64 +1,135 @@
-# Calculate version
-version ?= $(shell git describe --always --tags --dirty)
+# Headscale Makefile
+# Modern Makefile following best practices
 
-rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d))
+# Version calculation
+VERSION ?= $(shell git describe --always --tags --dirty)
 
-# Determine if OS supports pie
+# Build configuration
 GOOS ?= $(shell uname | tr '[:upper:]' '[:lower:]')
-ifeq ($(filter $(GOOS), openbsd netbsd soloaris plan9), )
-	pieflags = -buildmode=pie
-else
+ifeq ($(filter $(GOOS), openbsd netbsd solaris plan9), )
+	PIE_FLAGS = -buildmode=pie
 endif
 
-# GO_SOURCES = $(wildcard *.go)
-# PROTO_SOURCES = $(wildcard **/*.proto)
-GO_SOURCES = $(call rwildcard,,*.go)
-PROTO_SOURCES = $(call rwildcard,,*.proto)
+# Tool availability check with nix warning
+define check_tool
+	@command -v $(1) >/dev/null 2>&1 || { \
+		echo "Warning: $(1) not found. Run 'nix develop' to ensure all dependencies are available."; \
+		exit 1; \
+	}
+endef
 
+# Source file collections using shell find for better performance
+GO_SOURCES := $(shell find . -name '*.go' -not -path './gen/*' -not -path './vendor/*')
+PROTO_SOURCES := $(shell find . -name '*.proto' -not -path './gen/*' -not -path './vendor/*')
+PRETTIER_SOURCES := $(shell find . \( -name '*.md' -o -name '*.yaml' -o -name '*.yml' -o -name '*.ts' -o -name '*.js' -o -name '*.html' -o -name '*.css' -o -name '*.scss' -o -name '*.sass' \) -not -path './gen/*' -not -path './vendor/*' -not -path './node_modules/*')
+
+# Default target
+.PHONY: all
+all: lint test build
+
+# Dependency checking
+.PHONY: check-deps
+check-deps:
+	$(call check_tool,go)
+	$(call check_tool,golangci-lint)
+	$(call check_tool,gofumpt)
+	$(call check_tool,mdformat)
+	$(call check_tool,prettier)
+	$(call check_tool,clang-format)
+	$(call check_tool,buf)
+
+# Build targets
+.PHONY: build
+build: check-deps $(GO_SOURCES) go.mod go.sum
+	@echo "Building headscale..."
+	go build $(PIE_FLAGS) -ldflags "-X main.version=$(VERSION)" -o headscale ./cmd/headscale
+
+# Test targets
+.PHONY: test
+test: check-deps $(GO_SOURCES) go.mod go.sum
+	@echo "Running Go tests..."
+	go test -race ./...
 
-build:
-	nix build
+# Formatting targets
+.PHONY: fmt
+fmt: fmt-go fmt-mdformat fmt-prettier fmt-proto
 
-dev: lint test build
-
-test:
-	gotestsum -- -short -race -coverprofile=coverage.out ./...
-
-test_integration:
-	docker run \
-		-t --rm \
-		-v ~/.cache/hs-integration-go:/go \
-		--name headscale-test-suite \
-		-v $$PWD:$$PWD -w $$PWD/integration \
-		-v /var/run/docker.sock:/var/run/docker.sock \
-		-v $$PWD/control_logs:/tmp/control \
-		golang:1 \
-		go run gotest.tools/gotestsum@latest -- -race -failfast ./... -timeout 120m -parallel 8
-
-lint:
-	golangci-lint run --fix --timeout 10m
-
-fmt: fmt-go fmt-prettier fmt-proto
-
-fmt-prettier:
-	prettier --write '**/**.{ts,js,md,yaml,yml,sass,css,scss,html}'
-	prettier --write --print-width 80 --prose-wrap always CHANGELOG.md
-
-fmt-go:
-	# TODO(kradalby): Reeval if we want to use 88 in the future.
-	# golines --max-len=88 --base-formatter=gofumpt -w $(GO_SOURCES)
+.PHONY: fmt-go
+fmt-go: check-deps $(GO_SOURCES)
+	@echo "Formatting Go code..."
 	gofumpt -l -w .
 	golangci-lint run --fix
 
-fmt-proto:
+.PHONY: fmt-mdformat
+fmt-mdformat: check-deps
+	@echo "Formatting documentation..."
+	mdformat docs/
+
+.PHONY: fmt-prettier
+fmt-prettier: check-deps $(PRETTIER_SOURCES)
+	@echo "Formatting markup and config files..."
+	prettier --write '**/*.{ts,js,md,yaml,yml,sass,css,scss,html}'
+
+.PHONY: fmt-proto
+fmt-proto: check-deps $(PROTO_SOURCES)
+	@echo "Formatting Protocol Buffer files..."
 	clang-format -i $(PROTO_SOURCES)
 
-proto-lint:
-	cd proto/ && go run github.com/bufbuild/buf/cmd/buf lint
+# Linting targets
+.PHONY: lint
+lint: lint-go lint-proto
 
-compress: build
-	upx --brute headscale
+.PHONY: lint-go
+lint-go: check-deps $(GO_SOURCES) go.mod go.sum
+	@echo "Linting Go code..."
+	golangci-lint run --timeout 10m
 
-generate:
-	rm -rf gen
-	buf generate proto
+.PHONY: lint-proto
+lint-proto: check-deps $(PROTO_SOURCES)
+	@echo "Linting Protocol Buffer files..."
+	cd proto/ && buf lint
+
+# Code generation
+.PHONY: generate
+generate: check-deps
+	@echo "Generating code..."
+	go generate ./...
+
+# Clean targets
+.PHONY: clean
+clean:
+	rm -rf headscale gen
+
+# Development workflow
+.PHONY: dev
+dev: fmt lint test build
+
+# Help target
+.PHONY: help
+help:
+	@echo "Headscale Development Makefile"
+	@echo ""
+	@echo "Main targets:"
+	@echo "  all          - Run lint, test, and build (default)"
+	@echo "  build        - Build headscale binary"
+	@echo "  test         - Run Go tests"
+	@echo "  fmt          - Format all code (Go, docs, proto)"
+	@echo "  lint         - Lint all code (Go, proto)"
+	@echo "  generate     - Generate code from Protocol Buffers"
+	@echo "  dev          - Full development workflow (fmt + lint + test + build)"
+	@echo "  clean        - Clean build artifacts"
+	@echo ""
+	@echo "Specific targets:"
+	@echo "  fmt-go       - Format Go code only"
+	@echo "  fmt-mdformat - Format documentation only"
+	@echo "  fmt-prettier - Format markup and config files only"
+	@echo "  fmt-proto    - Format Protocol Buffer files only"
+	@echo "  lint-go      - Lint Go code only"
+	@echo "  lint-proto   - Lint Protocol Buffer files only"
+	@echo ""
+	@echo "Dependencies:"
+	@echo "  check-deps   - Verify required tools are available"
+	@echo ""
+	@echo "Note: If not running in a nix shell, ensure dependencies are available:"
+	@echo "  nix develop"
```
README.md (61 lines changed)

````diff
@@ -1,4 +1,4 @@
-![headscale logo](…)
+![headscale logo](…)
 
@@ -7,8 +7,12 @@ An open source, self-hosted implementation of the Tailscale control server.
 Join our [Discord server](https://discord.gg/c84AZQhmpx) for a chat.
 
 **Note:** Always select the same GitHub tag as the released version you use
-to ensure you have the correct example configuration and documentation.
-The `main` branch might contain unreleased changes.
+to ensure you have the correct example configuration. The `main` branch might
+contain unreleased changes. The documentation is available for stable and
+development versions:
+
+- [Documentation for the stable version](https://headscale.net/stable/)
+- [Documentation for the development version](https://headscale.net/development/)
 
 ## What is Tailscale
 
@@ -32,12 +36,12 @@ organisation.
 
 ## Design goal
 
-Headscale aims to implement a self-hosted, open source alternative to the Tailscale
-control server.
-Headscale's goal is to provide self-hosters and hobbyists with an open-source
-server they can use for their projects and labs.
-It implements a narrow scope, a single Tailnet, suitable for a personal use, or a small
-open-source organisation.
+Headscale aims to implement a self-hosted, open source alternative to the
+[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
+provide self-hosters and hobbyists with an open-source server they can use for
+their projects and labs. It implements a narrow scope, a _single_ Tailscale
+network (tailnet), suitable for a personal use, or a small open-source
+organisation.
 
 ## Supporting Headscale
 
@@ -59,8 +63,18 @@ and container to run Headscale.**
 
 Please have a look at the [`documentation`](https://headscale.net/stable/).
 
+For NixOS users, a module is available in [`nix/`](./nix/).
+
+## Builds from `main`
+
+Development builds from the `main` branch are available as container images and
+binaries. See the [development builds](https://headscale.net/stable/setup/install/main/)
+documentation for details.
+
 ## Talks
 
+- Fosdem 2026 (video): [Headscale & Tailscale: The complementary open source clone](https://fosdem.org/2026/schedule/event/KYQ3LL-headscale-the-complementary-open-source-clone/)
+  - presented by Kristoffer Dalby
 - Fosdem 2023 (video): [Headscale: How we are using integration testing to reimplement Tailscale](https://fosdem.org/2023/schedule/event/goheadscale/)
   - presented by Juan Font Alonso and Kristoffer Dalby
@@ -99,6 +113,8 @@ run `make lint` and `make fmt` before committing any code.
 The **Proto** code is linted with [`buf`](https://docs.buf.build/lint/overview) and
 formatted with [`clang-format`](https://clang.llvm.org/docs/ClangFormat.html).
 
+The **docs** are formatted with [`mdformat`](https://mdformat.readthedocs.io).
+
 The **rest** (Markdown, YAML, etc) is formatted with [`prettier`](https://prettier.io).
 
 Check out the `.golangci.yaml` and `Makefile` to see the specific configuration.
@@ -134,16 +150,31 @@ make test
 
 To build the program:
 
-```shell
-nix build
-```
-
-or
-
 ```shell
 make build
 ```
 
+### Development workflow
+
+We recommend using Nix for dependency management to ensure you have all required tools. If you prefer to manage dependencies yourself, you can use Make directly:
+
+**With Nix (recommended):**
+
+```shell
+nix develop
+make test
+make build
+```
+
+**With your own dependencies:**
+
+```shell
+make test
+make build
+```
+
+The Makefile will warn you if any required tools are missing and suggest running `nix develop`. Run `make help` to see all available targets.
+
 ## Contributors
 
 <a href="https://github.com/juanfont/headscale/graphs/contributors">
````
```diff
@@ -1,69 +0,0 @@
-package main
-
-//go:generate go run ./main.go
-
-import (
-	"bytes"
-	"fmt"
-	"log"
-	"os/exec"
-	"strings"
-)
-
-func findTests() []string {
-	rgBin, err := exec.LookPath("rg")
-	if err != nil {
-		log.Fatalf("failed to find rg (ripgrep) binary")
-	}
-
-	args := []string{
-		"--regexp", "func (Test.+)\\(.*",
-		"../../integration/",
-		"--replace", "$1",
-		"--sort", "path",
-		"--no-line-number",
-		"--no-filename",
-		"--no-heading",
-	}
-
-	cmd := exec.Command(rgBin, args...)
-	var out bytes.Buffer
-	cmd.Stdout = &out
-	err = cmd.Run()
-	if err != nil {
-		log.Fatalf("failed to run command: %s", err)
-	}
-
-	tests := strings.Split(strings.TrimSpace(out.String()), "\n")
-	return tests
-}
-
-func updateYAML(tests []string) {
-	testsForYq := fmt.Sprintf("[%s]", strings.Join(tests, ", "))
-
-	yqCommand := fmt.Sprintf(
-		"yq eval '.jobs.integration-test.strategy.matrix.test = %s' ../../.github/workflows/test-integration.yaml -i",
-		testsForYq,
-	)
-	cmd := exec.Command("bash", "-c", yqCommand)
-
-	var out bytes.Buffer
-	cmd.Stdout = &out
-	err := cmd.Run()
-	if err != nil {
-		log.Fatalf("failed to run yq command: %s", err)
-	}
-
-	fmt.Println("YAML file updated successfully")
-}
-
-func main() {
-	tests := findTests()
-
-	quotedTests := make([]string, len(tests))
-	for i, test := range tests {
-		quotedTests[i] = fmt.Sprintf("\"%s\"", test)
-	}
-
-	updateYAML(quotedTests)
-}
```
```diff
@@ -1,21 +1,18 @@
 package cli
 
 import (
+	"context"
 	"fmt"
 	"strconv"
-	"time"
 
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
 	"github.com/juanfont/headscale/hscontrol/util"
-	"github.com/prometheus/common/model"
 	"github.com/pterm/pterm"
-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-	"google.golang.org/protobuf/types/known/timestamppb"
 )
 
 const (
-	// 90 days.
+	// DefaultAPIKeyExpiry is 90 days.
 	DefaultAPIKeyExpiry = "90d"
 )
 
@@ -29,15 +26,11 @@ func init() {
 	apiKeysCmd.AddCommand(createAPIKeyCmd)
 
 	expireAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
-	if err := expireAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	expireAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
 	apiKeysCmd.AddCommand(expireAPIKeyCmd)
 
 	deleteAPIKeyCmd.Flags().StringP("prefix", "p", "", "ApiKey prefix")
-	if err := deleteAPIKeyCmd.MarkFlagRequired("prefix"); err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	deleteAPIKeyCmd.Flags().Uint64P("id", "i", 0, "ApiKey ID")
 	apiKeysCmd.AddCommand(deleteAPIKeyCmd)
 }
 
@@ -51,55 +44,35 @@ var listAPIKeys = &cobra.Command{
 	Use:     "list",
 	Short:   "List the Api keys for headscale",
 	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ListApiKeysRequest{}
-
-		response, err := client.ListApiKeys(ctx, request)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		response, err := client.ListApiKeys(ctx, &v1.ListApiKeysRequest{})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting the list of keys: %s", err),
-				output,
-			)
+			return fmt.Errorf("listing api keys: %w", err)
 		}
 
-		if output != "" {
-			SuccessOutput(response.GetApiKeys(), "", output)
-		}
-
-		tableData := pterm.TableData{
-			{"ID", "Prefix", "Expiration", "Created"},
-		}
-		for _, key := range response.GetApiKeys() {
-			expiration := "-"
-
-			if key.GetExpiration() != nil {
-				expiration = ColourTime(key.GetExpiration().AsTime())
-			}
-
-			tableData = append(tableData, []string{
-				strconv.FormatUint(key.GetId(), util.Base10),
-				key.GetPrefix(),
-				expiration,
-				key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
-			})
-
-		}
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+		return printListOutput(cmd, response.GetApiKeys(), func() error {
+			tableData := pterm.TableData{
+				{"ID", "Prefix", "Expiration", "Created"},
+			}
+
+			for _, key := range response.GetApiKeys() {
+				expiration := "-"
+
+				if key.GetExpiration() != nil {
+					expiration = ColourTime(key.GetExpiration().AsTime())
+				}
+
+				tableData = append(tableData, []string{
+					strconv.FormatUint(key.GetId(), util.Base10),
+					key.GetPrefix(),
+					expiration,
+					key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
+				})
+			}
+
+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
 }
 
 var createAPIKeyCmd = &cobra.Command{
@@ -110,113 +83,79 @@ Creates a new Api key, the Api key is only visible on creation
 and cannot be retrieved again.
 If you loose a key, create a new one and revoke (expire) the old one.`,
 	Aliases: []string{"c", "new"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		request := &v1.CreateApiKeyRequest{}
-
-		durationStr, _ := cmd.Flags().GetString("expiration")
-
-		duration, err := model.ParseDuration(durationStr)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		expiration, err := expirationFromFlag(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Could not parse duration: %s\n", err),
-				output,
-			)
+			return err
 		}
 
-		expiration := time.Now().UTC().Add(time.Duration(duration))
-
-		request.Expiration = timestamppb.New(expiration)
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.CreateApiKey(ctx, request)
+		response, err := client.CreateApiKey(ctx, &v1.CreateApiKeyRequest{
+			Expiration: expiration,
+		})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot create Api Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("creating api key: %w", err)
 		}
 
-		SuccessOutput(response.GetApiKey(), response.GetApiKey(), output)
-	},
+		return printOutput(cmd, response.GetApiKey(), response.GetApiKey())
+	}),
+}
+
+// apiKeyIDOrPrefix reads --id and --prefix from cmd and validates that
+// exactly one is provided.
+func apiKeyIDOrPrefix(cmd *cobra.Command) (uint64, string, error) {
+	id, _ := cmd.Flags().GetUint64("id")
+	prefix, _ := cmd.Flags().GetString("prefix")
+
+	switch {
+	case id == 0 && prefix == "":
+		return 0, "", fmt.Errorf("either --id or --prefix must be provided: %w", errMissingParameter)
+	case id != 0 && prefix != "":
+		return 0, "", fmt.Errorf("only one of --id or --prefix can be provided: %w", errMissingParameter)
+	}
+
+	return id, prefix, nil
 }
 
 var expireAPIKeyCmd = &cobra.Command{
 	Use:     "expire",
 	Short:   "Expire an ApiKey",
 	Aliases: []string{"revoke", "exp", "e"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		prefix, err := cmd.Flags().GetString("prefix")
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, prefix, err := apiKeyIDOrPrefix(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
-				output,
-			)
+			return err
 		}
 
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ExpireApiKeyRequest{
-			Prefix: prefix,
-		}
-
-		response, err := client.ExpireApiKey(ctx, request)
+		response, err := client.ExpireApiKey(ctx, &v1.ExpireApiKeyRequest{
+			Id:     id,
+			Prefix: prefix,
+		})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot expire Api Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("expiring api key: %w", err)
 		}
 
-		SuccessOutput(response, "Key expired", output)
-	},
+		return printOutput(cmd, response, "Key expired")
+	}),
 }
 
 var deleteAPIKeyCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete an ApiKey",
 	Aliases: []string{"remove", "del"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		prefix, err := cmd.Flags().GetString("prefix")
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, prefix, err := apiKeyIDOrPrefix(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting prefix from CLI flag: %s", err),
-				output,
-			)
+			return err
 		}
 
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.DeleteApiKeyRequest{
-			Prefix: prefix,
-		}
-
-		response, err := client.DeleteApiKey(ctx, request)
+		response, err := client.DeleteApiKey(ctx, &v1.DeleteApiKeyRequest{
+			Id:     id,
+			Prefix: prefix,
+		})
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot delete Api Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("deleting api key: %w", err)
 		}
 
-		SuccessOutput(response, "Key deleted", output)
-	},
+		return printOutput(cmd, response, "Key deleted")
+	}),
 }
```
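The mutually-exclusive flag validation introduced in the diff above can be exercised on its own. This is a minimal, self-contained sketch: the function body is copied from the diff, while `errMissingParameter` (defined elsewhere in the real package) is declared locally here for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// In the real package this sentinel error is defined elsewhere.
var errMissingParameter = errors.New("missing parameter")

// idOrPrefix mirrors the apiKeyIDOrPrefix helper from the diff:
// exactly one of id (--id) or prefix (--prefix) must be set.
func idOrPrefix(id uint64, prefix string) (uint64, string, error) {
	switch {
	case id == 0 && prefix == "":
		return 0, "", fmt.Errorf("either --id or --prefix must be provided: %w", errMissingParameter)
	case id != 0 && prefix != "":
		return 0, "", fmt.Errorf("only one of --id or --prefix can be provided: %w", errMissingParameter)
	}

	return id, prefix, nil
}

func main() {
	_, _, err := idOrPrefix(0, "") // neither set: rejected
	fmt.Println(err != nil)        // true

	id, _, err := idOrPrefix(42, "") // exactly one set: accepted
	fmt.Println(id, err)             // 42 <nil>
}
```

Using a `switch` with no operand keeps the two rejection cases symmetric, and wrapping the sentinel with `%w` lets callers test `errors.Is(err, errMissingParameter)` regardless of which case fired.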
cmd/headscale/cli/auth.go (new file, 93 lines)
@@ -0,0 +1,93 @@
+package cli
+
+import (
+	"context"
+	"fmt"
+
+	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
+	"github.com/spf13/cobra"
+)
+
+func init() {
+	rootCmd.AddCommand(authCmd)
+
+	authRegisterCmd.Flags().StringP("user", "u", "", "User")
+	authRegisterCmd.Flags().String("auth-id", "", "Auth ID")
+	mustMarkRequired(authRegisterCmd, "user", "auth-id")
+	authCmd.AddCommand(authRegisterCmd)
+
+	authApproveCmd.Flags().String("auth-id", "", "Auth ID")
+	mustMarkRequired(authApproveCmd, "auth-id")
+	authCmd.AddCommand(authApproveCmd)
+
+	authRejectCmd.Flags().String("auth-id", "", "Auth ID")
+	mustMarkRequired(authRejectCmd, "auth-id")
+	authCmd.AddCommand(authRejectCmd)
+}
+
+var authCmd = &cobra.Command{
+	Use:   "auth",
+	Short: "Manage node authentication and approval",
+}
+
+var authRegisterCmd = &cobra.Command{
+	Use:   "register",
+	Short: "Register a node to your network",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetString("user")
+		authID, _ := cmd.Flags().GetString("auth-id")
+
+		request := &v1.AuthRegisterRequest{
+			AuthId: authID,
+			User:   user,
+		}
+
+		response, err := client.AuthRegister(ctx, request)
+		if err != nil {
+			return fmt.Errorf("registering node: %w", err)
+		}
+
+		return printOutput(
+			cmd,
+			response.GetNode(),
+			fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
+	}),
+}
+
+var authApproveCmd = &cobra.Command{
+	Use:   "approve",
+	Short: "Approve a pending authentication request",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		authID, _ := cmd.Flags().GetString("auth-id")

+		request := &v1.AuthApproveRequest{
+			AuthId: authID,
+		}
+
+		response, err := client.AuthApprove(ctx, request)
+		if err != nil {
+			return fmt.Errorf("approving auth request: %w", err)
+		}
+
+		return printOutput(cmd, response, "Auth request approved")
+	}),
+}
+
+var authRejectCmd = &cobra.Command{
+	Use:   "reject",
+	Short: "Reject a pending authentication request",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		authID, _ := cmd.Flags().GetString("auth-id")
+
+		request := &v1.AuthRejectRequest{
+			AuthId: authID,
+		}
+
+		response, err := client.AuthReject(ctx, request)
+		if err != nil {
+			return fmt.Errorf("rejecting auth request: %w", err)
+		}
+
+		return printOutput(cmd, response, "Auth request rejected")
+	}),
+}
@@ -1,7 +1,8 @@
 package cli

 import (
-	"github.com/rs/zerolog/log"
+	"fmt"
+
 	"github.com/spf13/cobra"
 )

@@ -13,10 +14,12 @@ var configTestCmd = &cobra.Command{
 	Use:   "configtest",
 	Short: "Test the configuration.",
 	Long:  "Run a test of the configuration and exit.",
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		_, err := newHeadscaleServerWithConfig()
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msg("Error initializing")
+			return fmt.Errorf("configuration error: %w", err)
 		}
+
+		return nil
 	},
 }
@@ -1,48 +1,22 @@
 package cli

 import (
+	"context"
 	"fmt"

 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/rs/zerolog/log"
+	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/spf13/cobra"
-	"google.golang.org/grpc/status"
-	"tailscale.com/types/key"
 )

-const (
-	errPreAuthKeyMalformed = Error("key is malformed. expected 64 hex characters with `nodekey` prefix")
-)
-
-// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
-type Error string
-
-func (e Error) Error() string { return string(e) }
-
 func init() {
 	rootCmd.AddCommand(debugCmd)

 	createNodeCmd.Flags().StringP("name", "", "", "Name")
-	err := createNodeCmd.MarkFlagRequired("name")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	createNodeCmd.Flags().StringP("user", "u", "", "User")
-
-	createNodeCmd.Flags().StringP("namespace", "n", "", "User")
-	createNodeNamespaceFlag := createNodeCmd.Flags().Lookup("namespace")
-	createNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	createNodeNamespaceFlag.Hidden = true
-
-	err = createNodeCmd.MarkFlagRequired("user")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	createNodeCmd.Flags().StringP("key", "k", "", "Key")
-	err = createNodeCmd.MarkFlagRequired("key")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	mustMarkRequired(createNodeCmd, "name", "user", "key")
 	createNodeCmd.Flags().
 		StringSliceP("route", "r", []string{}, "List (or repeated flags) of routes to advertise")

@@ -57,58 +31,21 @@ var debugCmd = &cobra.Command{

 var createNodeCmd = &cobra.Command{
 	Use:   "create-node",
-	Short: "Create a node that can be registered with `nodes register <>` command",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	Short: "Create a node that can be registered with `auth register <>` command",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetString("user")
+		name, _ := cmd.Flags().GetString("name")
+		registrationID, _ := cmd.Flags().GetString("key")

-		user, err := cmd.Flags().GetString("user")
+		_, err := types.AuthIDFromString(registrationID)
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
+			return fmt.Errorf("parsing machine key: %w", err)
 		}

-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		name, err := cmd.Flags().GetString("name")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting node from flag: %s", err),
-				output,
-			)
-		}
-
-		machineKey, err := cmd.Flags().GetString("key")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting key from flag: %s", err),
-				output,
-			)
-		}
-
-		var mkey key.MachinePublic
-		err = mkey.UnmarshalText([]byte(machineKey))
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to parse machine key from flag: %s", err),
-				output,
-			)
-		}
-
-		routes, err := cmd.Flags().GetStringSlice("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting routes from flag: %s", err),
-				output,
-			)
-		}
+		routes, _ := cmd.Flags().GetStringSlice("route")

 		request := &v1.DebugCreateNodeRequest{
-			Key:    machineKey,
+			Key:    registrationID,
 			Name:   name,
 			User:   user,
 			Routes: routes,
@@ -116,13 +53,9 @@ var createNodeCmd = &cobra.Command{

 		response, err := client.DebugCreateNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot create node: %s", status.Convert(err).Message()),
-				output,
-			)
+			return fmt.Errorf("creating node: %w", err)
 		}

-		SuccessOutput(response.GetNode(), "Node created", output)
-	},
+		return printOutput(cmd, response.GetNode(), "Node created")
+	}),
 }
@@ -15,14 +15,12 @@ var dumpConfigCmd = &cobra.Command{
 	Use:    "dumpConfig",
 	Short:  "dump current config to /etc/headscale/config.dump.yaml, integration test only",
 	Hidden: true,
-	Args: func(cmd *cobra.Command, args []string) error {
-		return nil
-	},
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		err := viper.WriteConfigAs("/etc/headscale/config.dump.yaml")
 		if err != nil {
-			//nolint
-			fmt.Println("Failed to dump config")
+			return fmt.Errorf("dumping config: %w", err)
 		}
+
+		return nil
 	},
 }
@@ -21,22 +21,17 @@ var generateCmd = &cobra.Command{
 var generatePrivateKeyCmd = &cobra.Command{
 	Use:   "private-key",
 	Short: "Generate a private key for the headscale server",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: func(cmd *cobra.Command, args []string) error {
 		machineKey := key.NewMachine()

 		machineKeyStr, err := machineKey.MarshalText()
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine key from flag: %s", err),
-				output,
-			)
+			return fmt.Errorf("marshalling machine key: %w", err)
 		}

-		SuccessOutput(map[string]string{
+		return printOutput(cmd, map[string]string{
 			"private_key": string(machineKeyStr),
 		},
-			string(machineKeyStr), output)
+			string(machineKeyStr))
 	},
 }
cmd/headscale/cli/health.go (new file, 27 lines)
@@ -0,0 +1,27 @@
+package cli
+
+import (
+	"context"
+	"fmt"
+
+	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
+	"github.com/spf13/cobra"
+)
+
+func init() {
+	rootCmd.AddCommand(healthCmd)
+}
+
+var healthCmd = &cobra.Command{
+	Use:   "health",
+	Short: "Check the health of the Headscale server",
+	Long:  "Check the health of the Headscale server. This command will return an exit code of 0 if the server is healthy, or 1 if it is not.",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		response, err := client.Health(ctx, &v1.HealthRequest{})
+		if err != nil {
+			return fmt.Errorf("checking health: %w", err)
+		}
+
+		return printOutput(cmd, response, "")
+	}),
+}
@@ -1,6 +1,7 @@
 package cli

 import (
+	"context"
 	"encoding/json"
 	"fmt"
 	"net"
@@ -9,15 +10,22 @@ import (
 	"strconv"
 	"time"

+	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
 	"github.com/oauth2-proxy/mockoidc"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 )

+// Error is used to compare errors as per https://dave.cheney.net/2016/04/07/constant-errors
+type Error string
+
+func (e Error) Error() string { return string(e) }
+
 const (
 	errMockOidcClientIDNotDefined     = Error("MOCKOIDC_CLIENT_ID not defined")
 	errMockOidcClientSecretNotDefined = Error("MOCKOIDC_CLIENT_SECRET not defined")
 	errMockOidcPortNotDefined         = Error("MOCKOIDC_PORT not defined")
+	errMockOidcUsersNotDefined        = Error("MOCKOIDC_USERS not defined")
 	refreshTTL                        = 60 * time.Minute
 )

@@ -31,12 +39,13 @@ var mockOidcCmd = &cobra.Command{
 	Use:   "mockoidc",
 	Short: "Runs a mock OIDC server for testing",
 	Long:  "This internal command runs a OpenID Connect for testing purposes",
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		err := mockOIDC()
 		if err != nil {
-			log.Error().Err(err).Msgf("Error running mock OIDC server")
-			os.Exit(1)
+			return fmt.Errorf("running mock OIDC server: %w", err)
 		}
+
+		return nil
 	},
 }

@@ -45,41 +54,47 @@ func mockOIDC() error {
 	if clientID == "" {
 		return errMockOidcClientIDNotDefined
 	}

 	clientSecret := os.Getenv("MOCKOIDC_CLIENT_SECRET")
 	if clientSecret == "" {
 		return errMockOidcClientSecretNotDefined
 	}

 	addrStr := os.Getenv("MOCKOIDC_ADDR")
 	if addrStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	portStr := os.Getenv("MOCKOIDC_PORT")
 	if portStr == "" {
 		return errMockOidcPortNotDefined
 	}

 	accessTTLOverride := os.Getenv("MOCKOIDC_ACCESS_TTL")
 	if accessTTLOverride != "" {
 		newTTL, err := time.ParseDuration(accessTTLOverride)
 		if err != nil {
 			return err
 		}

 		accessTTL = newTTL
 	}

 	userStr := os.Getenv("MOCKOIDC_USERS")
 	if userStr == "" {
-		return fmt.Errorf("MOCKOIDC_USERS not defined")
+		return errMockOidcUsersNotDefined
 	}

 	var users []mockoidc.MockUser

 	err := json.Unmarshal([]byte(userStr), &users)
 	if err != nil {
 		return fmt.Errorf("unmarshalling users: %w", err)
 	}

-	log.Info().Interface("users", users).Msg("loading users from JSON")
+	log.Info().Interface(zf.Users, users).Msg("loading users from JSON")

-	log.Info().Msgf("Access token TTL: %s", accessTTL)
+	log.Info().Msgf("access token TTL: %s", accessTTL)

 	port, err := strconv.Atoi(portStr)
 	if err != nil {
@@ -91,7 +106,7 @@ func mockOIDC() error {
 		return err
 	}

-	listener, err := net.Listen("tcp", fmt.Sprintf("%s:%d", addrStr, port))
+	listener, err := new(net.ListenConfig).Listen(context.Background(), "tcp", fmt.Sprintf("%s:%d", addrStr, port))
 	if err != nil {
 		return err
 	}
@@ -100,8 +115,10 @@ func mockOIDC() error {
 	if err != nil {
 		return err
 	}
-	log.Info().Msgf("Mock OIDC server listening on %s", listener.Addr().String())
-	log.Info().Msgf("Issuer: %s", mock.Issuer())
+
+	log.Info().Msgf("mock OIDC server listening on %s", listener.Addr().String())
+	log.Info().Msgf("issuer: %s", mock.Issuer())

 	c := make(chan struct{})
 	<-c

@@ -132,12 +149,13 @@ func getMockOIDC(clientID string, clientSecret string, users []mockoidc.MockUser
 		ErrorQueue: &mockoidc.ErrorQueue{},
 	}

-	mock.AddMiddleware(func(h http.Handler) http.Handler {
+	_ = mock.AddMiddleware(func(h http.Handler) http.Handler {
 		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-			log.Info().Msgf("Request: %+v", r)
+			log.Info().Msgf("request: %+v", r)
 			h.ServeHTTP(w, r)

 			if r.Response != nil {
-				log.Info().Msgf("Response: %+v", r.Response)
+				log.Info().Msgf("response: %+v", r.Response)
 			}
 		})
 	})
|||||||
@@ -1,281 +1,217 @@
|
|||||||
package cli
|
package cli
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"context"
|
||||||
"fmt"
|
"fmt"
|
||||||
"log"
|
|
||||||
"net/netip"
|
"net/netip"
|
||||||
"slices"
|
|
||||||
"strconv"
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
survey "github.com/AlecAivazis/survey/v2"
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/juanfont/headscale/hscontrol/util"
|
"github.com/juanfont/headscale/hscontrol/util"
|
||||||
"github.com/pterm/pterm"
|
"github.com/pterm/pterm"
|
||||||
|
"github.com/samber/lo"
|
||||||
"github.com/spf13/cobra"
|
"github.com/spf13/cobra"
|
||||||
"google.golang.org/grpc/status"
|
"google.golang.org/protobuf/types/known/timestamppb"
|
||||||
"tailscale.com/types/key"
|
"tailscale.com/types/key"
|
||||||
)
|
)
|
||||||
|
|
||||||
func init() {
|
func init() {
|
||||||
rootCmd.AddCommand(nodeCmd)
|
rootCmd.AddCommand(nodeCmd)
|
||||||
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
|
listNodesCmd.Flags().StringP("user", "u", "", "Filter by user")
|
||||||
listNodesCmd.Flags().BoolP("tags", "t", false, "Show tags")
|
|
||||||
|
|
||||||
listNodesCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
listNodesNamespaceFlag := listNodesCmd.Flags().Lookup("namespace")
|
|
||||||
listNodesNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
listNodesNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
nodeCmd.AddCommand(listNodesCmd)
|
nodeCmd.AddCommand(listNodesCmd)
|
||||||
|
|
||||||
|
listNodeRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
nodeCmd.AddCommand(listNodeRoutesCmd)
|
||||||
|
|
||||||
registerNodeCmd.Flags().StringP("user", "u", "", "User")
|
registerNodeCmd.Flags().StringP("user", "u", "", "User")
|
||||||
|
|
||||||
registerNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
registerNodeNamespaceFlag := registerNodeCmd.Flags().Lookup("namespace")
|
|
||||||
registerNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
registerNodeNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err := registerNodeCmd.MarkFlagRequired("user")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
|
registerNodeCmd.Flags().StringP("key", "k", "", "Key")
|
||||||
err = registerNodeCmd.MarkFlagRequired("key")
|
mustMarkRequired(registerNodeCmd, "user", "key")
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
nodeCmd.AddCommand(registerNodeCmd)
|
nodeCmd.AddCommand(registerNodeCmd)
|
||||||
|
|
||||||
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
expireNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
err = expireNodeCmd.MarkFlagRequired("identifier")
|
expireNodeCmd.Flags().StringP("expiry", "e", "", "Set expire to (RFC3339 format, e.g. 2025-08-27T10:00:00Z), or leave empty to expire immediately.")
|
||||||
if err != nil {
|
expireNodeCmd.Flags().BoolP("disable", "d", false, "Disable key expiry (node will never expire)")
|
||||||
log.Fatal(err.Error())
|
mustMarkRequired(expireNodeCmd, "identifier")
|
||||||
}
|
|
||||||
nodeCmd.AddCommand(expireNodeCmd)
|
nodeCmd.AddCommand(expireNodeCmd)
|
||||||
|
|
||||||
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
renameNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
err = renameNodeCmd.MarkFlagRequired("identifier")
|
mustMarkRequired(renameNodeCmd, "identifier")
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
nodeCmd.AddCommand(renameNodeCmd)
|
nodeCmd.AddCommand(renameNodeCmd)
|
||||||
|
|
||||||
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
deleteNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
err = deleteNodeCmd.MarkFlagRequired("identifier")
|
mustMarkRequired(deleteNodeCmd, "identifier")
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
nodeCmd.AddCommand(deleteNodeCmd)
|
nodeCmd.AddCommand(deleteNodeCmd)
|
||||||
|
|
||||||
moveNodeCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
|
||||||
|
|
||||||
err = moveNodeCmd.MarkFlagRequired("identifier")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
|
|
||||||
moveNodeCmd.Flags().StringP("user", "u", "", "New user")
|
|
||||||
|
|
||||||
moveNodeCmd.Flags().StringP("namespace", "n", "", "User")
|
|
||||||
moveNodeNamespaceFlag := moveNodeCmd.Flags().Lookup("namespace")
|
|
||||||
moveNodeNamespaceFlag.Deprecated = deprecateNamespaceMessage
|
|
||||||
moveNodeNamespaceFlag.Hidden = true
|
|
||||||
|
|
||||||
err = moveNodeCmd.MarkFlagRequired("user")
|
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
nodeCmd.AddCommand(moveNodeCmd)
|
|
||||||
|
|
||||||
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
tagCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
mustMarkRequired(tagCmd, "identifier")
|
||||||
err = tagCmd.MarkFlagRequired("identifier")
|
tagCmd.Flags().StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
|
||||||
if err != nil {
|
|
||||||
log.Fatal(err.Error())
|
|
||||||
}
|
|
||||||
tagCmd.Flags().
|
|
||||||
StringSliceP("tags", "t", []string{}, "List of tags to add to the node")
|
|
||||||
nodeCmd.AddCommand(tagCmd)
|
nodeCmd.AddCommand(tagCmd)
|
||||||
|
|
||||||
|
approveRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
|
||||||
|
mustMarkRequired(approveRoutesCmd, "identifier")
|
||||||
|
approveRoutesCmd.Flags().StringSliceP("routes", "r", []string{}, `List of routes that will be approved (comma-separated, e.g. "10.0.0.0/8,192.168.0.0/24" or empty string to remove all approved routes)`)
|
||||||
|
nodeCmd.AddCommand(approveRoutesCmd)
|
||||||
|
|
||||||
nodeCmd.AddCommand(backfillNodeIPsCmd)
|
nodeCmd.AddCommand(backfillNodeIPsCmd)
|
||||||
}
|
}
|
||||||
|
|
||||||
var nodeCmd = &cobra.Command{
|
var nodeCmd = &cobra.Command{
|
||||||
Use: "nodes",
|
Use: "nodes",
|
||||||
Short: "Manage the nodes of Headscale",
|
Short: "Manage the nodes of Headscale",
|
||||||
Aliases: []string{"node", "machine", "machines"},
|
Aliases: []string{"node"},
|
||||||
}
|
}
|
||||||
|
|
||||||
var registerNodeCmd = &cobra.Command{
|
var registerNodeCmd = &cobra.Command{
|
||||||
Use: "register",
|
Use: "register",
|
||||||
Short: "Registers a node to your network",
|
Short: "Registers a node to your network",
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
Deprecated: "use 'headscale auth register --auth-id <id> --user <user>' instead",
|
||||||
output, _ := cmd.Flags().GetString("output")
|
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
||||||
user, err := cmd.Flags().GetString("user")
|
user, _ := cmd.Flags().GetString("user")
|
||||||
if err != nil {
|
registrationID, _ := cmd.Flags().GetString("key")
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
|
||||||
}
|
|
||||||
|
|
||||||
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
|
||||||
defer cancel()
|
|
||||||
defer conn.Close()
|
|
||||||
|
|
||||||
machineKey, err := cmd.Flags().GetString("key")
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Error getting node key from flag: %s", err),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
request := &v1.RegisterNodeRequest{
|
request := &v1.RegisterNodeRequest{
|
||||||
Key: machineKey,
|
Key: registrationID,
|
||||||
User: user,
|
User: user,
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.RegisterNode(ctx, request)
|
response, err := client.RegisterNode(ctx, request)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(
|
return fmt.Errorf("registering node: %w", err)
|
||||||
err,
|
|
||||||
fmt.Sprintf(
|
|
||||||
"Cannot register node: %s\n",
|
|
||||||
status.Convert(err).Message(),
|
|
||||||
),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
SuccessOutput(
|
return printOutput(
|
||||||
|
cmd,
|
||||||
response.GetNode(),
|
response.GetNode(),
|
||||||
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()), output)
|
fmt.Sprintf("Node %s registered", response.GetNode().GetGivenName()))
|
||||||
},
|
}),
|
||||||
}
|
}
|
||||||
|
|
||||||
var listNodesCmd = &cobra.Command{
|
var listNodesCmd = &cobra.Command{
|
||||||
Use: "list",
|
Use: "list",
|
||||||
Short: "List nodes",
|
Short: "List nodes",
|
||||||
Aliases: []string{"ls", "show"},
|
Aliases: []string{"ls", "show"},
|
||||||
Run: func(cmd *cobra.Command, args []string) {
|
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
||||||
output, _ := cmd.Flags().GetString("output")
|
user, _ := cmd.Flags().GetString("user")
|
||||||
user, err := cmd.Flags().GetString("user")
|
|
||||||
|
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{User: user})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
|
return fmt.Errorf("listing nodes: %w", err)
|
||||||
}
|
}
|
||||||
showTags, err := cmd.Flags().GetBool("tags")
|
|
||||||
|
return printListOutput(cmd, response.GetNodes(), func() error {
|
||||||
|
tableData, err := nodesToPtables(user, response.GetNodes())
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("converting to table: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
||||||
|
})
|
||||||
|
}),
|
||||||
|
}
|
||||||
|
|
||||||
|
var listNodeRoutesCmd = &cobra.Command{
|
||||||
|
Use: "list-routes",
|
||||||
|
Short: "List routes available on nodes",
|
||||||
|
Aliases: []string{"lsr", "routes"},
|
||||||
|
RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
|
||||||
|
identifier, _ := cmd.Flags().GetUint64("identifier")
|
||||||
|
|
||||||
|
response, err := client.ListNodes(ctx, &v1.ListNodesRequest{})
|
||||||
if err != nil {
|
if err != nil {
|
||||||
ErrorOutput(err, fmt.Sprintf("Error getting tags flag: %s", err), output)
|
return fmt.Errorf("listing nodes: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
|
nodes := response.GetNodes()
|
||||||
defer cancel()
|
if identifier != 0 {
|
||||||
defer conn.Close()
|
for _, node := range response.GetNodes() {
|
||||||
|
if node.GetId() == identifier {
|
||||||
|
nodes = []*v1.Node{node}
|
||||||
|
|
||||||
request := &v1.ListNodesRequest{
|
break
|
||||||
User: user,
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
response, err := client.ListNodes(ctx, request)
|
nodes = lo.Filter(nodes, func(n *v1.Node, _ int) bool {
|
||||||
if err != nil {
|
return (n.GetSubnetRoutes() != nil && len(n.GetSubnetRoutes()) > 0) || (n.GetApprovedRoutes() != nil && len(n.GetApprovedRoutes()) > 0) || (n.GetAvailableRoutes() != nil && len(n.GetAvailableRoutes()) > 0)
|
||||||
ErrorOutput(
|
})
|
||||||
err,
|
|
||||||
fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
if output != "" {
|
return printListOutput(cmd, nodes, func() error {
|
||||||
SuccessOutput(response.GetNodes(), "", output)
|
return pterm.DefaultTable.WithHasHeader().WithData(nodeRoutesToPtables(nodes)).Render()
|
||||||
}
|
})
|
||||||
|
}),
|
||||||
tableData, err := nodesToPtables(user, showTags, response.GetNodes())
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
|
|
||||||
}
|
|
||||||
|
|
||||||
err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
|
|
||||||
if err != nil {
|
|
||||||
ErrorOutput(
|
|
||||||
err,
|
|
||||||
fmt.Sprintf("Failed to render pterm table: %s", err),
|
|
||||||
output,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
|
|
||||||
 var expireNodeCmd = &cobra.Command{
 	Use:     "expire",
 	Short:   "Expire (log out) a node in your network",
-	Long:    "Expiring a node will keep the node in the database and force it to reauthenticate.",
+	Long: `Expiring a node will keep the node in the database and force it to reauthenticate.
+
+Use --disable to disable key expiry (node will never expire).`,
 	Aliases: []string{"logout", "exp", "e"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		disableExpiry, _ := cmd.Flags().GetBool("disable")
+
+		// Handle disable expiry - node will never expire.
+		if disableExpiry {
+			request := &v1.ExpireNodeRequest{
+				NodeId:        identifier,
+				DisableExpiry: true,
+			}
+
+			response, err := client.ExpireNode(ctx, request)
+			if err != nil {
+				return fmt.Errorf("disabling node expiry: %w", err)
+			}
+
+			return printOutput(cmd, response.GetNode(), "Node expiry disabled")
+		}
+
+		expiry, _ := cmd.Flags().GetString("expiry")
+
+		now := time.Now()
+
+		expiryTime := now
+		if expiry != "" {
+			var err error
+			expiryTime, err = time.Parse(time.RFC3339, expiry)
+			if err != nil {
+				return fmt.Errorf("parsing expiry time: %w", err)
+			}
+		}
 
 		request := &v1.ExpireNodeRequest{
 			NodeId: identifier,
+			Expiry: timestamppb.New(expiryTime),
 		}
 
 		response, err := client.ExpireNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot expire node: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
+			return fmt.Errorf("expiring node: %w", err)
 		}
 
-		SuccessOutput(response.GetNode(), "Node expired", output)
-	},
+		if now.Equal(expiryTime) || now.After(expiryTime) {
+			return printOutput(cmd, response.GetNode(), "Node expired")
+		}
+
+		return printOutput(cmd, response.GetNode(), "Node expiration updated")
+	}),
 }
 
 var renameNodeCmd = &cobra.Command{
 	Use:   "rename NEW_NAME",
 	Short: "Renames a node in your network",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
 
 		newName := ""
 		if len(args) > 0 {
 			newName = args[0]
 		}
 
 		request := &v1.RenameNodeRequest{
 			NodeId:  identifier,
 			NewName: newName,
@@ -283,43 +219,19 @@ var renameNodeCmd = &cobra.Command{
 
 		response, err := client.RenameNode(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot rename node: %s\n",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
+			return fmt.Errorf("renaming node: %w", err)
 		}
 
-		SuccessOutput(response.GetNode(), "Node renamed", output)
-	},
+		return printOutput(cmd, response.GetNode(), "Node renamed")
+	}),
 }
 
 var deleteNodeCmd = &cobra.Command{
 	Use:     "delete",
 	Short:   "Delete a node",
 	Aliases: []string{"del"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
 
 		getRequest := &v1.GetNodeRequest{
 			NodeId: identifier,
@@ -327,139 +239,31 @@ var deleteNodeCmd = &cobra.Command{
 
 		getResponse, err := client.GetNode(ctx, getRequest)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Error getting node node: %s",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
+			return fmt.Errorf("getting node: %w", err)
 		}
 
 		deleteRequest := &v1.DeleteNodeRequest{
 			NodeId: identifier,
 		}
 
-		confirm := false
-		force, _ := cmd.Flags().GetBool("force")
-		if !force {
-			prompt := &survey.Confirm{
-				Message: fmt.Sprintf(
-					"Do you want to remove the node %s?",
-					getResponse.GetNode().GetName(),
-				),
-			}
-			err = survey.AskOne(prompt, &confirm)
-			if err != nil {
-				return
-			}
+		if !confirmAction(cmd, fmt.Sprintf(
+			"Do you want to remove the node %s?",
+			getResponse.GetNode().GetName(),
+		)) {
+			return printOutput(cmd, map[string]string{"Result": "Node not deleted"}, "Node not deleted")
 		}
 
-		if confirm || force {
-			response, err := client.DeleteNode(ctx, deleteRequest)
-			if output != "" {
-				SuccessOutput(response, "", output)
-
-				return
-			}
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf(
-						"Error deleting node: %s",
-						status.Convert(err).Message(),
-					),
-					output,
-				)
-
-				return
-			}
-			SuccessOutput(
-				map[string]string{"Result": "Node deleted"},
-				"Node deleted",
-				output,
-			)
-		} else {
-			SuccessOutput(map[string]string{"Result": "Node not deleted"}, "Node not deleted", output)
-		}
-	},
-}
-
-var moveNodeCmd = &cobra.Command{
-	Use:     "move",
-	Short:   "Move node to another user",
-	Aliases: []string{"mv"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		identifier, err := cmd.Flags().GetUint64("identifier")
+		_, err = client.DeleteNode(ctx, deleteRequest)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting user: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		getRequest := &v1.GetNodeRequest{
-			NodeId: identifier,
-		}
-
-		_, err = client.GetNode(ctx, getRequest)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Error getting node: %s",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
-		}
-
-		moveRequest := &v1.MoveNodeRequest{
-			NodeId: identifier,
-			User:   user,
-		}
-
-		moveResponse, err := client.MoveNode(ctx, moveRequest)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Error moving node: %s",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
-
-			return
-		}
-
-		SuccessOutput(moveResponse.GetNode(), "Node moved to another user", output)
-	},
+			return fmt.Errorf("deleting node: %w", err)
+		}
+
+		return printOutput(
+			cmd,
+			map[string]string{"Result": "Node deleted"},
+			"Node deleted",
+		)
+	}),
 }
 
 var backfillNodeIPsCmd = &cobra.Command{
@@ -477,45 +281,29 @@ all nodes that are missing.
 If you remove IPv4 or IPv6 prefixes from the config,
 it can be run to remove the IPs that should no longer
 be assigned to nodes.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		var err error
-		output, _ := cmd.Flags().GetString("output")
-
-		confirm := false
-		prompt := &survey.Confirm{
-			Message: "Are you sure that you want to assign/remove IPs to/from nodes?",
+	RunE: func(cmd *cobra.Command, args []string) error {
+		if !confirmAction(cmd, "Are you sure that you want to assign/remove IPs to/from nodes?") {
+			return nil
 		}
-		err = survey.AskOne(prompt, &confirm)
+
+		ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
 		if err != nil {
-			return
+			return fmt.Errorf("connecting to headscale: %w", err)
 		}
-		if confirm {
-			ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-			defer cancel()
-			defer conn.Close()
+		defer cancel()
+		defer conn.Close()
 
-			changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: confirm})
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf(
-						"Error backfilling IPs: %s",
-						status.Convert(err).Message(),
-					),
-					output,
-				)
-
-				return
-			}
-
-			SuccessOutput(changes, "Node IPs backfilled successfully", output)
+		changes, err := client.BackfillNodeIPs(ctx, &v1.BackfillNodeIPsRequest{Confirmed: true})
+		if err != nil {
+			return fmt.Errorf("backfilling IPs: %w", err)
 		}
+
+		return printOutput(cmd, changes, "Node IPs backfilled successfully")
 	},
 }
 
 func nodesToPtables(
 	currentUser string,
-	showTags bool,
 	nodes []*v1.Node,
 ) (pterm.TableData, error) {
 	tableHeader := []string{
@@ -525,6 +313,7 @@ func nodesToPtables(
 		"MachineKey",
 		"NodeKey",
 		"User",
+		"Tags",
 		"IP addresses",
 		"Ephemeral",
 		"Last seen",
@@ -532,13 +321,6 @@ func nodesToPtables(
 		"Connected",
 		"Expired",
 	}
-	if showTags {
-		tableHeader = append(tableHeader, []string{
-			"ForcedTags",
-			"InvalidTags",
-			"ValidTags",
-		}...)
-	}
 	tableData := pterm.TableData{tableHeader}
 
 	for _, node := range nodes {
@@ -547,23 +329,30 @@ func nodesToPtables(
 			ephemeral = true
 		}
 
-		var lastSeen time.Time
-		var lastSeenTime string
+		var (
+			lastSeen     time.Time
+			lastSeenTime string
+		)
 		if node.GetLastSeen() != nil {
 			lastSeen = node.GetLastSeen().AsTime()
-			lastSeenTime = lastSeen.Format("2006-01-02 15:04:05")
+			lastSeenTime = lastSeen.Format(HeadscaleDateTimeFormat)
 		}
 
-		var expiry time.Time
-		var expiryTime string
+		var (
+			expiry     time.Time
+			expiryTime string
+		)
 		if node.GetExpiry() != nil {
 			expiry = node.GetExpiry().AsTime()
-			expiryTime = expiry.Format("2006-01-02 15:04:05")
+			expiryTime = expiry.Format(HeadscaleDateTimeFormat)
 		} else {
 			expiryTime = "N/A"
 		}
 
 		var machineKey key.MachinePublic
 		err := machineKey.UnmarshalText(
 			[]byte(node.GetMachineKey()),
 		)
@@ -572,6 +361,7 @@ func nodesToPtables(
 		}
 
 		var nodeKey key.NodePublic
 		err = nodeKey.UnmarshalText(
 			[]byte(node.GetNodeKey()),
 		)
@@ -587,50 +377,39 @@ func nodesToPtables(
 		}
 
 		var expired string
-		if expiry.IsZero() || expiry.After(time.Now()) {
-			expired = pterm.LightGreen("no")
-		} else {
+		if node.GetExpiry() != nil && node.GetExpiry().AsTime().Before(time.Now()) {
 			expired = pterm.LightRed("yes")
+		} else {
+			expired = pterm.LightGreen("no")
 		}
 
-		var forcedTags string
-		for _, tag := range node.GetForcedTags() {
-			forcedTags += "," + tag
+		var tagsBuilder strings.Builder
+		for _, tag := range node.GetTags() {
+			tagsBuilder.WriteString("\n" + tag)
 		}
-		forcedTags = strings.TrimLeft(forcedTags, ",")
-		var invalidTags string
-		for _, tag := range node.GetInvalidTags() {
-			if !slices.Contains(node.GetForcedTags(), tag) {
-				invalidTags += "," + pterm.LightRed(tag)
-			}
-		}
-		invalidTags = strings.TrimLeft(invalidTags, ",")
-		var validTags string
-		for _, tag := range node.GetValidTags() {
-			if !slices.Contains(node.GetForcedTags(), tag) {
-				validTags += "," + pterm.LightGreen(tag)
-			}
-		}
-		validTags = strings.TrimLeft(validTags, ",")
+		tags := strings.TrimLeft(tagsBuilder.String(), "\n")
 
 		var user string
-		if currentUser == "" || (currentUser == node.GetUser().GetName()) {
-			user = pterm.LightMagenta(node.GetUser().GetName())
-		} else {
-			// Shared into this user
-			user = pterm.LightYellow(node.GetUser().GetName())
+		if node.GetUser() != nil {
+			user = node.GetUser().GetName()
 		}
 
-		var IPV4Address string
-		var IPV6Address string
+		var ipBuilder strings.Builder
 		for _, addr := range node.GetIpAddresses() {
-			if netip.MustParseAddr(addr).Is4() {
-				IPV4Address = addr
-			} else {
-				IPV6Address = addr
+			ip, err := netip.ParseAddr(addr)
+			if err == nil {
+				if ipBuilder.Len() > 0 {
+					ipBuilder.WriteString("\n")
+				}
+
+				ipBuilder.WriteString(ip.String())
 			}
 		}
 
+		ipAddresses := ipBuilder.String()
+
 		nodeData := []string{
 			strconv.FormatUint(node.GetId(), util.Base10),
 			node.GetName(),
@@ -638,16 +417,14 @@ func nodesToPtables(
 			machineKey.ShortString(),
 			nodeKey.ShortString(),
 			user,
-			strings.Join([]string{IPV4Address, IPV6Address}, ", "),
+			tags,
+			ipAddresses,
 			strconv.FormatBool(ephemeral),
 			lastSeenTime,
 			expiryTime,
 			online,
 			expired,
 		}
-		if showTags {
-			nodeData = append(nodeData, []string{forcedTags, invalidTags, validTags}...)
-		}
 		tableData = append(
 			tableData,
 			nodeData,
@@ -657,60 +434,76 @@ func nodesToPtables(
 	return tableData, nil
 }
 
+func nodeRoutesToPtables(
+	nodes []*v1.Node,
+) pterm.TableData {
+	tableHeader := []string{
+		"ID",
+		"Hostname",
+		"Approved",
+		"Available",
+		"Serving (Primary)",
+	}
+	tableData := pterm.TableData{tableHeader}
+
+	for _, node := range nodes {
+		nodeData := []string{
+			strconv.FormatUint(node.GetId(), util.Base10),
+			node.GetGivenName(),
+			strings.Join(node.GetApprovedRoutes(), "\n"),
+			strings.Join(node.GetAvailableRoutes(), "\n"),
+			strings.Join(node.GetSubnetRoutes(), "\n"),
+		}
+		tableData = append(
+			tableData,
+			nodeData,
+		)
+	}
+
+	return tableData
+}
 
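The reworked `nodesToPtables` builds newline-separated table cells with `strings.Builder`, skipping addresses that fail `netip.ParseAddr`. The same pattern in isolation (the helper name `joinLines` is ours):

```go
package main

import (
	"fmt"
	"net/netip"
	"strings"
)

// joinLines reproduces the cell-building pattern from the diff:
// entries that do not parse as IP addresses are skipped, and the
// survivors are separated with newlines so pterm renders them as a
// multi-line cell.
func joinLines(addrs []string) string {
	var b strings.Builder
	for _, a := range addrs {
		ip, err := netip.ParseAddr(a)
		if err != nil {
			continue // silently skip unparseable addresses, as the command does
		}
		if b.Len() > 0 {
			b.WriteString("\n")
		}
		b.WriteString(ip.String())
	}
	return b.String()
}

func main() {
	fmt.Printf("%q\n", joinLines([]string{"100.64.0.1", "bogus", "fd7a::1"}))
}
```

Compared with the old two-variable `IPV4Address`/`IPV6Address` approach, this keeps every address a node holds instead of only the last of each family.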
 var tagCmd = &cobra.Command{
 	Use:     "tag",
 	Short:   "Manage the tags of a node",
 	Aliases: []string{"tags", "t"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		// retrieve flags from CLI
-		identifier, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error converting ID to integer: %s", err),
-				output,
-			)
-
-			return
-		}
-		tagsToSet, err := cmd.Flags().GetStringSlice("tags")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error retrieving list of tags to add to node, %v", err),
-				output,
-			)
-
-			return
-		}
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		tagsToSet, _ := cmd.Flags().GetStringSlice("tags")
 
 		// Sending tags to node
 		request := &v1.SetTagsRequest{
 			NodeId: identifier,
 			Tags:   tagsToSet,
 		}
 
 		resp, err := client.SetTags(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error while sending tags to headscale: %s", err),
-				output,
-			)
-
-			return
+			return fmt.Errorf("setting tags: %w", err)
 		}
 
-		if resp != nil {
-			SuccessOutput(
-				resp.GetNode(),
-				"Node updated",
-				output,
-			)
-		}
-	},
+		return printOutput(cmd, resp.GetNode(), "Node updated")
+	}),
+}
+
+var approveRoutesCmd = &cobra.Command{
+	Use:   "approve-routes",
+	Short: "Manage the approved routes of a node",
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		identifier, _ := cmd.Flags().GetUint64("identifier")
+		routes, _ := cmd.Flags().GetStringSlice("routes")
+
+		// Sending routes to node
+		request := &v1.SetApprovedRoutesRequest{
+			NodeId: identifier,
+			Routes: routes,
+		}
+
+		resp, err := client.SetApprovedRoutes(ctx, request)
+		if err != nil {
+			return fmt.Errorf("setting approved routes: %w", err)
+		}
+
+		return printOutput(cmd, resp.GetNode(), "Node updated")
+	}),
 }
|||||||
@@ -1,24 +1,55 @@
 package cli
 
 import (
+	"errors"
 	"fmt"
-	"io"
 	"os"
 
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/rs/zerolog/log"
+	"github.com/juanfont/headscale/hscontrol/db"
+	"github.com/juanfont/headscale/hscontrol/policy"
+	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/spf13/cobra"
+	"tailscale.com/types/views"
 )
 
+const (
+	bypassFlag = "bypass-grpc-and-access-database-directly" //nolint:gosec // not a credential
+)
+
+var errAborted = errors.New("command aborted by user")
+
+// bypassDatabase loads the server config and opens the database directly,
+// bypassing the gRPC server. The caller is responsible for closing the
+// returned database handle.
+func bypassDatabase() (*db.HSDatabase, error) {
+	cfg, err := types.LoadServerConfig()
+	if err != nil {
+		return nil, fmt.Errorf("loading config: %w", err)
+	}
+
+	d, err := db.NewHeadscaleDatabase(cfg)
+	if err != nil {
+		return nil, fmt.Errorf("opening database: %w", err)
+	}
+
+	return d, nil
+}
+
 func init() {
 	rootCmd.AddCommand(policyCmd)
 
+	getPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
 	policyCmd.AddCommand(getPolicy)
 
 	setPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
-	if err := setPolicy.MarkFlagRequired("file"); err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
+	setPolicy.Flags().BoolP(bypassFlag, "", false, "Uses the headscale config to directly access the database, bypassing gRPC and does not require the server to be running")
+	mustMarkRequired(setPolicy, "file")
 	policyCmd.AddCommand(setPolicy)
+
+	checkPolicy.Flags().StringP("file", "f", "", "Path to a policy file in HuJSON format")
+	mustMarkRequired(checkPolicy, "file")
+	policyCmd.AddCommand(checkPolicy)
 }
 
 var policyCmd = &cobra.Command{
@@ -30,23 +61,46 @@ var getPolicy = &cobra.Command{
 	Use:     "get",
 	Short:   "Print the current ACL Policy",
 	Aliases: []string{"show", "view", "fetch"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: func(cmd *cobra.Command, args []string) error {
+		var policyData string
+		if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
+			if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
+				return errAborted
+			}
 
-		request := &v1.GetPolicyRequest{}
+			d, err := bypassDatabase()
+			if err != nil {
+				return err
+			}
+			defer d.Close()
 
-		response, err := client.GetPolicy(ctx, request)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Failed loading ACL Policy: %s", err), output)
+			pol, err := d.GetPolicy()
+			if err != nil {
+				return fmt.Errorf("loading policy from database: %w", err)
+			}
+
+			policyData = pol.Data
+		} else {
+			ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
+			if err != nil {
+				return fmt.Errorf("connecting to headscale: %w", err)
+			}
+			defer cancel()
+			defer conn.Close()
+
+			response, err := client.GetPolicy(ctx, &v1.GetPolicyRequest{})
+			if err != nil {
+				return fmt.Errorf("loading ACL policy: %w", err)
+			}
+
+			policyData = response.GetPolicy()
 		}
 
-		// TODO(pallabpain): Maybe print this better?
-		// This does not pass output as we dont support yaml, json or json-line
-		// output for this command. It is HuJSON already.
-		SuccessOutput("", response.GetPolicy(), "")
+		// This does not pass output format as we don't support yaml, json or
+		// json-line output for this command. It is HuJSON already.
+		fmt.Println(policyData)
+
+		return nil
 	},
 }
 
@@ -57,31 +111,79 @@ var setPolicy = &cobra.Command{
 Updates the existing ACL Policy with the provided policy. The policy must be a valid HuJSON object.
This command only works when the acl.policy_mode is set to "db", and the policy will be stored in the database.`,
 	Aliases: []string{"put", "update"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
+	RunE: func(cmd *cobra.Command, args []string) error {
 		policyPath, _ := cmd.Flags().GetString("file")
 
-		f, err := os.Open(policyPath)
+		policyBytes, err := os.ReadFile(policyPath)
 		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error opening the policy file: %s", err), output)
-		}
-		defer f.Close()
-
-		policyBytes, err := io.ReadAll(f)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error reading the policy file: %s", err), output)
+			return fmt.Errorf("reading policy file: %w", err)
 		}
 
-		request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
+		if bypass, _ := cmd.Flags().GetBool(bypassFlag); bypass {
+			if !confirmAction(cmd, "DO NOT run this command if an instance of headscale is running, are you sure headscale is not running?") {
+				return errAborted
+			}
 
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+			d, err := bypassDatabase()
+			if err != nil {
+				return err
+			}
+			defer d.Close()
 
-		if _, err := client.SetPolicy(ctx, request); err != nil {
-			ErrorOutput(err, fmt.Sprintf("Failed to set ACL Policy: %s", err), output)
+			users, err := d.ListUsers()
+			if err != nil {
+				return fmt.Errorf("loading users for policy validation: %w", err)
+			}
+
+			_, err = policy.NewPolicyManager(policyBytes, users, views.Slice[types.NodeView]{})
+			if err != nil {
+				return fmt.Errorf("parsing policy file: %w", err)
+			}
+
+			_, err = d.SetPolicy(string(policyBytes))
+			if err != nil {
+				return fmt.Errorf("setting ACL policy: %w", err)
+			}
+		} else {
+			request := &v1.SetPolicyRequest{Policy: string(policyBytes)}
+
+			ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
+			if err != nil {
+				return fmt.Errorf("connecting to headscale: %w", err)
+			}
+			defer cancel()
+			defer conn.Close()
+
+			_, err = client.SetPolicy(ctx, request)
+			if err != nil {
+				return fmt.Errorf("setting ACL policy: %w", err)
+			}
 		}
 
-		SuccessOutput(nil, "Policy updated.", "")
+		fmt.Println("Policy updated.")
+
+		return nil
+	},
+}
+
+var checkPolicy = &cobra.Command{
+	Use:   "check",
+	Short: "Check the Policy file for errors",
+	RunE: func(cmd *cobra.Command, args []string) error {
+		policyPath, _ := cmd.Flags().GetString("file")
+
+		policyBytes, err := os.ReadFile(policyPath)
+		if err != nil {
+			return fmt.Errorf("reading policy file: %w", err)
+		}
+
+		_, err = policy.NewPolicyManager(policyBytes, nil, views.Slice[types.NodeView]{})
+		if err != nil {
+			return fmt.Errorf("parsing policy file: %w", err)
+		}
+
+		fmt.Println("Policy is valid")
+
+		return nil
 	},
 }

@@ -1,17 +1,15 @@
 package cli
 
 import (
+	"context"
 	"fmt"
 	"strconv"
 	"strings"
-	"time"
 
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/prometheus/common/model"
+	"github.com/juanfont/headscale/hscontrol/util"
 	"github.com/pterm/pterm"
-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-	"google.golang.org/protobuf/types/known/timestamppb"
 )
 
 const (
@@ -20,20 +18,10 @@ const (
 
 func init() {
 	rootCmd.AddCommand(preauthkeysCmd)
-	preauthkeysCmd.PersistentFlags().StringP("user", "u", "", "User")
-
-	preauthkeysCmd.PersistentFlags().StringP("namespace", "n", "", "User")
-	pakNamespaceFlag := preauthkeysCmd.PersistentFlags().Lookup("namespace")
-	pakNamespaceFlag.Deprecated = deprecateNamespaceMessage
-	pakNamespaceFlag.Hidden = true
-
-	err := preauthkeysCmd.MarkPersistentFlagRequired("user")
-	if err != nil {
-		log.Fatal().Err(err).Msg("")
-	}
 	preauthkeysCmd.AddCommand(listPreAuthKeys)
 	preauthkeysCmd.AddCommand(createPreAuthKeyCmd)
 	preauthkeysCmd.AddCommand(expirePreAuthKeyCmd)
+	preauthkeysCmd.AddCommand(deletePreAuthKeyCmd)
 	createPreAuthKeyCmd.PersistentFlags().
 		Bool("reusable", false, "Make the preauthkey reusable")
 	createPreAuthKeyCmd.PersistentFlags().
@@ -42,6 +30,9 @@ func init() {
 		StringP("expiration", "e", DefaultPreAuthKeyExpiry, "Human-readable expiration of the key (e.g. 30m, 24h)")
 	createPreAuthKeyCmd.Flags().
 		StringSlice("tags", []string{}, "Tags to automatically assign to node")
+	createPreAuthKeyCmd.PersistentFlags().Uint64P("user", "u", 0, "User identifier (ID)")
+	expirePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
+	deletePreAuthKeyCmd.PersistentFlags().Uint64P("id", "i", 0, "Authkey ID")
 }
 
 var preauthkeysCmd = &cobra.Command{
@@ -52,183 +43,136 @@ var preauthkeysCmd = &cobra.Command{
 
 var listPreAuthKeys = &cobra.Command{
 	Use:     "list",
-	Short:   "List the preauthkeys for this user",
+	Short:   "List all preauthkeys",
 	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		request := &v1.ListPreAuthKeysRequest{
-			User: user,
-		}
-
-		response, err := client.ListPreAuthKeys(ctx, request)
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting the list of keys: %s", err),
-				output,
-			)
-
-			return
-		}
-
-		if output != "" {
-			SuccessOutput(response.GetPreAuthKeys(), "", output)
-		}
-
-		tableData := pterm.TableData{
-			{
-				"ID",
-				"Key",
-				"Reusable",
-				"Ephemeral",
-				"Used",
-				"Expiration",
-				"Created",
-				"Tags",
-			},
-		}
-		for _, key := range response.GetPreAuthKeys() {
-			expiration := "-"
-			if key.GetExpiration() != nil {
-				expiration = ColourTime(key.GetExpiration().AsTime())
-			}
-
-			aclTags := ""
-
-			for _, tag := range key.GetAclTags() {
-				aclTags += "," + tag
-			}
-
-			aclTags = strings.TrimLeft(aclTags, ",")
-
-			tableData = append(tableData, []string{
-				key.GetId(),
-				key.GetKey(),
-				strconv.FormatBool(key.GetReusable()),
-				strconv.FormatBool(key.GetEphemeral()),
-				strconv.FormatBool(key.GetUsed()),
-				expiration,
-				key.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
-				aclTags,
-			})
-		}
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		response, err := client.ListPreAuthKeys(ctx, &v1.ListPreAuthKeysRequest{})
+		if err != nil {
+			return fmt.Errorf("listing preauthkeys: %w", err)
+		}
+
+		return printListOutput(cmd, response.GetPreAuthKeys(), func() error {
+			tableData := pterm.TableData{
+				{
+					"ID",
+					"Key/Prefix",
+					"Reusable",
+					"Ephemeral",
+					"Used",
+					"Expiration",
+					"Created",
+					"Owner",
+				},
+			}
+
+			for _, key := range response.GetPreAuthKeys() {
+				expiration := "-"
+				if key.GetExpiration() != nil {
+					expiration = ColourTime(key.GetExpiration().AsTime())
+				}
+
+				var owner string
+				if len(key.GetAclTags()) > 0 {
+					owner = strings.Join(key.GetAclTags(), "\n")
+				} else if key.GetUser() != nil {
+					owner = key.GetUser().GetName()
+				} else {
+					owner = "-"
+				}
+
+				tableData = append(tableData, []string{
+					strconv.FormatUint(key.GetId(), util.Base10),
+					key.GetKey(),
+					strconv.FormatBool(key.GetReusable()),
+					strconv.FormatBool(key.GetEphemeral()),
+					strconv.FormatBool(key.GetUsed()),
+					expiration,
+					key.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
+					owner,
+				})
+			}
+
+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
 }
 
 var createPreAuthKeyCmd = &cobra.Command{
 	Use:     "create",
-	Short:   "Creates a new preauthkey in the specified user",
+	Short:   "Creates a new preauthkey",
 	Aliases: []string{"c", "new"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		user, _ := cmd.Flags().GetUint64("user")
 		reusable, _ := cmd.Flags().GetBool("reusable")
 		ephemeral, _ := cmd.Flags().GetBool("ephemeral")
 		tags, _ := cmd.Flags().GetStringSlice("tags")
 
-		request := &v1.CreatePreAuthKeyRequest{
-			User:      user,
-			Reusable:  reusable,
-			Ephemeral: ephemeral,
-			AclTags:   tags,
-		}
-
-		durationStr, _ := cmd.Flags().GetString("expiration")
-
-		duration, err := model.ParseDuration(durationStr)
+		expiration, err := expirationFromFlag(cmd)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Could not parse duration: %s\n", err),
-				output,
-			)
+			return err
 		}
 
-		expiration := time.Now().UTC().Add(time.Duration(duration))
-
-		log.Trace().
-			Dur("expiration", time.Duration(duration)).
-			Msg("expiration has been set")
-
-		request.Expiration = timestamppb.New(expiration)
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+		request := &v1.CreatePreAuthKeyRequest{
+			User:       user,
+			Reusable:   reusable,
+			Ephemeral:  ephemeral,
+			AclTags:    tags,
+			Expiration: expiration,
+		}
 
 		response, err := client.CreatePreAuthKey(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot create Pre Auth Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("creating preauthkey: %w", err)
 		}
 
-		SuccessOutput(response.GetPreAuthKey(), response.GetPreAuthKey().GetKey(), output)
-	},
+		return printOutput(cmd, response.GetPreAuthKey(), response.GetPreAuthKey().GetKey())
+	}),
 }
 
 var expirePreAuthKeyCmd = &cobra.Command{
-	Use:     "expire KEY",
+	Use:     "expire",
 	Short:   "Expire a preauthkey",
 	Aliases: []string{"revoke", "exp", "e"},
-	Args: func(cmd *cobra.Command, args []string) error {
-		if len(args) < 1 {
-			return errMissingParameter
-		}
-
-		return nil
-	},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		user, err := cmd.Flags().GetString("user")
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error getting user: %s", err), output)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, _ := cmd.Flags().GetUint64("id")
+
+		if id == 0 {
+			return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
+		}
 
 		request := &v1.ExpirePreAuthKeyRequest{
-			User: user,
-			Key:  args[0],
+			Id: id,
 		}
 
 		response, err := client.ExpirePreAuthKey(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot expire Pre Auth Key: %s\n", err),
-				output,
-			)
+			return fmt.Errorf("expiring preauthkey: %w", err)
 		}
 
-		SuccessOutput(response, "Key expired", output)
-	},
+		return printOutput(cmd, response, "Key expired")
+	}),
+}
+
+var deletePreAuthKeyCmd = &cobra.Command{
+	Use:     "delete",
+	Short:   "Delete a preauthkey",
+	Aliases: []string{"del", "rm", "d"},
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, _ := cmd.Flags().GetUint64("id")
+
+		if id == 0 {
+			return fmt.Errorf("missing --id parameter: %w", errMissingParameter)
+		}
+
+		request := &v1.DeletePreAuthKeyRequest{
+			Id: id,
+		}
+
+		response, err := client.DeletePreAuthKey(ctx, request)
+		if err != nil {
+			return fmt.Errorf("deleting preauthkey: %w", err)
+		}
+
+		return printOutput(cmd, response, "Key deleted")
+	}),
 }

@@ -7,7 +7,7 @@ import (
 )
 
 func ColourTime(date time.Time) string {
-	dateStr := date.Format("2006-01-02 15:04:05")
+	dateStr := date.Format(HeadscaleDateTimeFormat)
 
 	if date.After(time.Now()) {
 		dateStr = pterm.LightGreen(dateStr)

@@ -1,9 +1,10 @@
 package cli
 
 import (
-	"fmt"
 	"os"
 	"runtime"
+	"slices"
+	"strings"
 
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/rs/zerolog"
@@ -13,10 +14,6 @@ import (
 	"github.com/tcnksm/go-latest"
 )
 
-const (
-	deprecateNamespaceMessage = "use --user"
-)
-
 var cfgFile string = ""
 
 func init() {
@@ -25,6 +22,11 @@ func init() {
 		return
 	}
 
+	if slices.Contains(os.Args, "policy") && slices.Contains(os.Args, "check") {
+		zerolog.SetGlobalLevel(zerolog.Disabled)
+		return
+	}
+
 	cobra.OnInitialize(initConfig)
 	rootCmd.PersistentFlags().
 		StringVarP(&cfgFile, "config", "c", "", "config file (default is /etc/headscale/config.yaml)")
@@ -32,25 +34,34 @@ func init() {
 		StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
 	rootCmd.PersistentFlags().
 		Bool("force", false, "Disable prompts and forces the execution")
+
+	// Re-enable usage output only for flag-parsing errors; runtime errors
+	// from RunE should never dump usage text.
+	rootCmd.SetFlagErrorFunc(func(cmd *cobra.Command, err error) error {
+		cmd.SilenceUsage = false
+
+		return err
+	})
 }
 
 func initConfig() {
 	if cfgFile == "" {
 		cfgFile = os.Getenv("HEADSCALE_CONFIG")
 	}
 
 	if cfgFile != "" {
 		err := types.LoadConfig(cfgFile, true)
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msgf("Error loading config file %s", cfgFile)
+			log.Fatal().Caller().Err(err).Msgf("error loading config file %s", cfgFile)
 		}
 	} else {
 		err := types.LoadConfig("", false)
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msgf("Error loading config")
+			log.Fatal().Caller().Err(err).Msgf("error loading config")
 		}
 	}
 
-	machineOutput := HasMachineOutputFlag()
+	machineOutput := hasMachineOutputFlag()
 
 	// If the user has requested a "node" readable format,
 	// then disable login so the output remains valid.
@@ -58,32 +69,73 @@ func initConfig() {
 		zerolog.SetGlobalLevel(zerolog.Disabled)
 	}
 
-	// logFormat := viper.GetString("log.format")
-	// if logFormat == types.JSONLogFormat {
-	// 	log.Logger = log.Output(os.Stdout)
-	// }
+	logFormat := viper.GetString("log.format")
+	if logFormat == types.JSONLogFormat {
+		log.Logger = log.Output(os.Stdout)
+	}
 
 	disableUpdateCheck := viper.GetBool("disable_check_updates")
 	if !disableUpdateCheck && !machineOutput {
+		versionInfo := types.GetVersionInfo()
 		if (runtime.GOOS == "linux" || runtime.GOOS == "darwin") &&
-			Version != "dev" {
+			!versionInfo.Dirty {
 			githubTag := &latest.GithubTag{
 				Owner:      "juanfont",
 				Repository: "headscale",
+				TagFilterFunc: filterPreReleasesIfStable(func() string { return versionInfo.Version }),
 			}
-			res, err := latest.Check(githubTag, Version)
+			res, err := latest.Check(githubTag, versionInfo.Version)
 			if err == nil && res.Outdated {
 				//nolint
 				log.Warn().Msgf(
 					"An updated version of Headscale has been found (%s vs. your current %s). Check it out https://github.com/juanfont/headscale/releases\n",
 					res.Current,
-					Version,
+					versionInfo.Version,
 				)
 			}
 		}
 	}
 }
+
+var prereleases = []string{"alpha", "beta", "rc", "dev"}
+
+func isPreReleaseVersion(version string) bool {
+	for _, unstable := range prereleases {
+		if strings.Contains(version, unstable) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// filterPreReleasesIfStable returns a function that filters out
+// pre-release tags if the current version is stable.
+// If the current version is a pre-release, it does not filter anything.
+// versionFunc is a function that returns the current version string, it is
+// a func for testability.
+func filterPreReleasesIfStable(versionFunc func() string) func(string) bool {
+	return func(tag string) bool {
+		version := versionFunc()
+
+		// If we are on a pre-release version, then we do not filter anything
+		// as we want to recommend the user the latest pre-release.
+		if isPreReleaseVersion(version) {
+			return false
+		}
+
+		// If we are on a stable release, filter out pre-releases.
+		for _, ignore := range prereleases {
+			if strings.Contains(tag, ignore) {
+				return true
+			}
+		}
+
+		return false
+	}
+}
 
 var rootCmd = &cobra.Command{
 	Use:   "headscale",
 	Short: "headscale - a Tailscale control server",
@@ -91,11 +143,15 @@ var rootCmd = &cobra.Command{
 headscale is an open source implementation of the Tailscale control server
 
 https://github.com/juanfont/headscale`,
+	SilenceErrors: true,
+	SilenceUsage:  true,
 }
 
 func Execute() {
-	if err := rootCmd.Execute(); err != nil {
-		fmt.Fprintln(os.Stderr, err)
+	cmd, err := rootCmd.ExecuteC()
+	if err != nil {
+		outputFormat, _ := cmd.Flags().GetString("output")
+		printError(err, outputFormat)
 		os.Exit(1)
 	}
 }

cmd/headscale/cli/root_test.go (new file, 293 lines)
@@ -0,0 +1,293 @@
+package cli
+
+import (
+	"testing"
+)
+
+func TestFilterPreReleasesIfStable(t *testing.T) {
+	tests := []struct {
+		name           string
+		currentVersion string
+		tag            string
+		expectedFilter bool
+		description    string
+	}{
+		{
+			name:           "stable version filters alpha tag",
+			currentVersion: "0.23.0",
+			tag:            "v0.24.0-alpha.1",
+			expectedFilter: true,
+			description:    "When on stable release, alpha tags should be filtered",
+		},
+		{
+			name:           "stable version filters beta tag",
+			currentVersion: "0.23.0",
+			tag:            "v0.24.0-beta.2",
+			expectedFilter: true,
+			description:    "When on stable release, beta tags should be filtered",
+		},
+		{
+			name:           "stable version filters rc tag",
+			currentVersion: "0.23.0",
+			tag:            "v0.24.0-rc.1",
+			expectedFilter: true,
+			description:    "When on stable release, rc tags should be filtered",
+		},
+		{
+			name:           "stable version allows stable tag",
+			currentVersion: "0.23.0",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "When on stable release, stable tags should not be filtered",
+		},
+		{
+			name:           "alpha version allows alpha tag",
+			currentVersion: "0.23.0-alpha.1",
+			tag:            "v0.24.0-alpha.2",
+			expectedFilter: false,
+			description:    "When on alpha release, alpha tags should not be filtered",
+		},
+		{
+			name:           "alpha version allows beta tag",
+			currentVersion: "0.23.0-alpha.1",
+			tag:            "v0.24.0-beta.1",
+			expectedFilter: false,
+			description:    "When on alpha release, beta tags should not be filtered",
+		},
+		{
+			name:           "alpha version allows rc tag",
+			currentVersion: "0.23.0-alpha.1",
+			tag:            "v0.24.0-rc.1",
+			expectedFilter: false,
+			description:    "When on alpha release, rc tags should not be filtered",
+		},
+		{
+			name:           "alpha version allows stable tag",
+			currentVersion: "0.23.0-alpha.1",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "When on alpha release, stable tags should not be filtered",
+		},
+		{
+			name:           "beta version allows alpha tag",
+			currentVersion: "0.23.0-beta.1",
+			tag:            "v0.24.0-alpha.1",
+			expectedFilter: false,
+			description:    "When on beta release, alpha tags should not be filtered",
+		},
+		{
+			name:           "beta version allows beta tag",
+			currentVersion: "0.23.0-beta.2",
+			tag:            "v0.24.0-beta.3",
+			expectedFilter: false,
+			description:    "When on beta release, beta tags should not be filtered",
+		},
+		{
+			name:           "beta version allows rc tag",
+			currentVersion: "0.23.0-beta.1",
+			tag:            "v0.24.0-rc.1",
+			expectedFilter: false,
+			description:    "When on beta release, rc tags should not be filtered",
+		},
+		{
+			name:           "beta version allows stable tag",
+			currentVersion: "0.23.0-beta.1",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "When on beta release, stable tags should not be filtered",
+		},
+		{
+			name:           "rc version allows alpha tag",
+			currentVersion: "0.23.0-rc.1",
+			tag:            "v0.24.0-alpha.1",
+			expectedFilter: false,
+			description:    "When on rc release, alpha tags should not be filtered",
+		},
+		{
+			name:           "rc version allows beta tag",
+			currentVersion: "0.23.0-rc.1",
+			tag:            "v0.24.0-beta.1",
+			expectedFilter: false,
+			description:    "When on rc release, beta tags should not be filtered",
+		},
+		{
+			name:           "rc version allows rc tag",
+			currentVersion: "0.23.0-rc.2",
+			tag:            "v0.24.0-rc.3",
+			expectedFilter: false,
+			description:    "When on rc release, rc tags should not be filtered",
+		},
+		{
+			name:           "rc version allows stable tag",
+			currentVersion: "0.23.0-rc.1",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "When on rc release, stable tags should not be filtered",
+		},
+		{
+			name:           "stable version with patch filters alpha",
+			currentVersion: "0.23.1",
+			tag:            "v0.24.0-alpha.1",
+			expectedFilter: true,
+			description:    "Stable version with patch number should filter alpha tags",
+		},
+		{
+			name:           "stable version with patch allows stable",
+			currentVersion: "0.23.1",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "Stable version with patch number should allow stable tags",
+		},
+		{
+			name:           "tag with alpha substring in version number",
+			currentVersion: "0.23.0",
+			tag:            "v1.0.0-alpha.1",
+			expectedFilter: true,
+			description:    "Tags with alpha in version string should be filtered on stable",
+		},
+		{
+			name:           "tag with beta substring in version number",
+			currentVersion: "0.23.0",
+			tag:            "v1.0.0-beta.1",
+			expectedFilter: true,
+			description:    "Tags with beta in version string should be filtered on stable",
+		},
+		{
+			name:           "tag with rc substring in version number",
+			currentVersion: "0.23.0",
+			tag:            "v1.0.0-rc.1",
+			expectedFilter: true,
+			description:    "Tags with rc in version string should be filtered on stable",
+		},
+		{
+			name:           "empty tag on stable version",
+			currentVersion: "0.23.0",
+			tag:            "",
+			expectedFilter: false,
+			description:    "Empty tags should not be filtered",
+		},
+		{
+			name:           "dev version allows all tags",
+			currentVersion: "0.23.0-dev",
+			tag:            "v0.24.0-alpha.1",
+			expectedFilter: false,
+			description:    "Dev versions should not filter any tags (pre-release allows all)",
+		},
+		{
+			name:           "stable version filters dev tag",
+			currentVersion: "0.23.0",
+			tag:            "v0.24.0-dev",
+			expectedFilter: true,
+			description:    "When on stable release, dev tags should be filtered",
+		},
+		{
+			name:           "dev version allows dev tag",
+			currentVersion: "0.23.0-dev",
+			tag:            "v0.24.0-dev.1",
+			expectedFilter: false,
+			description:    "When on dev release, dev tags should not be filtered",
+		},
+		{
+			name:           "dev version allows stable tag",
+			currentVersion: "0.23.0-dev",
+			tag:            "v0.24.0",
+			expectedFilter: false,
+			description:    "When on dev release, stable tags should not be filtered",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := filterPreReleasesIfStable(func() string { return tt.currentVersion })(tt.tag)
+			if result != tt.expectedFilter {
+				t.Errorf("%s: got %v, want %v\nDescription: %s\nCurrent version: %s, Tag: %s",
+					tt.name,
+					result,
+					tt.expectedFilter,
+					tt.description,
+					tt.currentVersion,
+					tt.tag,
+				)
+			}
+		})
+	}
+}
+
+func TestIsPreReleaseVersion(t *testing.T) {
+	tests := []struct {
+		name        string
+		version     string
+		expected    bool
+		description string
+	}{
+		{
+			name:        "stable version",
+			version:     "0.23.0",
+			expected:    false,
+			description: "Stable version should not be pre-release",
+		},
+		{
+			name:        "alpha version",
+			version:     "0.23.0-alpha.1",
+			expected:    true,
+			description: "Alpha version should be pre-release",
+		},
+		{
+			name:        "beta version",
+			version:     "0.23.0-beta.1",
+			expected:    true,
+			description: "Beta version should be pre-release",
+		},
+		{
+			name:        "rc version",
+			version:     "0.23.0-rc.1",
+			expected:    true,
+			description: "RC version should be pre-release",
+		},
+		{
+			name:        "version with alpha substring",
+			version:     "0.23.0-alphabetical",
+			expected:    true,
+			description: "Version containing 'alpha' should be pre-release",
+		},
+		{
+			name:        "version with beta substring",
+			version:     "0.23.0-betamax",
+			expected:    true,
+			description: "Version containing 'beta' should be pre-release",
+		},
+		{
+			name:        "dev version",
+			version:     "0.23.0-dev",
+			expected:    true,
+			description: "Dev version should be pre-release",
+		},
+		{
+			name:        "empty version",
+			version:     "",
+			expected:    false,
+			description: "Empty version should not be pre-release",
+		},
+		{
+			name:        "version with patch number",
+			version:     "0.23.1",
+			expected:    false,
+			description: "Stable version with patch should not be pre-release",
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			result := isPreReleaseVersion(tt.version)
+			if result != tt.expected {
+				t.Errorf("%s: got %v, want %v\nDescription: %s\nVersion: %s",
+					tt.name,
+					result,
+					tt.expected,
+					tt.description,
+					tt.version,
+				)
+			}
+		})
+	}
+}
@@ -1,266 +0,0 @@
-package cli
-
-import (
-	"fmt"
-	"log"
-	"net/netip"
-	"strconv"
-
-	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
-	"github.com/pterm/pterm"
-	"github.com/spf13/cobra"
-	"google.golang.org/grpc/status"
-	"tailscale.com/net/tsaddr"
-)
-
-const (
-	Base10 = 10
-)
-
-func init() {
-	rootCmd.AddCommand(routesCmd)
-	listRoutesCmd.Flags().Uint64P("identifier", "i", 0, "Node identifier (ID)")
-	routesCmd.AddCommand(listRoutesCmd)
-
-	enableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err := enableRouteCmd.MarkFlagRequired("route")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
-	routesCmd.AddCommand(enableRouteCmd)
-
-	disableRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err = disableRouteCmd.MarkFlagRequired("route")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
-	routesCmd.AddCommand(disableRouteCmd)
-
-	deleteRouteCmd.Flags().Uint64P("route", "r", 0, "Route identifier (ID)")
-	err = deleteRouteCmd.MarkFlagRequired("route")
-	if err != nil {
-		log.Fatal(err.Error())
-	}
-	routesCmd.AddCommand(deleteRouteCmd)
-}
-
-var routesCmd = &cobra.Command{
-	Use:     "routes",
-	Short:   "Manage the routes of Headscale",
-	Aliases: []string{"r", "route"},
-}
-
-var listRoutesCmd = &cobra.Command{
-	Use:     "list",
-	Short:   "List all routes",
-	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		machineID, err := cmd.Flags().GetUint64("identifier")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		var routes []*v1.Route
-
-		if machineID == 0 {
-			response, err := client.GetRoutes(ctx, &v1.GetRoutesRequest{})
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Cannot get nodes: %s", status.Convert(err).Message()),
-					output,
-				)
-			}
-
-			if output != "" {
-				SuccessOutput(response.GetRoutes(), "", output)
-			}
-
-			routes = response.GetRoutes()
-		} else {
-			response, err := client.GetNodeRoutes(ctx, &v1.GetNodeRoutesRequest{
-				NodeId: machineID,
-			})
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf("Cannot get routes for node %d: %s", machineID, status.Convert(err).Message()),
-					output,
-				)
-			}
-
-			if output != "" {
-				SuccessOutput(response.GetRoutes(), "", output)
-			}
-
-			routes = response.GetRoutes()
-		}
-
-		tableData := routesToPtables(routes)
-		if err != nil {
-			ErrorOutput(err, fmt.Sprintf("Error converting to table: %s", err), output)
-		}
-
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
-}
-
-var enableRouteCmd = &cobra.Command{
-	Use:   "enable",
-	Short: "Set a route as enabled",
-	Long:  `This command will make as enabled a given route.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		routeID, err := cmd.Flags().GetUint64("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.EnableRoute(ctx, &v1.EnableRouteRequest{
-			RouteId: routeID,
-		})
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot enable route %d: %s", routeID, status.Convert(err).Message()),
-				output,
-			)
-		}
-
-		if output != "" {
-			SuccessOutput(response, "", output)
-		}
-	},
-}
-
-var disableRouteCmd = &cobra.Command{
-	Use:   "disable",
-	Short: "Set as disabled a given route",
-	Long:  `This command will make as disabled a given route.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		routeID, err := cmd.Flags().GetUint64("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.DisableRoute(ctx, &v1.DisableRouteRequest{
-			RouteId: routeID,
-		})
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot disable route %d: %s", routeID, status.Convert(err).Message()),
-				output,
-			)
-		}
-
-		if output != "" {
-			SuccessOutput(response, "", output)
-		}
-	},
-}
-
-var deleteRouteCmd = &cobra.Command{
-	Use:   "delete",
-	Short: "Delete a given route",
-	Long:  `This command will delete a given route.`,
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		routeID, err := cmd.Flags().GetUint64("route")
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error getting machine id from flag: %s", err),
-				output,
-			)
-		}
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		response, err := client.DeleteRoute(ctx, &v1.DeleteRouteRequest{
-			RouteId: routeID,
-		})
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot delete route %d: %s", routeID, status.Convert(err).Message()),
-				output,
-			)
-		}
-
-		if output != "" {
-			SuccessOutput(response, "", output)
-		}
-	},
-}
-
-// routesToPtables converts the list of routes to a nice table.
-func routesToPtables(routes []*v1.Route) pterm.TableData {
-	tableData := pterm.TableData{{"ID", "Node", "Prefix", "Advertised", "Enabled", "Primary"}}
-
-	for _, route := range routes {
-		var isPrimaryStr string
-		prefix, err := netip.ParsePrefix(route.GetPrefix())
-		if err != nil {
-			log.Printf("Error parsing prefix %s: %s", route.GetPrefix(), err)
-
-			continue
-		}
-		if tsaddr.IsExitRoute(prefix) {
-			isPrimaryStr = "-"
-		} else {
-			isPrimaryStr = strconv.FormatBool(route.GetIsPrimary())
-		}
-
-		tableData = append(tableData,
-			[]string{
-				strconv.FormatUint(route.GetId(), Base10),
-				route.GetNode().GetGivenName(),
-				route.GetPrefix(),
-				strconv.FormatBool(route.GetAdvertised()),
-				strconv.FormatBool(route.GetEnabled()),
-				isPrimaryStr,
-			})
-	}
-
-	return tableData
-}
@@ -2,10 +2,11 @@ package cli

 import (
 	"errors"
+	"fmt"
 	"net/http"

-	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
+	"github.com/tailscale/squibble"
 )

 func init() {
@@ -15,18 +16,22 @@ func init() {
 var serveCmd = &cobra.Command{
 	Use:   "serve",
 	Short: "Launches the headscale server",
-	Args: func(cmd *cobra.Command, args []string) error {
-		return nil
-	},
-	Run: func(cmd *cobra.Command, args []string) {
+	RunE: func(cmd *cobra.Command, args []string) error {
 		app, err := newHeadscaleServerWithConfig()
 		if err != nil {
-			log.Fatal().Caller().Err(err).Msg("Error initializing")
+			if squibbleErr, ok := errors.AsType[squibble.ValidationError](err); ok {
+				fmt.Printf("SQLite schema failed to validate:\n")
+				fmt.Println(squibbleErr.Diff)
+			}
+
+			return fmt.Errorf("initializing: %w", err)
 		}

 		err = app.Serve()
 		if err != nil && !errors.Is(err, http.ErrServerClosed) {
-			log.Fatal().Caller().Err(err).Msg("Headscale ran into an error and had to shut down.")
+			return fmt.Errorf("headscale ran into an error and had to shut down: %w", err)
 		}
+
+		return nil
 	},
 }
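The serve hunk above swaps `Run` plus `log.Fatal` for `RunE` returning an error, so the caller decides how to exit. A minimal standalone sketch of that pattern, using plain functions instead of cobra (names here are illustrative, not the headscale API):

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"os"
)

// serve simulates a server that shut down cleanly.
func serve() error {
	return http.ErrServerClosed
}

// runServe returns an error instead of calling log.Fatal inside the
// command body; a clean shutdown (http.ErrServerClosed) is not an error.
func runServe() error {
	err := serve()
	if err != nil && !errors.Is(err, http.ErrServerClosed) {
		return fmt.Errorf("server ran into an error and had to shut down: %w", err)
	}

	return nil
}

func main() {
	if err := runServe(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("clean shutdown")
}
```

The benefit mirrored from the diff: error formatting and exit codes live in one place (the caller), and the command body becomes testable.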
@@ -1,16 +1,24 @@
 package cli

 import (
+	"context"
 	"errors"
 	"fmt"
 	"net/url"
+	"strconv"

-	survey "github.com/AlecAivazis/survey/v2"
 	v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
+	"github.com/juanfont/headscale/hscontrol/util"
+	"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
 	"github.com/pterm/pterm"
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
-	"google.golang.org/grpc/status"
 )
+
+// CLI user errors.
+var (
+	errFlagRequired       = errors.New("--name or --identifier flag is required")
+	errMultipleUsersMatch = errors.New("multiple users match query, specify an ID")
+)

 func usernameAndIDFlag(cmd *cobra.Command) {
@@ -19,23 +27,21 @@ func usernameAndIDFlag(cmd *cobra.Command) {
 }

 // usernameAndIDFromFlag returns the username and ID from the flags of the command.
-// If both are empty, it will exit the program with an error.
-func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string) {
+func usernameAndIDFromFlag(cmd *cobra.Command) (uint64, string, error) {
 	username, _ := cmd.Flags().GetString("name")

 	identifier, _ := cmd.Flags().GetInt64("identifier")
 	if username == "" && identifier < 0 {
-		err := errors.New("--name or --identifier flag is required")
-		ErrorOutput(
-			err,
-			fmt.Sprintf(
-				"Cannot rename user: %s",
-				status.Convert(err).Message(),
-			),
-			"",
-		)
+		return 0, "", errFlagRequired
 	}

-	return uint64(identifier), username
+	// Normalise unset/negative identifiers to 0 so the uint64
+	// conversion does not produce a bogus large value.
+	if identifier < 0 {
+		identifier = 0
+	}
+
+	return uint64(identifier), username, nil //nolint:gosec // identifier is clamped to >= 0 above
 }

 func init() {
@@ -52,15 +58,13 @@ func init() {
 	userCmd.AddCommand(renameUserCmd)
 	usernameAndIDFlag(renameUserCmd)
 	renameUserCmd.Flags().StringP("new-name", "r", "", "New username")
-	renameNodeCmd.MarkFlagRequired("new-name")
+	mustMarkRequired(renameUserCmd, "new-name")
 }

-var errMissingParameter = errors.New("missing parameters")
-
 var userCmd = &cobra.Command{
 	Use:     "users",
 	Short:   "Manage the users of Headscale",
-	Aliases: []string{"user", "namespace", "namespaces", "ns"},
+	Aliases: []string{"user"},
 }

 var createUserCmd = &cobra.Command{
@@ -74,16 +78,10 @@ var createUserCmd = &cobra.Command{

 		return nil
 	},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		userName := args[0]
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		log.Trace().Interface("client", client).Msg("Obtained gRPC client")
+		log.Trace().Interface(zf.Client, client).Msg("obtained gRPC client")

 		request := &v1.CreateUserRequest{Name: userName}

@@ -96,120 +94,73 @@ var createUserCmd = &cobra.Command{
 		}

 		if pictureURL, _ := cmd.Flags().GetString("picture-url"); pictureURL != "" {
-			if _, err := url.Parse(pictureURL); err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf(
-						"Invalid Picture URL: %s",
-						err,
-					),
-					output,
-				)
+			if _, err := url.Parse(pictureURL); err != nil { //nolint:noinlineerr
+				return fmt.Errorf("invalid picture URL: %w", err)
 			}

 			request.PictureUrl = pictureURL
 		}

-		log.Trace().Interface("request", request).Msg("Sending CreateUser request")
+		log.Trace().Interface(zf.Request, request).Msg("sending CreateUser request")

 		response, err := client.CreateUser(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot create user: %s",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
+			return fmt.Errorf("creating user: %w", err)
 		}

-		SuccessOutput(response.GetUser(), "User created", output)
-	},
+		return printOutput(cmd, response.GetUser(), "User created")
+	}),
 }

 var destroyUserCmd = &cobra.Command{
 	Use:     "destroy --identifier ID or --name NAME",
 	Short:   "Destroys a user",
 	Aliases: []string{"delete"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		id, username := usernameAndIDFromFlag(cmd)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, username, err := usernameAndIDFromFlag(cmd)
+		if err != nil {
+			return err
+		}
+
 		request := &v1.ListUsersRequest{
 			Name: username,
 			Id:   id,
 		}

-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
 		users, err := client.ListUsers(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error: %s", status.Convert(err).Message()),
-				output,
-			)
+			return fmt.Errorf("listing users: %w", err)
 		}

 		if len(users.GetUsers()) != 1 {
-			err := fmt.Errorf("Unable to determine user to delete, query returned multiple users, use ID")
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error: %s", status.Convert(err).Message()),
-				output,
-			)
+			return errMultipleUsersMatch
 		}

 		user := users.GetUsers()[0]

-		confirm := false
-		force, _ := cmd.Flags().GetBool("force")
-		if !force {
-			prompt := &survey.Confirm{
-				Message: fmt.Sprintf(
-					"Do you want to remove the user %q (%d) and any associated preauthkeys?",
-					user.GetName(), user.GetId(),
-				),
-			}
-			err := survey.AskOne(prompt, &confirm)
-			if err != nil {
-				return
-			}
-		}
+		if !confirmAction(cmd, fmt.Sprintf(
+			"Do you want to remove the user %q (%d) and any associated preauthkeys?",
+			user.GetName(), user.GetId(),
+		)) {
+			return printOutput(cmd, map[string]string{"Result": "User not destroyed"}, "User not destroyed")
+		}

-		if confirm || force {
-			request := &v1.DeleteUserRequest{Id: user.GetId()}
+		deleteRequest := &v1.DeleteUserRequest{Id: user.GetId()}

-			response, err := client.DeleteUser(ctx, request)
-			if err != nil {
-				ErrorOutput(
-					err,
-					fmt.Sprintf(
-						"Cannot destroy user: %s",
-						status.Convert(err).Message(),
-					),
-					output,
-				)
-			}
-			SuccessOutput(response, "User destroyed", output)
-		} else {
-			SuccessOutput(map[string]string{"Result": "User not destroyed"}, "User not destroyed", output)
-		}
-	},
+		response, err := client.DeleteUser(ctx, deleteRequest)
+		if err != nil {
+			return fmt.Errorf("destroying user: %w", err)
+		}
+
+		return printOutput(cmd, response, "User destroyed")
+	}),
 }

 var listUsersCmd = &cobra.Command{
 	Use:     "list",
 	Short:   "List all the users",
 	Aliases: []string{"ls", "show"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
 		request := &v1.ListUsersRequest{}

 		id, _ := cmd.Flags().GetInt64("identifier")
@@ -220,64 +171,47 @@ var listUsersCmd = &cobra.Command{
 		switch {
 		case id > 0:
 			request.Id = uint64(id)
-			break
 		case username != "":
 			request.Name = username
-			break
 		case email != "":
 			request.Email = email
-			break
 		}

 		response, err := client.ListUsers(ctx, request)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Cannot get users: %s", status.Convert(err).Message()),
-				output,
-			)
+			return fmt.Errorf("listing users: %w", err)
 		}

-		if output != "" {
-			SuccessOutput(response.GetUsers(), "", output)
-		}
-
-		tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
-		for _, user := range response.GetUsers() {
-			tableData = append(
-				tableData,
-				[]string{
-					fmt.Sprintf("%d", user.GetId()),
-					user.GetDisplayName(),
-					user.GetName(),
-					user.GetEmail(),
-					user.GetCreatedAt().AsTime().Format("2006-01-02 15:04:05"),
-				},
-			)
-		}
-		err = pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
-		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Failed to render pterm table: %s", err),
-				output,
-			)
-		}
-	},
+		return printListOutput(cmd, response.GetUsers(), func() error {
+			tableData := pterm.TableData{{"ID", "Name", "Username", "Email", "Created"}}
+			for _, user := range response.GetUsers() {
+				tableData = append(
+					tableData,
+					[]string{
+						strconv.FormatUint(user.GetId(), util.Base10),
+						user.GetDisplayName(),
+						user.GetName(),
+						user.GetEmail(),
+						user.GetCreatedAt().AsTime().Format(HeadscaleDateTimeFormat),
+					},
+				)
+			}

+			return pterm.DefaultTable.WithHasHeader().WithData(tableData).Render()
+		})
+	}),
 }

 var renameUserCmd = &cobra.Command{
 	Use:     "rename",
 	Short:   "Renames a user",
 	Aliases: []string{"mv"},
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-
-		ctx, client, conn, cancel := newHeadscaleCLIWithConfig()
-		defer cancel()
-		defer conn.Close()
-
-		id, username := usernameAndIDFromFlag(cmd)
+	RunE: grpcRunE(func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error {
+		id, username, err := usernameAndIDFromFlag(cmd)
+		if err != nil {
+			return err
+		}
+
 		listReq := &v1.ListUsersRequest{
 			Name: username,
 			Id:   id,
@@ -285,20 +219,11 @@ var renameUserCmd = &cobra.Command{

 		users, err := client.ListUsers(ctx, listReq)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error: %s", status.Convert(err).Message()),
-				output,
-			)
+			return fmt.Errorf("listing users: %w", err)
 		}

 		if len(users.GetUsers()) != 1 {
-			err := fmt.Errorf("Unable to determine user to delete, query returned multiple users, use ID")
-			ErrorOutput(
-				err,
-				fmt.Sprintf("Error: %s", status.Convert(err).Message()),
-				output,
-			)
+			return errMultipleUsersMatch
 		}

 		newName, _ := cmd.Flags().GetString("new-name")
@@ -310,16 +235,9 @@ var renameUserCmd = &cobra.Command{

 		response, err := client.RenameUser(ctx, renameReq)
 		if err != nil {
-			ErrorOutput(
-				err,
-				fmt.Sprintf(
-					"Cannot rename user: %s",
-					status.Convert(err).Message(),
-				),
-				output,
-			)
+			return fmt.Errorf("renaming user: %w", err)
 		}

-		SuccessOutput(response.GetUser(), "User renamed", output)
-	},
+		return printOutput(cmd, response.GetUser(), "User renamed")
+	}),
 }
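The reworked `usernameAndIDFromFlag` above clamps a negative identifier to 0 before converting to `uint64`. A standalone sketch of why that matters (the helper name here is illustrative): without the clamp, `uint64(-1)` wraps to 18446744073709551615.

```go
package main

import "fmt"

// clampID mirrors the clamping logic from the diff: a negative (unset)
// int64 identifier is clamped to 0 before the uint64 conversion, so it
// cannot wrap around into a huge positive value.
func clampID(identifier int64) uint64 {
	if identifier < 0 {
		identifier = 0
	}

	return uint64(identifier)
}

func main() {
	fmt.Println(clampID(-1)) // unset flag default → 0, not 18446744073709551615
	fmt.Println(clampID(42))
}
```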
@@ -4,50 +4,91 @@ import (
|
|||||||
"context"
|
"context"
|
||||||
"crypto/tls"
|
"crypto/tls"
|
||||||
"encoding/json"
|
"encoding/json"
|
||||||
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"os"
|
"os"
|
||||||
|
"time"
|
||||||
|
|
||||||
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
v1 "github.com/juanfont/headscale/gen/go/headscale/v1"
|
||||||
"github.com/juanfont/headscale/hscontrol"
|
"github.com/juanfont/headscale/hscontrol"
|
||||||
"github.com/juanfont/headscale/hscontrol/types"
|
"github.com/juanfont/headscale/hscontrol/types"
|
||||||
"github.com/juanfont/headscale/hscontrol/util"
|
"github.com/juanfont/headscale/hscontrol/util"
|
||||||
|
"github.com/juanfont/headscale/hscontrol/util/zlog/zf"
|
||||||
|
"github.com/prometheus/common/model"
|
||||||
"github.com/rs/zerolog/log"
|
"github.com/rs/zerolog/log"
|
||||||
|
"github.com/spf13/cobra"
|
||||||
"google.golang.org/grpc"
|
"google.golang.org/grpc"
|
||||||
"google.golang.org/grpc/credentials"
|
"google.golang.org/grpc/credentials"
|
||||||
"google.golang.org/grpc/credentials/insecure"
|
"google.golang.org/grpc/credentials/insecure"
|
||||||
|
"google.golang.org/protobuf/types/known/timestamppb"
|
||||||
"gopkg.in/yaml.v3"
|
"gopkg.in/yaml.v3"
|
||||||
)
|
)
|
||||||
|
|
||||||
const (
|
const (
|
||||||
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
|
HeadscaleDateTimeFormat = "2006-01-02 15:04:05"
|
||||||
SocketWritePermissions = 0o666
|
SocketWritePermissions = 0o666
|
||||||
|
|
||||||
|
outputFormatJSON = "json"
|
||||||
|
outputFormatJSONLine = "json-line"
|
||||||
|
outputFormatYAML = "yaml"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
errAPIKeyNotSet = errors.New("HEADSCALE_CLI_API_KEY environment variable needs to be set")
|
||||||
|
errMissingParameter = errors.New("missing parameters")
|
||||||
|
)
|
||||||
|
|
||||||
|
// mustMarkRequired marks the named flags as required on cmd, panicking
|
||||||
|
// if any name does not match a registered flag. This is only called
|
||||||
|
// from init() where a failure indicates a programming error.
|
||||||
|
func mustMarkRequired(cmd *cobra.Command, names ...string) {
|
||||||
|
for _, n := range names {
|
||||||
|
err := cmd.MarkFlagRequired(n)
|
||||||
|
if err != nil {
|
||||||
|
panic(fmt.Sprintf("marking flag %q required on %q: %v", n, cmd.Name(), err))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
|
func newHeadscaleServerWithConfig() (*hscontrol.Headscale, error) {
|
||||||
cfg, err := types.LoadServerConfig()
|
cfg, err := types.LoadServerConfig()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, fmt.Errorf(
|
return nil, fmt.Errorf(
|
||||||
"failed to load configuration while creating headscale instance: %w",
|
"loading configuration: %w",
|
||||||
err,
|
err,
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
app, err := hscontrol.NewHeadscale(cfg)
|
app, err := hscontrol.NewHeadscale(cfg)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, fmt.Errorf("creating new headscale: %w", err)
|
||||||
}
|
}
|
||||||
|
|
||||||
return app, nil
|
return app, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc) {
|
// grpcRunE wraps a cobra RunE func, injecting a ready gRPC client and
|
||||||
|
// context. Connection lifecycle is managed by the wrapper — callers
|
||||||
|
// never see the underlying conn or cancel func.
|
||||||
|
func grpcRunE(
|
||||||
|
fn func(ctx context.Context, client v1.HeadscaleServiceClient, cmd *cobra.Command, args []string) error,
|
||||||
|
) func(*cobra.Command, []string) error {
|
||||||
|
return func(cmd *cobra.Command, args []string) error {
|
||||||
|
ctx, client, conn, cancel, err := newHeadscaleCLIWithConfig()
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("connecting to headscale: %w", err)
|
||||||
|
}
|
||||||
|
defer cancel()
|
||||||
|
defer conn.Close()
|
||||||
|
|
||||||
|
return fn(ctx, client, cmd, args)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *grpc.ClientConn, context.CancelFunc, error) {
|
||||||
cfg, err := types.LoadCLIConfig()
|
cfg, err := types.LoadCLIConfig()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal().
|
return nil, nil, nil, nil, fmt.Errorf("loading configuration: %w", err)
|
||||||
Err(err).
|
|
||||||
Caller().
|
|
||||||
Msgf("Failed to load configuration")
|
|
||||||
os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
log.Debug().
|
log.Debug().
|
||||||
@@ -57,7 +98,7 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
|
|||||||
ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
|
ctx, cancel := context.WithTimeout(context.Background(), cfg.CLI.Timeout)
|
||||||
|
|
||||||
grpcOptions := []grpc.DialOption{
|
grpcOptions := []grpc.DialOption{
|
||||||
grpc.WithBlock(),
|
grpc.WithBlock(), //nolint:staticcheck // SA1019: deprecated but supported in 1.x
|
||||||
}
|
}
|
||||||
|
|
||||||
address := cfg.CLI.Address
|
address := cfg.CLI.Address
|
||||||
@@ -71,17 +112,23 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
|
|||||||
address = cfg.UnixSocket
|
address = cfg.UnixSocket
|
||||||
|
|
||||||
// Try to give the user better feedback if we cannot write to the headscale
|
// Try to give the user better feedback if we cannot write to the headscale
|
||||||
// socket.
|
// socket. Note: os.OpenFile on a Unix domain socket returns ENXIO on
|
||||||
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) // nolint
|
// Linux which is expected — only permission errors are actionable here.
|
||||||
|
// The actual gRPC connection uses net.Dial which handles sockets properly.
|
||||||
|
socket, err := os.OpenFile(cfg.UnixSocket, os.O_WRONLY, SocketWritePermissions) //nolint
|
||||||
if err != nil {
|
if err != nil {
|
||||||
if os.IsPermission(err) {
|
if os.IsPermission(err) {
|
||||||
log.Fatal().
|
cancel()
|
||||||
Err(err).
|
|
||||||
Str("socket", cfg.UnixSocket).
|
return nil, nil, nil, nil, fmt.Errorf(
|
||||||
Msgf("Unable to read/write to headscale socket, do you have the correct permissions?")
|
"unable to read/write to headscale socket %q, do you have the correct permissions? %w",
|
||||||
|
cfg.UnixSocket,
|
||||||
|
err,
|
||||||
|
)
|
||||||
}
|
}
|
||||||
|
} else {
|
||||||
|
socket.Close()
|
||||||
}
|
}
|
||||||
socket.Close()
|
|
||||||
|
|
||||||
grpcOptions = append(
|
grpcOptions = append(
|
||||||
grpcOptions,
|
grpcOptions,
|
||||||
@@ -92,8 +139,11 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
|
|||||||
// If we are not connecting to a local server, require an API key for authentication
|
// If we are not connecting to a local server, require an API key for authentication
|
||||||
apiKey := cfg.CLI.APIKey
|
apiKey := cfg.CLI.APIKey
|
||||||
if apiKey == "" {
|
if apiKey == "" {
|
||||||
log.Fatal().Caller().Msgf("HEADSCALE_CLI_API_KEY environment variable needs to be set.")
|
cancel()
|
||||||
|
|
||||||
|
return nil, nil, nil, nil, errAPIKeyNotSet
|
||||||
}
|
}
|
||||||
|
|
||||||
grpcOptions = append(grpcOptions,
|
grpcOptions = append(grpcOptions,
|
||||||
grpc.WithPerRPCCredentials(tokenAuth{
|
grpc.WithPerRPCCredentials(tokenAuth{
|
||||||
token: apiKey,
|
token: apiKey,
|
||||||
@@ -118,64 +168,136 @@ func newHeadscaleCLIWithConfig() (context.Context, v1.HeadscaleServiceClient, *g
 		}
 	}
 
-	log.Trace().Caller().Str("address", address).Msg("Connecting via gRPC")
-	conn, err := grpc.DialContext(ctx, address, grpcOptions...)
+	log.Trace().Caller().Str(zf.Address, address).Msg("connecting via gRPC")
+	conn, err := grpc.DialContext(ctx, address, grpcOptions...) //nolint:staticcheck // SA1019: deprecated but supported in 1.x
 	if err != nil {
-		log.Fatal().Caller().Err(err).Msgf("Could not connect: %v", err)
-		os.Exit(-1) // we get here if logging is suppressed (i.e., json output)
+		cancel()
+
+		return nil, nil, nil, nil, fmt.Errorf("connecting to %s: %w", address, err)
 	}
 
 	client := v1.NewHeadscaleServiceClient(conn)
 
-	return ctx, client, conn, cancel
+	return ctx, client, conn, cancel, nil
 }
 
-func output(result interface{}, override string, outputFormat string) string {
-	var jsonBytes []byte
-	var err error
+// formatOutput serialises result into the requested format. For the
+// default (empty) format the human-readable override string is returned.
+func formatOutput(result any, override string, outputFormat string) (string, error) {
 	switch outputFormat {
-	case "json":
-		jsonBytes, err = json.MarshalIndent(result, "", "\t")
+	case outputFormatJSON:
+		b, err := json.MarshalIndent(result, "", "\t")
 		if err != nil {
-			log.Fatal().Err(err).Msg("failed to unmarshal output")
+			return "", fmt.Errorf("marshalling JSON output: %w", err)
 		}
-	case "json-line":
-		jsonBytes, err = json.Marshal(result)
+
+		return string(b), nil
+	case outputFormatJSONLine:
+		b, err := json.Marshal(result)
 		if err != nil {
-			log.Fatal().Err(err).Msg("failed to unmarshal output")
+			return "", fmt.Errorf("marshalling JSON-line output: %w", err)
 		}
-	case "yaml":
-		jsonBytes, err = yaml.Marshal(result)
+
+		return string(b), nil
+	case outputFormatYAML:
+		b, err := yaml.Marshal(result)
 		if err != nil {
-			log.Fatal().Err(err).Msg("failed to unmarshal output")
+			return "", fmt.Errorf("marshalling YAML output: %w", err)
 		}
+
+		return string(b), nil
 	default:
-		// nolint
-		return override
+		return override, nil
 	}
-
-	return string(jsonBytes)
 }
 
+// printOutput formats result and writes it to stdout. It reads the --output
+// flag from cmd to decide the serialisation format.
+func printOutput(cmd *cobra.Command, result any, override string) error {
+	format, _ := cmd.Flags().GetString("output")
+
+	out, err := formatOutput(result, override, format)
+	if err != nil {
+		return err
+	}
+
+	fmt.Println(out)
+
+	return nil
+}
+
-// SuccessOutput prints the result to stdout and exits with status code 0.
-func SuccessOutput(result interface{}, override string, outputFormat string) {
-	fmt.Println(output(result, override, outputFormat))
-	os.Exit(0)
+// expirationFromFlag parses the --expiration flag as a Prometheus-style
+// duration (e.g. "90d", "1h") and returns an absolute timestamp.
+func expirationFromFlag(cmd *cobra.Command) (*timestamppb.Timestamp, error) {
+	durationStr, _ := cmd.Flags().GetString("expiration")
+
+	duration, err := model.ParseDuration(durationStr)
+	if err != nil {
+		return nil, fmt.Errorf("parsing duration: %w", err)
+	}
+
+	return timestamppb.New(time.Now().UTC().Add(time.Duration(duration))), nil
 }
 
-// ErrorOutput prints an error message to stderr and exits with status code 1.
-func ErrorOutput(errResult error, override string, outputFormat string) {
+// confirmAction returns true when the user confirms a prompt, or when
+// --force is set. Callers decide what to do when it returns false.
+func confirmAction(cmd *cobra.Command, prompt string) bool {
+	force, _ := cmd.Flags().GetBool("force")
+	if force {
+		return true
+	}
+
+	return util.YesNo(prompt)
+}
+
+// printListOutput checks the --output flag: when a machine-readable format is
+// requested it serialises data as JSON/YAML; otherwise it calls renderTable
+// to produce the human-readable pterm table.
+func printListOutput(
+	cmd *cobra.Command,
+	data any,
+	renderTable func() error,
+) error {
+	format, _ := cmd.Flags().GetString("output")
+	if format != "" {
+		return printOutput(cmd, data, "")
+	}
+
+	return renderTable()
+}
+
+// printError writes err to stderr, formatting it as JSON/YAML when the
+// --output flag requests machine-readable output. Used exclusively by
+// Execute() so that every error surfaces in the format the caller asked for.
+func printError(err error, outputFormat string) {
 	type errOutput struct {
 		Error string `json:"error"`
 	}
 
-	fmt.Fprintf(os.Stderr, "%s\n", output(errOutput{errResult.Error()}, override, outputFormat))
-	os.Exit(1)
+	e := errOutput{Error: err.Error()}
+
+	var formatted []byte
+
+	switch outputFormat {
+	case outputFormatJSON:
+		formatted, _ = json.MarshalIndent(e, "", "\t") //nolint:errchkjson // errOutput contains only a string field
+	case outputFormatJSONLine:
+		formatted, _ = json.Marshal(e) //nolint:errchkjson // errOutput contains only a string field
+	case outputFormatYAML:
+		formatted, _ = yaml.Marshal(e)
+	default:
+		fmt.Fprintf(os.Stderr, "Error: %s\n", err)
+
+		return
+	}
+
+	fmt.Fprintf(os.Stderr, "%s\n", formatted)
 }
 
-func HasMachineOutputFlag() bool {
+func hasMachineOutputFlag() bool {
 	for _, arg := range os.Args {
-		if arg == "json" || arg == "json-line" || arg == "yaml" {
+		if arg == outputFormatJSON || arg == outputFormatJSONLine || arg == outputFormatYAML {
 			return true
 		}
 	}
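The through-line of this refactor is that serialisation errors are now returned to the caller instead of killing the process with `log.Fatal`. A stdlib-only sketch of the same dispatch shape (YAML omitted to stay dependency-free; `formatResult` is an illustrative name, not headscale's function):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// formatResult mirrors the formatOutput dispatch: machine-readable
// formats are serialised, and the empty (default) format falls through
// to the human-readable override string. Errors are returned, never fatal.
func formatResult(result any, override, format string) (string, error) {
	switch format {
	case "json":
		b, err := json.MarshalIndent(result, "", "\t")
		if err != nil {
			return "", fmt.Errorf("marshalling JSON output: %w", err)
		}

		return string(b), nil
	case "json-line":
		b, err := json.Marshal(result)
		if err != nil {
			return "", fmt.Errorf("marshalling JSON-line output: %w", err)
		}

		return string(b), nil
	default:
		return override, nil
	}
}

func main() {
	out, _ := formatResult(map[string]string{"version": "dev"}, "dev", "json-line")
	fmt.Println(out) // {"version":"dev"}
}
```

Returning the error lets a cobra `RunE` handler surface it through a single `printError` path rather than exiting from deep inside a helper.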
@@ -1,21 +1,22 @@
 package cli
 
 import (
+	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/spf13/cobra"
 )
 
-var Version = "dev"
-
 func init() {
 	rootCmd.AddCommand(versionCmd)
+	versionCmd.Flags().StringP("output", "o", "", "Output format. Empty for human-readable, 'json', 'json-line' or 'yaml'")
 }
 
 var versionCmd = &cobra.Command{
 	Use:   "version",
 	Short: "Print the version.",
 	Long:  "The version of headscale.",
-	Run: func(cmd *cobra.Command, args []string) {
-		output, _ := cmd.Flags().GetString("output")
-		SuccessOutput(map[string]string{"version": Version}, Version, output)
+	RunE: func(cmd *cobra.Command, args []string) error {
+		info := types.GetVersionInfo()
+
+		return printOutput(cmd, info, info.String())
 	},
 }
@@ -12,6 +12,7 @@ import (
 
 func main() {
 	var colors bool
 
 	switch l := termcolor.SupportLevel(os.Stderr); l {
 	case termcolor.Level16M:
 		colors = true
@@ -9,34 +9,15 @@ import (
 	"github.com/juanfont/headscale/hscontrol/types"
 	"github.com/juanfont/headscale/hscontrol/util"
 	"github.com/spf13/viper"
-	"gopkg.in/check.v1"
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 
-func Test(t *testing.T) {
-	check.TestingT(t)
-}
-
-var _ = check.Suite(&Suite{})
-
-type Suite struct{}
-
-func (s *Suite) SetUpSuite(c *check.C) {
-}
-
-func (s *Suite) TearDownSuite(c *check.C) {
-}
-
-func (*Suite) TestConfigFileLoading(c *check.C) {
-	tmpDir, err := os.MkdirTemp("", "headscale")
-	if err != nil {
-		c.Fatal(err)
-	}
-	defer os.RemoveAll(tmpDir)
+func TestConfigFileLoading(t *testing.T) {
+	tmpDir := t.TempDir()
 
 	path, err := os.Getwd()
-	if err != nil {
-		c.Fatal(err)
-	}
+	require.NoError(t, err)
 
 	cfgFile := filepath.Join(tmpDir, "config.yaml")
 
@@ -45,70 +26,52 @@ func (*Suite) TestConfigFileLoading(c *check.C) {
 		filepath.Clean(path+"/../../config-example.yaml"),
 		cfgFile,
 	)
-	if err != nil {
-		c.Fatal(err)
-	}
+	require.NoError(t, err)
 
 	// Load example config, it should load without validation errors
 	err = types.LoadConfig(cfgFile, true)
-	c.Assert(err, check.IsNil)
+	require.NoError(t, err)
 
 	// Test that config file was interpreted correctly
-	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
-	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
-	c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
-	c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
-	c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
-	c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
-	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
-	c.Assert(
-		util.GetFileMode("unix_socket_permission"),
-		check.Equals,
-		fs.FileMode(0o770),
-	)
-	c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
+	assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
+	assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
+	assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
+	assert.Equal(t, "sqlite", viper.GetString("database.type"))
+	assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
+	assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
+	assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
+	assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
+	assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
+	assert.False(t, viper.GetBool("logtail.enabled"))
 }
 
-func (*Suite) TestConfigLoading(c *check.C) {
-	tmpDir, err := os.MkdirTemp("", "headscale")
-	if err != nil {
-		c.Fatal(err)
-	}
-	defer os.RemoveAll(tmpDir)
+func TestConfigLoading(t *testing.T) {
+	tmpDir := t.TempDir()
 
 	path, err := os.Getwd()
-	if err != nil {
-		c.Fatal(err)
-	}
+	require.NoError(t, err)
 
 	// Symlink the example config file
 	err = os.Symlink(
 		filepath.Clean(path+"/../../config-example.yaml"),
 		filepath.Join(tmpDir, "config.yaml"),
 	)
-	if err != nil {
-		c.Fatal(err)
-	}
+	require.NoError(t, err)
 
 	// Load example config, it should load without validation errors
 	err = types.LoadConfig(tmpDir, false)
-	c.Assert(err, check.IsNil)
+	require.NoError(t, err)
 
 	// Test that config file was interpreted correctly
-	c.Assert(viper.GetString("server_url"), check.Equals, "http://127.0.0.1:8080")
-	c.Assert(viper.GetString("listen_addr"), check.Equals, "127.0.0.1:8080")
-	c.Assert(viper.GetString("metrics_listen_addr"), check.Equals, "127.0.0.1:9090")
-	c.Assert(viper.GetString("database.type"), check.Equals, "sqlite")
-	c.Assert(viper.GetString("database.sqlite.path"), check.Equals, "/var/lib/headscale/db.sqlite")
-	c.Assert(viper.GetString("tls_letsencrypt_hostname"), check.Equals, "")
-	c.Assert(viper.GetString("tls_letsencrypt_listen"), check.Equals, ":http")
-	c.Assert(viper.GetString("tls_letsencrypt_challenge_type"), check.Equals, "HTTP-01")
-	c.Assert(
-		util.GetFileMode("unix_socket_permission"),
-		check.Equals,
-		fs.FileMode(0o770),
-	)
-	c.Assert(viper.GetBool("logtail.enabled"), check.Equals, false)
-	c.Assert(viper.GetBool("randomize_client_port"), check.Equals, false)
+	assert.Equal(t, "http://127.0.0.1:8080", viper.GetString("server_url"))
+	assert.Equal(t, "127.0.0.1:8080", viper.GetString("listen_addr"))
+	assert.Equal(t, "127.0.0.1:9090", viper.GetString("metrics_listen_addr"))
+	assert.Equal(t, "sqlite", viper.GetString("database.type"))
+	assert.Equal(t, "/var/lib/headscale/db.sqlite", viper.GetString("database.sqlite.path"))
+	assert.Empty(t, viper.GetString("tls_letsencrypt_hostname"))
+	assert.Equal(t, ":http", viper.GetString("tls_letsencrypt_listen"))
+	assert.Equal(t, "HTTP-01", viper.GetString("tls_letsencrypt_challenge_type"))
+	assert.Equal(t, fs.FileMode(0o770), util.GetFileMode("unix_socket_permission"))
+	assert.False(t, viper.GetBool("logtail.enabled"))
+	assert.False(t, viper.GetBool("randomize_client_port"))
 }
262
cmd/hi/README.md
Normal file
@@ -0,0 +1,262 @@

# hi — Headscale Integration test runner

`hi` wraps Docker container orchestration around the tests in
[`../../integration`](../../integration) and extracts debugging artefacts
(logs, database snapshots, MapResponse protocol captures) for post-mortem
analysis.

**Read this file in full before running any `hi` command.** The test
runner has sharp edges — wrong flags produce stale containers, lost
artefacts, or hung CI.

For test-authoring patterns (scenario setup, `EventuallyWithT`,
`IntegrationSkip`, helper variants), read
[`../../integration/README.md`](../../integration/README.md).

## Quick Start

```bash
# Verify system requirements (Docker, Go, disk space, images)
go run ./cmd/hi doctor

# Run a single test (the default flags are tuned for development)
go run ./cmd/hi run "TestPingAllByIP"

# Run a database-heavy test against PostgreSQL
go run ./cmd/hi run "TestExpireNode" --postgres

# Pattern matching
go run ./cmd/hi run "TestSubnet*"
```

Run `doctor` before the first `run` in any new environment. Tests
generate ~100 MB of logs per run in `control_logs/`; `doctor` verifies
there is enough space and that the required Docker images are available.

## Commands

| Command            | Purpose                                              |
| ------------------ | ---------------------------------------------------- |
| `run [pattern]`    | Execute the test(s) matching `pattern`               |
| `doctor`           | Verify system requirements                           |
| `clean networks`   | Prune unused Docker networks                         |
| `clean images`     | Clean old test images                                |
| `clean containers` | Kill **all** test containers (dangerous — see below) |
| `clean cache`      | Clean Go module cache volume                         |
| `clean all`        | Run all cleanup operations                           |

## Flags

Defaults are tuned for single-test development runs. Review before
changing.

| Flag                | Default        | Purpose                                                                     |
| ------------------- | -------------- | --------------------------------------------------------------------------- |
| `--timeout`         | `120m`         | Total test timeout. Use the built-in flag — never wrap with bash `timeout`. |
| `--postgres`        | `false`        | Use PostgreSQL instead of SQLite                                            |
| `--failfast`        | `true`         | Stop on first test failure                                                  |
| `--go-version`      | auto           | Detected from `go.mod` (currently 1.26.1)                                   |
| `--clean-before`    | `true`         | Clean stale (stopped/exited) containers before starting                     |
| `--clean-after`     | `true`         | Clean this run's containers after completion                                |
| `--keep-on-failure` | `false`        | Preserve containers for manual inspection on failure                        |
| `--logs-dir`        | `control_logs` | Where to save run artefacts                                                 |
| `--verbose`         | `false`        | Verbose output                                                              |
| `--stats`           | `false`        | Collect container resource-usage stats                                      |
| `--hs-memory-limit` | `0`            | Fail if any headscale container exceeds N MB (0 = disabled)                 |
| `--ts-memory-limit` | `0`            | Fail if any tailscale container exceeds N MB                                |

### Timeout guidance

The default `120m` is generous for a single test. If you must tune it,
these are realistic floors by category:

| Test type                 | Minimum     | Examples                              |
| ------------------------- | ----------- | ------------------------------------- |
| Basic functionality / CLI | 900s (15m)  | `TestPingAllByIP`, `TestCLI*`         |
| Route / ACL               | 1200s (20m) | `TestSubnet*`, `TestACL*`             |
| HA / failover             | 1800s (30m) | `TestHASubnetRouter*`                 |
| Long-running              | 2100s (35m) | `TestNodeOnlineStatus` (~12 min body) |
| Full suite                | 45m         | `go test ./integration -timeout 45m`  |

**Never** use the shell `timeout` command around `hi`. It kills the
process mid-cleanup and leaves stale containers:

```bash
timeout 300 go run ./cmd/hi run "TestName"     # WRONG — orphaned containers
go run ./cmd/hi run "TestName" --timeout=900s  # correct
```

## Concurrent Execution

Multiple `hi run` invocations can run simultaneously on the same Docker
daemon. Each invocation gets a unique **Run ID** (format
`YYYYMMDD-HHMMSS-6charhash`, e.g. `20260409-104215-mdjtzx`).

- **Container names** include the short run ID: `ts-mdjtzx-1-74-fgdyls`
- **Docker labels**: `hi.run-id={runID}` on every container
- **Port allocation**: dynamic — kernel assigns free ports, no conflicts
- **Cleanup isolation**: each run cleans only its own containers
- **Log directories**: `control_logs/{runID}/`

```bash
# Start three tests in parallel — each gets its own run ID
go run ./cmd/hi run "TestPingAllByIP" &
go run ./cmd/hi run "TestACLAllowUserDst" &
go run ./cmd/hi run "TestOIDCAuthenticationPingAll" &
```
### Safety rules for concurrent runs

- ✅ Your run cleans only containers labelled with its own `hi.run-id`
- ✅ `--clean-before` removes only stopped/exited containers
- ❌ **Never** run `docker rm -f $(docker ps -q --filter name=hs-)` —
  this destroys other agents' live test sessions
- ❌ **Never** run `docker system prune -f` while any tests are running
- ❌ **Never** run `hi clean containers` / `hi clean all` while other
  tests are running — both kill all test containers on the daemon

To identify your own containers:

```bash
docker ps --filter "label=hi.run-id=20260409-104215-mdjtzx"
```

The run ID appears at the top of the `hi run` output — copy it from
there rather than trying to reconstruct it.

## Artefacts

Every run saves debugging artefacts under `control_logs/{runID}/`:

```
control_logs/20260409-104215-mdjtzx/
├── hs-<test>-<hash>.stderr.log      # headscale server errors
├── hs-<test>-<hash>.stdout.log      # headscale server output
├── hs-<test>-<hash>.db              # database snapshot (SQLite)
├── hs-<test>-<hash>_metrics.txt     # Prometheus metrics dump
├── hs-<test>-<hash>-mapresponses/   # MapResponse protocol captures
├── ts-<client>-<hash>.stderr.log    # tailscale client errors
├── ts-<client>-<hash>.stdout.log    # tailscale client output
└── ts-<client>-<hash>_status.json   # client network-status dump
```

Artefacts persist after cleanup. Old runs accumulate fast — delete
unwanted directories to reclaim disk.

## Debugging workflow

When a test fails, read the artefacts **in this order**:

1. **`hs-*.stderr.log`** — headscale server errors, panics, policy
   evaluation failures. Most issues originate server-side.

   ```bash
   grep -E "ERROR|panic|FATAL" control_logs/*/hs-*.stderr.log
   ```

2. **`ts-*.stderr.log`** — authentication failures, connectivity issues,
   DNS resolution problems on the client side.

3. **MapResponse JSON** in `hs-*-mapresponses/` — protocol-level
   debugging for network map generation, peer visibility, route
   distribution, policy evaluation results.

   ```bash
   ls control_logs/*/hs-*-mapresponses/
   jq '.Peers[] | {Name, Tags, PrimaryRoutes}' \
     control_logs/*/hs-*-mapresponses/001.json
   ```

4. **`*_status.json`** — client peer-connectivity state.

5. **`hs-*.db`** — SQLite snapshot for post-mortem consistency checks.

   ```bash
   sqlite3 control_logs/<runID>/hs-*.db
   sqlite> .tables
   sqlite> .schema nodes
   sqlite> SELECT id, hostname, user_id, tags FROM nodes WHERE hostname LIKE '%problematic%';
   ```

6. **`*_metrics.txt`** — Prometheus dumps for latency, NodeStore
   operation timing, database query performance, memory usage.

## Heuristic: infrastructure vs code

**Before blaming Docker, disk, or network: read `hs-*.stderr.log` in
full.** In practice, well over 99% of failures are code bugs (policy
evaluation, NodeStore sync, route approval) rather than infrastructure.

Actual infrastructure failures have signature error messages:

| Signature                                                       | Cause                     | Fix                                                           |
| --------------------------------------------------------------- | ------------------------- | ------------------------------------------------------------- |
| `failed to resolve "hs-...": no DNS fallback candidates remain` | Docker DNS                | Reset Docker networking                                       |
| `container creation timeout`, no progress >2 min                | Resource exhaustion       | `docker system prune -f` (when no other tests running), retry |
| OOM kills, slow Docker daemon                                   | Too many concurrent tests | Reduce concurrency, wait for completion                       |
| `no space left on device`                                       | Disk full                 | Delete old `control_logs/`                                    |

If you don't see a signature error, **assume it's a code regression** —
do not retry hoping the flake goes away.

## Common failure patterns (code bugs)

### Route advertisement timing

Test asserts route state before the client has finished propagating its
Hostinfo update. Symptom: `nodes[0].GetAvailableRoutes()` empty when
the test expects a route.

- **Wrong fix**: `time.Sleep(5 * time.Second)` — fragile and slow.
- **Right fix**: wrap the assertion in `EventuallyWithT`. See
  [`../../integration/README.md`](../../integration/README.md).
### NodeStore sync issues

Route changes not reflected in the NodeStore snapshot. Symptom: route
advertisements in logs but no tracking updates in subsequent reads.

The sync point is `State.UpdateNodeFromMapRequest()` in
`hscontrol/state/state.go`. If you added a new kind of client state
update, make sure it lands here.

### HA failover: routes disappearing on disconnect

`TestHASubnetRouterFailover` fails because approved routes vanish when
a subnet router goes offline. **This is a bug, not expected behaviour.**
Route approval must not be coupled to client connectivity — routes
stay approved; only the primary-route selection is affected by
connectivity.

### Policy evaluation race

Symptom: tests that change policy and immediately assert peer visibility
fail intermittently. Policy changes trigger async recomputation.

- See recent fixes in `git log -- hscontrol/state/` for examples (e.g.
  the `PolicyChange` trigger on every Connect/Disconnect).

### SQLite vs PostgreSQL timing differences

Some race conditions only surface on one backend. If a test is flaky,
try the other backend with `--postgres`:

```bash
go run ./cmd/hi run "TestName" --postgres --verbose
```

PostgreSQL generally has more consistent timing; SQLite can expose
races during rapid writes.

## Keeping containers for inspection

If you need to inspect a failed test's state manually:

```bash
go run ./cmd/hi run "TestName" --keep-on-failure
# containers survive — inspect them
docker exec -it ts-<runID>-<...> /bin/sh
docker logs hs-<runID>-<...>
# clean up manually when done
go run ./cmd/hi clean all   # only when no other tests are running
```
431
cmd/hi/cleanup.go
Normal file
@@ -0,0 +1,431 @@
|
|||||||
|
package main
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"fmt"
|
||||||
|
"log"
|
||||||
|
"os"
|
||||||
|
"path/filepath"
|
||||||
|
"strings"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/cenkalti/backoff/v5"
|
||||||
|
"github.com/docker/docker/api/types/container"
|
||||||
|
"github.com/docker/docker/api/types/filters"
|
||||||
|
"github.com/docker/docker/api/types/image"
|
||||||
|
"github.com/docker/docker/client"
|
||||||
|
"github.com/docker/docker/errdefs"
|
||||||
|
)
|
||||||
|
|
||||||
|
// cleanupBeforeTest performs cleanup operations before running tests.
|
||||||
|
// Only removes stale (stopped/exited) test containers to avoid interfering with concurrent test runs.
|
||||||
|
func cleanupBeforeTest(ctx context.Context) error {
|
||||||
|
err := cleanupStaleTestContainers(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("cleaning stale test containers: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if err := pruneDockerNetworks(ctx); err != nil { //nolint:noinlineerr
|
||||||
|
return fmt.Errorf("pruning networks: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// cleanupAfterTest removes the test container and all associated integration test containers for the run.
|
||||||
|
func cleanupAfterTest(ctx context.Context, cli *client.Client, containerID, runID string) error {
|
||||||
|
// Remove the main test container
|
||||||
|
err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
|
||||||
|
Force: true,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("removing test container: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Clean up integration test containers for this run only
|
||||||
|
if runID != "" {
|
||||||
|
err := killTestContainersByRunID(ctx, runID)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("cleaning up containers for run %s: %w", runID, err)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// killTestContainers terminates and removes all test containers.
|
||||||
|
func killTestContainers(ctx context.Context) error {
|
||||||
|
cli, err := createDockerClient(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("creating Docker client: %w", err)
|
||||||
|
}
|
||||||
|
defer cli.Close()
|
||||||
|
|
||||||
|
containers, err := cli.ContainerList(ctx, container.ListOptions{
|
||||||
|
All: true,
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("listing containers: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
removed := 0
|
||||||
|
|
||||||
|
for _, cont := range containers {
|
||||||
|
shouldRemove := false
|
||||||
|
|
||||||
|
for _, name := range cont.Names {
|
||||||
|
if strings.Contains(name, "headscale-test-suite") ||
|
||||||
|
strings.Contains(name, "hs-") ||
|
||||||
|
strings.Contains(name, "ts-") ||
|
||||||
|
strings.Contains(name, "derp-") {
|
||||||
|
shouldRemove = true
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if shouldRemove {
|
||||||
|
// First kill the container if it's running
|
||||||
|
if cont.State == "running" {
|
||||||
|
_ = cli.ContainerKill(ctx, cont.ID, "KILL")
|
||||||
|
}
|
||||||
|
|
||||||
|
// Then remove the container with retry logic
|
||||||
|
if removeContainerWithRetry(ctx, cli, cont.ID) {
|
||||||
|
removed++
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if removed > 0 {
|
||||||
|
fmt.Printf("Removed %d test containers\n", removed)
|
||||||
|
} else {
|
||||||
|
fmt.Println("No test containers found to remove")
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// killTestContainersByRunID terminates and removes all test containers for a specific run ID.
// This function filters containers by the hi.run-id label to only affect containers
// belonging to the specified test run, leaving other concurrent test runs untouched.
func killTestContainersByRunID(ctx context.Context, runID string) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	// Filter containers by hi.run-id label
	containers, err := cli.ContainerList(ctx, container.ListOptions{
		All: true,
		Filters: filters.NewArgs(
			filters.Arg("label", "hi.run-id="+runID),
		),
	})
	if err != nil {
		return fmt.Errorf("listing containers for run %s: %w", runID, err)
	}

	removed := 0

	for _, cont := range containers {
		// Kill the container if it's running
		if cont.State == "running" {
			_ = cli.ContainerKill(ctx, cont.ID, "KILL")
		}

		// Remove the container with retry logic
		if removeContainerWithRetry(ctx, cli, cont.ID) {
			removed++
		}
	}

	if removed > 0 {
		fmt.Printf("Removed %d containers for run ID %s\n", removed, runID)
	}

	return nil
}

// cleanupStaleTestContainers removes stopped/exited test containers without affecting running tests.
// This is useful for cleaning up leftover containers from previous crashed or interrupted test runs
// without interfering with currently running concurrent tests.
func cleanupStaleTestContainers(ctx context.Context) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	// Only get stopped/exited containers
	containers, err := cli.ContainerList(ctx, container.ListOptions{
		All: true,
		Filters: filters.NewArgs(
			filters.Arg("status", "exited"),
			filters.Arg("status", "dead"),
		),
	})
	if err != nil {
		return fmt.Errorf("listing stopped containers: %w", err)
	}

	removed := 0

	for _, cont := range containers {
		// Only remove containers that look like test containers
		shouldRemove := false

		for _, name := range cont.Names {
			if strings.Contains(name, "headscale-test-suite") ||
				strings.Contains(name, "hs-") ||
				strings.Contains(name, "ts-") ||
				strings.Contains(name, "derp-") {
				shouldRemove = true
				break
			}
		}

		if shouldRemove {
			if removeContainerWithRetry(ctx, cli, cont.ID) {
				removed++
			}
		}
	}

	if removed > 0 {
		fmt.Printf("Removed %d stale test containers\n", removed)
	}

	return nil
}

const (
	containerRemoveInitialInterval = 100 * time.Millisecond
	containerRemoveMaxElapsedTime  = 2 * time.Second
)

// removeContainerWithRetry attempts to remove a container with exponential backoff retry logic.
func removeContainerWithRetry(ctx context.Context, cli *client.Client, containerID string) bool {
	expBackoff := backoff.NewExponentialBackOff()
	expBackoff.InitialInterval = containerRemoveInitialInterval

	_, err := backoff.Retry(ctx, func() (struct{}, error) {
		err := cli.ContainerRemove(ctx, containerID, container.RemoveOptions{
			Force: true,
		})
		if err != nil {
			return struct{}{}, err
		}

		return struct{}{}, nil
	}, backoff.WithBackOff(expBackoff), backoff.WithMaxElapsedTime(containerRemoveMaxElapsedTime))

	return err == nil
}

// pruneDockerNetworks removes unused Docker networks.
func pruneDockerNetworks(ctx context.Context) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	report, err := cli.NetworksPrune(ctx, filters.Args{})
	if err != nil {
		return fmt.Errorf("pruning networks: %w", err)
	}

	if len(report.NetworksDeleted) > 0 {
		fmt.Printf("Removed %d unused networks\n", len(report.NetworksDeleted))
	} else {
		fmt.Println("No unused networks found to remove")
	}

	return nil
}

// cleanOldImages removes test-related and old dangling Docker images.
func cleanOldImages(ctx context.Context) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	images, err := cli.ImageList(ctx, image.ListOptions{
		All: true,
	})
	if err != nil {
		return fmt.Errorf("listing images: %w", err)
	}

	removed := 0

	for _, img := range images {
		shouldRemove := false

		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "hs-") ||
				strings.Contains(tag, "headscale-integration") ||
				strings.Contains(tag, "tailscale") {
				shouldRemove = true
				break
			}
		}

		if len(img.RepoTags) == 0 && time.Unix(img.Created, 0).Before(time.Now().Add(-7*24*time.Hour)) {
			shouldRemove = true
		}

		if shouldRemove {
			_, err := cli.ImageRemove(ctx, img.ID, image.RemoveOptions{
				Force: true,
			})
			if err == nil {
				removed++
			}
		}
	}

	if removed > 0 {
		fmt.Printf("Removed %d test images\n", removed)
	} else {
		fmt.Println("No test images found to remove")
	}

	return nil
}

// cleanCacheVolume removes the Docker volume used for Go module cache.
func cleanCacheVolume(ctx context.Context) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	volumeName := "hs-integration-go-cache"

	err = cli.VolumeRemove(ctx, volumeName, true)
	if err != nil {
		if errdefs.IsNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
			fmt.Printf("Go module cache volume not found: %s\n", volumeName)
		} else if errdefs.IsConflict(err) { //nolint:staticcheck // SA1019: deprecated but functional
			fmt.Printf("Go module cache volume is in use and cannot be removed: %s\n", volumeName)
		} else {
			fmt.Printf("Failed to remove Go module cache volume %s: %v\n", volumeName, err)
		}
	} else {
		fmt.Printf("Removed Go module cache volume: %s\n", volumeName)
	}

	return nil
}

// cleanupSuccessfulTestArtifacts removes artifacts from successful test runs to save disk space.
// This function removes large artifacts that are mainly useful for debugging failures:
// - Database dumps (.db files)
// - Profile data (pprof directories)
// - MapResponse data (mapresponses directories)
// - Prometheus metrics files
//
// It preserves:
// - Log files (.log) which are small and useful for verification.
func cleanupSuccessfulTestArtifacts(logsDir string, verbose bool) error {
	entries, err := os.ReadDir(logsDir)
	if err != nil {
		return fmt.Errorf("reading logs directory: %w", err)
	}

	var (
		removedFiles, removedDirs int
		totalSize                 int64
	)

	for _, entry := range entries {
		name := entry.Name()
		fullPath := filepath.Join(logsDir, name)

		if entry.IsDir() {
			// Remove pprof and mapresponses directories (typically large)
			// These directories contain artifacts from all containers in the test run
			if name == "pprof" || name == "mapresponses" {
				size, sizeErr := getDirSize(fullPath)
				if sizeErr == nil {
					totalSize += size
				}

				err := os.RemoveAll(fullPath)
				if err != nil {
					if verbose {
						log.Printf("Warning: failed to remove directory %s: %v", name, err)
					}
				} else {
					removedDirs++

					if verbose {
						log.Printf("Removed directory: %s/", name)
					}
				}
			}
		} else {
			// Only process test-related files (headscale and tailscale)
			if !strings.HasPrefix(name, "hs-") && !strings.HasPrefix(name, "ts-") {
				continue
			}

			// Remove database, metrics, and status files, but keep logs
			shouldRemove := strings.HasSuffix(name, ".db") ||
				strings.HasSuffix(name, "_metrics.txt") ||
				strings.HasSuffix(name, "_status.json")

			if shouldRemove {
				info, infoErr := entry.Info()
				if infoErr == nil {
					totalSize += info.Size()
				}

				err := os.Remove(fullPath)
				if err != nil {
					if verbose {
						log.Printf("Warning: failed to remove file %s: %v", name, err)
					}
				} else {
					removedFiles++

					if verbose {
						log.Printf("Removed file: %s", name)
					}
				}
			}
		}
	}

	if removedFiles > 0 || removedDirs > 0 {
		const bytesPerMB = 1024 * 1024
		log.Printf("Cleaned up %d files and %d directories (freed ~%.2f MB)",
			removedFiles, removedDirs, float64(totalSize)/bytesPerMB)
	}

	return nil
}

// getDirSize calculates the total size of a directory.
func getDirSize(path string) (int64, error) {
	var size int64

	err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}

		if !info.IsDir() {
			size += info.Size()
		}

		return nil
	})

	return size, err
}

cmd/hi/docker.go (new file, 807 lines)
@@ -0,0 +1,807 @@
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
	"time"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/image"
	"github.com/docker/docker/api/types/mount"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
	"github.com/juanfont/headscale/integration/dockertestutil"
)

const defaultDirPerm = 0o755

var (
	ErrTestFailed              = errors.New("test failed")
	ErrUnexpectedContainerWait = errors.New("unexpected end of container wait")
	ErrNoDockerContext         = errors.New("no docker context found")
	ErrMemoryLimitViolations   = errors.New("container(s) exceeded memory limits")
)

// runTestContainer executes integration tests in a Docker container.
//
//nolint:gocyclo // complex test orchestration function
func runTestContainer(ctx context.Context, config *RunConfig) error {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return fmt.Errorf("creating Docker client: %w", err)
	}
	defer cli.Close()

	runID := dockertestutil.GenerateRunID()
	containerName := "headscale-test-suite-" + runID
	logsDir := filepath.Join(config.LogsDir, runID)

	if config.Verbose {
		log.Printf("Run ID: %s", runID)
		log.Printf("Container name: %s", containerName)
		log.Printf("Logs directory: %s", logsDir)
	}

	absLogsDir, err := filepath.Abs(logsDir)
	if err != nil {
		return fmt.Errorf("getting absolute path for logs directory: %w", err)
	}

	const dirPerm = 0o755
	if err := os.MkdirAll(absLogsDir, dirPerm); err != nil { //nolint:noinlineerr
		return fmt.Errorf("creating logs directory: %w", err)
	}

	if config.CleanBefore {
		if config.Verbose {
			log.Printf("Running pre-test cleanup...")
		}

		err := cleanupBeforeTest(ctx)
		if err != nil && config.Verbose {
			log.Printf("Warning: pre-test cleanup failed: %v", err)
		}
	}

	goTestCmd := buildGoTestCommand(config)
	if config.Verbose {
		log.Printf("Command: %s", strings.Join(goTestCmd, " "))
	}

	imageName := "golang:" + config.GoVersion
	if err := ensureImageAvailable(ctx, cli, imageName, config.Verbose); err != nil { //nolint:noinlineerr
		return fmt.Errorf("ensuring image availability: %w", err)
	}

	resp, err := createGoTestContainer(ctx, cli, config, containerName, absLogsDir, goTestCmd)
	if err != nil {
		return fmt.Errorf("creating container: %w", err)
	}

	if config.Verbose {
		log.Printf("Created container: %s", resp.ID)
	}

	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil { //nolint:noinlineerr
		return fmt.Errorf("starting container: %w", err)
	}

	log.Printf("Starting test: %s", config.TestPattern)
	log.Printf("Run ID: %s", runID)
	log.Printf("Monitor with: docker logs -f %s", containerName)
	log.Printf("Logs directory: %s", logsDir)

	// Start stats collection for container resource monitoring (if enabled)
	var statsCollector *StatsCollector

	if config.Stats {
		var err error

		statsCollector, err = NewStatsCollector(ctx)
		if err != nil {
			if config.Verbose {
				log.Printf("Warning: failed to create stats collector: %v", err)
			}

			statsCollector = nil
		}

		if statsCollector != nil {
			defer statsCollector.Close()

			// Start stats collection immediately - no need for complex retry logic.
			// The new implementation monitors Docker events and will catch containers as they start.
			err := statsCollector.StartCollection(ctx, runID, config.Verbose)
			if err != nil {
				if config.Verbose {
					log.Printf("Warning: failed to start stats collection: %v", err)
				}
			}
			defer statsCollector.StopCollection()
		}
	}

	exitCode, err := streamAndWait(ctx, cli, resp.ID)

	// Ensure all containers have finished and logs are flushed before extracting artifacts
	waitErr := waitForContainerFinalization(ctx, cli, resp.ID, config.Verbose)
	if waitErr != nil && config.Verbose {
		log.Printf("Warning: failed to wait for container finalization: %v", waitErr)
	}

	// Extract artifacts from test containers before cleanup
	if err := extractArtifactsFromContainers(ctx, resp.ID, logsDir, config.Verbose); err != nil && config.Verbose { //nolint:noinlineerr
		log.Printf("Warning: failed to extract artifacts from containers: %v", err)
	}

	// Always list control files regardless of test outcome
	listControlFiles(logsDir)

	// Print stats summary and check memory limits if enabled
	if config.Stats && statsCollector != nil {
		violations := statsCollector.PrintSummaryAndCheckLimits(config.HSMemoryLimit, config.TSMemoryLimit)
		if len(violations) > 0 {
			log.Printf("MEMORY LIMIT VIOLATIONS DETECTED:")
			log.Printf("=================================")

			for _, violation := range violations {
				log.Printf("Container %s exceeded memory limit: %.1f MB > %.1f MB",
					violation.ContainerName, violation.MaxMemoryMB, violation.LimitMB)
			}

			return fmt.Errorf("test failed: %d %w", len(violations), ErrMemoryLimitViolations)
		}
	}

	shouldCleanup := config.CleanAfter && (!config.KeepOnFailure || exitCode == 0)
	if shouldCleanup {
		if config.Verbose {
			log.Printf("Running post-test cleanup for run %s...", runID)
		}

		cleanErr := cleanupAfterTest(ctx, cli, resp.ID, runID)
		if cleanErr != nil && config.Verbose {
			log.Printf("Warning: post-test cleanup failed: %v", cleanErr)
		}

		// Clean up artifacts from successful tests to save disk space in CI
		if exitCode == 0 {
			if config.Verbose {
				log.Printf("Test succeeded, cleaning up artifacts to save disk space...")
			}

			cleanErr := cleanupSuccessfulTestArtifacts(logsDir, config.Verbose)
			if cleanErr != nil && config.Verbose {
				log.Printf("Warning: artifact cleanup failed: %v", cleanErr)
			}
		}
	}

	if err != nil {
		return fmt.Errorf("executing test: %w", err)
	}

	if exitCode != 0 {
		return fmt.Errorf("%w: exit code %d", ErrTestFailed, exitCode)
	}

	log.Printf("Test completed successfully!")

	return nil
}

// buildGoTestCommand constructs the go test command arguments.
func buildGoTestCommand(config *RunConfig) []string {
	cmd := []string{"go", "test", "./..."}

	if config.TestPattern != "" {
		cmd = append(cmd, "-run", config.TestPattern)
	}

	if config.FailFast {
		cmd = append(cmd, "-failfast")
	}

	cmd = append(cmd, "-timeout", config.Timeout.String())
	cmd = append(cmd, "-v")

	return cmd
}

// createGoTestContainer creates a Docker container configured for running integration tests.
func createGoTestContainer(ctx context.Context, cli *client.Client, config *RunConfig, containerName, logsDir string, goTestCmd []string) (container.CreateResponse, error) {
	pwd, err := os.Getwd()
	if err != nil {
		return container.CreateResponse{}, fmt.Errorf("getting working directory: %w", err)
	}

	projectRoot := findProjectRoot(pwd)

	runID := dockertestutil.ExtractRunIDFromContainerName(containerName)

	env := []string{
		fmt.Sprintf("HEADSCALE_INTEGRATION_POSTGRES=%d", boolToInt(config.UsePostgres)),
		"HEADSCALE_INTEGRATION_RUN_ID=" + runID,
	}

	// Pass through CI environment variable for CI detection
	if ci := os.Getenv("CI"); ci != "" {
		env = append(env, "CI="+ci)
	}

	// Pass through all HEADSCALE_INTEGRATION_* environment variables
	for _, e := range os.Environ() {
		if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_") {
			// Skip the ones we already set explicitly
			if strings.HasPrefix(e, "HEADSCALE_INTEGRATION_POSTGRES=") ||
				strings.HasPrefix(e, "HEADSCALE_INTEGRATION_RUN_ID=") {
				continue
			}

			env = append(env, e)
		}
	}

	// Set GOCACHE to a known location (used by both bind mount and volume cases)
	env = append(env, "GOCACHE=/cache/go-build")

	containerConfig := &container.Config{
		Image:      "golang:" + config.GoVersion,
		Cmd:        goTestCmd,
		Env:        env,
		WorkingDir: projectRoot + "/integration",
		Tty:        true,
		Labels: map[string]string{
			"hi.run-id":    runID,
			"hi.test-type": "test-runner",
		},
	}

	// Get the correct Docker socket path from the current context
	dockerSocketPath := getDockerSocketPath()

	if config.Verbose {
		log.Printf("Using Docker socket: %s", dockerSocketPath)
	}

	binds := []string{
		fmt.Sprintf("%s:%s", projectRoot, projectRoot),
		dockerSocketPath + ":/var/run/docker.sock",
		logsDir + ":/tmp/control",
	}

	// Use bind mounts for Go cache if provided via environment variables,
	// otherwise fall back to Docker volumes for local development
	var mounts []mount.Mount

	goCache := os.Getenv("HEADSCALE_INTEGRATION_GO_CACHE")
	goBuildCache := os.Getenv("HEADSCALE_INTEGRATION_GO_BUILD_CACHE")

	if goCache != "" {
		binds = append(binds, goCache+":/go")
	} else {
		mounts = append(mounts, mount.Mount{
			Type:   mount.TypeVolume,
			Source: "hs-integration-go-cache",
			Target: "/go",
		})
	}

	if goBuildCache != "" {
		binds = append(binds, goBuildCache+":/cache/go-build")
	} else {
		mounts = append(mounts, mount.Mount{
			Type:   mount.TypeVolume,
			Source: "hs-integration-go-build-cache",
			Target: "/cache/go-build",
		})
	}

	hostConfig := &container.HostConfig{
		AutoRemove: false, // We'll remove manually for better control
		Binds:      binds,
		Mounts:     mounts,
	}

	return cli.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, containerName)
}

// streamAndWait streams container output and waits for completion.
func streamAndWait(ctx context.Context, cli *client.Client, containerID string) (int, error) {
	out, err := cli.ContainerLogs(ctx, containerID, container.LogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Follow:     true,
	})
	if err != nil {
		return -1, fmt.Errorf("getting container logs: %w", err)
	}
	defer out.Close()

	go func() {
		_, _ = io.Copy(os.Stdout, out)
	}()

	statusCh, errCh := cli.ContainerWait(ctx, containerID, container.WaitConditionNotRunning)
	select {
	case err := <-errCh:
		if err != nil {
			return -1, fmt.Errorf("waiting for container: %w", err)
		}
	case status := <-statusCh:
		return int(status.StatusCode), nil
	}

	return -1, ErrUnexpectedContainerWait
}

// waitForContainerFinalization ensures all test containers have properly finished and flushed their output.
func waitForContainerFinalization(ctx context.Context, cli *client.Client, testContainerID string, verbose bool) error {
	// First, get all related test containers
	containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}

	testContainers := getCurrentTestContainers(containers, testContainerID, verbose)

	// Wait for all test containers to reach a final state
	maxWaitTime := 10 * time.Second
	checkInterval := 500 * time.Millisecond
	timeout := time.After(maxWaitTime)

	ticker := time.NewTicker(checkInterval)
	defer ticker.Stop()

	for {
		select {
		case <-timeout:
			if verbose {
				log.Printf("Timeout waiting for container finalization, proceeding with artifact extraction")
			}

			return nil
		case <-ticker.C:
			allFinalized := true

			for _, testCont := range testContainers {
				inspect, err := cli.ContainerInspect(ctx, testCont.ID)
				if err != nil {
					if verbose {
						log.Printf("Warning: failed to inspect container %s: %v", testCont.name, err)
					}

					continue
				}

				// Check if container is in a final state
				if !isContainerFinalized(inspect.State) {
					allFinalized = false

					if verbose {
						log.Printf("Container %s still finalizing (state: %s)", testCont.name, inspect.State.Status)
					}

					break
				}
			}

			if allFinalized {
				if verbose {
					log.Printf("All test containers finalized, ready for artifact extraction")
				}

				return nil
			}
		}
	}
}

// isContainerFinalized checks if a container has reached a final state where logs are flushed.
func isContainerFinalized(state *container.State) bool {
	// Container is finalized if it's not running and has a finish time
	return !state.Running && state.FinishedAt != ""
}

// findProjectRoot locates the project root by finding the directory containing go.mod.
func findProjectRoot(startPath string) string {
	current := startPath
	for {
		if _, err := os.Stat(filepath.Join(current, "go.mod")); err == nil { //nolint:noinlineerr
			return current
		}

		parent := filepath.Dir(current)
		if parent == current {
			return startPath
		}

		current = parent
	}
}

// boolToInt converts a boolean to an integer for environment variables.
func boolToInt(b bool) int {
	if b {
		return 1
	}

	return 0
}

// DockerContext represents Docker context information.
type DockerContext struct {
	Name      string         `json:"Name"`
	Metadata  map[string]any `json:"Metadata"`
	Endpoints map[string]any `json:"Endpoints"`
	Current   bool           `json:"Current"`
}

// createDockerClient creates a Docker client with context detection.
func createDockerClient(ctx context.Context) (*client.Client, error) {
	contextInfo, err := getCurrentDockerContext(ctx)
	if err != nil {
		return client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	}

	var clientOpts []client.Opt

	clientOpts = append(clientOpts, client.WithAPIVersionNegotiation())

	if contextInfo != nil {
		if endpoints, ok := contextInfo.Endpoints["docker"]; ok {
			if endpointMap, ok := endpoints.(map[string]any); ok {
				if host, ok := endpointMap["Host"].(string); ok {
					if runConfig.Verbose {
						log.Printf("Using Docker host from context '%s': %s", contextInfo.Name, host)
					}

					clientOpts = append(clientOpts, client.WithHost(host))
				}
			}
		}
	}

	if len(clientOpts) == 1 {
		clientOpts = append(clientOpts, client.FromEnv)
	}

	return client.NewClientWithOpts(clientOpts...)
}

// getCurrentDockerContext retrieves the current Docker context information.
func getCurrentDockerContext(ctx context.Context) (*DockerContext, error) {
	cmd := exec.CommandContext(ctx, "docker", "context", "inspect")

	output, err := cmd.Output()
	if err != nil {
		return nil, fmt.Errorf("getting docker context: %w", err)
	}

	var contexts []DockerContext
	if err := json.Unmarshal(output, &contexts); err != nil { //nolint:noinlineerr
		return nil, fmt.Errorf("parsing docker context: %w", err)
	}

	if len(contexts) > 0 {
		return &contexts[0], nil
	}

	return nil, ErrNoDockerContext
}

// getDockerSocketPath returns the correct Docker socket path for the current context.
func getDockerSocketPath() string {
	// Always use the default socket path for mounting since Docker handles
	// the translation to the actual socket (e.g., colima socket) internally
	return "/var/run/docker.sock"
}

// checkImageAvailableLocally checks if the specified Docker image is available locally.
func checkImageAvailableLocally(ctx context.Context, cli *client.Client, imageName string) (bool, error) {
	_, _, err := cli.ImageInspectWithRaw(ctx, imageName) //nolint:staticcheck // SA1019: deprecated but functional
	if err != nil {
		if client.IsErrNotFound(err) { //nolint:staticcheck // SA1019: deprecated but functional
			return false, nil
		}

		return false, fmt.Errorf("inspecting image %s: %w", imageName, err)
	}

	return true, nil
}

// ensureImageAvailable checks if the image is available locally first, then pulls if needed.
func ensureImageAvailable(ctx context.Context, cli *client.Client, imageName string, verbose bool) error {
	// First check if image is available locally
	available, err := checkImageAvailableLocally(ctx, cli, imageName)
	if err != nil {
		return fmt.Errorf("checking local image availability: %w", err)
	}

	if available {
		if verbose {
			log.Printf("Image %s is available locally", imageName)
		}

		return nil
	}

	// Image not available locally, try to pull it
	if verbose {
		log.Printf("Image %s not found locally, pulling...", imageName)
	}

	reader, err := cli.ImagePull(ctx, imageName, image.PullOptions{})
	if err != nil {
		return fmt.Errorf("pulling image %s: %w", imageName, err)
	}
	defer reader.Close()

	if verbose {
		_, err = io.Copy(os.Stdout, reader)
		if err != nil {
			return fmt.Errorf("reading pull output: %w", err)
		}
	} else {
		_, err = io.Copy(io.Discard, reader)
		if err != nil {
			return fmt.Errorf("reading pull output: %w", err)
		}

		log.Printf("Image %s pulled successfully", imageName)
	}

	return nil
}

// listControlFiles displays the headscale test artifacts created in the control logs directory.
func listControlFiles(logsDir string) {
	entries, err := os.ReadDir(logsDir)
	if err != nil {
		log.Printf("Logs directory: %s", logsDir)
		return
	}

	var (
		logFiles  []string
		dataFiles []string
		dataDirs  []string
	)

	for _, entry := range entries {
		name := entry.Name()
		// Only show headscale (hs-*) files and directories
		if !strings.HasPrefix(name, "hs-") {
			continue
		}

		if entry.IsDir() {
			// Include directories (pprof, mapresponses)
			if strings.Contains(name, "-pprof") || strings.Contains(name, "-mapresponses") {
				dataDirs = append(dataDirs, name)
			}
		} else {
			// Include files
			switch {
			case strings.HasSuffix(name, ".stderr.log") || strings.HasSuffix(name, ".stdout.log"):
				logFiles = append(logFiles, name)
			case strings.HasSuffix(name, ".db"):
				dataFiles = append(dataFiles, name)
			}
		}
	}

	log.Printf("Test artifacts saved to: %s", logsDir)

	if len(logFiles) > 0 {
		log.Printf("Headscale logs:")

		for _, file := range logFiles {
			log.Printf("  %s", file)
		}
	}

	if len(dataFiles) > 0 || len(dataDirs) > 0 {
		log.Printf("Headscale data:")

		for _, file := range dataFiles {
			log.Printf("  %s", file)
		}

		for _, dir := range dataDirs {
			log.Printf("  %s/", dir)
		}
	}
}

// extractArtifactsFromContainers collects container logs and files from the specific test run.
|
||||||
|
func extractArtifactsFromContainers(ctx context.Context, testContainerID, logsDir string, verbose bool) error {
|
||||||
|
cli, err := createDockerClient(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("creating Docker client: %w", err)
|
||||||
|
}
|
||||||
|
defer cli.Close()
|
||||||
|
|
||||||
|
// List all containers
|
||||||
|
containers, err := cli.ContainerList(ctx, container.ListOptions{All: true})
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("listing containers: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Get containers from the specific test run
|
||||||
|
currentTestContainers := getCurrentTestContainers(containers, testContainerID, verbose)
|
||||||
|
|
||||||
|
extractedCount := 0
|
||||||
|
|
||||||
|
for _, cont := range currentTestContainers {
|
||||||
|
// Extract container logs and tar files
|
||||||
|
err := extractContainerArtifacts(ctx, cli, cont.ID, cont.name, logsDir, verbose)
|
||||||
|
if err != nil {
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Warning: failed to extract artifacts from container %s (%s): %v", cont.name, cont.ID[:12], err)
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Extracted artifacts from container %s (%s)", cont.name, cont.ID[:12])
|
||||||
|
}
|
||||||
|
|
||||||
|
extractedCount++
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if verbose && extractedCount > 0 {
|
||||||
|
log.Printf("Extracted artifacts from %d containers", extractedCount)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// testContainer represents a container from the current test run.
|
||||||
|
type testContainer struct {
|
||||||
|
ID string
|
||||||
|
name string
|
||||||
|
}
|
||||||
|
|
||||||
|
// getCurrentTestContainers filters containers to only include those from the current test run.
|
||||||
|
func getCurrentTestContainers(containers []container.Summary, testContainerID string, verbose bool) []testContainer {
|
||||||
|
var testRunContainers []testContainer
|
||||||
|
|
||||||
|
// Find the test container to get its run ID label
|
||||||
|
var runID string
|
||||||
|
|
||||||
|
for _, cont := range containers {
|
||||||
|
if cont.ID == testContainerID {
|
||||||
|
if cont.Labels != nil {
|
||||||
|
runID = cont.Labels["hi.run-id"]
|
||||||
|
}
|
||||||
|
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if runID == "" {
|
||||||
|
log.Printf("Error: test container %s missing required hi.run-id label", testContainerID[:12])
|
||||||
|
return testRunContainers
|
||||||
|
}
|
||||||
|
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Looking for containers with run ID: %s", runID)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Find all containers with the same run ID
|
||||||
|
for _, cont := range containers {
|
||||||
|
for _, name := range cont.Names {
|
||||||
|
containerName := strings.TrimPrefix(name, "/")
|
||||||
|
if strings.HasPrefix(containerName, "hs-") || strings.HasPrefix(containerName, "ts-") {
|
||||||
|
// Check if container has matching run ID label
|
||||||
|
if cont.Labels != nil && cont.Labels["hi.run-id"] == runID {
|
||||||
|
testRunContainers = append(testRunContainers, testContainer{
|
||||||
|
ID: cont.ID,
|
||||||
|
name: containerName,
|
||||||
|
})
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Including container %s (run ID: %s)", containerName, runID)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
break
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return testRunContainers
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractContainerArtifacts saves logs and tar files from a container.
|
||||||
|
func extractContainerArtifacts(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
|
||||||
|
// Ensure the logs directory exists
|
||||||
|
err := os.MkdirAll(logsDir, defaultDirPerm)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("creating logs directory: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract container logs
|
||||||
|
err = extractContainerLogs(ctx, cli, containerID, containerName, logsDir, verbose)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("extracting logs: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Extract tar files for headscale containers only
|
||||||
|
if strings.HasPrefix(containerName, "hs-") {
|
||||||
|
err := extractContainerFiles(ctx, cli, containerID, containerName, logsDir, verbose)
|
||||||
|
if err != nil {
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Warning: failed to extract files from %s: %v", containerName, err)
|
||||||
|
}
|
||||||
|
// Don't fail the whole extraction if files are missing
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractContainerLogs saves the stdout and stderr logs from a container to files.
|
||||||
|
func extractContainerLogs(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
|
||||||
|
// Get container logs
|
||||||
|
logReader, err := cli.ContainerLogs(ctx, containerID, container.LogsOptions{
|
||||||
|
ShowStdout: true,
|
||||||
|
ShowStderr: true,
|
||||||
|
Timestamps: false,
|
||||||
|
Follow: false,
|
||||||
|
Tail: "all",
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("getting container logs: %w", err)
|
||||||
|
}
|
||||||
|
defer logReader.Close()
|
||||||
|
|
||||||
|
// Create log files following the headscale naming convention
|
||||||
|
stdoutPath := filepath.Join(logsDir, containerName+".stdout.log")
|
||||||
|
stderrPath := filepath.Join(logsDir, containerName+".stderr.log")
|
||||||
|
|
||||||
|
// Create buffers to capture stdout and stderr separately
|
||||||
|
var stdoutBuf, stderrBuf bytes.Buffer
|
||||||
|
|
||||||
|
// Demultiplex the Docker logs stream to separate stdout and stderr
|
||||||
|
_, err = stdcopy.StdCopy(&stdoutBuf, &stderrBuf, logReader)
|
||||||
|
if err != nil {
|
||||||
|
return fmt.Errorf("demultiplexing container logs: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Write stdout logs
|
||||||
|
if err := os.WriteFile(stdoutPath, stdoutBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
|
||||||
|
return fmt.Errorf("writing stdout log: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Write stderr logs
|
||||||
|
if err := os.WriteFile(stderrPath, stderrBuf.Bytes(), 0o644); err != nil { //nolint:gosec,noinlineerr // log files should be readable
|
||||||
|
return fmt.Errorf("writing stderr log: %w", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Saved logs for %s: %s, %s", containerName, stdoutPath, stderrPath)
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// extractContainerFiles extracts database file and directories from headscale containers.
|
||||||
|
// Note: The actual file extraction is now handled by the integration tests themselves
|
||||||
|
// via SaveProfile, SaveMapResponses, and SaveDatabase functions in hsic.go.
|
||||||
|
func extractContainerFiles(ctx context.Context, cli *client.Client, containerID, containerName, logsDir string, verbose bool) error {
|
||||||
|
// Files are now extracted directly by the integration tests
|
||||||
|
// This function is kept for potential future use or other file types
|
||||||
|
return nil
|
||||||
|
}
cmd/hi/doctor.go (new file, 380 lines)
@@ -0,0 +1,380 @@
package main

import (
	"context"
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

var ErrSystemChecksFailed = errors.New("system checks failed")

// DoctorResult represents the result of a single health check.
type DoctorResult struct {
	Name        string
	Status      string // "PASS", "FAIL", "WARN"
	Message     string
	Suggestions []string
}

// runDoctorCheck performs comprehensive pre-flight checks for integration testing.
func runDoctorCheck(ctx context.Context) error {
	results := []DoctorResult{}

	// Check 1: Docker binary availability
	results = append(results, checkDockerBinary())

	// Check 2: Docker daemon connectivity
	dockerResult := checkDockerDaemon(ctx)
	results = append(results, dockerResult)

	// If Docker is available, run additional checks
	if dockerResult.Status == "PASS" {
		results = append(results, checkDockerContext(ctx))
		results = append(results, checkDockerSocket(ctx))
		results = append(results, checkGolangImage(ctx))
	}

	// Check 3: Go installation
	results = append(results, checkGoInstallation(ctx))

	// Check 4: Git repository
	results = append(results, checkGitRepository(ctx))

	// Check 5: Required files
	results = append(results, checkRequiredFiles(ctx))

	// Display results
	displayDoctorResults(results)

	// Return error if any critical checks failed
	for _, result := range results {
		if result.Status == "FAIL" {
			return fmt.Errorf("%w - see details above", ErrSystemChecksFailed)
		}
	}

	log.Printf("✅ All system checks passed - ready to run integration tests!")

	return nil
}

// checkDockerBinary verifies Docker binary is available.
func checkDockerBinary() DoctorResult {
	_, err := exec.LookPath("docker")
	if err != nil {
		return DoctorResult{
			Name:    "Docker Binary",
			Status:  "FAIL",
			Message: "Docker binary not found in PATH",
			Suggestions: []string{
				"Install Docker: https://docs.docker.com/get-docker/",
				"For macOS: consider using colima or Docker Desktop",
				"Ensure docker is in your PATH",
			},
		}
	}

	return DoctorResult{
		Name:    "Docker Binary",
		Status:  "PASS",
		Message: "Docker binary found",
	}
}

// checkDockerDaemon verifies Docker daemon is running and accessible.
func checkDockerDaemon(ctx context.Context) DoctorResult {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Docker Daemon",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot create Docker client: %v", err),
			Suggestions: []string{
				"Start Docker daemon/service",
				"Check Docker Desktop is running (if using Docker Desktop)",
				"For colima: run 'colima start'",
				"Verify DOCKER_HOST environment variable if set",
			},
		}
	}
	defer cli.Close()

	_, err = cli.Ping(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Docker Daemon",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot ping Docker daemon: %v", err),
			Suggestions: []string{
				"Ensure Docker daemon is running",
				"Check Docker socket permissions",
				"Try: docker info",
			},
		}
	}

	return DoctorResult{
		Name:    "Docker Daemon",
		Status:  "PASS",
		Message: "Docker daemon is running and accessible",
	}
}

// checkDockerContext verifies Docker context configuration.
func checkDockerContext(ctx context.Context) DoctorResult {
	contextInfo, err := getCurrentDockerContext(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Docker Context",
			Status:  "WARN",
			Message: "Could not detect Docker context, using default settings",
			Suggestions: []string{
				"Check: docker context ls",
				"Consider setting up a specific context if needed",
			},
		}
	}

	if contextInfo == nil {
		return DoctorResult{
			Name:    "Docker Context",
			Status:  "PASS",
			Message: "Using default Docker context",
		}
	}

	return DoctorResult{
		Name:    "Docker Context",
		Status:  "PASS",
		Message: "Using Docker context: " + contextInfo.Name,
	}
}

// checkDockerSocket verifies Docker socket accessibility.
func checkDockerSocket(ctx context.Context) DoctorResult {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Docker Socket",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot access Docker socket: %v", err),
			Suggestions: []string{
				"Check Docker socket permissions",
				"Add user to docker group: sudo usermod -aG docker $USER",
				"For colima: ensure socket is accessible",
			},
		}
	}
	defer cli.Close()

	info, err := cli.Info(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Docker Socket",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot get Docker info: %v", err),
			Suggestions: []string{
				"Check Docker daemon status",
				"Verify socket permissions",
			},
		}
	}

	return DoctorResult{
		Name:    "Docker Socket",
		Status:  "PASS",
		Message: fmt.Sprintf("Docker socket accessible (Server: %s)", info.ServerVersion),
	}
}

// checkGolangImage verifies the golang Docker image is available locally or can be pulled.
func checkGolangImage(ctx context.Context) DoctorResult {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return DoctorResult{
			Name:    "Golang Image",
			Status:  "FAIL",
			Message: "Cannot create Docker client for image check",
		}
	}
	defer cli.Close()

	goVersion := detectGoVersion()
	imageName := "golang:" + goVersion

	// First check if image is available locally
	available, err := checkImageAvailableLocally(ctx, cli, imageName)
	if err != nil {
		return DoctorResult{
			Name:    "Golang Image",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot check golang image %s: %v", imageName, err),
			Suggestions: []string{
				"Check Docker daemon status",
				"Try: docker images | grep golang",
			},
		}
	}

	if available {
		return DoctorResult{
			Name:    "Golang Image",
			Status:  "PASS",
			Message: fmt.Sprintf("Golang image %s is available locally", imageName),
		}
	}

	// Image not available locally, try to pull it
	err = ensureImageAvailable(ctx, cli, imageName, false)
	if err != nil {
		return DoctorResult{
			Name:    "Golang Image",
			Status:  "FAIL",
			Message: fmt.Sprintf("Golang image %s not available locally and cannot pull: %v", imageName, err),
			Suggestions: []string{
				"Check internet connectivity",
				"Verify Docker Hub access",
				"Try: docker pull " + imageName,
				"Or run tests offline if image was pulled previously",
			},
		}
	}

	return DoctorResult{
		Name:    "Golang Image",
		Status:  "PASS",
		Message: fmt.Sprintf("Golang image %s is now available", imageName),
	}
}

// checkGoInstallation verifies Go is installed and working.
func checkGoInstallation(ctx context.Context) DoctorResult {
	_, err := exec.LookPath("go")
	if err != nil {
		return DoctorResult{
			Name:    "Go Installation",
			Status:  "FAIL",
			Message: "Go binary not found in PATH",
			Suggestions: []string{
				"Install Go: https://golang.org/dl/",
				"Ensure go is in your PATH",
			},
		}
	}

	cmd := exec.CommandContext(ctx, "go", "version")

	output, err := cmd.Output()
	if err != nil {
		return DoctorResult{
			Name:    "Go Installation",
			Status:  "FAIL",
			Message: fmt.Sprintf("Cannot get Go version: %v", err),
		}
	}

	version := strings.TrimSpace(string(output))

	return DoctorResult{
		Name:    "Go Installation",
		Status:  "PASS",
		Message: version,
	}
}

// checkGitRepository verifies we're in a git repository.
func checkGitRepository(ctx context.Context) DoctorResult {
	cmd := exec.CommandContext(ctx, "git", "rev-parse", "--git-dir")

	err := cmd.Run()
	if err != nil {
		return DoctorResult{
			Name:    "Git Repository",
			Status:  "FAIL",
			Message: "Not in a Git repository",
			Suggestions: []string{
				"Run from within the headscale git repository",
				"Clone the repository: git clone https://github.com/juanfont/headscale.git",
			},
		}
	}

	return DoctorResult{
		Name:    "Git Repository",
		Status:  "PASS",
		Message: "Running in Git repository",
	}
}

// checkRequiredFiles verifies required files exist.
func checkRequiredFiles(ctx context.Context) DoctorResult {
	requiredFiles := []string{
		"go.mod",
		"integration/",
		"cmd/hi/",
	}

	var missingFiles []string

	for _, file := range requiredFiles {
		cmd := exec.CommandContext(ctx, "test", "-e", file)

		err := cmd.Run()
		if err != nil {
			missingFiles = append(missingFiles, file)
		}
	}

	if len(missingFiles) > 0 {
		return DoctorResult{
			Name:    "Required Files",
			Status:  "FAIL",
			Message: "Missing required files: " + strings.Join(missingFiles, ", "),
			Suggestions: []string{
				"Ensure you're in the headscale project root directory",
				"Check that integration/ directory exists",
				"Verify this is a complete headscale repository",
			},
		}
	}

	return DoctorResult{
		Name:    "Required Files",
		Status:  "PASS",
		Message: "All required files found",
	}
}

// displayDoctorResults shows the results in a formatted way.
func displayDoctorResults(results []DoctorResult) {
	log.Printf("🔍 System Health Check Results")
	log.Printf("================================")

	for _, result := range results {
		var icon string

		switch result.Status {
		case "PASS":
			icon = "✅"
		case "WARN":
			icon = "⚠️"
		case "FAIL":
			icon = "❌"
		default:
			icon = "❓"
		}

		log.Printf("%s %s: %s", icon, result.Name, result.Message)

		if len(result.Suggestions) > 0 {
			for _, suggestion := range result.Suggestions {
				log.Printf("   💡 %s", suggestion)
			}
		}
	}

	log.Printf("================================")
}
cmd/hi/main.go (new file, 98 lines)
@@ -0,0 +1,98 @@
package main

import (
	"context"
	"os"

	"github.com/creachadair/command"
	"github.com/creachadair/flax"
)

var runConfig RunConfig

func main() {
	root := command.C{
		Name: "hi",
		Help: "Headscale Integration test runner",
		Commands: []*command.C{
			{
				Name:     "run",
				Help:     "Run integration tests",
				Usage:    "run [test-pattern] [flags]",
				SetFlags: command.Flags(flax.MustBind, &runConfig),
				Run:      runIntegrationTest,
			},
			{
				Name: "doctor",
				Help: "Check system requirements for running integration tests",
				Run: func(env *command.Env) error {
					return runDoctorCheck(env.Context())
				},
			},
			{
				Name: "clean",
				Help: "Clean Docker resources",
				Commands: []*command.C{
					{
						Name: "networks",
						Help: "Prune unused Docker networks",
						Run: func(env *command.Env) error {
							return pruneDockerNetworks(env.Context())
						},
					},
					{
						Name: "images",
						Help: "Clean old test images",
						Run: func(env *command.Env) error {
							return cleanOldImages(env.Context())
						},
					},
					{
						Name: "containers",
						Help: "Kill all test containers",
						Run: func(env *command.Env) error {
							return killTestContainers(env.Context())
						},
					},
					{
						Name: "cache",
						Help: "Clean Go module cache volume",
						Run: func(env *command.Env) error {
							return cleanCacheVolume(env.Context())
						},
					},
					{
						Name: "all",
						Help: "Run all cleanup operations",
						Run: func(env *command.Env) error {
							return cleanAll(env.Context())
						},
					},
				},
			},
			command.HelpCommand(nil),
		},
	}

	env := root.NewEnv(nil).MergeFlags(true)
	command.RunOrFail(env, os.Args[1:])
}

func cleanAll(ctx context.Context) error {
	err := killTestContainers(ctx)
	if err != nil {
		return err
	}

	err = pruneDockerNetworks(ctx)
	if err != nil {
		return err
	}

	err = cleanOldImages(ctx)
	if err != nil {
		return err
	}

	return cleanCacheVolume(ctx)
}
cmd/hi/run.go (new file, 129 lines)
@@ -0,0 +1,129 @@
package main

import (
	"errors"
	"fmt"
	"log"
	"os"
	"path/filepath"
	"time"

	"github.com/creachadair/command"
)

var ErrTestPatternRequired = errors.New("test pattern is required as first argument or use --test flag")

type RunConfig struct {
	TestPattern   string        `flag:"test,Test pattern to run"`
	Timeout       time.Duration `flag:"timeout,default=120m,Test timeout"`
	FailFast      bool          `flag:"failfast,default=true,Stop on first test failure"`
	UsePostgres   bool          `flag:"postgres,default=false,Use PostgreSQL instead of SQLite"`
	GoVersion     string        `flag:"go-version,Go version to use (auto-detected from go.mod)"`
	CleanBefore   bool          `flag:"clean-before,default=true,Clean stale resources before test"`
	CleanAfter    bool          `flag:"clean-after,default=true,Clean resources after test"`
	KeepOnFailure bool          `flag:"keep-on-failure,default=false,Keep containers on test failure"`
	LogsDir       string        `flag:"logs-dir,default=control_logs,Control logs directory"`
	Verbose       bool          `flag:"verbose,default=false,Verbose output"`
	Stats         bool          `flag:"stats,default=false,Collect and display container resource usage statistics"`
	HSMemoryLimit float64       `flag:"hs-memory-limit,default=0,Fail test if any Headscale container exceeds this memory limit in MB (0 = disabled)"`
	TSMemoryLimit float64       `flag:"ts-memory-limit,default=0,Fail test if any Tailscale container exceeds this memory limit in MB (0 = disabled)"`
}

// runIntegrationTest executes the integration test workflow.
func runIntegrationTest(env *command.Env) error {
	args := env.Args
	if len(args) > 0 && runConfig.TestPattern == "" {
		runConfig.TestPattern = args[0]
	}

	if runConfig.TestPattern == "" {
		return ErrTestPatternRequired
	}

	if runConfig.GoVersion == "" {
		runConfig.GoVersion = detectGoVersion()
	}

	// Run pre-flight checks
	if runConfig.Verbose {
		log.Printf("Running pre-flight system checks...")
	}

	err := runDoctorCheck(env.Context())
	if err != nil {
		return fmt.Errorf("pre-flight checks failed: %w", err)
	}

	if runConfig.Verbose {
		log.Printf("Running test: %s", runConfig.TestPattern)
		log.Printf("Go version: %s", runConfig.GoVersion)
		log.Printf("Timeout: %s", runConfig.Timeout)
		log.Printf("Use PostgreSQL: %t", runConfig.UsePostgres)
	}

	return runTestContainer(env.Context(), &runConfig)
}

// detectGoVersion reads the Go version from go.mod file.
func detectGoVersion() string {
	goModPath := filepath.Join("..", "..", "go.mod")

	if _, err := os.Stat("go.mod"); err == nil { //nolint:noinlineerr
		goModPath = "go.mod"
	} else if _, err := os.Stat("../../go.mod"); err == nil { //nolint:noinlineerr
		goModPath = "../../go.mod"
	}

	content, err := os.ReadFile(goModPath)
	if err != nil {
		return "1.26.1"
	}

	lines := splitLines(string(content))
	for _, line := range lines {
		if len(line) > 3 && line[:3] == "go " {
			version := line[3:]
			if idx := indexOf(version, " "); idx != -1 {
				version = version[:idx]
			}

			return version
		}
	}

	return "1.26.1"
}

// splitLines splits a string into lines without using strings.Split.
func splitLines(s string) []string {
	var (
		lines   []string
		current string
	)

	for _, char := range s {
		if char == '\n' {
			lines = append(lines, current)
			current = ""
		} else {
			current += string(char)
		}
	}

	if current != "" {
		lines = append(lines, current)
	}

	return lines
}

// indexOf finds the first occurrence of substr in s.
func indexOf(s, substr string) int {
	for i := 0; i <= len(s)-len(substr); i++ {
		if s[i:i+len(substr)] == substr {
			return i
		}
	}

	return -1
}
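The `splitLines` and `indexOf` helpers deliberately avoid `strings`; for comparison, here is what the same go.mod parse looks like leaning on the standard library (`strings.Split`, `strings.CutPrefix`, `strings.Fields`). The sample go.mod content is made up for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// goVersionFrom extracts the version from a "go X.Y.Z" directive in go.mod
// content, keeping only the first field after the prefix so trailing text
// on the line is dropped — the same trim detectGoVersion does with indexOf.
func goVersionFrom(gomod string) string {
	for _, line := range strings.Split(gomod, "\n") {
		if rest, ok := strings.CutPrefix(line, "go "); ok {
			fields := strings.Fields(rest)
			if len(fields) > 0 {
				return fields[0]
			}
		}
	}
	return ""
}

func main() {
	sample := "module example.com/demo\n\ngo 1.24.2\n"
	fmt.Println(goVersionFrom(sample)) // prints the directive's version
}
```

Note that `module` and `require` lines never match because only lines beginning exactly with `go ` (including the space) are considered, mirroring the `line[:3] == "go "` check above.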
cmd/hi/stats.go (new file, 493 lines)
@@ -0,0 +1,493 @@
package main

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"log"
	"sort"
	"strings"
	"sync"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/events"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// ErrStatsCollectionAlreadyStarted is returned when trying to start stats collection that is already running.
var ErrStatsCollectionAlreadyStarted = errors.New("stats collection already started")

// ContainerStats represents statistics for a single container.
type ContainerStats struct {
	ContainerID   string
	ContainerName string
	Stats         []StatsSample
	mutex         sync.RWMutex
}

// StatsSample represents a single stats measurement.
type StatsSample struct {
	Timestamp time.Time
	CPUUsage  float64 // CPU usage percentage
	MemoryMB  float64 // Memory usage in MB
}

// StatsCollector manages collection of container statistics.
type StatsCollector struct {
	client            *client.Client
	containers        map[string]*ContainerStats
	stopChan          chan struct{}
	wg                sync.WaitGroup
	mutex             sync.RWMutex
	collectionStarted bool
}

// NewStatsCollector creates a new stats collector instance.
func NewStatsCollector(ctx context.Context) (*StatsCollector, error) {
	cli, err := createDockerClient(ctx)
	if err != nil {
		return nil, fmt.Errorf("creating Docker client: %w", err)
	}

	return &StatsCollector{
		client:     cli,
		containers: make(map[string]*ContainerStats),
		stopChan:   make(chan struct{}),
	}, nil
}

// StartCollection begins monitoring all containers and collecting stats for hs- and ts- containers with matching run ID.
func (sc *StatsCollector) StartCollection(ctx context.Context, runID string, verbose bool) error {
	sc.mutex.Lock()
	defer sc.mutex.Unlock()

	if sc.collectionStarted {
		return ErrStatsCollectionAlreadyStarted
	}

	sc.collectionStarted = true

	// Start monitoring existing containers
	sc.wg.Add(1)

	go sc.monitorExistingContainers(ctx, runID, verbose)

	// Start Docker events monitoring for new containers
	sc.wg.Add(1)

	go sc.monitorDockerEvents(ctx, runID, verbose)

	if verbose {
		log.Printf("Started container monitoring for run ID %s", runID)
	}

	return nil
}

// StopCollection stops all stats collection.
func (sc *StatsCollector) StopCollection() {
	// Check if already stopped without holding lock
	sc.mutex.RLock()

	if !sc.collectionStarted {
		sc.mutex.RUnlock()
		return
	}

	sc.mutex.RUnlock()

	// Signal stop to all goroutines
	close(sc.stopChan)

	// Wait for all goroutines to finish
	sc.wg.Wait()

	// Mark as stopped
	sc.mutex.Lock()
	sc.collectionStarted = false
	sc.mutex.Unlock()
}

// monitorExistingContainers checks for existing containers that match our criteria.
func (sc *StatsCollector) monitorExistingContainers(ctx context.Context, runID string, verbose bool) {
	defer sc.wg.Done()

	containers, err := sc.client.ContainerList(ctx, container.ListOptions{})
|
||||||
|
if err != nil {
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Failed to list existing containers: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, cont := range containers {
|
||||||
|
if sc.shouldMonitorContainer(cont, runID) {
|
||||||
|
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// monitorDockerEvents listens for container start events and begins monitoring relevant containers.
|
||||||
|
func (sc *StatsCollector) monitorDockerEvents(ctx context.Context, runID string, verbose bool) {
|
||||||
|
defer sc.wg.Done()
|
||||||
|
|
||||||
|
filter := filters.NewArgs()
|
||||||
|
filter.Add("type", "container")
|
||||||
|
filter.Add("event", "start")
|
||||||
|
|
||||||
|
eventOptions := events.ListOptions{
|
||||||
|
Filters: filter,
|
||||||
|
}
|
||||||
|
|
||||||
|
events, errs := sc.client.Events(ctx, eventOptions)
|
||||||
|
|
||||||
|
for {
|
||||||
|
select {
|
||||||
|
case <-sc.stopChan:
|
||||||
|
return
|
||||||
|
case <-ctx.Done():
|
||||||
|
return
|
||||||
|
case event := <-events:
|
||||||
|
if event.Type == "container" && event.Action == "start" {
|
||||||
|
// Get container details
|
||||||
|
containerInfo, err := sc.client.ContainerInspect(ctx, event.ID) //nolint:staticcheck // SA1019: use Actor.ID
|
||||||
|
if err != nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
// Convert to types.Container format for consistency
|
||||||
|
cont := types.Container{ //nolint:staticcheck // SA1019: use container.Summary
|
||||||
|
ID: containerInfo.ID,
|
||||||
|
Names: []string{containerInfo.Name},
|
||||||
|
Labels: containerInfo.Config.Labels,
|
||||||
|
}
|
||||||
|
|
||||||
|
if sc.shouldMonitorContainer(cont, runID) {
|
||||||
|
sc.startStatsForContainer(ctx, cont.ID, cont.Names[0], verbose)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
case err := <-errs:
|
||||||
|
if verbose {
|
||||||
|
log.Printf("Error in Docker events stream: %v", err)
|
||||||
|
}
|
||||||
|
|
||||||
|
return
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// shouldMonitorContainer determines if a container should be monitored.
|
||||||
|
func (sc *StatsCollector) shouldMonitorContainer(cont types.Container, runID string) bool { //nolint:staticcheck // SA1019: use container.Summary
|
||||||
|
// Check if it has the correct run ID label
|
||||||
|
if cont.Labels == nil || cont.Labels["hi.run-id"] != runID {
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Check if it's an hs- or ts- container
|
||||||
|
for _, name := range cont.Names {
|
||||||
|
containerName := strings.TrimPrefix(name, "/")
|
||||||
|
if strings.HasPrefix(containerName, "hs-") || strings.HasPrefix(containerName, "ts-") {
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
return false
|
||||||
|
}
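The two-step filter above (run-ID label first, then an `hs-`/`ts-` name prefix) can be exercised in isolation. A minimal sketch, where `fakeContainer` and `shouldMonitor` are stand-ins of ours, not part of the source:

```go
package main

import (
	"fmt"
	"strings"
)

// fakeContainer stands in for types.Container in this sketch.
type fakeContainer struct {
	Names  []string
	Labels map[string]string
}

// shouldMonitor mirrors the filter above: the "hi.run-id" label must match,
// and at least one name (minus Docker's leading "/") must start with hs- or ts-.
func shouldMonitor(c fakeContainer, runID string) bool {
	if c.Labels == nil || c.Labels["hi.run-id"] != runID {
		return false
	}
	for _, name := range c.Names {
		n := strings.TrimPrefix(name, "/")
		if strings.HasPrefix(n, "hs-") || strings.HasPrefix(n, "ts-") {
			return true
		}
	}
	return false
}

func main() {
	run := "abc123"
	// Matching label and prefix, wrong prefix, missing label.
	fmt.Println(shouldMonitor(fakeContainer{Names: []string{"/hs-server"}, Labels: map[string]string{"hi.run-id": run}}, run))
	fmt.Println(shouldMonitor(fakeContainer{Names: []string{"/postgres"}, Labels: map[string]string{"hi.run-id": run}}, run))
	fmt.Println(shouldMonitor(fakeContainer{Names: []string{"/ts-client"}, Labels: nil}, run))
}
```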

// startStatsForContainer begins stats collection for a specific container.
func (sc *StatsCollector) startStatsForContainer(ctx context.Context, containerID, containerName string, verbose bool) {
	containerName = strings.TrimPrefix(containerName, "/")

	sc.mutex.Lock()
	// Check if we're already monitoring this container.
	if _, exists := sc.containers[containerID]; exists {
		sc.mutex.Unlock()
		return
	}

	sc.containers[containerID] = &ContainerStats{
		ContainerID:   containerID,
		ContainerName: containerName,
		Stats:         make([]StatsSample, 0),
	}
	sc.mutex.Unlock()

	if verbose {
		log.Printf("Starting stats collection for container %s (%s)", containerName, containerID[:12])
	}

	sc.wg.Add(1)

	go sc.collectStatsForContainer(ctx, containerID, verbose)
}

// collectStatsForContainer collects stats for a specific container using Docker API streaming.
func (sc *StatsCollector) collectStatsForContainer(ctx context.Context, containerID string, verbose bool) {
	defer sc.wg.Done()

	// Use Docker API streaming stats - much more efficient than the CLI.
	statsResponse, err := sc.client.ContainerStats(ctx, containerID, true)
	if err != nil {
		if verbose {
			log.Printf("Failed to get stats stream for container %s: %v", containerID[:12], err)
		}

		return
	}
	defer statsResponse.Body.Close()

	decoder := json.NewDecoder(statsResponse.Body)

	var prevStats *container.Stats //nolint:staticcheck // SA1019: use StatsResponse

	for {
		select {
		case <-sc.stopChan:
			return
		case <-ctx.Done():
			return
		default:
			var stats container.Stats //nolint:staticcheck // SA1019: use StatsResponse

			err := decoder.Decode(&stats)
			if err != nil {
				// EOF is expected when the container stops or the stream ends.
				if err.Error() != "EOF" && verbose {
					log.Printf("Failed to decode stats for container %s: %v", containerID[:12], err)
				}

				return
			}

			// Calculate CPU percentage (only if we have previous stats).
			var cpuPercent float64
			if prevStats != nil {
				cpuPercent = calculateCPUPercent(prevStats, &stats)
			}

			// Calculate memory usage in MB.
			memoryMB := float64(stats.MemoryStats.Usage) / (1024 * 1024)

			// Store the sample (skip the first sample since the CPU calculation needs previous stats).
			if prevStats != nil {
				// Get the container stats reference without holding the main mutex.
				var (
					containerStats *ContainerStats
					exists         bool
				)

				sc.mutex.RLock()
				containerStats, exists = sc.containers[containerID]
				sc.mutex.RUnlock()

				if exists && containerStats != nil {
					containerStats.mutex.Lock()
					containerStats.Stats = append(containerStats.Stats, StatsSample{
						Timestamp: time.Now(),
						CPUUsage:  cpuPercent,
						MemoryMB:  memoryMB,
					})
					containerStats.mutex.Unlock()
				}
			}

			// Save current stats for the next iteration.
			prevStats = &stats
		}
	}
}

// calculateCPUPercent calculates CPU usage percentage from Docker stats.
func calculateCPUPercent(prevStats, stats *container.Stats) float64 { //nolint:staticcheck // SA1019: use StatsResponse
	// CPU calculation based on Docker's implementation.
	cpuDelta := float64(stats.CPUStats.CPUUsage.TotalUsage) - float64(prevStats.CPUStats.CPUUsage.TotalUsage)
	systemDelta := float64(stats.CPUStats.SystemUsage) - float64(prevStats.CPUStats.SystemUsage)

	if systemDelta > 0 && cpuDelta >= 0 {
		// CPU percentage: (container CPU delta / system CPU delta) * number of CPUs * 100.
		numCPUs := float64(len(stats.CPUStats.CPUUsage.PercpuUsage))
		if numCPUs == 0 {
			// Fallback: if PercpuUsage is not available, assume 1 CPU.
			numCPUs = 1.0
		}

		return (cpuDelta / systemDelta) * numCPUs * 100.0
	}

	return 0.0
}
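The formula above can be checked with plain numbers. A standalone sketch using made-up counter values (the `cpuPercent` helper is ours and mirrors the calculation, without the Docker types):

```go
package main

import "fmt"

// cpuPercent mirrors calculateCPUPercent above: deltas of the container
// and system CPU counters, scaled by the CPU count. All numbers in main
// are made-up sample values, not real Docker readings.
func cpuPercent(prevTotal, total, prevSystem, system uint64, numCPUs float64) float64 {
	cpuDelta := float64(total) - float64(prevTotal)
	systemDelta := float64(system) - float64(prevSystem)
	if systemDelta > 0 && cpuDelta >= 0 {
		return (cpuDelta / systemDelta) * numCPUs * 100.0
	}
	return 0.0
}

func main() {
	// The container consumed 250 counter units while the whole system
	// advanced 1000, on a 2-CPU machine: (250/1000) * 2 * 100 = 50%.
	fmt.Println(cpuPercent(100, 350, 2000, 3000, 2)) // → 50
}
```

Note the counter-went-backwards and zero-system-delta cases both fall through to 0, matching the guard in the source.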

// ContainerStatsSummary represents summary statistics for a container.
type ContainerStatsSummary struct {
	ContainerName string
	SampleCount   int
	CPU           StatsSummary
	Memory        StatsSummary
}

// MemoryViolation represents a container that exceeded the memory limit.
type MemoryViolation struct {
	ContainerName string
	MaxMemoryMB   float64
	LimitMB       float64
}

// StatsSummary represents min, max, and average for a metric.
type StatsSummary struct {
	Min     float64
	Max     float64
	Average float64
}

// GetSummary returns a summary of collected statistics.
func (sc *StatsCollector) GetSummary() []ContainerStatsSummary {
	// Take a snapshot of container references without holding the main lock for long.
	sc.mutex.RLock()

	containerRefs := make([]*ContainerStats, 0, len(sc.containers))
	for _, containerStats := range sc.containers {
		containerRefs = append(containerRefs, containerStats)
	}

	sc.mutex.RUnlock()

	summaries := make([]ContainerStatsSummary, 0, len(containerRefs))

	for _, containerStats := range containerRefs {
		containerStats.mutex.RLock()
		stats := make([]StatsSample, len(containerStats.Stats))
		copy(stats, containerStats.Stats)
		containerName := containerStats.ContainerName
		containerStats.mutex.RUnlock()

		if len(stats) == 0 {
			continue
		}

		summary := ContainerStatsSummary{
			ContainerName: containerName,
			SampleCount:   len(stats),
		}

		// Calculate CPU and memory stats.
		cpuValues := make([]float64, len(stats))
		memoryValues := make([]float64, len(stats))

		for i, sample := range stats {
			cpuValues[i] = sample.CPUUsage
			memoryValues[i] = sample.MemoryMB
		}

		summary.CPU = calculateStatsSummary(cpuValues)
		summary.Memory = calculateStatsSummary(memoryValues)

		summaries = append(summaries, summary)
	}

	// Sort by container name for consistent output.
	sort.Slice(summaries, func(i, j int) bool {
		return summaries[i].ContainerName < summaries[j].ContainerName
	})

	return summaries
}

// calculateStatsSummary calculates min, max, and average for a slice of values.
func calculateStatsSummary(values []float64) StatsSummary {
	if len(values) == 0 {
		return StatsSummary{}
	}

	minVal := values[0]
	maxVal := values[0]
	sum := 0.0

	for _, value := range values {
		if value < minVal {
			minVal = value
		}

		if value > maxVal {
			maxVal = value
		}

		sum += value
	}

	return StatsSummary{
		Min:     minVal,
		Max:     maxVal,
		Average: sum / float64(len(values)),
	}
}
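A quick sanity check of the single-pass min/max/average reduction above, as a standalone sketch (the `summarize` helper is ours and mirrors the function's logic):

```go
package main

import "fmt"

// summarize mirrors calculateStatsSummary above: one pass tracking
// minimum, maximum, and a running sum for the average.
func summarize(values []float64) (minV, maxV, avg float64) {
	if len(values) == 0 {
		return 0, 0, 0
	}
	minV, maxV = values[0], values[0]
	sum := 0.0
	for _, v := range values {
		if v < minV {
			minV = v
		}
		if v > maxV {
			maxV = v
		}
		sum += v
	}
	return minV, maxV, sum / float64(len(values))
}

func main() {
	// (4+1+7+4)/4 = 4, min 1, max 7.
	minV, maxV, avg := summarize([]float64{4, 1, 7, 4})
	fmt.Println(minV, maxV, avg) // → 1 7 4
}
```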

// PrintSummary prints the statistics summary to the console.
func (sc *StatsCollector) PrintSummary() {
	summaries := sc.GetSummary()

	if len(summaries) == 0 {
		log.Printf("No container statistics collected")
		return
	}

	log.Printf("Container Resource Usage Summary:")
	log.Printf("================================")

	for _, summary := range summaries {
		log.Printf("Container: %s (%d samples)", summary.ContainerName, summary.SampleCount)
		log.Printf("  CPU Usage:    Min: %6.2f%%  Max: %6.2f%%  Avg: %6.2f%%",
			summary.CPU.Min, summary.CPU.Max, summary.CPU.Average)
		log.Printf("  Memory Usage: Min: %6.1f MB  Max: %6.1f MB  Avg: %6.1f MB",
			summary.Memory.Min, summary.Memory.Max, summary.Memory.Average)
		log.Printf("")
	}
}

// CheckMemoryLimits checks if any containers exceeded their memory limits.
func (sc *StatsCollector) CheckMemoryLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
	if hsLimitMB <= 0 && tsLimitMB <= 0 {
		return nil
	}

	summaries := sc.GetSummary()

	var violations []MemoryViolation

	for _, summary := range summaries {
		var limitMB float64
		switch {
		case strings.HasPrefix(summary.ContainerName, "hs-"):
			limitMB = hsLimitMB
		case strings.HasPrefix(summary.ContainerName, "ts-"):
			limitMB = tsLimitMB
		default:
			continue // Skip containers that don't match our patterns.
		}

		if limitMB > 0 && summary.Memory.Max > limitMB {
			violations = append(violations, MemoryViolation{
				ContainerName: summary.ContainerName,
				MaxMemoryMB:   summary.Memory.Max,
				LimitMB:       limitMB,
			})
		}
	}

	return violations
}
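The prefix-based limit dispatch above can be exercised on its own. A minimal sketch with made-up limits (the `limitFor` helper is ours, not part of the collector):

```go
package main

import (
	"fmt"
	"strings"
)

// limitFor mirrors the dispatch above: hs- containers get the headscale
// limit, ts- containers the tailscale limit, everything else no limit.
func limitFor(name string, hsLimitMB, tsLimitMB float64) (float64, bool) {
	switch {
	case strings.HasPrefix(name, "hs-"):
		return hsLimitMB, true
	case strings.HasPrefix(name, "ts-"):
		return tsLimitMB, true
	}
	return 0, false
}

func main() {
	// Hypothetical limits: 512 MB for headscale, 256 MB for tailscale clients.
	for _, name := range []string{"hs-server", "ts-client-1", "postgres"} {
		limit, ok := limitFor(name, 512, 256)
		fmt.Println(name, limit, ok)
	}
}
```

A container only becomes a violation when its observed `Memory.Max` exceeds the matched limit and that limit is positive, so setting a limit to 0 disables the check for that prefix.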

// PrintSummaryAndCheckLimits prints the statistics summary and returns memory violations, if any.
func (sc *StatsCollector) PrintSummaryAndCheckLimits(hsLimitMB, tsLimitMB float64) []MemoryViolation {
	sc.PrintSummary()
	return sc.CheckMemoryLimits(hsLimitMB, tsLimitMB)
}

// Close closes the stats collector and cleans up resources.
func (sc *StatsCollector) Close() error {
	sc.StopCollection()
	return sc.client.Close()
}
cmd/mapresponses/main.go (new file, 66 lines)
@@ -0,0 +1,66 @@
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"

	"github.com/creachadair/command"
	"github.com/creachadair/flax"
	"github.com/juanfont/headscale/hscontrol/mapper"
	"github.com/juanfont/headscale/integration/integrationutil"
)

type MapConfig struct {
	Directory string `flag:"directory,Directory to read map responses from"`
}

var (
	mapConfig MapConfig

	errDirectoryRequired = errors.New("directory is required")
)

func main() {
	root := command.C{
		Name: "mapresponses",
		Help: "mapresponses is a tool to read and compare map responses from a directory",
		Commands: []*command.C{
			{
				Name:     "online",
				Help:     "Build the expected online map from stored map responses",
				Usage:    "run [test-pattern] [flags]",
				SetFlags: command.Flags(flax.MustBind, &mapConfig),
				Run:      runOnline,
			},
			command.HelpCommand(nil),
		},
	}

	env := root.NewEnv(nil).MergeFlags(true)
	command.RunOrFail(env, os.Args[1:])
}

// runOnline reads map responses from the configured directory and prints
// the expected online map.
func runOnline(env *command.Env) error {
	if mapConfig.Directory == "" {
		return errDirectoryRequired
	}

	resps, err := mapper.ReadMapResponsesFromDirectory(mapConfig.Directory)
	if err != nil {
		return fmt.Errorf("reading map responses from directory: %w", err)
	}

	expected := integrationutil.BuildExpectedOnlineMap(resps)

	out, err := json.MarshalIndent(expected, "", "  ")
	if err != nil {
		return fmt.Errorf("marshaling expected online map: %w", err)
	}

	os.Stderr.Write(out)
	os.Stderr.Write([]byte("\n"))

	return nil
}
@@ -18,10 +18,9 @@ server_url: http://127.0.0.1:8080
 # listen_addr: 0.0.0.0:8080
 listen_addr: 127.0.0.1:8080
 
-# Address to listen to /metrics, you may want
-# to keep this endpoint private to your internal
-# network
-#
+# Address to listen to /metrics and /debug, you may want
+# to keep this endpoint private to your internal network
+# Use an empty value to disable the metrics listener.
 metrics_listen_addr: 127.0.0.1:9090
 
 # Address to listen for gRPC.
@@ -43,26 +42,37 @@ grpc_allow_insecure: false
 # The Noise section includes specific configuration for the
 # TS2021 Noise protocol
 noise:
-  # The Noise private key is used to encrypt the
-  # traffic between headscale and Tailscale clients when
-  # using the new Noise-based protocol.
+  # The Noise private key is used to encrypt the traffic between headscale and
+  # Tailscale clients when using the new Noise-based protocol. A missing key
+  # will be automatically generated.
   private_key_path: /var/lib/headscale/noise_private.key
 
 # List of IP prefixes to allocate tailaddresses from.
 # Each prefix consists of either an IPv4 or IPv6 address,
 # and the associated prefix length, delimited by a slash.
-# It must be within IP ranges supported by the Tailscale
-# client - i.e., subnets of 100.64.0.0/10 and fd7a:115c:a1e0::/48.
-# See below:
-# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
+#
+# WARNING: These prefixes MUST be subsets of the standard Tailscale ranges:
+# - IPv4: 100.64.0.0/10 (CGNAT range)
+# - IPv6: fd7a:115c:a1e0::/48 (Tailscale ULA range)
+#
+# Using a SUBSET of these ranges is supported and useful if you want to
+# limit IP allocation to a smaller block (e.g., 100.64.0.0/24).
+#
+# Using ranges OUTSIDE of CGNAT/ULA is NOT supported and will cause
+# undefined behaviour. The Tailscale client has hard-coded assumptions
+# about these ranges and will break in subtle, hard-to-debug ways.
+#
+# See:
 # IPv4: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#L33
-# Any other range is NOT supported, and it will cause unexpected issues.
+# IPv6: https://github.com/tailscale/tailscale/blob/22ebb25e833264f58d7c3f534a8b166894a89536/net/tsaddr/tsaddr.go#LL81C52-L81C71
 prefixes:
   v4: 100.64.0.0/10
   v6: fd7a:115c:a1e0::/48
 
 # Strategy used for allocation of IPs to nodes, available options:
-# - sequential (default): assigns the next free IP from the previous given IP.
+# - sequential (default): assigns the next free IP from the previous given
+#   IP. A best-effort approach is used and Headscale might leave holes in the
+#   IP range or fill up existing holes in the IP range.
 # - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
 allocation: sequential
@@ -87,16 +97,17 @@ derp:
     region_code: "headscale"
     region_name: "Headscale Embedded DERP"
 
+    # Only allow clients associated with this server access
+    verify_clients: true
+
     # Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
     # When the embedded DERP server is enabled stun_listen_addr MUST be defined.
     #
     # For more details on how this works, check this great article: https://tailscale.com/blog/how-tailscale-works/
     stun_listen_addr: "0.0.0.0:3478"
 
-    # Private key used to encrypt the traffic between headscale DERP
-    # and Tailscale clients.
-    # The private key file will be autogenerated if it's missing.
-    #
+    # Private key used to encrypt the traffic between headscale DERP and
+    # Tailscale clients. A missing key will be automatically generated.
     private_key_path: /var/lib/headscale/derp_server_private.key
 
     # This flag can be used, so the DERP map entry for the embedded DERP server is not written automatically,
@@ -106,7 +117,7 @@ derp:
 
     # For better connection stability (especially when using an Exit-Node and DNS is not working),
     # it is possible to optionally add the public IPv4 and IPv6 address to the Derp-Map using:
-    ipv4: 1.2.3.4
+    ipv4: 198.51.100.1
     ipv6: 2001:db8::1
 
     # List of externally available DERP maps encoded in JSON
@@ -129,13 +140,30 @@ derp:
   auto_update_enabled: true
 
   # How often should we check for DERP updates?
-  update_frequency: 24h
+  update_frequency: 3h
 
 # Disables the automatic check for headscale updates on startup
 disable_check_updates: false
 
-# Time before an inactive ephemeral node is deleted?
-ephemeral_node_inactivity_timeout: 30m
+# Node lifecycle configuration.
+node:
+  # Default key expiry for non-tagged nodes, regardless of registration method
+  # (auth key, CLI, web auth). Tagged nodes are exempt and never expire.
+  #
+  # This is the base default. OIDC can override this via oidc.expiry.
+  # If a client explicitly requests a specific expiry, the client value is used.
+  #
+  # Setting the value to "0" means no default expiry (nodes never expire unless
+  # explicitly expired via `headscale nodes expire`).
+  #
+  # Tailscale SaaS uses 180d; set to a positive duration to match that behaviour.
+  #
+  # Default: 0 (no default expiry)
+  expiry: 0
+
+  ephemeral:
+    # Time before an inactive ephemeral node is deleted.
+    inactivity_timeout: 30m
 
 database:
   # Database type. Available options: sqlite, postgres
@@ -226,9 +254,11 @@ tls_cert_path: ""
 tls_key_path: ""
 
 log:
+  # Valid log levels: panic, fatal, error, warn, info, debug, trace
+  level: info
+
   # Output formatting for logs: text or json
   format: text
-  level: info
 
 ## Policy
 # headscale supports Tailscale's ACL policies.
@@ -274,6 +304,10 @@ dns:
   # `hostname.base_domain` (e.g., _myhost.example.com_).
   base_domain: example.com
 
+  # Whether to use the local DNS settings of a node or override the local DNS
+  # settings (default) and force the use of Headscale's DNS configuration.
+  override_local_dns: true
+
   # List of DNS servers to expose to clients.
   nameservers:
     global:
@@ -288,8 +322,7 @@ dns:
 
   # Split DNS (see https://tailscale.com/kb/1054/dns/),
   # a map of domains and which DNS server to use for each.
-  split:
-    {}
+  split: {}
   # foo.bar.com:
   #   - 1.1.1.1
   # darp.headscale.net:
@@ -319,70 +352,83 @@ dns:
 # Note: for production you will want to set this to something like:
 unix_socket: /var/run/headscale/headscale.sock
 unix_socket_permission: "0770"
-#
-# headscale supports experimental OpenID connect support,
-# it is still being tested and might have some bugs, please
-# help us test it.
+
 # OpenID Connect
 # oidc:
+#   # Block startup until the identity provider is available and healthy.
 #   only_start_if_oidc_is_available: true
+#
+#   # OpenID Connect Issuer URL from the identity provider
 #   issuer: "https://your-oidc.issuer.com/path"
+#
+#   # Client ID from the identity provider
 #   client_id: "your-oidc-client-id"
+#
+#   # Client secret generated by the identity provider
+#   # Note: client_secret and client_secret_path are mutually exclusive.
 #   client_secret: "your-oidc-client-secret"
 #   # Alternatively, set `client_secret_path` to read the secret from the file.
 #   # It resolves environment variables, making integration to systemd's
 #   # `LoadCredential` straightforward:
 #   client_secret_path: "${CREDENTIALS_DIRECTORY}/oidc_client_secret"
-#   # client_secret and client_secret_path are mutually exclusive.
-#
-#   # The amount of time from a node is authenticated with OpenID until it
-#   # expires and needs to reauthenticate.
-#   # Setting the value to "0" will mean no expiry.
-#   expiry: 180d
 #
 #   # Use the expiry from the token received from OpenID when the user logged
-#   # in, this will typically lead to frequent need to reauthenticate and should
-#   # only been enabled if you know what you are doing.
-#   # Note: enabling this will cause `oidc.expiry` to be ignored.
+#   # in. This will typically lead to frequent need to reauthenticate and should
+#   # only be enabled if you know what you are doing.
+#   # Note: enabling this will cause `node.expiry` to be ignored for
+#   # OIDC-authenticated nodes.
 #   use_expiry_from_token: false
 #
-#   # Customize the scopes used in the OIDC flow, defaults to "openid", "profile" and "email" and add custom query
-#   # parameters to the Authorize Endpoint request. Scopes default to "openid", "profile" and "email".
+#   # The OIDC scopes to use, defaults to "openid", "profile" and "email".
+#   # Custom scopes can be configured as needed, be sure to always include the
+#   # required "openid" scope.
+#   scope: ["openid", "profile", "email"]
 #
-#   scope: ["openid", "profile", "email", "custom"]
+#   # Only verified email addresses are synchronized to the user profile by
+#   # default. Unverified emails may be allowed in case an identity provider
+#   # does not send the "email_verified: true" claim or email verification is
+#   # not required.
+#   email_verified_required: true
+#
+#   # Provide custom key/value pairs which get sent to the identity provider's
+#   # authorization endpoint.
 #   extra_params:
 #     domain_hint: example.com
 #
-#   # List allowed principal domains and/or users. If an authenticated user's domain is not in this list, the
-#   # authentication request will be rejected.
-#
+#   # Only accept users whose email domain is part of the allowed_domains list.
 #   allowed_domains:
 #     - example.com
-#   # Note: Groups from keycloak have a leading '/'
-#   allowed_groups:
-#     - /headscale
+#
+#   # Only accept users whose email address is part of the allowed_users list.
 #   allowed_users:
 #     - alice@example.com
 #
-#   # Map legacy users from pre-0.24.0 versions of headscale to the new OIDC users
-#   # by taking the username from the legacy user and matching it with the username
-#   # provided by the OIDC. This is useful when migrating from legacy users to OIDC
-#   # to force them using the unique identifier from the OIDC and to give them a
-#   # proper display name and picture if available.
-#   # Note that this will only work if the username from the legacy user is the same
-#   # and there is a possibility for account takeover should a username have changed
-#   # with the provider.
-#   # Disabling this feature will cause all new logins to be created as new users.
-#   # Note this option will be removed in the future and should be set to false
-#   # on all new installations, or when all users have logged in with OIDC once.
-#   map_legacy_users: true
+#   # Only accept users which are members of at least one group in the
+#   # allowed_groups list.
+#   allowed_groups:
+#     - /headscale
+#
+#   # Optional: PKCE (Proof Key for Code Exchange) configuration
+#   # PKCE adds an additional layer of security to the OAuth 2.0 authorization code flow
+#   # by preventing authorization code interception attacks
+#   # See https://datatracker.ietf.org/doc/html/rfc7636
+#   pkce:
+#     # Enable or disable PKCE support (default: false)
+#     enabled: false
+#
+#     # PKCE method to use:
+#     # - plain: Use plain code verifier
+#     # - S256: Use SHA256 hashed code verifier (default, recommended)
|
||||||
|
# method: S256
|
||||||
|
|
||||||
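For orientation, the commented options above can be assembled into a minimal enabled OIDC block. This is a sketch only: the issuer URL, client ID, and secret path are placeholders for your identity provider, not defaults.

```yaml
# Hypothetical example values — replace with your identity provider's details.
oidc:
  issuer: https://sso.example.com/realms/headscale
  client_id: headscale
  client_secret_path: /etc/headscale/oidc_client_secret
  scope: ["openid", "profile", "email"]
  allowed_domains:
    - example.com
  pkce:
    enabled: true
    method: S256
```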
# Logtail configuration
# Logtail is Tailscale's logging and auditing infrastructure. It allows the
# control panel to instruct tailscale nodes to log their activity to a remote
# server. To disable logging on the client side, please refer to:
# https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging
logtail:
  # Enable logtail for tailscale nodes of this Headscale instance.
  # As there is currently no support for overriding the log server in
  # Headscale, this is disabled by default. Enabling this will make your
  # clients send logs to Tailscale Inc.
  enabled: false
@@ -390,3 +436,28 @@ logtail:
# default static port 41641. This option is intended as a workaround for some buggy
# firewall devices. See https://tailscale.com/kb/1181/firewalls/ for more information.
randomize_client_port: false

# Taildrop configuration
# Taildrop is the file sharing feature of Tailscale, allowing nodes to send files to each other.
# https://tailscale.com/kb/1106/taildrop/
taildrop:
  # Enable or disable Taildrop for all nodes.
  # When enabled, nodes can send files to other nodes owned by the same user.
  # Tagged devices and cross-user transfers are not permitted by Tailscale clients.
  enabled: true

# Advanced performance tuning parameters.
# The defaults are carefully chosen and should rarely need adjustment.
# Only modify these if you have identified a specific performance issue.
#
# tuning:
#   # Maximum number of pending registration entries in the auth cache.
#   # Oldest entries are evicted when the cap is reached.
#   #
#   # register_cache_max_entries: 1024
#   #
#   # NodeStore write batching configuration.
#   # The NodeStore batches write operations before rebuilding peer relationships,
#   # which is computationally expensive. Batching reduces rebuild frequency.
#   #
#   # node_store_batch_size: 100
#   # node_store_batch_timeout: 500ms
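The size/timeout batching described in the comments above can be sketched with a toy model. This illustrates the general technique only (flush when the batch is full or the oldest pending write is too old); it is not Headscale's actual NodeStore code, and all names are invented.

```python
import time


class BatchedWriter:
    """Toy sketch of size/timeout write batching (assumed semantics)."""

    def __init__(self, batch_size: int = 100, batch_timeout: float = 0.5):
        self.batch_size = batch_size
        self.batch_timeout = batch_timeout
        self.pending = []
        self.first_write = 0.0
        self.rebuilds = 0  # counts expensive "rebuild" operations

    def write(self, op) -> None:
        if not self.pending:
            self.first_write = time.monotonic()
        self.pending.append(op)
        # Flush when the batch is full or the oldest pending write is too old.
        if (len(self.pending) >= self.batch_size
                or time.monotonic() - self.first_write >= self.batch_timeout):
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.pending.clear()
            self.rebuilds += 1  # one rebuild per batch instead of per write


w = BatchedWriter(batch_size=10)
for i in range(25):
    w.write(i)
w.flush()
print(w.rebuilds)  # 25 writes collapse into 3 rebuilds (10 + 10 + 5)
```

With per-write rebuilds this would have cost 25 rebuilds; batching reduces it to 3, which is the point of the `node_store_batch_size`/`node_store_batch_timeout` pair.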
@@ -1,5 +1,6 @@
# If you plan to somehow use headscale, please deploy your own DERP infra: https://tailscale.com/kb/1118/custom-derp-servers/
regions:
  1: null # Disable DERP region with ID 1
  900:
    regionid: 900
    regioncode: custom
@@ -7,9 +8,9 @@ regions:
    nodes:
      - name: 900a
        regionid: 900
        hostname: myderp.example.com
        ipv4: 198.51.100.1
        ipv6: 2001:db8::1
        stunport: 0
        stunonly: false
        derpport: 0
@@ -10,7 +10,7 @@ headscale.
| OpenBSD | Yes |
| FreeBSD | Yes |
| Windows | Yes (see [docs](../usage/connect/windows.md) and `/windows` on your headscale for more information) |
| Android | Yes (see [docs](../usage/connect/android.md) for more information) |
| macOS | Yes (see [docs](../usage/connect/apple.md#macos) and `/apple` on your headscale for more information) |
| iOS | Yes (see [docs](../usage/connect/apple.md#ios) and `/apple` on your headscale for more information) |
| tvOS | Yes (see [docs](../usage/connect/apple.md#tvos) and `/apple` on your headscale for more information) |
@@ -1,3 +1,3 @@
{%
  include-markdown "../../CONTRIBUTING.md"
%}
@@ -2,12 +2,12 @@

## What is the design goal of headscale?

Headscale aims to implement a self-hosted, open source alternative to the
[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
provide self-hosters and hobbyists with an open-source server they can use for
their projects and labs. It implements a narrow scope, a _single_ Tailscale
network (tailnet), suitable for personal use or a small open-source
organisation.

## How can I contribute?
@@ -24,9 +24,12 @@ We are more than happy to exchange emails, or to have dedicated calls before a P

## When/Why is Feature X going to be implemented?

We use [GitHub Milestones to plan for upcoming Headscale releases](https://github.com/juanfont/headscale/milestones).
Have a look at [our current plan](https://github.com/juanfont/headscale/milestones) to get an idea of when a specific
feature is about to be implemented. The release plan is subject to change at any time.

If you're interested in contributing, please post a feature request about it. Please be aware that there are a number of
reasons why we might not accept specific contributions:

- It is not possible to implement the feature in a way that makes sense in a self-hosted environment.
- Given that we are reverse-engineering Tailscale to satisfy our own curiosity, we might be interested in implementing the feature ourselves.
@@ -40,22 +43,86 @@ official releases](../setup/install/official.md) for more information.
In addition to that, you may use packages provided by the community or from distributions. Learn more in the
[installation guide using community packages](../setup/install/community.md).

For convenience, we also [build container images with headscale](../setup/install/container.md). But **please be aware that
we don't officially support deploying headscale using Docker**. On our [Discord server](https://discord.gg/c84AZQhmpx)
we have a "docker-issues" channel where you can ask the community for Docker-specific help.
## What is the recommended update path? Can I skip multiple versions while updating?

Please follow the steps outlined in the [upgrade guide](../setup/upgrade.md) to update your existing Headscale
installation. It's required to update from one stable version to the next (e.g. 0.26.0 → 0.27.1 → 0.28.0) without
skipping minor versions in between. You should always pick the latest available patch release.

Be sure to check the [changelog](https://github.com/juanfont/headscale/blob/main/CHANGELOG.md) for version-specific
upgrade instructions and breaking changes.
## Scaling / How many clients does Headscale support?

It depends. As often stated, Headscale is not enterprise software and our focus
is homelabbers and self-hosters. Of course, we do not prevent people from using
it in a commercial/professional setting and often get questions about scaling.

Please note that when Headscale is developed, performance is not part of the
consideration, as the main audience is considered to be users with a modest
amount of devices. We focus on correctness and feature parity with Tailscale
SaaS over time.

To understand if you might be able to use Headscale for your use case, I will
describe two scenarios in an effort to explain the central bottleneck of
Headscale:

1. An environment with 1000 servers

    - they rarely "move" (change their endpoints)
    - new nodes are added rarely

1. An environment with 80 laptops/phones (end user devices)

    - nodes move often, e.g. switching from home to office

Headscale calculates a map of all nodes that need to talk to each other.
Creating this "world map" requires a lot of CPU time. When an event that
requires changes to this map happens, the whole "world" is recalculated, and a
new "world map" is created for every node in the network.

This means that under certain conditions, Headscale can likely handle hundreds
of devices (maybe more) if there is _little to no change_ happening in the
network. For example, in Scenario 1, the process of computing the world map is
extremely demanding due to the size of the network, but when the map has been
created and the nodes are not changing, the Headscale instance will likely
return to a very low resource usage until the next time there is an event
requiring a new map.

In the case of Scenario 2, the process of computing the world map is less
demanding due to the smaller size of the network; however, the nodes will
likely change frequently, which would lead to a constant resource usage.

Headscale will start to struggle when the two scenarios overlap, e.g. many nodes
with frequent changes will cause the resource usage to remain constantly high.
In the worst case scenario, the queue of nodes waiting for their map will grow
to a point where Headscale will never be able to catch up, and nodes will never
learn about the current state of the world.

We expect that the performance will improve over time as we improve the code
base, but it is not a focus. In general, we will never make the tradeoff of
making things faster at the cost of less maintainable or readable code. We are
a small team and have to optimise for maintainability.
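The two scenarios above can be compared with a rough back-of-the-envelope model. This is illustrative only — the event rates are invented for the sake of the comparison, not measured Headscale numbers; the model merely assumes one event rebuilds a map for each of the n nodes, and one map scales with n, so one event costs on the order of n².

```python
# Rough illustrative model (assumed, not measured): one change event triggers
# building a new map for each of the n nodes, and each map scales with n,
# so one event costs roughly n * n units of work.
def relative_map_work(nodes: int, events_per_hour: int) -> int:
    return events_per_hour * nodes * nodes


# Scenario 1: big network, but nodes rarely change (say 1 event/hour).
servers = relative_map_work(nodes=1000, events_per_hour=1)
# Scenario 2: small network, but nodes move constantly (say 120 events/hour).
laptops = relative_map_work(nodes=80, events_per_hour=120)
print(servers, laptops)  # 1000000 768000
```

Despite being 12× smaller, the restless laptop network generates work of the same order of magnitude as the large, quiet server network — which is why the overlap of both scenarios (many nodes, frequent changes) is where Headscale struggles.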
## Which database should I use?

We recommend the use of SQLite as database for headscale:

- SQLite is simple to set up and easy to use
- It scales well for all of headscale's use cases
- Development and testing happens primarily on SQLite
- PostgreSQL is still supported, but is considered to be in "maintenance mode"

The headscale project itself does not provide a tool to migrate from PostgreSQL to SQLite. Please have a look at [the
related tools documentation](../ref/integration/tools.md) for migration tooling provided by the community.

The choice of database has little to no impact on the performance of the server;
see [Scaling / How many clients does Headscale support?](#scaling-how-many-clients-does-headscale-support) to understand how Headscale spends its resources.
## Why is my reverse proxy not working with headscale?

We don't know. We don't use reverse proxies with headscale ourselves, so we don't have any experience with them. We have
@@ -66,3 +133,81 @@ help to the community.
## Can I use headscale and tailscale on the same machine?

Running headscale on a machine that is also in the tailnet can cause problems with subnet routers, traffic relay nodes, and MagicDNS. It might work, but it is not supported.

## Why do two nodes see each other in their status, even if an ACL allows traffic only in one direction?

A frequent use case is to allow traffic only from one node to another, but not the other way around. For example, the
workstation of an administrator should be able to connect to all nodes but the nodes themselves shouldn't be able to
connect back to the administrator's node. Why do all nodes see the administrator's workstation in the output of
`tailscale status`?

This is essentially how Tailscale works. If traffic is allowed to flow in one direction, then both nodes see each other
in their output of `tailscale status`. Traffic is still filtered according to the ACL, with the exception of
`tailscale ping`, which is always allowed in either direction.

See also <https://tailscale.com/kb/1087/device-visibility>.
## My policy is stored in the database and Headscale refuses to start due to an invalid policy. How can I recover?

Headscale checks if the policy is valid during startup and refuses to start if it detects an error. The error message
indicates which part of the policy is invalid. Follow these steps to fix your policy:

- Dump the policy to a file: `headscale policy get --bypass-grpc-and-access-database-directly > policy.json`
- Edit and fix up `policy.json`. Use the command `headscale policy check --file policy.json` to validate the policy.
- Load the modified policy: `headscale policy set --bypass-grpc-and-access-database-directly --file policy.json`
- Start Headscale as usual.

!!! warning "Full server configuration required"

    The above commands to get/set the policy require a complete server configuration file including database settings. A
    minimal config to [control Headscale via remote CLI](../ref/api.md#grpc) is not sufficient. You may use
    `headscale -c /path/to/config.yaml` to specify the path to an alternative configuration file.
## How can I migrate back to the recommended IP prefixes?

Tailscale only supports the IP prefixes `100.64.0.0/10` and `fd7a:115c:a1e0::/48` or smaller subnets thereof. The
following steps can be used to migrate from unsupported IP prefixes back to the supported and recommended ones.

!!! warning "Backup and test in a demo environment required"

    The commands below update the IP addresses of all nodes in your tailnet and this might have a severe impact in your
    specific environment. At a minimum:

    - [Create a backup of your database](../setup/upgrade.md#backup)
    - Test the commands below in a representative demo environment. This allows you to catch subsequent connectivity
      errors early and see how the tailnet behaves in your specific environment.

- Stop Headscale
- Restore the default prefixes in the [configuration file](../ref/configuration.md):

  ```yaml
  prefixes:
    v4: 100.64.0.0/10
    v6: fd7a:115c:a1e0::/48
  ```

- Update the `nodes.ipv4` and `nodes.ipv6` columns in the database and assign each node a unique IPv4 and IPv6 address.
  The following SQL statement assigns IP addresses based on the node ID:

  ```sql
  UPDATE nodes
  SET ipv4=concat('100.64.', id/256, '.', id%256),
      ipv6=concat('fd7a:115c:a1e0::', format('%x', id));
  ```

- Update the [policy](../ref/acls.md) to reflect the IP address changes (if any)
- Start Headscale

Nodes should reconnect within a few seconds and pick up their newly assigned IP addresses.
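The SQL statement above maps node IDs to addresses deterministically. A small sketch of the same mapping (a hypothetical helper for illustration, not part of Headscale) shows what the resulting addresses look like for a given node ID:

```python
# Hypothetical helper mirroring the SQL above: derive the recommended
# IPv4/IPv6 addresses for a node from its database ID.
def node_addresses(node_id: int) -> tuple[str, str]:
    ipv4 = f"100.64.{node_id // 256}.{node_id % 256}"   # 100.64.<id/256>.<id%256>
    ipv6 = f"fd7a:115c:a1e0::{node_id:x}"               # hex-encoded ID suffix
    return ipv4, ipv6


print(node_addresses(1))    # ('100.64.0.1', 'fd7a:115c:a1e0::1')
print(node_addresses(300))  # ('100.64.1.44', 'fd7a:115c:a1e0::12c')
```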
## How can I avoid sending logs to Tailscale Inc?

A Tailscale client [collects logs about its operation and connection attempts with other
clients](https://tailscale.com/kb/1011/log-mesh-traffic#client-logs) and sends them to a central log service operated by
Tailscale Inc.

Headscale, by default, instructs clients to disable log submission to the central log service. This configuration is
applied by a client once it has successfully connected with Headscale. See the configuration option `logtail.enabled` in
the [configuration file](../ref/configuration.md) for details.

Alternatively, logging can also be disabled on the client side. This is independent of Headscale, and opting out of
client logging disables log submission early during client startup. The configuration is operating-system specific and
is usually achieved by setting the environment variable `TS_NO_LOGS_NO_SUPPORT=true` or by passing the flag
`--no-logs-no-support` to `tailscaled`. See
<https://tailscale.com/kb/1011/log-mesh-traffic#opting-out-of-client-logging> for details.
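For example, on a systemd-based Linux distribution the environment variable could be set via a drop-in unit. This is a sketch; the drop-in path and filename are arbitrary choices, and standard systemd drop-in semantics are assumed:

```ini
# /etc/systemd/system/tailscaled.service.d/no-logs.conf (hypothetical path)
[Service]
Environment=TS_NO_LOGS_NO_SUPPORT=true
```

After creating the file, run `systemctl daemon-reload` and restart `tailscaled` for the override to take effect.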
@@ -2,31 +2,37 @@

Headscale aims to implement a self-hosted, open source alternative to the Tailscale control server. Headscale's goal is
to provide self-hosters and hobbyists with an open-source server they can use for their projects and labs. This page
provides an overview of Headscale's features and compatibility with the Tailscale control server:

- [x] Full "base" support of Tailscale's features
- [x] [Node registration](../ref/registration.md)
    - [x] [Web authentication](../ref/registration.md#web-authentication)
    - [x] [Pre authenticated key](../ref/registration.md#pre-authenticated-key)
- [x] [DNS](../ref/dns.md)
    - [x] [MagicDNS](https://tailscale.com/kb/1081/magicdns)
    - [x] [Global and restricted nameservers (split DNS)](https://tailscale.com/kb/1054/dns#nameservers)
    - [x] [Search domains](https://tailscale.com/kb/1054/dns#search-domains)
    - [x] [Extra DNS records (Headscale only)](../ref/dns.md#setting-extra-dns-records)
- [x] [Taildrop (File Sharing)](https://tailscale.com/kb/1106/taildrop)
- [x] [Tags](../ref/tags.md)
- [x] [Routes](../ref/routes.md)
    - [x] [Subnet routers](../ref/routes.md#subnet-router)
    - [x] [Exit nodes](../ref/routes.md#exit-node)
- [x] Dual stack (IPv4 and IPv6)
- [x] Ephemeral nodes
- [x] Embedded [DERP server](../ref/derp.md)
- [x] Access control lists ([GitHub label "policy"](https://github.com/juanfont/headscale/labels/policy%20%F0%9F%93%9D))
    - [x] ACL management via API
    - [x] Some [Autogroups](https://tailscale.com/kb/1396/targets#autogroups), currently: `autogroup:internet`,
      `autogroup:nonroot`, `autogroup:member`, `autogroup:tagged`, `autogroup:self`
    - [x] [Auto approvers](https://tailscale.com/kb/1337/acl-syntax#auto-approvers) for [subnet
      routers](../ref/routes.md#automatically-approve-routes-of-a-subnet-router) and [exit
      nodes](../ref/routes.md#automatically-approve-an-exit-node-with-auto-approvers)
    - [x] [Tailscale SSH](https://tailscale.com/kb/1193/tailscale-ssh)
- [x] [Node registration using Single-Sign-On (OpenID Connect)](../ref/oidc.md) ([GitHub label "OIDC"](https://github.com/juanfont/headscale/labels/OIDC))
    - [x] Basic registration
    - [x] Update user profile from identity provider
    - [ ] OIDC groups cannot be used in ACLs
- [ ] [Funnel](https://tailscale.com/kb/1223/funnel) ([#1040](https://github.com/juanfont/headscale/issues/1040))
- [ ] [Serve](https://tailscale.com/kb/1312/serve) ([#1234](https://github.com/juanfont/headscale/issues/1921))
- [ ] [Network flow logs](https://tailscale.com/kb/1219/network-flow-logs) ([#1687](https://github.com/juanfont/headscale/issues/1687))
@@ -2,7 +2,8 @@

All headscale releases are available on the [GitHub release page](https://github.com/juanfont/headscale/releases). Those
releases are available as binaries for various platforms and architectures, packages for Debian based systems and source
code archives. Container images are available on [Docker Hub](https://hub.docker.com/r/headscale/headscale) and
[GitHub Container Registry](https://github.com/juanfont/headscale/pkgs/container/headscale).

An Atom/RSS feed of headscale releases is available [here](https://github.com/juanfont/headscale/releases.atom).
BIN docs/assets/favicon.png (new file, 22 KiB)

@@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2" viewBox="0 0 1280 640"><circle cx="141.023" cy="338.36" r="117.472" style="fill:#f8b5cb" transform="matrix(.997276 0 0 1.00556 10.0024 -14.823)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 0)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 -3.15847 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.43 115.914)"/><circle cx="352.014" cy="268.302" r="33.095" style="fill:#a2a2a2" transform="matrix(1.01749 0 0 1 148.851 0)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 3.36978 -10.2458)"/><circle cx="805.557" cy="336.915" r="118.199" style="fill:#8d8d8d" transform="matrix(.99196 0 0 1 255.633 -10.2458)"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030"/><path d="M680.282 124.808h-68.093v390.325h68.081v-28.23H640V153.228h40.282v-28.42Z" style="fill:#303030" transform="matrix(-1 0 0 1 1857.19 0)"/></svg>
@@ -14,12 +14,12 @@ Join our [Discord server](https://discord.gg/c84AZQhmpx) for a chat and communit

## Design goal

Headscale aims to implement a self-hosted, open source alternative to the
[Tailscale](https://tailscale.com/) control server. Headscale's goal is to
provide self-hosters and hobbyists with an open-source server they can use for
their projects and labs. It implements a narrow scope, a _single_ Tailscale
network (tailnet), suitable for personal use or a small open-source
organisation.

## Supporting headscale
@@ -1,5 +0,0 @@
# Packaging

We use [nFPM](https://nfpm.goreleaser.com/) for making `.deb`, `.rpm` and `.apk`.

This folder contains files we need to package with these releases.
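For reference, nFPM is driven by a single YAML file. The following is only a hedged sketch of what such a file can look like; the package name, version, paths and script locations are illustrative, not this repository's actual configuration:

```yaml
# Illustrative nFPM configuration. Field names follow nfpm.goreleaser.com;
# the concrete values are examples only.
name: headscale
arch: amd64
platform: linux
version: v0.24.0
maintainer: headscale maintainers
description: An open source, self-hosted implementation of the Tailscale control server.
contents:
  - src: ./headscale
    dst: /usr/bin/headscale
scripts:
  postinstall: ./packaging/postinstall.sh
  postremove: ./packaging/postremove.sh
```

Running `nfpm package --packager deb` (or `rpm`, `apk`) against such a file produces the corresponding package.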
@@ -1,88 +0,0 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release

HEADSCALE_EXE="/usr/bin/headscale"
BSD_HIER=""
HEADSCALE_RUN_DIR="/var/run/headscale"
HEADSCALE_HOME_DIR="/var/lib/headscale"
HEADSCALE_USER="headscale"
HEADSCALE_GROUP="headscale"
HEADSCALE_SHELL="/usr/sbin/nologin"

ensure_sudo() {
    if [ "$(id -u)" = "0" ]; then
        echo "Sudo permissions detected"
    else
        echo "No sudo permission detected, please run as sudo"
        exit 1
    fi
}

ensure_headscale_path() {
    if [ ! -f "$HEADSCALE_EXE" ]; then
        echo "headscale not in default path, exiting..."
        exit 1
    fi

    printf "Found headscale %s\n" "$HEADSCALE_EXE"
}

create_headscale_user() {
    printf "PostInstall: Adding headscale user %s\n" "$HEADSCALE_USER"
    useradd -s "$HEADSCALE_SHELL" -d "$HEADSCALE_HOME_DIR" -c "headscale default user" "$HEADSCALE_USER"
}

create_headscale_group() {
    if command -V systemctl >/dev/null 2>&1; then
        printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
        groupadd "$HEADSCALE_GROUP"

        printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
        usermod -a -G "$HEADSCALE_GROUP" "$HEADSCALE_USER"
    fi

    if [ "$ID" = "alpine" ]; then
        printf "PostInstall: Adding headscale group %s\n" "$HEADSCALE_GROUP"
        addgroup "$HEADSCALE_GROUP"

        printf "PostInstall: Adding headscale user %s to group %s\n" "$HEADSCALE_USER" "$HEADSCALE_GROUP"
        addgroup "$HEADSCALE_USER" "$HEADSCALE_GROUP"
    fi
}

create_run_dir() {
    printf "PostInstall: Creating headscale run directory\n"
    mkdir -p "$HEADSCALE_RUN_DIR"

    printf "PostInstall: Modifying group ownership of headscale run directory\n"
    chown "$HEADSCALE_USER":"$HEADSCALE_GROUP" "$HEADSCALE_RUN_DIR"
}

summary() {
    echo "----------------------------------------------------------------------"
    echo " headscale package has been successfully installed."
    echo ""
    echo " Please follow the next steps to start the software:"
    echo ""
    echo "    sudo systemctl enable headscale"
    echo "    sudo systemctl start headscale"
    echo ""
    echo " Configuration settings can be adjusted here:"
    echo "    ${BSD_HIER}/etc/headscale/config.yaml"
    echo ""
    echo "----------------------------------------------------------------------"
}

#
# Main body of the script
#
{
    ensure_sudo
    ensure_headscale_path
    create_headscale_user
    create_headscale_group
    create_run_dir
    summary
}
@@ -1,15 +0,0 @@
#!/bin/sh
# Determine OS platform
# shellcheck source=/dev/null
. /etc/os-release

if command -V systemctl >/dev/null 2>&1; then
    echo "Stop and disable headscale service"
    systemctl stop headscale >/dev/null 2>&1 || true
    systemctl disable headscale >/dev/null 2>&1 || true
    echo "Running daemon-reload"
    systemctl daemon-reload || true
fi

echo "Removing run directory"
rm -rf "/var/run/headscale.sock"
157
docs/ref/acls.md
@@ -9,9 +9,38 @@ When using ACL's the User borders are no longer applied. All machines
whichever User they belong to, can communicate with other hosts as
long as the ACLs permit this exchange.

## ACL Setup

To enable and configure ACLs in Headscale, you need to specify the path to your ACL policy file in the `policy.path` key in `config.yaml`.

Your ACL policy file must be formatted using [huJSON](https://github.com/tailscale/hujson).

Info on how these policies are written can be found
[here](https://tailscale.com/kb/1018/acls/).

Please reload or restart Headscale after updating the ACL file. Headscale may be reloaded either via its systemd service
(`sudo systemctl reload headscale`) or by sending a SIGHUP signal (`sudo kill -HUP $(pidof headscale)`) to the main
process. Headscale logs the result of ACL policy processing after each reload.
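For example, if the policy file lives at `/etc/headscale/acl.json` (the path is illustrative), the relevant fragment of `config.yaml` would be:

```yaml title="config.yaml"
policy:
  # Path to the huJSON ACL policy file.
  path: /etc/headscale/acl.json
```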

## Simple Examples

- [**Allow All**](https://tailscale.com/kb/1192/acl-samples#allow-all-default-acl): If you define an ACL file but completely omit the `"acls"` field from its content, Headscale will default to an "allow all" policy. This means all devices connected to your tailnet will be able to communicate freely with each other.

    ```json
    {}
    ```

- [**Deny All**](https://tailscale.com/kb/1192/acl-samples#deny-all): To prevent all communication within your tailnet, you can include an empty array for the `"acls"` field in your policy file.

    ```json
    {
      "acls": []
    }
    ```
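Because the policy format is huJSON, a strict JSON parser will reject files containing comments or trailing commas. As a rough illustration (this helper is not part of Headscale, and the naive regexes do not handle `//` inside quoted strings), a huJSON policy can be normalized to plain JSON like this:

```python
import json
import re

def hujson_to_json(text: str) -> str:
    """Naively strip huJSON extensions: comments and trailing commas."""
    text = re.sub(r"//[^\n]*", "", text)               # line comments
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.S)  # block comments
    text = re.sub(r",(\s*[}\]])", r"\1", text)         # trailing commas
    return text

policy = """
{
  // deny all: an explicitly empty rule list
  "acls": [],
}
"""

print(json.loads(hujson_to_json(policy)))  # → {'acls': []}
```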

## Complex Example

Let's build a more complex example use case for a small business (it may be the place where
ACLs are the most useful).

We have a small company with a boss, an admin, two developers and an intern.
@@ -36,11 +65,7 @@ servers.
- billing.internal
- router.internal

![ACL implementation example](images/headscale-acl-network.png)

When [registering the servers](../usage/getting-started.md#register-a-node) we
will need to add the flag `--advertise-tags=tag:<tag1>,tag:<tag2>`, and the user
@@ -49,14 +74,6 @@ tags to a server they can register, the check of the tags is done on headscale
server and only valid tags are applied. A tag is valid if the user that is
registering it is allowed to do it.
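For example, registering a node as one of the production application servers might look like this (the login server URL is a placeholder):

```console
tailscale up --login-server https://headscale.example.com --advertise-tags=tag:prod-app-servers
```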

Here are the ACL's to implement the same permissions as above:

```json title="acl.json"
@@ -64,10 +81,10 @@ Here are the ACL's to implement the same permissions as above:
  // groups are collections of users having a common scope. A user can be in multiple groups
  // groups cannot be composed of groups
  "groups": {
    "group:boss": ["boss@"],
    "group:dev": ["dev1@", "dev2@"],
    "group:admin": ["admin1@"],
    "group:intern": ["intern1@"]
  },
  // tagOwners in tailscale is an association between a TAG and the people allowed to set this TAG on a server.
  // This is documented [here](https://tailscale.com/kb/1068/acl-tags#defining-a-tag)
@@ -149,13 +166,11 @@ Here are the ACL's to implement the same permissions as above:
    },
    // developers have access to the internal network through the router.
    // the internal network is composed of HTTPS endpoints and Postgresql
    // database servers.
    {
      "action": "accept",
      "src": ["group:dev"],
      "dst": ["10.20.0.0/16:443,5432"]
    },

    // servers should be able to talk to database in tcp/5432. Database should not be able to initiate connections to
@@ -179,13 +194,95 @@ Here are the ACL's to implement the same permissions as above:
      "dst": ["tag:dev-app-servers:80,443"]
    },

    // Allow users to access their own devices using autogroup:self (see below for more details about performance impact)
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self:*"]
    }
  ]
}
```

## Autogroups

Headscale supports several autogroups that automatically include users, destinations, or devices with specific properties. Autogroups provide a convenient way to write ACL rules without manually listing individual users or devices.

### `autogroup:internet`

Allows access to the internet through [exit nodes](routes.md#exit-node). Can only be used in ACL destinations.

```json
{
  "action": "accept",
  "src": ["group:users"],
  "dst": ["autogroup:internet:*"]
}
```

### `autogroup:member`

Includes all [personal (untagged) devices](registration.md/#identity-model).

```json
{
  "action": "accept",
  "src": ["autogroup:member"],
  "dst": ["tag:prod-app-servers:80,443"]
}
```

### `autogroup:tagged`

Includes all devices that [have at least one tag](registration.md/#identity-model).

```json
{
  "action": "accept",
  "src": ["autogroup:tagged"],
  "dst": ["tag:monitoring:9090"]
}
```

### `autogroup:self`

!!! warning "The current implementation of `autogroup:self` is inefficient"

Includes devices where the same user is authenticated on both the source and destination. Does not include tagged devices. Can only be used in ACL destinations.

```json
{
  "action": "accept",
  "src": ["autogroup:member"],
  "dst": ["autogroup:self:*"]
}
```

*Using `autogroup:self` may cause performance degradation on the Headscale coordinator server in large deployments, as filter rules must be compiled per-node rather than globally and the current implementation is not very efficient.*

If you experience performance issues, consider using more specific ACL rules or limiting the use of `autogroup:self`.

```json
[
  // The following rules allow internal users to communicate with their
  // own nodes in case autogroup:self is causing performance issues.
  { "action": "accept", "src": ["boss@"], "dst": ["boss@:*"] },
  { "action": "accept", "src": ["dev1@"], "dst": ["dev1@:*"] },
  { "action": "accept", "src": ["dev2@"], "dst": ["dev2@:*"] },
  { "action": "accept", "src": ["admin1@"], "dst": ["admin1@:*"] },
  { "action": "accept", "src": ["intern1@"], "dst": ["intern1@:*"] }
]
```

### `autogroup:nonroot`

Used in Tailscale SSH rules to allow access to any user except root. Can only be used in the `users` field of SSH rules.

```json
{
  "action": "accept",
  "src": ["autogroup:member"],
  "dst": ["autogroup:self"],
  "users": ["autogroup:nonroot"]
}
```
129
docs/ref/api.md
Normal file
@@ -0,0 +1,129 @@
# API

Headscale provides a [HTTP REST API](#rest-api) and a [gRPC interface](#grpc) which can be used to integrate a [web
interface](integration/web-ui.md), to [control Headscale remotely](#setup-remote-control), or to provide a base for
custom integration and tooling.

Both interfaces require a valid API key before use. To create an API key, log into your Headscale server and generate
one with the default expiration of 90 days:

```shell
headscale apikeys create
```

Copy the output of the command and save it for later. Please note that you cannot retrieve an API key again. If the API
key is lost, expire the old one and create a new one.

To list the API keys currently associated with the server:

```shell
headscale apikeys list
```

and to expire an API key:

```shell
headscale apikeys expire --prefix <PREFIX>
```
## REST API

- API endpoint: `/api/v1`, e.g. `https://headscale.example.com/api/v1`
- Documentation: `/swagger`, e.g. `https://headscale.example.com/swagger`
- Headscale version: `/version`, e.g. `https://headscale.example.com/version`
- Authenticate using HTTP Bearer authentication by sending the [API key](#api) with the HTTP `Authorization: Bearer <API_KEY>` header.

Start by [creating an API key](#api) and test it with the examples below. Read the API documentation provided by your
Headscale server at `/swagger` for details.

=== "Get details for all users"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      https://headscale.example.com/api/v1/user
    ```

=== "Get details for user 'bob'"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      https://headscale.example.com/api/v1/user?name=bob
    ```

=== "Register a node"

    ```console
    curl -H "Authorization: Bearer <API_KEY>" \
      --json '{"user": "<USER>", "authId": "<AUTH_ID>"}' \
      https://headscale.example.com/api/v1/auth/register
    ```
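The same Bearer-token scheme works from any HTTP client. A minimal Python sketch (the hostname and key are placeholders) that prepares such a request; the commented-out call requires a reachable Headscale server:

```python
import urllib.request

API_KEY = "replace-with-your-api-key"          # from `headscale apikeys create`
BASE = "https://headscale.example.com/api/v1"  # placeholder hostname

# Build an authenticated request against the /user endpoint.
req = urllib.request.Request(
    f"{BASE}/user",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # → Bearer replace-with-your-api-key
# with urllib.request.urlopen(req) as resp:  # needs a real server
#     print(resp.read().decode())
```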

## gRPC

The gRPC interface can be used to control a Headscale instance from a remote machine with the `headscale` binary.

### Prerequisite

- A workstation to run `headscale` (any supported platform, e.g. Linux).
- A Headscale server with gRPC enabled.
- Connections to the gRPC port (default: `50443`) are allowed.
- Remote access requires an encrypted connection via TLS.
- An [API key](#api) to authenticate with the Headscale server.

### Setup remote control

1. Download the [`headscale` binary from GitHub's release page](https://github.com/juanfont/headscale/releases). Make
   sure to use the same version as on the server.

1. Put the binary somewhere in your `PATH`, e.g. `/usr/local/bin/headscale`

1. Make `headscale` executable: `chmod +x /usr/local/bin/headscale`

1. [Create an API key](#api) on the Headscale server.

1. Provide the connection parameters for the remote Headscale server either via a minimal YAML configuration file or
   via environment variables:

    === "Minimal YAML configuration file"

        ```yaml title="config.yaml"
        cli:
          address: <HEADSCALE_ADDRESS>:<PORT>
          api_key: <API_KEY>
        ```

    === "Environment variables"

        ```shell
        export HEADSCALE_CLI_ADDRESS="<HEADSCALE_ADDRESS>:<PORT>"
        export HEADSCALE_CLI_API_KEY="<API_KEY>"
        ```

    This instructs the `headscale` binary to connect to a remote instance at `<HEADSCALE_ADDRESS>:<PORT>`, instead of
    connecting to the local instance.

1. Test the connection by listing all nodes:

    ```shell
    headscale nodes list
    ```

    You should now be able to see a list of your nodes from your workstation, and you can
    now control the Headscale server from your workstation.

### Behind a proxy

It's possible to run the gRPC remote endpoint behind a reverse proxy, like Nginx, and have it run on the _same_ port as Headscale.

While this is _not a supported_ feature, an example on how this can be set up on
[NixOS is shown here](https://github.com/kradalby/dotfiles/blob/4489cdbb19cddfbfae82cd70448a38fde5a76711/machines/headscale.oracldn/headscale.nix#L61-L91).

### Troubleshooting

- Make sure you have the _same_ Headscale version on your server and workstation.
- Ensure that connections to the gRPC port are allowed.
- Verify that your TLS certificate is valid and trusted.
- If you don't have access to a trusted certificate (e.g. from Let's Encrypt), either:
    - Add your self-signed certificate to the trust store of your OS _or_
    - Disable certificate verification by either setting `cli.insecure: true` in the configuration file or by setting
      `HEADSCALE_CLI_INSECURE=1` via an environment variable. We do **not** recommend disabling certificate validation.
@@ -5,7 +5,9 @@
- `/etc/headscale`
- `$HOME/.headscale`
- the current working directory
- To load the configuration from a different path, use:
    - the command line flag `-c`, `--config`
    - the environment variable `HEADSCALE_CONFIG`
- Validate the configuration file with: `headscale configtest`
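Combining the above, loading and validating a configuration from a non-default path (the path is an example):

```console
headscale --config /etc/headscale/config.yaml configtest
HEADSCALE_CONFIG=/etc/headscale/config.yaml headscale configtest
```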

!!! example "Get the [example configuration from the GitHub repository](https://github.com/juanfont/headscale/blob/main/config-example.yaml)"
@@ -15,8 +17,8 @@

    === "View on GitHub"

        - Development version: <https://github.com/juanfont/headscale/blob/main/config-example.yaml>
        - Version {{ headscale.version }}: <https://github.com/juanfont/headscale/blob/v{{ headscale.version }}/config-example.yaml>

    === "Download with `wget`"
118
docs/ref/debug.md
Normal file
@@ -0,0 +1,118 @@
# Debugging and troubleshooting

Headscale and Tailscale provide debug and introspection capabilities that can be helpful when things don't work as
expected. This page explains some debugging techniques to help pinpoint problems.

Please also have a look at [Tailscale's Troubleshooting guide](https://tailscale.com/kb/1023/troubleshooting). It offers
many tips and suggestions to troubleshoot common issues.

## Tailscale

The Tailscale client itself offers many commands to introspect its state as well as the state of the network:

- [Check local network conditions](https://tailscale.com/kb/1080/cli#netcheck): `tailscale netcheck`
- [Get the client status](https://tailscale.com/kb/1080/cli#status): `tailscale status --json`
- [Get DNS status](https://tailscale.com/kb/1080/cli#dns): `tailscale dns status --all`
- Client logs: `tailscale debug daemon-logs`
- Client netmap: `tailscale debug netmap`
- Test DERP connection: `tailscale debug derp headscale`
- And many more, see: `tailscale debug --help`

Many of the commands are helpful when trying to understand differences between Headscale and Tailscale SaaS.

## Headscale

### Application logging

The log levels `debug` and `trace` can be useful to get more information from Headscale.

```yaml hl_lines="3"
log:
  # Valid log levels: panic, fatal, error, warn, info, debug, trace
  level: debug
```

### Database logging

The database debug mode logs all database queries. Enable it to see how Headscale interacts with its database. This also
requires the application log level to be set to either `debug` or `trace`.

```yaml hl_lines="3 7"
database:
  # Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
  debug: true

log:
  # Valid log levels: panic, fatal, error, warn, info, debug, trace
  level: debug
```

### Metrics and debug endpoint

Headscale provides a metrics and debug endpoint. It allows you to introspect different aspects such as:

- Information about the Go runtime, memory usage and statistics
- Connected nodes and pending registrations
- Active ACLs, filters and SSH policy
- Current DERPMap
- Prometheus metrics

!!! warning "Keep the metrics and debug endpoint private"

    The listen address and port can be configured with the `metrics_listen_addr` variable in the [configuration
    file](./configuration.md). By default it listens on localhost, port 9090.

    Keep the metrics and debug endpoint private to your internal network and don't expose it to the Internet.

    The metrics and debug interface can be disabled completely by setting `metrics_listen_addr: null` in the
    [configuration file](./configuration.md).

Query metrics via <http://localhost:9090/metrics> and get an overview of available debug information via
<http://localhost:9090/debug/>. Metrics may be queried from outside localhost but the debug interface is subject to
additional protection despite listening on all interfaces.

=== "Direct access"

    Access the debug interface directly on the server where Headscale is installed.

    ```console
    curl http://localhost:9090/debug/
    ```

=== "SSH port forwarding"

    Use SSH port forwarding to forward Headscale's metrics and debug port to your device.

    ```console
    ssh <HEADSCALE_SERVER> -L 9090:localhost:9090
    ```

    Access the debug interface on your device by opening <http://localhost:9090/debug/> in your web browser.

=== "Via debug key"

    The access control of the debug interface supports the use of a debug key. Traffic is accepted if the path to a
    debug key is set via the environment variable `TS_DEBUG_KEY_PATH` and the debug key is sent as the value of the
    `debugkey` parameter with each request.

    ```console
    openssl rand -hex 32 | tee debugkey.txt
    export TS_DEBUG_KEY_PATH=debugkey.txt
    headscale serve
    ```

    Access the debug interface on your device by opening `http://<IP_OF_HEADSCALE>:9090/debug/?debugkey=<DEBUG_KEY>` in
    your web browser. The `debugkey` parameter must be sent with every request.

=== "Via debug IP address"

    The debug endpoint expects traffic from localhost. A different debug IP address may be configured by setting the
    `TS_ALLOW_DEBUG_IP` environment variable before starting Headscale. The debug IP address is ignored when the HTTP
    header `X-Forwarded-For` is present.

    ```console
    export TS_ALLOW_DEBUG_IP=192.168.0.10 # IP address of your device
    headscale serve
    ```

    Access the debug interface on your device by opening `http://<IP_OF_HEADSCALE>:9090/debug/` in your web browser.
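The listen address mentioned in the warning above is set via `metrics_listen_addr`; a minimal fragment showing the default described there:

```yaml title="config.yaml"
# Listen address of the metrics and debug endpoint (default: localhost, port 9090).
metrics_listen_addr: 127.0.0.1:9090
```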
174
docs/ref/derp.md
Normal file
@@ -0,0 +1,174 @@
|
|||||||
|
# DERP
|
||||||
|
|
||||||
|
A [DERP (Designated Encrypted Relay for Packets) server](https://tailscale.com/kb/1232/derp-servers) is mainly used to
|
||||||
|
relay traffic between two nodes in case a direct connection can't be established. Headscale provides an embedded DERP
|
||||||
|
server to ensure seamless connectivity between nodes.
|
||||||
|
|
||||||
|
## Configuration
|
||||||
|
|
||||||
|
DERP related settings are configured within the `derp` section of the [configuration file](./configuration.md). The
|
||||||
|
following sections only use a few of the available settings, check the [example configuration](./configuration.md) for
|
||||||
|
all available configuration options.
|
||||||
|
|
||||||
|
### Enable embedded DERP
|
||||||
|
|
||||||
|
Headscale ships with an embedded DERP server which allows to run your own self-hosted DERP server easily. The embedded
|
||||||
|
DERP server is disabled by default and needs to be enabled. In addition, you should configure the public IPv4 and public
|
||||||
|
IPv6 address of your Headscale server for improved connection stability:
|
||||||
|
|
||||||
|
```yaml title="config.yaml" hl_lines="3-5"
|
||||||
|
derp:
|
||||||
|
server:
|
||||||
|
enabled: true
|
||||||
|
ipv4: 198.51.100.1
|
||||||
|
ipv6: 2001:db8::1
|
||||||
|
```
|
||||||
|
|
||||||
|
Keep in mind that [additional ports are needed to run a DERP server](../setup/requirements.md#ports-in-use). Besides
|
||||||
|
relaying traffic, it also uses STUN (udp/3478) to help clients discover their public IP addresses and perform NAT
|
||||||
|
traversal. [Check DERP server connectivity](#check-derp-server-connectivity) to see if everything works.
|
||||||
|
|
||||||
|
### Remove Tailscale's DERP servers
|
||||||
|
|
||||||
|
Once enabled, Headscale's embedded DERP is added to the list of free-to-use [DERP
|
||||||
|
servers](https://tailscale.com/kb/1232/derp-servers) offered by Tailscale Inc. To only use Headscale's embedded DERP
|
||||||
|
server, disable the loading of the default DERP map:
|
||||||
|
|
||||||
|
```yaml title="config.yaml" hl_lines="6"
|
||||||
|
derp:
|
||||||
|
server:
|
||||||
|
enabled: true
|
||||||
|
ipv4: 198.51.100.1
|
||||||
|
ipv6: 2001:db8::1
|
||||||
|
urls: []
|
||||||
|
```
|
||||||
|
|
||||||
|
!!! warning "Single point of failure"
|
||||||
|
|
||||||
|
Removing Tailscale's DERP servers means that there is now just a single DERP server available for clients. This is a
|
||||||
|
single point of failure and could hamper connectivity.
|
||||||
|
|
||||||
|
[Check DERP server connectivity](#check-derp-server-connectivity) with your embedded DERP server before removing
|
||||||
|
Tailscale's DERP servers.
|
||||||
|
|
||||||
|
### Customize DERP map
|
||||||
|
|
||||||
|
The DERP map offered to clients can be customized with a [dedicated YAML-configuration
|
||||||
|
file](https://github.com/juanfont/headscale/blob/main/derp-example.yaml). This allows to modify previously loaded DERP
|
||||||
|
maps fetched via URL or to offer your own, custom DERP servers to nodes.
|
||||||
|
|
||||||
|
=== "Remove specific DERP regions"

    The free-to-use [DERP servers](https://tailscale.com/kb/1232/derp-servers) are organized into regions via a region
    ID. You can explicitly disable a specific region by setting its region ID to `null`. The following sample
    `derp.yaml` disables the New York DERP region (which has the region ID 1):

    ```yaml title="derp.yaml"
    regions:
      1: null
    ```

    Use the following configuration to serve the default DERP map (excluding New York) to nodes:

    ```yaml title="config.yaml" hl_lines="6 7"
    derp:
      server:
        enabled: false
      urls:
        - https://controlplane.tailscale.com/derpmap/default
      paths:
        - /etc/headscale/derp.yaml
    ```

=== "Provide custom DERP servers"

    The following sample `derp.yaml` references two custom regions (`custom-east` with ID 900 and `custom-west` with
    ID 901) with one custom DERP server in each region. Each DERP server offers DERP relay via HTTPS on tcp/443,
    support for captive portal checks via HTTP on tcp/80 and STUN on udp/3478. See the definitions of
    [DERPMap](https://pkg.go.dev/tailscale.com/tailcfg#DERPMap),
    [DERPRegion](https://pkg.go.dev/tailscale.com/tailcfg#DERPRegion) and
    [DERPNode](https://pkg.go.dev/tailscale.com/tailcfg#DERPNode) for all available options.

    ```yaml title="derp.yaml"
    regions:
      900:
        regionid: 900
        regioncode: custom-east
        regionname: My region (east)
        nodes:
          - name: 900a
            regionid: 900
            hostname: derp900a.example.com
            ipv4: 198.51.100.1
            ipv6: 2001:db8::1
            canport80: true
      901:
        regionid: 901
        regioncode: custom-west
        regionname: My Region (west)
        nodes:
          - name: 901a
            regionid: 901
            hostname: derp901a.example.com
            ipv4: 198.51.100.2
            ipv6: 2001:db8::2
            canport80: true
    ```

    Use the following configuration to only serve the two DERP servers from the above `derp.yaml`:

    ```yaml title="config.yaml" hl_lines="5 6"
    derp:
      server:
        enabled: false
      urls: []
      paths:
        - /etc/headscale/derp.yaml
    ```

Independent of the custom DERP map, you may choose to [enable the embedded DERP server and have it automatically added
to the custom DERP map](#enable-embedded-derp).

### Verify clients

Access to DERP servers can be restricted to nodes that are members of your Tailnet. Relay access is denied for unknown
clients.

=== "Embedded DERP"

    Client verification is enabled by default.

    ```yaml title="config.yaml" hl_lines="3"
    derp:
      server:
        verify_clients: true
    ```

=== "3rd-party DERP"

    Tailscale's `derper` provides two parameters to configure client verification:

    - Use the `-verify-client-url` parameter of the `derper` and point it towards the `/verify` endpoint of your
      Headscale server (e.g. `https://headscale.example.com/verify`). The DERP server will query your Headscale
      instance as soon as a client connects to it to ask whether access should be allowed or denied. Access is allowed
      if Headscale knows about the connecting client and denied otherwise.
    - The parameter `-verify-client-url-fail-open` controls what should happen when the DERP server can't reach the
      Headscale instance. By default, it will allow access if Headscale is unreachable.
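
    A minimal `derper` invocation with client verification against Headscale might look like this (hostname and
    certificate setup are illustrative assumptions, not a complete deployment):

    ```shell
    derper -hostname derp.example.com \
      -verify-client-url https://headscale.example.com/verify \
      -verify-client-url-fail-open=false
    ```

    Setting `-verify-client-url-fail-open=false` denies relay access whenever Headscale is unreachable, trading
    availability for stricter verification.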

## Check DERP server connectivity

Any Tailscale client may be used to introspect the DERP map and to check for connectivity issues with DERP servers.

- Display the DERP map: `tailscale debug derp-map`
- Check connectivity with the embedded DERP server[^1]: `tailscale debug derp headscale`

Additional DERP-related metrics and information are available via the [metrics and debug
endpoint](./debug.md#metrics-and-debug-endpoint).

## Limitations

- The embedded DERP server can't be used for Tailscale's captive portal checks as it doesn't support the `/generate_204`
  endpoint via HTTP on port tcp/80.
- There are no speed or throughput optimisations; the main purpose is to assist in node connectivity.

[^1]: This assumes that the default region code of the [configuration file](./configuration.md) is used.

# DNS

Headscale supports [most DNS features](../about/features.md) from Tailscale. DNS-related settings can be configured
within the `dns` section of the [configuration file](./configuration.md).

## Setting extra DNS records

Headscale allows you to set extra DNS records which are made available via
[MagicDNS](https://tailscale.com/kb/1081/magicdns). Extra DNS records can be configured either via static entries in the
[configuration file](./configuration.md) or from a JSON file that Headscale continuously watches for changes:

- Use the `dns.extra_records` option in the [configuration file](./configuration.md) for entries that are static and
  don't change while Headscale is running. Those entries are processed when Headscale is starting up and changes to the
  configuration require a restart of Headscale.
- For dynamic DNS records that may be added, updated or removed while Headscale is running, or DNS records that are
  generated by scripts, the option `dns.extra_records_path` in the [configuration file](./configuration.md) is useful.
  Set it to the absolute path of the JSON file containing DNS records, and Headscale processes this file as it detects
  changes.
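
As a sketch, the watched JSON file contains a list of records with the same fields as the `dns.extra_records` entries in
the configuration file (name and address below are illustrative):

```json
[
  {
    "name": "grafana.myvpn.example.com",
    "type": "A",
    "value": "100.64.0.3"
  }
]
```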

!!! warning "Limitations"

    Currently, [only A and AAAA records are processed by Tailscale](https://github.com/tailscale/tailscale/blob/v1.86.5/ipn/ipnlocal/node_backend.go#L662).

1. Configure extra DNS records using one of the available configuration options:

    === "Static entries, via `dns.extra_records`"

!!! tip "Good to know"

    - The `dns.extra_records_path` option in the [configuration file](./configuration.md) needs to reference the
      JSON file containing extra DNS records.
    - Be sure to "sort keys" and produce a stable output in case you generate the JSON file with a script.
      Headscale uses a checksum to detect changes to the file and a stable output avoids unnecessary processing.
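
    For scripts, a stable output can be produced by sorting keys, e.g. with `jq --sort-keys` or Python's `json.tool`
    module (the record below is illustrative):

    ```shell
    printf '{"value": "100.64.0.3", "type": "A", "name": "grafana.myvpn.example.com"}\n' \
      | python3 -m json.tool --sort-keys
    ```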

1. Verify that DNS records are properly set using the DNS querying tool of your choice:

    === "Query with dig"

        ```console
        dig +short grafana.myvpn.example.com
        100.64.0.3
        ```

    === "Query with drill"

        ```console
        drill -Q grafana.myvpn.example.com
        100.64.0.3
        ```

1. Optional: Set up the reverse proxy

    The motivating example here was to be able to access internal monitoring services on the same host without
    specifying a port, depicted as an NGINX configuration snippet:
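
    A minimal sketch of such a snippet, assuming the service (e.g. Grafana) listens locally on port 3000 (hostname and
    upstream address are illustrative):

    ```nginx
    server {
        listen 80;
        server_name grafana.myvpn.example.com;

        location / {
            # assuming the monitoring service listens on localhost:3000
            proxy_pass http://127.0.0.1:3000;
        }
    }
    ```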
# Exit Nodes

## On the node

Register the node and make it advertise itself as an exit node:

```console
$ sudo tailscale up --login-server https://headscale.example.com --advertise-exit-node
```

If the node is already registered, it can advertise exit capabilities like this:

```console
$ sudo tailscale set --advertise-exit-node
```

To use a node as an exit node, IP forwarding must be enabled on the node. Check the official [Tailscale
documentation](https://tailscale.com/kb/1019/subnets/?tab=linux#enable-ip-forwarding) for how to enable IP forwarding.
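
On Linux, this typically amounts to setting two sysctls, following the linked Tailscale documentation (the file name is
a common convention, not a requirement):

```shell
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf
```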

## On the control server

```console
$ # list nodes
$ headscale routes list
ID | Node   | Prefix    | Advertised | Enabled | Primary
1  |        | 0.0.0.0/0 | false      | false   | -
2  |        | ::/0      | false      | false   | -
3  | phobos | 0.0.0.0/0 | true       | false   | -
4  | phobos | ::/0      | true       | false   | -

$ # enable routes for phobos
$ headscale routes enable -r 3
$ headscale routes enable -r 4

$ # Check node list again. The routes are now enabled.
$ headscale routes list
ID | Node   | Prefix    | Advertised | Enabled | Primary
1  |        | 0.0.0.0/0 | false      | false   | -
2  |        | ::/0      | false      | false   | -
3  | phobos | 0.0.0.0/0 | true       | true    | -
4  | phobos | ::/0      | true       | true    | -
```

## On the client

The exit node can now be used with:

```console
$ sudo tailscale set --exit-node phobos
```

Check the official [Tailscale documentation](https://tailscale.com/kb/1103/exit-nodes#use-the-exit-node) for how to do it on your device.

The reverse proxy MUST be configured to support WebSockets to communicate with Tailscale clients.

WebSockets support is also required when using the Headscale [embedded DERP server](../derp.md). In this case, you will also need to expose the UDP port used for STUN (by default, udp/3478). Please check our [config-example.yaml](https://github.com/juanfont/headscale/blob/main/config-example.yaml).
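
For NGINX, WebSocket support boils down to forwarding the HTTP upgrade headers. A minimal sketch (listen address,
certificates and the upstream port are assumptions; adapt to your setup):

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name headscale.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;  # assuming Headscale's listen_addr
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $server_name;
    }
}
```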

### Cloudflare
|||||||
@@ -5,9 +5,18 @@
|
|||||||
This page contains community contributions. The projects listed here are not
|
This page contains community contributions. The projects listed here are not
|
||||||
maintained by the headscale authors and are written by community members.
|
maintained by the headscale authors and are written by community members.
|
||||||
|
|
||||||
This page collects third-party tools and scripts related to headscale.
|
This page collects third-party tools, client libraries, and scripts related to headscale.
|
||||||
|
|
||||||
| Name | Repository Link | Description |
|
- [headscale-operator](https://github.com/infradohq/headscale-operator) - Headscale Kubernetes Operator
|
||||||
| --------------------- | --------------------------------------------------------------- | ------------------------------------------------- |
|
- [tailscale-manager](https://github.com/singlestore-labs/tailscale-manager) - Dynamically manage Tailscale route
|
||||||
| tailscale-manager | [Github](https://github.com/singlestore-labs/tailscale-manager) | Dynamically manage Tailscale route advertisements |
|
advertisements
|
||||||
| headscalebacktosqlite | [Github](https://github.com/bigbozza/headscalebacktosqlite) | Migrate headscale from PostgreSQL back to SQLite |
|
- [headscalebacktosqlite](https://github.com/bigbozza/headscalebacktosqlite) - Migrate headscale from PostgreSQL back to
|
||||||
|
SQLite
|
||||||
|
- [headscale-pf](https://github.com/YouSysAdmin/headscale-pf) - Populates user groups based on user groups in Jumpcloud
|
||||||
|
or Authentik
|
||||||
|
- [headscale-client-go](https://github.com/hibare/headscale-client-go) - A Go client implementation for the Headscale
|
||||||
|
HTTP API.
|
||||||
|
- [headscale-zabbix](https://github.com/dblanque/headscale-zabbix) - A Zabbix Monitoring Template for the Headscale
|
||||||
|
Service.
|
||||||
|
- [tailscale-exporter](https://github.com/adinhodovic/tailscale-exporter) - A Prometheus exporter for Headscale that
|
||||||
|
provides network-level metrics using the Headscale API.
|
||||||
|
|||||||