Deployment workflow for the K3S cluster on KubeVirt #49

Closed
opened 2025-12-28 23:19:01 +01:00 by adam · 9 comments
Owner

Originally created by @LokiSharp on GitHub (Nov 11, 2024).

I'd like to ask about the deployment workflow for the K3S cluster in the current configuration.
The kubevirt token is passed over by USB drive, so a normal manual deployment just works; that part is easy to understand.
But I see that your k3s cluster is deployed on kubevirt, with the token stored in the secrets repo and encrypted/decrypted via agenix. The encryption and decryption use each host's public/private key pair, which is only generated after the first deployment run. Do you pull the public/private keys manually after the initial deployment, or is there some automation workflow I'm not seeing?

My current understanding of the workflow:

  1. Build the host's qcow2 image and upload it to the file server
  2. Deploy the cluster-node VMs on kubevirt using the configuration in k8s-gitops
  3. Log in to the VM and extract its public/private keys
  4. Use the extracted keys to add the cluster-node token and update the secrets repo
  5. Rebuild on the node VMs
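Steps 3 and 4 above could be sketched as a small script like this. This is a hypothetical nushell sketch: the hostname, the output file name, and the use of `ssh-keyscan` are illustrative assumptions, not the repo's actual script.

```nu
# Fetch the VM's host *public* key over the wire; the private key never leaves the VM.
# "k3s-node-1" is a placeholder hostname.
ssh-keyscan -t ed25519 k3s-node-1 | lines | first | save -f k3s-node-1.pub

# After adding the new public key as a recipient in secrets.nix,
# re-encrypt every secret so the node can decrypt its token:
agenix --rekey
```

Only public keys cross the network here; the re-encryption runs locally using any private key that can already decrypt the repo.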
adam closed this issue 2025-12-28 23:19:01 +01:00

@ryan4yin commented on GitHub (Nov 11, 2024):

Your understanding is correct: the initial deployment does involve this manual secrets-update step. I just wrote a small nushell script to automate the update flow of steps 3 and 4.
Not pre-generating the key pairs is a security consideration: it ensures the private keys never travel over any network.

The secrets-repo update in steps 3 and 4 doesn't require each machine's private key, only the public keys of all the VMs. Decrypting the secrets needs just any one private key that can decrypt the data, and both my local private key and my backup private key can decrypt the entire secrets repo.
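For example, in an agenix `secrets.nix` each secret lists every public key that should be able to decrypt it, so any one matching private key suffices. This is a hypothetical fragment; the key strings and the file name are placeholders:

```nix
let
  # Placeholder recipients: the admin's local key, a backup key, and one VM's host key.
  admin = "ssh-ed25519 AAAA...admin";
  backup = "ssh-ed25519 AAAA...backup";
  k3sNode1 = "ssh-ed25519 AAAA...node1";
in
{
  # The k3s token can be decrypted by the node itself and by either admin key.
  "k3s-token.age".publicKeys = [ admin backup k3sNode1 ];
}
```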

If your security requirements aren't that strict, you can also pre-generate the key pairs and, on initial deployment, rsync them to all the VMs with a small script or a tool like ansible.


@LokiSharp commented on GitHub (Nov 17, 2024):

@ryan4yin I gave it a try and couldn't get it to work.
Does the VM-image upload script in the utils build script need adjusting?
Also, the /data/apps/caddy/fileserver/vms/ directory doesn't seem to be writable by the users group; can uploads only be done as root?
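If the directory is owned by root, one common fix (assuming the `ryan` user and `users` group from the script, and the path quoted above) is to create it once with group write access, e.g. `sudo install -d -o ryan -g users -m 775 /data/apps/caddy/fileserver/vms`. Sketched below against a scratch prefix so it can run unprivileged:

```shell
# Demonstrate with a throwaway prefix; for the real server, drop $PREFIX and
# run the install command with sudo, passing -o/-g for the uploading user/group.
PREFIX=$(mktemp -d)
install -d -m 775 "$PREFIX/data/apps/caddy/fileserver/vms"
ls -ld "$PREFIX/data/apps/caddy/fileserver/vms"
```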

https://github.com/ryan4yin/nix-config/blob/main/utils.nu

# ==================== Virtual Machines related =====================

# Build and upload a VM image
export def upload-vm [
    name: string
    mode: string
] {
    let target = $".#($name)"
    if "debug" == $mode {
        nom build $target --show-trace --verbose
    } else {
        nix build $target
    }

---    let remote = $"ryan@rakushun:/data/caddy/fileserver/vms/kubevirt-($name).qcow2"
---    rsync -avz --progress --copy-links --checksum result $remote
+++    let remote = $"ryan@rakushun:/data/apps/caddy/fileserver/vms/kubevirt-($name).qcow2"
+++    rsync -avz --progress --copy-links --checksum result/nixos.qcow2 $remote
}

@ryan4yin commented on GitHub (Nov 17, 2024):

For some personal reasons I switched the rakushun machine over to ubuntu a while back, so the current script does work on my end.


@LokiSharp commented on GitHub (Nov 18, 2024):

I successfully deployed the Kubevirt node on PVE and built the K8S test nodes, but the test nodes' cni plugin doesn't seem to initialize properly. Both /etc/cni/net.d and /var/lib/rancher/k3s/agent/etc/cni/net.d are empty.

loki-sharp in 🌐 K3S-Test-1-Master-1 in ~ 
❯ journalctl -u k3s -f
11月 19 00:25:32 K3S-Test-1-Master-1 k3s[6435]: {"level":"warn","ts":"2024-11-19T00:25:32.140979+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-19T00:25:30.673912+0800","time spent":"1.467016688s","remote":"127.0.0.1:38282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":436,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/k3s\" mod_revision:6672 > success:<request_put:<key:\"/registry/leases/kube-system/k3s\" value_size:396 >> failure:<request_range:<key:\"/registry/leases/kube-system/k3s\" > >"}
11月 19 00:25:32 K3S-Test-1-Master-1 k3s[6435]: {"level":"info","ts":"2024-11-19T00:25:32.141184+0800","caller":"traceutil/trace.go:171","msg":"trace[478588034] transaction","detail":"{read_only:false; response_revision:6680; number_of_response:1; }","duration":"1.171831161s","start":"2024-11-19T00:25:30.969344+0800","end":"2024-11-19T00:25:32.141175+0800","steps":["trace[478588034] 'process raft request'  (duration: 1.168951322s)"],"step_count":1}
11月 19 00:25:32 K3S-Test-1-Master-1 k3s[6435]: {"level":"warn","ts":"2024-11-19T00:25:32.141341+0800","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-11-19T00:25:30.969327+0800","time spent":"1.171976296s","remote":"127.0.0.1:38282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":492,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/k3s-cloud-controller-manager\" mod_revision:6673 > success:<request_put:<key:\"/registry/leases/kube-system/k3s-cloud-controller-manager\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/kube-system/k3s-cloud-controller-manager\" > >"}
11月 19 00:25:34 K3S-Test-1-Master-1 k3s[6435]: {"level":"info","ts":"2024-11-19T00:25:34.268381+0800","caller":"traceutil/trace.go:171","msg":"trace[1958828523] transaction","detail":"{read_only:false; response_revision:6688; number_of_response:1; }","duration":"108.696157ms","start":"2024-11-19T00:25:34.15959+0800","end":"2024-11-19T00:25:34.268359+0800","steps":["trace[1958828523] 'process raft request'  (duration: 108.469891ms)"],"step_count":1}
11月 19 00:25:34 K3S-Test-1-Master-1 k3s[6435]: {"level":"info","ts":"2024-11-19T00:25:34.268405+0800","caller":"traceutil/trace.go:171","msg":"trace[1430542136] transaction","detail":"{read_only:false; response_revision:6690; number_of_response:1; }","duration":"108.261087ms","start":"2024-11-19T00:25:34.160126+0800","end":"2024-11-19T00:25:34.268388+0800","steps":["trace[1430542136] 'process raft request'  (duration: 108.147763ms)"],"step_count":1}
11月 19 00:25:34 K3S-Test-1-Master-1 k3s[6435]: {"level":"info","ts":"2024-11-19T00:25:34.268444+0800","caller":"traceutil/trace.go:171","msg":"trace[1262166638] transaction","detail":"{read_only:false; response_revision:6689; number_of_response:1; }","duration":"108.373245ms","start":"2024-11-19T00:25:34.160066+0800","end":"2024-11-19T00:25:34.268439+0800","steps":["trace[1262166638] 'process raft request'  (duration: 108.173927ms)"],"step_count":1}
11月 19 00:25:35 K3S-Test-1-Master-1 k3s[6435]: E1119 00:25:35.105025    6435 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
11月 19 00:25:35 K3S-Test-1-Master-1 k3s[6435]: I1119 00:25:35.357084    6435 range_allocator.go:241] "Successfully synced" key="k3s-test-1-master-2"
11月 19 00:25:40 K3S-Test-1-Master-1 k3s[6435]: E1119 00:25:40.107331    6435 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
11月 19 00:25:40 K3S-Test-1-Master-1 k3s[6435]: {"level":"info","ts":"2024-11-19T00:25:40.449389+0800","caller":"traceutil/trace.go:171","msg":"trace[807137478] transaction","detail":"{read_only:false; response_revision:6708; number_of_response:1; }","duration":"163.012859ms","start":"2024-11-19T00:25:40.28636+0800","end":"2024-11-19T00:25:40.449373+0800","steps":["trace[807137478] 'process raft request'  (duration: 162.920282ms)"],"step_count":1}
11月 19 00:25:44 K3S-Test-1-Master-1 k3s[6435]: E1119 00:25:44.947250    6435 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1"
11月 19 00:25:45 K3S-Test-1-Master-1 k3s[6435]: E1119 00:25:45.108614    6435 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
11月 19 00:25:45 K3S-Test-1-Master-1 k3s[6435]: I1119 00:25:45.531758    6435 garbagecollector.go:826] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
11月 19 00:25:50 K3S-Test-1-Master-1 k3s[6435]: E1119 00:25:50.110082    6435 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
11月 19 00:25:50 K3S-Test-1-Master-1 k3s[6435]: I1119 00:25:50.390091    6435 range_allocator.go:241] "Successfully synced" key="k3s-test-1-master-3"
[root@K3S-Test-1-Master-1:/var/lib/rancher/k3s/agent/etc/cni/net.d]# ls -la
total 8
drwxr-x--x 2 root root 4096 11月18日 23:55 .
drwxr-xr-x 3 root root 4096 11月18日 23:55 ..

I'm a K8S beginner, so I'm not sure whether this networking issue comes from nested virtualization or from the configuration. I tried comparing genKubeVirtGuestModule and genKubeVirtHostModule, and I see a vSwitch is configured for the KubeVirt host. Do the nodes running under genKubeVirtGuestModule also need a vSwitch configured?

  # Enable the Open vSwitch as a systemd service
  # It's required by kubernetes' ovs-cni plugin.
  virtualisation.vswitch = {
    enable = true;
    # reset the Open vSwitch configuration database to a default configuration on every start of the systemd ovsdb.service
    resetOnStart = false;
  };
  networking.vswitches = {
    # https://github.com/k8snetworkplumbingwg/ovs-cni/blob/main/docs/demo.md
    ovsbr1 = {
      # Attach the interfaces to OVS bridge
      # This interface should not be used by the host itself!
      interfaces.${iface} = { };
    };
  };

@ryan4yin commented on GitHub (Nov 19, 2024):

Because I disabled k3s's built-in flannel at [genK3sServerModule.nix#L67](https://github.com/ryan4yin/nix-config/blob/68fa736/lib/genK3sServerModule.nix#L67), a network plugin has to be deployed manually on top of it. My network plugin configuration is here:

https://github.com/ryan4yin/k8s-gitops/tree/main/infra/pre-controllers/base/cilium

Once the network is up, hook the cluster into fluxcd manually; only then do you get gitops-style automated cluster updates.

Also, for beginners I'd recommend first deploying a kubernetes cluster by hand with kubeadm, following tutorials online, to get familiar with the cluster's components. Following my setup directly makes it easy to run into pitfalls.
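For reference, disabling the bundled flannel in the NixOS k3s module is typically done with extra server flags like these. This is a sketch of the usual approach, not the exact contents of genK3sServerModule.nix:

```nix
services.k3s = {
  enable = true;
  role = "server";
  # Hand networking over to an external CNI (cilium here), so k3s must not
  # install its own flannel backend or network policy controller.
  extraFlags = toString [
    "--flannel-backend=none"
    "--disable-network-policy"
  ];
};
```

With flannel off, the cluster stays NotReady (as in the log above) until an external CNI such as cilium is deployed.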


@LokiSharp commented on GitHub (Nov 19, 2024):

Thanks, I'll deploy on a conventional platform first to get familiar with it; nixos + gitops is quite a lot of complexity to learn at once.


@LokiSharp commented on GitHub (Nov 19, 2024):

While browsing your repos I noticed you also forked proxmox-nixos. I'm amazed PVE can even be ported to NixOS. Have you tried deploying it?


@ryan4yin commented on GitHub (Nov 20, 2024):

A few people in the NixOS group chat use proxmox-nixos; since it currently only supports single-node setups, I've only tried it briefly in a VM.


@ryan4yin commented on GitHub (Nov 20, 2024):

You can join our NixOS Chinese-language group and browse the chat history, or search luoxu directly for the proxmox / pve keywords:

https://luoxu.torus.icu/#g=1455914104&q=pve

Reference: starred/nix-config#49