panic runtime error after host reboot #100

Closed
opened 2025-12-29 14:24:56 +01:00 by adam · 10 comments

Originally created by @DrPulse on GitHub (Sep 6, 2025).

Hi,
First, thank you for this amazing piece of software; it's really intuitive, easy to use, and works really well, especially for simple homelab setups. It's nice to see efforts like this in the reverse proxy space.
I have run this on my NAS since version 0.16.1 and it worked flawlessly.
But since version 0.16.2 or 0.17.0 (I'm not sure which), and after rebooting my host machine, I started getting the following errors when deploying the compose file, with the `godoxy-proxy` container crashing and restarting over and over:

```
failed to create modcache index dir: mkdir /.cache: permission denied
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x8f77d6]
```

I used the provided compose and .env files, except that I bound the volumes elsewhere, and the permissions seem correct; I even did a `chown -R 1000:1000` on the folder. I tried deleting the stack, the folders, everything, and recreating it, with the same results.
The `config.yml` file is very close to the base one, using Cloudflare and the DNS challenge, which successfully creates the certificates.
The Docker socket proxy also seems to be working well, because in the stack logs all the running containers are added to the HTTP routes.

The errors seem close to the ones in issue #106, but they appear to be related to missing permissions, and in my case I don't know where.
To add a bit more context, the stack is running on a QNAP NAS and was running well before the reboots. However, it works perfectly when running in a VM on the said NAS (I switched the DNS and made sure it wasn't the root cause).

I'm not sure what could be causing the crashes, as all env variables are provided, the certificates are created, and the socket proxy is working well.
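For reference, a minimal sketch of the kind of layout described above, assuming GoDoxy's provided compose file with the volumes rebound to custom host paths (the image reference, container paths, and host paths are all placeholders, not the reporter's actual values):

```yaml
# Hypothetical sketch only — based on the setup described in this issue,
# not the reporter's actual files.
services:
  godoxy-proxy:
    image: ghcr.io/yusing/godoxy:latest   # placeholder image reference
    user: "1000:1000"                     # matches the chown -R 1000:1000 above
    env_file: .env
    volumes:
      # volumes bound elsewhere than the defaults
      - /share/godoxy/config:/app/config
      - /share/godoxy/certs:/app/certs
      - /share/godoxy/data:/app/data
```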

adam closed this issue 2025-12-29 14:24:56 +01:00

@yusing commented on GitHub (Sep 6, 2025):

> failed to create modcache index dir: mkdir /.cache: permission denied

That is very weird; GoDoxy does not use the `/.cache` directory. Could you show me the full log?

Found out it could be related to the `swaggo/swag` package, as it's the only thing that needs `golang.org/x/tools`, which is where the error comes from. Will remove the dependency and see if the problem is resolved.
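For context, one plausible way such a path arises (an illustration, not a claim about the exact code path in `golang.org/x/tools`): if the container user's `HOME` is `/` and no `XDG_CACHE_HOME` is set, a cache directory derived from the home directory resolves to `/.cache`, which a non-root user cannot create:

```go
// Illustration: how a cache path can resolve to /.cache in a container.
// Hypothetical reproduction, assuming the indexer derives its cache
// directory from the user's home directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home := os.Getenv("HOME") // often "/" for a container user with no home dir
	if home == "" {
		home = "/"
	}
	cacheDir := filepath.Join(home, ".cache") // "/.cache" when HOME is "/"
	fmt.Println("cache dir:", cacheDir)

	// A non-root user has no write access to /, so this fails with
	// "mkdir /.cache: permission denied" — matching the log above.
	if err := os.MkdirAll(cacheDir, 0o755); err != nil {
		fmt.Println("error:", err)
	}
}
```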


@yusing commented on GitHub (Sep 6, 2025):

@DrPulse Please check whether it's fixed in v0.17.3.


@DrPulse commented on GitHub (Sep 6, 2025):

The error related to `/.cache` is gone with v0.17.3. For the record, I couldn't find any other information in the logs related to it, even in debug mode.

However, I still end up with the panic runtime error. One thing I noticed is that the JSON files that are supposed to live in the `data` folder are not created, even though the logs say they are loaded:

```
godoxy-proxy  | 09-06 18:04 DBG loaded store namespace=.icon_cache path=data/.icon_cache.json
godoxy-proxy  | 09-06 18:04 DBG loaded store namespace=.homepage path=data/.homepage.json
godoxy-proxy  | 09-06 18:04 DBG loaded store namespace=captcha_sessions path=data/captcha_sessions.json
```

On my VM they are created fine, but in my current deployment context the files just don't exist. Maybe this is the cause of the panic, as metrics and icons are trying to be written?
This might be an issue on my end, though, but I don't really see where right now.


@yusing commented on GitHub (Sep 6, 2025):

Those files are only saved when the process exits (gracefully). If possible, please run the `godoxy` container as the root user once; then, when it panics, it will print the full error stack.
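In compose terms, that can be done with a temporary override (a sketch; the service name follows this thread, everything else is standard compose syntax):

```yaml
# docker-compose.override.yml — temporarily run GoDoxy as root so the
# panic can print a full stack trace; revert once the log is captured.
services:
  godoxy-proxy:
    user: "0:0"
```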


@DrPulse commented on GitHub (Sep 6, 2025):

Here is the full stack trace when run as the root user:

```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x58 pc=0x8f7456]

goroutine 152 [running]:
internal/sync.(*Mutex).Lock(...)
	/usr/local/go/src/internal/sync/mutex.go:63
sync.(*Mutex).Lock(...)
	/usr/local/go/src/sync/mutex.go:46
github.com/yusing/go-proxy/internal/task.(*Task).finish(0x0, {0x2799d00, 0x3408300}, 0x0)
	/src/internal/task/task.go:185 +0x36
github.com/yusing/go-proxy/internal/task.(*Task).Finish(...)
	/src/internal/task/task.go:79
github.com/yusing/go-proxy/internal/watcher/health/monitor.(*monitor).Finish(...)
	/src/internal/watcher/health/monitor/monitor.go:174
github.com/yusing/go-proxy/internal/route.(*Route).SetHealthMonitor(...)
	/src/internal/route/route.go:393
github.com/yusing/go-proxy/internal/idlewatcher.NewWatcher({0x343bcc0, 0xc000680f00}, {0x3461780, 0xc00001ecf0}, 0xc0000b4620)
	/src/internal/idlewatcher/watcher.go:323 +0x1e7e
github.com/yusing/go-proxy/internal/idlewatcher.NewWatcher({0x343bcc0, 0xc000680f00}, {0x3461780, 0xc000789c50}, 0xc0000b4540)
	/src/internal/idlewatcher/watcher.go:213 +0xaeb
github.com/yusing/go-proxy/internal/route.(*ReveseProxyRoute).Start(0xc000789c50, {0x343bcc0, 0xc000680f00})
	/src/internal/route/reverse_proxy.go:101 +0xd3
github.com/yusing/go-proxy/internal/route.(*Route).start(0xc0007a4480, {0x343bcc0?, 0xc000680f00?})
	/src/internal/route/route.go:287 +0x9f
github.com/yusing/go-proxy/internal/route.(*Route).Start.func1()
	/src/internal/route/route.go:276 +0x28
sync.(*Once).doSlow(0x6163333064393631?, 0x6561376335633836?)
	/usr/local/go/src/sync/once.go:78 +0xac
sync.(*Once).Do(...)
	/usr/local/go/src/sync/once.go:69
github.com/yusing/go-proxy/internal/route.(*Route).Start(0x7375792f6d6f632e?, {0x343bcc0?, 0xc000680f00?})
	/src/internal/route/route.go:275 +0x5f
github.com/yusing/go-proxy/internal/route/provider.(*Provider).startRoute(0xc00029ce60, {0x343bcc0?, 0xc000680f00?}, 0xc0007a4480)
	/src/internal/route/provider/provider.go:211 +0x2e
github.com/yusing/go-proxy/internal/route/provider.(*Provider).Start.func1(0x7270222c22332e37?)
	/src/internal/route/provider/provider.go:114 +0x65
created by github.com/yusing/go-proxy/internal/route/provider.(*Provider).Start in goroutine 1
	/src/internal/route/provider/provider.go:112 +0x325
```
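For readers of the trace: `(*Task).finish` is entered with a nil receiver (the `0x0` first argument), and locking a mutex field through that nil pointer faults at the field's offset within the struct (`addr=0x58`). A minimal sketch of that failure mode, with hypothetical types standing in for `internal/task.Task`:

```go
// Sketch of the failure mode seen above: a method called on a nil
// pointer compiles and runs fine until it dereferences the receiver.
package main

import "sync"

type Task struct {
	name     string
	children []*Task
	mu       sync.Mutex // sits at some nonzero offset, hence a fault address like 0x58
}

func (t *Task) Finish() {
	t.mu.Lock() // SIGSEGV here when t == nil: nil pointer dereference
	defer t.mu.Unlock()
	// ... mark the task finished ...
}

func main() {
	var t *Task // e.g. a route whose task was never initialized
	t.Finish()  // panics inside mu.Lock, mirroring the trace
}
```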

@yusing commented on GitHub (Sep 7, 2025):

Could you show me the full docker compose of the container(s) with idlewatcher enabled (and their dependencies: all containers in their `depends_on` or `proxy.idlewatcher.depends_on`)?


@DrPulse commented on GitHub (Sep 7, 2025):

[I wrote here](https://pastebin.com/7GjiYjuU) the compose files for the services where idle timeouts are activated.
While putting it together, I noticed that I used `godoxy.exclude=True` on some dependency containers (paperless or calibre-download); maybe it's mutually exclusive with the idle timeouts?
For the compose services without dependencies I only wrote one, as they all follow the same pattern as the one shown, occasionally with different aliases, but no other differences.
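A hypothetical sketch of the arrangement described (only the `godoxy.exclude` and `proxy.idlewatcher.depends_on` names come from this thread; images, values, and the label's exact value format are placeholders, not the actual pastebin contents):

```yaml
# Hypothetical shape: a proxied service with idlewatcher enabled and a
# dependency excluded from proxying.
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    depends_on:
      - paperless-db
    labels:
      proxy.idlewatcher.depends_on: paperless-db  # placeholder value format
  paperless-db:
    image: postgres:16
    labels:
      godoxy.exclude: "true"   # the dependency itself is not proxied
```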


@yusing commented on GitHub (Sep 8, 2025):

Fixed. Please update and report back, thanks.


@yusing commented on GitHub (Sep 8, 2025):

> I noticed that I used `godoxy.exclude=True` on some dependency containers (paperless or calibre-download); maybe it's mutually exclusive with the idle timeouts?

It's not; you can exclude dependencies from proxying.


@DrPulse commented on GitHub (Sep 8, 2025):

> Fixed. Please update and report back, thanks.

It now works perfectly. Thank you very much for your responsiveness!
