Failed jobs don't display the exception message in RQ Task view #10467

Closed
opened 2025-12-29 21:31:52 +01:00 by adam · 6 comments
Owner

Originally created by @a-belhadj on GitHub (Nov 11, 2024).

Deployment Type

Self-hosted

Triage priority

I'm a NetBox Labs customer

NetBox Version

v4.1.6

Python Version

3.12

Steps to Reproduce

Go to http://localhost:8000/core/background-tasks/<job_failed_id>/ — the exception message isn't present.

Expected Behavior

The exception message should be shown.

Observed Behavior

No exception message is shown.

adam added the type: bug, pending closure, status: revisions needed labels 2025-12-29 21:31:52 +01:00
adam closed this issue 2025-12-29 21:31:52 +01:00

@jeremystretch commented on GitHub (Nov 12, 2024):

Thank you for opening a bug report. Unfortunately, the information you have provided is not sufficient for someone else to attempt to reproduce the reported behavior. Remember, each bug report must include detailed steps that someone else can follow on a clean, empty NetBox installation to reproduce the exact problem you're experiencing. These instructions should include the creation of any involved objects, any configuration changes, and complete accounting of the actions being taken. Also be sure that your report does not reference data on the public NetBox demo, as that is subject to change at any time by an outside party and cannot be relied upon for bug reports.


@a-belhadj commented on GitHub (Nov 13, 2024):

@jeremystretch See PR https://github.com/netbox-community/netbox/pull/17984


@jeremystretch commented on GitHub (Nov 20, 2024):

@a-belhadj per the PR template, we cannot accept a PR until an issue has been accepted and assigned. Further, the change proposed in that PR is not valid: exc_info is provided as template context.

Please provide reproduction steps if you'd like to move forward with this issue.
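For context on the point above: RQ stores a failed job's full traceback as a string (historically exposed as `exc_info`), and a UI typically shows only its final "ExceptionType: message" line. A minimal, self-contained sketch of extracting that summary line — `exception_summary` is a hypothetical helper for illustration, not NetBox code; NetBox's template reads `exc_info` from the template context directly:

```python
def exception_summary(exc_string):
    """Return the last non-empty line of an RQ-style traceback string,
    i.e. the 'ExceptionType: message' line a task-detail view would show.
    (Hypothetical helper for illustration only.)"""
    if not exc_string:
        return None
    lines = [line for line in exc_string.strip().splitlines() if line.strip()]
    return lines[-1] if lines else None

# Example traceback string of the shape RQ records for a failed job
tb = (
    "Traceback (most recent call last):\n"
    '  File "job.py", line 3, in run\n'
    "    raise RuntimeError('boom')\n"
    "RuntimeError: boom\n"
)
print(exception_summary(tb))  # → RuntimeError: boom
```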


@github-actions[bot] commented on GitHub (Nov 28, 2024):

This is a reminder that additional information is needed in order to further triage this issue. If the requested details are not provided, the issue will soon be closed automatically.


@ghost commented on GitHub (Dec 2, 2024):

@jeremystretch I've also encountered this issue multiple times. It seems to occur with memory-intensive tasks executed via a custom script. When the task fails due to resource constraints, the script's state in the UI continues to display as "running," even though it has actually failed on the worker side.

Upon investigating further and checking the logs in the Django RQ admin panel, the exception message is: "work-horse terminated unexpectedly; waitpid returned".

This indicates that the worker process terminates abruptly, but the task state is not properly updated in the UI to reflect the failure. It might be worth examining how task state synchronization is handled between the worker and the UI to address this discrepancy.
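The failure mode described above can be sketched without Redis or RQ. RQ's worker forks a child process (the "work-horse") to run each job and uses `waitpid` to learn how it ended; if the child is killed (e.g. by the OOM killer) before it can record a result, the parent sees a signal death rather than a normal exit. A simplified POSIX sketch of that detection (this is an illustration of the mechanism, not RQ's actual code):

```python
import os
import signal


def run_work_horse(task):
    """Fork a child ('work-horse') to run task, then detect via waitpid
    whether it exited normally or was killed -- a simplified sketch of
    how an RQ-style worker notices a dead horse."""
    pid = os.fork()
    if pid == 0:
        # Child: run the job, then exit without unwinding the parent's stack.
        task()
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):
        # Child was killed before reporting a result (e.g. OOM killer).
        return (
            "work-horse terminated unexpectedly; "
            f"waitpid returned signal {os.WTERMSIG(status)}"
        )
    return "ok" if os.WEXITSTATUS(status) == 0 else "failed"


def doomed():
    # Simulate an OOM kill: the horse dies from SIGKILL mid-task.
    os.kill(os.getpid(), signal.SIGKILL)


print(run_work_horse(doomed))
print(run_work_horse(lambda: None))  # → ok
```

When the horse dies this way, the worker only ever learns "killed by signal N"; the job's own state in Redis is never updated by the child, which is consistent with the UI showing a stale "running" status.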


@jeremystretch commented on GitHub (Dec 6, 2024):

Closing this out as there's been no further response from the submitter.

Reference: starred/netbox#10467