Mirror of https://github.com/netbox-community/netbox.git
Netbox workers exhibit slow memory leak #2948
Closed · opened 2025-12-29 18:23:52 +01:00 by adam · 6 comments
Originally created by @ajknv on GitHub (Oct 11, 2019).
Environment
Steps to Reproduce
(I did this as an A/B test on two servers in a clustered HA configuration)
Expected Behavior
While the gunicorn max_requests configuration does provide a workaround, ideally a long-running server application like Netbox shouldn't slowly drain all memory on the host until it seizes up.
Observed Behavior
I first started tracing and experimenting with performance parameters like gunicorn's max_requests while debugging why both instances in the HA cluster became completely unresponsive, seizing up the VM instances so badly that they had to be hard-terminated in AWS and replaced. I'm continuing to observe server A for a similar total failure to fully verify that root cause, but I feel fairly confident that this memory-leaking behavior is the primary smoking gun.
@DanSheps commented on GitHub (Oct 11, 2019):
I am going to go out on a limb and say this is going to be an upstream issue.
There are four layers here:
Most functions are abstracted by the first two, so Netbox itself does not have much interaction with the operating system at all. The memory leak, if you determine there is one, could come from any of these layers.
@jeremystretch commented on GitHub (Oct 12, 2019):
Agreed. Unless you can attribute the leak to something particular inside NetBox, I don't think there's much we can do. Although it might be worth adding max_requests to the reference gunicorn config in the docs to mitigate the issue.
@ajknv commented on GitHub (Oct 14, 2019):
It is possible it could be upstream, no doubt, though given the popularity of all of those layers that conclusion would seem to imply that every single project built on pretty widely adopted frameworks suffers from a fairly aggressive memory leak. IMO that doesn't really pass the Occam's Razor test for explaining the issue.
What I mean by a memory leak in this context is the likelihood of something hanging on to and accumulating references to Python objects such that they can't be garbage collected, e.g. a memory leak in the sense typical of higher-level languages, not just something like a malloc request made directly to the OS. Something like DB result sets or whatnot.
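To make that concrete, here is a minimal, generic sketch (not code from NetBox) of what checking for that kind of accumulation can look like: counting live, gc-tracked objects by type before and after a batch of requests using the standard gc module. The object_type_counts helper is made up for this example; any type whose count keeps climbing between snapshots is a candidate for leaked references.

```python
import gc
from collections import Counter

def object_type_counts():
    """Count live, gc-tracked Python objects by type name."""
    gc.collect()  # reclaim anything that is genuinely collectable first
    return Counter(type(obj).__name__ for obj in gc.get_objects())

# Snapshot, exercise the application (e.g. replay a batch of API requests),
# then snapshot again and look at what grew.
before = object_type_counts()
# ... serve some requests here ...
after = object_type_counts()

for type_name, growth in (after - before).most_common(10):
    print(f"{type_name}: +{growth}")
```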
I'm fairly puzzled: is the description provided in the issue not even convincing to you that there is definitely a memory leak, setting aside doubts about which layer it is coming from?
I can try to find the time to help, but I was hoping that, as the maintainers of the project, you might consider it worthwhile to run some tools (static analysis/linters, memory analysis, etc.) or even just do an inspection pass over the data-handling code to look for possible culprits within Netbox.
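As one example of the kind of memory analysis being suggested, the standard library's tracemalloc module can compare allocation snapshots taken before and after a batch of requests and report which source lines grew the most. This is a generic sketch under the assumption that it runs inside a worker process, not an excerpt from NetBox.

```python
import tracemalloc

tracemalloc.start(25)  # record up to 25 frames of traceback per allocation

baseline = tracemalloc.take_snapshot()
# ... serve or replay a batch of requests against this worker ...
current = tracemalloc.take_snapshot()

# Report where allocated memory has grown the most since the baseline.
for stat in current.compare_to(baseline, "lineno")[:10]:
    print(stat)
```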
@jeremystretch commented on GitHub (Oct 15, 2019):
I'm sure it's worthwhile, but so are the 186 other issues currently open that our four (part-time) maintainers are tasked with managing. Open source projects depend on contributions from their users to survive. @ajknv, if you're willing to commit to doing this work I'll leave this issue open. Otherwise, I think adding a max_requests limit in the gunicorn configuration is a very acceptable workaround.
@tyler-8 commented on GitHub (Oct 16, 2019):
I agree that max_requests is really the way to go. Memory leaks can be difficult and time-intensive to pin down, and even then may eventually be attributed to any number of third party libraries in use (if the issue truly lies in the Netbox code to begin with).
I would add that max_requests should be paired with max_requests_jitter in the suggested configuration. This helps prevent all the gunicorn workers from being restarted at nearly the same time. I'm using uwsgi myself but it has similar configuration parameters.
Further general recommendations: https://adamj.eu/tech/2019/09/19/working-around-memory-leaks-in-your-django-app/
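For reference, the workaround being discussed comes down to two settings in the gunicorn configuration file. Both are standard gunicorn options; the numbers below are placeholders rather than values recommended by the NetBox documentation.

```python
# gunicorn configuration file (example values only)

# Recycle each worker after it has served roughly this many requests,
# releasing whatever memory it has accumulated.
max_requests = 5000

# Add a random offset of up to this many requests to each worker's limit
# so that the workers do not all restart at the same moment.
max_requests_jitter = 500
```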
@jeremystretch commented on GitHub (Nov 1, 2019):
Going to close this out as #3658 seems to provide an adequate workaround in the absence of a known root issue.