High memory usage when the changelog retention middleware deletes a large number of object changes #3468

Closed
opened 2025-12-29 18:29:22 +01:00 by adam · 2 comments

Originally created by @haminhcong on GitHub (Mar 12, 2020).

Environment

  • Python version: 3.6.9
  • NetBox version: 2.7.9

Steps to Reproduce

  1. Create/update a large number of objects (100 000) in one day
  2. After 90 days (the default CHANGELOG_RETENTION period), update some objects
  3. Check memory usage (e.g. with the psutil snippet below)
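
To sample a worker's resident memory for step 3, one option is psutil. It is not a NetBox dependency; this is purely a measurement aid run inside the process being observed:

    import os
    import psutil  # assumption: installed separately, not in NetBox's requirements

    # Print the current process's resident set size (RSS) in MB.
    proc = psutil.Process(os.getpid())
    print('RSS: %.1f MB' % (proc.memory_info().rss / 1024 ** 2))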

Expected Behavior

NetBox memory usage remains stable at all times.

Observed Behavior

When the number of ObjectChange records that must be deleted at the end of the retention period is very high (100 000 to 1 000 000 objects in my case), the purge currently performed by the NetBox middleware is not paginated, so memory usage spikes whenever the purge runs (it is triggered on roughly 1% of requests, inside the request/response cycle):

https://github.com/netbox-community/netbox/blob/develop/netbox/extras/middleware.py#L146

        if settings.CHANGELOG_RETENTION and random.randint(1, 100) == 1:
            cutoff = timezone.now() - timedelta(days=settings.CHANGELOG_RETENTION)
            purged_count, _ = ObjectChange.objects.filter(
                time__lt=cutoff
            ).delete()

A gunicorn worker normally uses about 100 MB of RAM; when this purge runs, usage rises to 2 GB and causes a VM out-of-memory error.

I suggest paginating this query to avoid high memory usage and, if needed, moving this process to a django-rq worker to avoid an HTTP timeout (the default is 60 seconds in gunicorn). Something like this:

    cutoff = timezone.now() - timedelta(days=settings.CHANGELOG_RETENTION)
    pagesize = 500
    expired = ObjectChange.objects.filter(
        time__lt=cutoff
    )
    while True:
        # Always take the first page of primary keys: deleted rows no longer
        # match the filter, so advancing an offset would skip surviving records.
        pks = list(expired.values_list('pk', flat=True)[:pagesize])
        if not pks:
            break
        ObjectChange.objects.filter(pk__in=pks).delete()
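
For the second suggestion, here is a minimal sketch of the django-rq variant, assuming django-rq is installed and configured (NetBox already runs rq workers for webhooks). purge_expired_changelog is a hypothetical job name, not existing NetBox code:

    # Hypothetical background job -- a sketch, not NetBox's actual code.
    import django_rq
    from datetime import timedelta
    from django.conf import settings
    from django.utils import timezone
    from extras.models import ObjectChange

    @django_rq.job('default')
    def purge_expired_changelog(pagesize=500):
        # Delete expired changelog entries in fixed-size chunks so the
        # worker never materializes the full queryset in memory.
        cutoff = timezone.now() - timedelta(days=settings.CHANGELOG_RETENTION)
        expired = ObjectChange.objects.filter(time__lt=cutoff)
        while True:
            pks = list(expired.values_list('pk', flat=True)[:pagesize])
            if not pks:
                break
            ObjectChange.objects.filter(pk__in=pks).delete()

The middleware would then only enqueue the job (purge_expired_changelog.delay()) instead of deleting inline, so the HTTP request returns immediately.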

adam added the pending closure label 2025-12-29 18:29:22 +01:00
adam closed this issue 2025-12-29 18:29:22 +01:00

@stale[bot] commented on GitHub (Mar 26, 2020):

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. NetBox is governed by a small group of core maintainers which means not all opened issues may receive direct feedback. Please see our contributing guide: https://github.com/netbox-community/netbox/blob/develop/CONTRIBUTING.md


@stale[bot] commented on GitHub (Apr 2, 2020):

This issue has been automatically closed due to lack of activity. In an effort to reduce noise, please do not comment any further. Note that the core maintainers may elect to reopen this issue at a later date if deemed necessary.

Reference: starred/netbox#3468