UI Racks Table View Performance Scaling Problem - Loading time increases with large number of racks - django N+1 query issue #10752

Closed
opened 2025-12-29 21:35:32 +01:00 by adam · 3 comments
Owner

Originally created by @marcusyuri on GitHub (Feb 8, 2025).

Deployment Type

Self-hosted

NetBox Version

v4.2.3

Python Version

3.10

Steps to Reproduce

  1. Start with a clean NetBox database
  2. Create a site (e.g. "site") and a rack type (e.g. "Rack")
  3. Batch import 10,000 racks into this site (a sample CSV import file is attached below; you may perform a partial upload to avoid a server timeout)

Rack import.txt (https://github.com/user-attachments/files/18719135/Rack.import.txt)

  4. Click on Racks -> Racks

It will take several seconds to load. In our case, with about 20K rack instances in the NetBox database, it takes about 15 seconds.

  5. Enable database debug logging in configuration.py (see the example snippet below)
  6. Click on Racks -> Racks
  7. Inspect the database log
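
For reference, a minimal sketch of one way to surface the SQL log from configuration.py, assuming it is acceptable to run with DEBUG enabled on a test instance; the handler name and layout are illustrative:

```python
# Illustrative snippet for configuration.py: enable Django's SQL query log.
# The 'django.db.backends' logger only emits queries when DEBUG is enabled,
# so use this on a non-production instance only.
DEBUG = True

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # Emits every executed SQL statement, with timing, at DEBUG level.
        'django.db.backends': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}
```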

Expected Behavior

The rack view loads within a few milliseconds, and the database log shows fewer than ~10 SQL queries to render the view.

Observed Behavior

In the database debug log we can see that for each rack row displayed in the NetBox UI, three additional SQL queries are performed to fetch dcim_device, dcim_devicetype, and dcim_rackreservation objects filtered by the current rack. It appears to be a Django N+1 query issue:

I assume those additional queries are used to compute the device count per rack and the available rack-U space statistics.

Since the rack list view only needs the totals, I suggest that refactoring the query to use SQL COUNT/GROUP BY may resolve this performance issue. There are several Django examples of mitigating the N+1 problem, e.g. https://dev.to/herchila/how-to-avoid-n1-queries-in-django-tips-and-solutions-2ajo
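
To illustrate the kind of refactoring being suggested, here is a minimal sketch using Django's Count annotation, assuming the Rack model's devices reverse relation (as exists in NetBox); the device_count annotation name is arbitrary:

```python
# Sketch only: compute per-rack device totals with a single GROUP BY query
# instead of issuing one query per rack row.
from django.db.models import Count
from dcim.models import Rack

racks = Rack.objects.annotate(device_count=Count('devices'))

for rack in racks:
    # device_count is already populated; no additional query per rack
    print(rack.name, rack.device_count)
```

Rack utilization itself (occupied/free U space) would likely need a richer annotation than a plain COUNT (for example, summing device heights), but the single-GROUP BY principle is the same.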

Query #1
(0.006) SELECT "dcim_device"."id", "dcim_device"."created", "dcim_device"."last_updated", "dcim_device"."custom_field_data", "dcim_device"."description", "dcim_device"."comments", "dcim_device"."local_context_data", "dcim_device"."config_template_id", "dcim_device"."device_type_id", "dcim_device"."role_id", "dcim_device"."tenant_id", "dcim_device"."platform_id", "dcim_device"."name", "dcim_device"."serial", "dcim_device"."asset_tag", "dcim_device"."site_id", "dcim_device"."location_id", "dcim_device"."rack_id", "dcim_device"."position", "dcim_device"."face", "dcim_device"."status", "dcim_device"."airflow", "dcim_device"."primary_ip4_id", "dcim_device"."primary_ip6_id", "dcim_device"."oob_ip_id", "dcim_device"."cluster_id", "dcim_device"."virtual_chassis_id", "dcim_device"."vc_position", "dcim_device"."vc_priority", "dcim_device"."latitude", "dcim_device"."longitude", "dcim_device"."console_port_count", "dcim_device"."console_server_port_count", "dcim_device"."power_port_count", "dcim_device"."power_outlet_count", "dcim_device"."interface_count", "dcim_device"."front_port_count", "dcim_device"."rear_port_count", "dcim_device"."device_bay_count", "dcim_device"."module_bay_count", "dcim_device"."inventory_item_count" FROM "dcim_device" INNER JOIN "dcim_devicetype" ON ("dcim_device"."device_type_id" = "dcim_devicetype"."id") WHERE ("dcim_device"."rack_id" = 1327 AND "dcim_device"."position" >= 1 AND NOT ("dcim_devicetype"."exclude_from_utilization")) ORDER BY "dcim_device"."name" ASC, "dcim_device"."id" ASC; args=(1327, Decimal('1')); alias=default

Query #2
(0.001) SELECT "dcim_devicetype"."id", "dcim_devicetype"."created", "dcim_devicetype"."last_updated", "dcim_devicetype"."custom_field_data", "dcim_devicetype"."description", "dcim_devicetype"."comments", "dcim_devicetype"."weight", "dcim_devicetype"."weight_unit", "dcim_devicetype"."_abs_weight", "dcim_devicetype"."manufacturer_id", "dcim_devicetype"."model", "dcim_devicetype"."slug", "dcim_devicetype"."default_platform_id", "dcim_devicetype"."part_number", "dcim_devicetype"."u_height", "dcim_devicetype"."exclude_from_utilization", "dcim_devicetype"."is_full_depth", "dcim_devicetype"."subdevice_role", "dcim_devicetype"."airflow", "dcim_devicetype"."front_image", "dcim_devicetype"."rear_image", "dcim_devicetype"."console_port_template_count", "dcim_devicetype"."console_server_port_template_count", "dcim_devicetype"."power_port_template_count", "dcim_devicetype"."power_outlet_template_count", "dcim_devicetype"."interface_template_count", "dcim_devicetype"."front_port_template_count", "dcim_devicetype"."rear_port_template_count", "dcim_devicetype"."device_bay_template_count", "dcim_devicetype"."module_bay_template_count", "dcim_devicetype"."inventory_item_template_count" FROM "dcim_devicetype" WHERE "dcim_devicetype"."id" IN (504, 397); args=(504, 397); alias=default

Query #3
(0.001) SELECT "dcim_rackreservation"."id", "dcim_rackreservation"."created", "dcim_rackreservation"."last_updated", "dcim_rackreservation"."custom_field_data", "dcim_rackreservation"."comments", "dcim_rackreservation"."rack_id", "dcim_rackreservation"."units", "dcim_rackreservation"."tenant_id", "dcim_rackreservation"."user_id", "dcim_rackreservation"."description" FROM "dcim_rackreservation" WHERE "dcim_rackreservation"."rack_id" = 1327 ORDER BY "dcim_rackreservation"."created" ASC, "dcim_rackreservation"."id" ASC; args=(1327,); alias=default

adam added the type: bug, status: needs owner, pending closure, severity: low labels 2025-12-29 21:35:32 +01:00
adam closed this issue 2025-12-29 21:35:33 +01:00
Author
Owner

@renatoalmeidaoliveira commented on GitHub (Apr 25, 2025):

After investigating this issue, I found that:

  • The N+1 problem occurs when the get_utilization and get_power_utilization columns are displayed. Since get_utilization is one of the default columns, this issue impacts most users.
  • The get_utilization column is a @property field in the Rack model.
  • Within the Rack model's get_utilization method, there are calls to two other methods: get_available_units and get_reserved_units.
  • The get_reserved_units method executes another query for self.reservations.all(). Adding reservations to the RackListView queryset using prefetch_related alleviates the issue, reducing the number of queries from 123 to 85 when tested with NetBox sample data containing 42 rack models.
  • The get_available_units method also performs a separate query for devices, despite a prefetch_related operation in this code (https://github.com/netbox-community/netbox/blob/0f4afbca2c925193646db7b229d42ddc5b58be8e/netbox/dcim/models/racks.py#L517). However, adding devices and devices__device_type to the RackListView queryset using prefetch_related did not yield any performance improvement (see the sketch after this list).
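
To make the mitigation above concrete, here is a rough sketch of the queryset change, assuming NetBox's generic ObjectListView structure; treat the exact relation paths as illustrative:

```python
# Sketch only: prefetch the relations that Rack.get_utilization() walks,
# so per-rack lookups can hit Django's prefetch cache instead of the DB.
from dcim.models import Rack
from netbox.views import generic


class RackListView(generic.ObjectListView):
    queryset = Rack.objects.prefetch_related(
        'reservations',           # used by get_reserved_units()
        'devices__device_type',   # used by get_available_units()
    )
```

As noted above, only the reservations prefetch helps in practice; the devices prefetch is likely bypassed because get_available_units() applies its own filters (position and exclude_from_utilization), which forces a fresh query per rack.
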
Author
Owner

@github-actions[bot] commented on GitHub (Jul 25, 2025):

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. NetBox is governed by a small group of core maintainers which means not all opened issues may receive direct feedback. Do not attempt to circumvent this process by "bumping" the issue; doing so will result in its immediate closure and you may be barred from participating in any future discussions. Please see our contributing guide.

Author
Owner

@github-actions[bot] commented on GitHub (Aug 25, 2025):

This issue has been automatically closed due to lack of activity. In an effort to reduce noise, please do not comment any further. Note that the core maintainers may elect to reopen this issue at a later date if deemed necessary.
