CSV-based bulk update functionality #5721
Closed · opened 2025-12-29 19:31:50 +01:00 by adam · 16 comments
Originally created by @bhunt64848 on GitHub (Dec 2, 2021).
Originally assigned to: @arthanson on GitHub.
NetBox version
v2.10.4
Feature type
Change to existing functionality
Proposed functionality
Being able to import serial numbers or any given field without having to export, delete, and re-import would be absolutely amazing (I wanted to add 45 serial numbers and couldn't do it).
Use case
Improve efficiency when mass-updating the database.
Database changes
No response
External dependencies
No response
@DanSheps commented on GitHub (Dec 2, 2021):
I think a better way to phrase this would be "bulk update via import"
@bhunt64848 commented on GitHub (Dec 2, 2021):
Agreed!
@Mizarv commented on GitHub (Dec 6, 2021):
This has already been suggested in #1732 with a reply from Jeremy.
Given it was 4 years ago, I would love to see this looked at again.
@jeremystretch commented on GitHub (Dec 8, 2021):
I'm afraid this needs quite a bit more detail. What is the proposed workflow? How will you accurately identify existing objects?
@Mizarv commented on GitHub (Dec 9, 2021):
I guess now that we can export CSVs with the object ID, you could use that, since that field can't be updated.
Maybe an additional tab to avoid complicating the existing CSV Data method, as that works excellently.
@martinum4 commented on GitHub (Dec 10, 2021):
Maybe it could work the same as the normal CSV import with regard to attribute extension and selection. For example, a row keyed on the numeric ID (see the sketch below) would be used to update the serial and asset tags for the device with ID 500.
However, it could also be keyed on the device name, since the name is a unique identifier too.
The next page could be similar to the interface-renaming screen, where the pre-change and post-change data for each device is displayed; that way, updating fields that are already set could be avoided.
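A minimal sketch of what such an update file might look like, keyed on the numeric ID (the serial and asset tag values here are hypothetical):

id,serial,asset_tag
500,FDO21481XYZ,0042

Or, keyed on the device name instead (the name is likewise hypothetical):

name,serial,asset_tag
core-sw-01,FDO21481XYZ,0042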
@jeremystretch commented on GitHub (Dec 13, 2021):
Implementing this would require an absolutely reliable unique object identifier, i.e. the object's primary key (numeric ID). If that's acceptable, I think this is reasonably feasible to implement. However, it would likely require a separate view and workflow from the bulk import function, as we would be updating objects rather than creating them, and none of the form fields would be required (similar to the existing bulk edit functionality).
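For reference, the REST API already supports id-keyed bulk updates (bulk object operations were introduced around v2.10), which is the same identification model proposed here for CSV. A minimal sketch in Python, assuming a hypothetical instance at netbox.example.com and a valid API token:

import requests

# Each object in the list carries its primary key plus only the fields to change.
payload = [
    {"id": 500, "serial": "FDO21481XYZ", "asset_tag": "0042"},
    {"id": 501, "serial": "FDO21481ABC", "asset_tag": "0043"},
]

resp = requests.patch(
    "https://netbox.example.com/api/dcim/devices/",
    json=payload,
    headers={"Authorization": "Token 0123456789abcdef"},  # hypothetical token
)
resp.raise_for_status()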
@github-actions[bot] commented on GitHub (Feb 12, 2022):
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. NetBox is governed by a small group of core maintainers which means not all opened issues may receive direct feedback. Please see our contributing guide.
@mikewilliamsjr commented on GitHub (Jun 8, 2022):
Would addressing this issue include ensuring that the data that is exported can be immediately imported once the data is deleted, or can be imported to a fresh instance without curation? Currently, bulk import demands headers formatted in lower-case and requires that some content is in a specific format. If data must be imported in a specific format, the data should be exported in the same format as what is acceptable for import. Usability will be improved and admin effort will be reduced.
@jeremystretch commented on GitHub (Aug 11, 2022):
No. This has been discussed many times and in many cases is not feasible due to import requiring sufficiently unique identifiers for related objects, which would be extraneous in exported data.
@mikewilliamsjr commented on GitHub (Aug 11, 2022):
You've acknowledged that including the identifiers in imported data would make martinum4's suggestion feasible. How is their suggestion materially different from mine, aside from their pure focus on importing updates? Support for updating existing records is effectively the same as importing new, unique records. The same process for ensuring primary keys are not duplicated could be used in both instances.
The logical next step is to provide a means for easily accessing the primary keys. A toggled setting to include/exclude the keys on export could improve the usability of this tool, especially if your intent is to develop a new workflow for bulk update. The keys are only extraneous if the intent is to use exported data in some other application with no intent to import; if the intent is to migrate to a fresh instance or programmatically update existing data, then rather than being extraneous, the keys seem mandatory.
@jeremystretch commented on GitHub (Aug 11, 2022):
A primary key doesn't exist until the object has been created in the database. It is not possible to reference one until after the object has been imported.
Let's please keep this discussion limited to the scope of the FR, which involves the bulk update of existing objects.
@mikewilliamsjr commented on GitHub (Aug 12, 2022):
Right; hence the suggestion to add an option to include the key in exported data.
The point I'm making is that both requests can be supported with a more useful export. IOW, by making it possible to include the keys for each record with the exported data, you would enable users to export the data, manipulate the data as required, then either import to the existing instance (update) or to a new instance.
Please forgive me for not realizing how I'm off topic. Unless my understanding is far off, it seems like the path to CSV-based bulk updates would include a means for retrieving the primary key of each record. From there, the next logical step seems to be to ensure that exported column headers match the required column headers for import. Each of these steps should simplify the logic and/or administrative effort required for data validation.
@jdavidson2021 commented on GitHub (Aug 12, 2022):
Maybe have two choices during CSV export: one to provide the data with "friendly names" as currently shown in the GUI (the current design), and a second to export the data with "import column names". We currently export the data as CSV, delete the objects, massage all of the column names that won't import correctly from the current CSV export, update our data in the CSV, and then re-import it. I'm sure this is what most users are doing today when they need to bulk-update information.
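To illustrate the mismatch (the exact header names here are just examples): an export might produce human-friendly headers such as

Name,Device Type,Serial Number

while the import form expects lower-case field names such as

name,device_type,serial

so a second export mode using the import column names would remove the manual massaging step.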
@jeremystretch commented on GitHub (Aug 12, 2022):
To reiterate, this FR has nothing to do with object export; the functionality required to export object IDs is already present. The work necessary to implement this FR involves:
(This is assuming the above is feasible. If not, we'll need to introduce a new form & view to accommodate the functionality.)
@jnovak-netsystemcz commented on GitHub (Nov 29, 2022):
Hi,
I'm focused on updating existing items. I tested it and it works, but I have a proposal:
The import description in 'Field Options' states that 'id' is required for updates, but I'm afraid it is not emphasized enough. I propose moving that note to the top of the description, i.e. adding something like this above 'Field Options':
"If you would like to update existing items, you must add an 'id' column." and leaving 'id' in the list of fields.
To be honest, I propose splitting import and update into separate actions: name the import action 'import new', with no 'id' in the list; for update, list 'id' as the first item and mark it as mandatory. In that case the sentence proposed above is not needed.
I know users may complain that they need import and update at the same time, but I think it is easy to split the input file into two parts (new items and updated items) and run it in two steps, as sketched below. For newcomers it will be much more straightforward.
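For example (device names and values are hypothetical, and columns are abbreviated): a 'new items' file with no id column,

name,site,serial
edge-sw-07,dc1,FDO22110AAA

and a separate 'updated items' file keyed on id,

id,serial
500,FDO21481XYZ

each run as its own import.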