Expand NetBox DCIM for Storage Management #7518

Open
opened 2025-12-29 20:24:35 +01:00 by adam · 8 comments

Originally created by @danner26 on GitHub (Jan 15, 2023).

NetBox version

v3.4.2

Feature type

Data model extension

Proposed functionality

With the continued expansion of DCIM functionality, I believe it is time we start adding storage management within the DCIM module. Beyond switches that utilize MiniSAS for stacking, please see my use case below for more on why I believe this would be useful.

Storage infrastructure can be complex, especially once you step into the Fibre Channel, iSCSI, and SAS world. IQNs, WWNs, and other configuration details are all important to a properly functioning virtual environment, and adding storage management to NetBox would be a great addition to the existing data center infrastructure management system. Please feel free to comment on any of my ideas! In my opinion, this is a complex and important topic to get right out of the gate.

I propose that we add/modify the following:

  1. A new component type for storage disks, probably added to `dcim`, referred to as `storage-slot`: an empty slot within a device which can house one storage disk.
  • Actual disks might be added to Module Types, or they might get their own definition within devices.
  • Alternatively, we could modify `ModuleType` to include optional fields such as `disk_size` and `disk_speed`.
  2. A new component type for storage interfaces, probably added to `dcim`, referred to as `storage-interface`. This part could get a little hairy since, for example, some switches use MiniSAS interfaces for stacking. This is just a rough idea.
  • Migrate Fibre Channel to storage interfaces, since it is primarily used in SAN data center configurations (up in the air; it might be easier to leave it where it is for now/ever).
  • Migrate iSCSI to storage interfaces, since it is primarily used in storage data center configurations (up in the air; it might be easier to leave it where it is for now/ever).
  • Add SAS interfaces, as well as any other external storage interfaces.
  3. RAID configuration. We would first require that disks are added to a device, then create a `raid-group` containing a list of disks. This group would also need certain properties, such as `raid-type`. For this to work properly, we would need validation before saving; specifically, we would need to check that the minimum number of disks for each RAID type has been added to the group (see the sketch after this list).
  4. ???. I am sure I am missing parts, so this will really need to be fleshed out within the community. I think a project board for this alone makes sense as well, since this is going to be a massive addition.
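To make the validation in item 3 concrete, here is a minimal sketch of what `storage-slot` and `raid-group` could look like as Django models in the `dcim` app. Everything here is an assumption for illustration: the names, the fields, and the choice to reuse `ModuleType` for disks are not actual NetBox code.

```python
from django.core.exceptions import ValidationError
from django.db import models

# Minimum number of member disks required for each RAID type (illustrative).
RAID_MIN_DISKS = {'raid0': 2, 'raid1': 2, 'raid5': 3, 'raid6': 4, 'raid10': 4}


class StorageSlot(models.Model):
    """An empty slot within a device that can house one storage disk."""
    device = models.ForeignKey('dcim.Device', on_delete=models.CASCADE, related_name='storage_slots')
    name = models.CharField(max_length=64)
    # The installed disk, modeled here as a ModuleType purely for illustration.
    disk = models.ForeignKey('dcim.ModuleType', null=True, blank=True, on_delete=models.SET_NULL)


class RAIDGroup(models.Model):
    """A RAID set built from a device's populated storage slots."""
    device = models.ForeignKey('dcim.Device', on_delete=models.CASCADE, related_name='raid_groups')
    raid_type = models.CharField(max_length=16, choices=[(t, t.upper()) for t in RAID_MIN_DISKS])
    slots = models.ManyToManyField(StorageSlot, related_name='raid_groups')

    def clean(self):
        # Enforce the per-type minimum disk count before saving. Many-to-many
        # members can only be counted once the group has a primary key.
        minimum = RAID_MIN_DISKS[self.raid_type]
        if self.pk and self.slots.count() < minimum:
            raise ValidationError(
                f'{self.raid_type} requires at least {minimum} disks; '
                f'this group has {self.slots.count()}.'
            )
```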

Use case

In our data centers we have a handful of clusters that utilize DAS storage. We use Dell ME4024s, which have two modular controllers (A/B), each containing four SAS interfaces. Currently, when modeling these devices along with their module types, we are forced to use an “other” interface type. While this technically works, I believe it would be helpful to have the exact type available to those who work in the storage industry.

It is my intention to track each World Wide Name or iSCSI Qualified Name, as well as their mapping information. In addition to assisting with manual configuration management, especially for our data centers and their teams, I also intend to integrate our NetBox instance with Dell OpenManage/LibreNMS monitoring. While it is rare, we sometimes receive SNMP alerts regarding specific IQNs. Using the extensive data set within NetBox, we could correlate the alert, automatically gather the specific interface, cable, and downstream client device, and run programmatic tests to verify the configuration.
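As a rough illustration of that correlation workflow, a script reacting to an IQN alert might look something like this with pynetbox. The `cf_iqn` custom-field filter is a placeholder for however IQNs/WWNs would actually end up being modeled:

```python
import pynetbox

nb = pynetbox.api('https://netbox.example.com', token='...')


def correlate_iqn_alert(iqn):
    """Map an SNMP alert about an IQN back to NetBox objects (sketch)."""
    # Assumes a hypothetical custom field "iqn" on interfaces.
    for iface in nb.dcim.interfaces.filter(cf_iqn=iqn):
        print(f'Device: {iface.device.name}, interface: {iface.name}')
        # Follow the documented cable to reach the downstream peer, if any.
        if iface.cable:
            print(f'Connected via cable #{iface.cable.id}')
```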

Database changes

Addition of the following tables, along with their supporting links/indexes (this is a very rough idea; community input is needed): `dcim.storage_slot`, `dcim.storage_interface`, and possibly `dcim.raid`. We might even want to add a completely new data model for `Storage`.

External dependencies

No response

adam added the type: feature, netbox, needs milestone, status: backlog, and complexity: high labels 2025-12-29 20:24:35 +01:00

@stavr666 commented on GitHub (Jan 17, 2023):

While I agree with the general need for storage management in the DC, we rarely see a need to provision hardware storage or change physical drives or RAID types. Only when a server detects a broken drive do we need some information, and a list of assets and replacement parts is more than enough for us.

On the other hand, there is massive operational churn with hardware and software SANs, cloud storage, and project/tenant resources (costs too, but that is CMDB information, outside of engineering tasks and NetBox's DCIM approach), VM migration contexts, and monitored objects.

So we are more interested in documenting VHDs, mounted cluster VVs, cloud tenant/blob/storage classes, and other communication/replication data, not the physical aspects.


@danner26 commented on GitHub (Jan 18, 2023):

I completely agree with you, @stavr666

It is important to keep in mind that there are many moving parts, as you mentioned, both virtual and physical. The underlying physical components are just as important as the virtual ones. In my opinion, the more information an engineer has, the better; that is the best way to operate. Plus, the eventual goal of feeding data into other information systems would be helpful, especially for our particular use case. However, it all has to begin somewhere, and I believe NetBox is ready and capable of tackling storage management.


@MalfuncEddie commented on GitHub (Jan 20, 2023):

> On the other hand, there is massive operational churn with hardware and software SANs, cloud storage, and project/tenant resources (costs too, but that is CMDB information, outside of engineering tasks and NetBox's DCIM approach), VM migration contexts, and monitored objects.

I do not think this is outside "NetBox's DCIM approach". I've used Ansible to deploy VMs from information on the NetBox side. Currently I use a custom field to specify the kind of storage, but it would be nice if there were a report that shows how much storage of a certain type is allocated.

Most SAN storage systems gather a bunch of disks/RAID sets into a pool that can then be allocated to a VM or device.

So, something like this:

  • NetBox object StoragePool: size, type (SSD/HDD; maybe like a role?), replicated, ...
  • NetBox object LUN/disk: a virtual disk attached to a device/VM

I do believe we need some kind of parent/child relation between pools:

  • VM: parent pool (3PAR storage) -> child pool (VMware datastore) -> LUN/disk mapped to the VM
  • Device: parent pool (3PAR storage) -> raw device mapping to the device
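A minimal sketch of that pool/LUN hierarchy, again with entirely hypothetical names and fields rather than anything NetBox actually ships:

```python
from django.db import models


class StoragePool(models.Model):
    """A pool of disks/RAID sets; may be carved from a parent pool,
    e.g. a VMware datastore backed by a 3PAR pool."""
    name = models.CharField(max_length=100)
    parent = models.ForeignKey('self', null=True, blank=True, on_delete=models.PROTECT, related_name='children')
    size_gb = models.PositiveBigIntegerField()
    media_type = models.CharField(max_length=3, choices=[('ssd', 'SSD'), ('hdd', 'HDD')])
    replicated = models.BooleanField(default=False)


class LUN(models.Model):
    """A virtual disk carved from a pool and attached to a device or a VM."""
    pool = models.ForeignKey(StoragePool, on_delete=models.PROTECT, related_name='luns')
    device = models.ForeignKey('dcim.Device', null=True, blank=True, on_delete=models.CASCADE)
    virtual_machine = models.ForeignKey('virtualization.VirtualMachine', null=True, blank=True, on_delete=models.CASCADE)
    size_gb = models.PositiveBigIntegerField()
```

With a self-referential `parent` field, the per-type allocation report mentioned above reduces to summing `size_gb` over `LUN` rows grouped by their pool's `media_type`.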


@ArKam commented on GitHub (Feb 28, 2023):

To second this request: with SDS, I do think managing disk resources as requested is within NetBox's approach. For instance, we operate our NetBox as the SoT for our whole infrastructure; it is the starting point of information for everything, whether for the automation system that provisions our servers by modeling the storage layout and network from it, or for our DC guys.

Currently, we abuse inventory items, but it would be nice to really rework the module vs. device feature.

Here is an example for the HPE DL385 Gen10 Plus v2:

It holds three front SFF drive bays, named universal media bays, each hosting four slots that can be filled with SSD/HDD SATA/SAS drives. Currently, device types cannot express an appropriate parent/child tree for this kind of setup.


@eronlloyd commented on GitHub (May 2, 2023):

Definitely +1 on this. As I'm building out our DC infrastructure and learning more about the design and centrality of storage in hyperconvergence, making storage a first-class consideration in DCIM modeling, just like network and compute resources, is essential.


@iopsthecloud commented on GitHub (Aug 10, 2023):

Our use cases are linked to our ISO 27001 obligations:

  • traceability of disks in equipment
  • keeping stock/spares in our DC
  • traceability of physical destruction or shredding

All these procedures are automated by dynamically retrieving disk identification from servers and SANs, and we maintain this information dynamically.

Today, we try to do it via inventory items, but this does not work well, as we have to attach the asset to a device, and thus we lose traceability.
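For reference, that inventory-items workaround can be scripted along these lines with pynetbox; the limitation is exactly the one noted above, since each item only exists while attached to a device:

```python
import pynetbox

nb = pynetbox.api('https://netbox.example.com', token='...')


def sync_disks(device_name, discovered_serials):
    """Push disk serials discovered on a server/SAN into NetBox (sketch)."""
    device = nb.dcim.devices.get(name=device_name)
    existing = {i.serial for i in nb.dcim.inventory_items.filter(device_id=device.id)}
    for serial in discovered_serials:
        if serial not in existing:
            # Inventory items must be attached to a device, so traceability
            # ends once the disk is pulled (the limitation noted above).
            nb.dcim.inventory_items.create(device=device.id, name=f'Disk {serial}', serial=serial)
```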


@Zorlin commented on GitHub (Dec 31, 2023):

I'm now in the VERY early stages of working on [a plugin to solve this](https://github.com/Zorlin/netbox-physical-storage) for my use case. Who knows if it'll go anywhere :)


@RedShift1 commented on GitHub (Oct 23, 2025):

In the meantime, can the SAS interface types (SFF-8088, SFF-8644, ...) be added to the interface type choices? Adding whole new object types just to document this one cable connection seems like overkill.
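For context, NetBox defines interface types as static choices in `dcim/choices.py`, so the interim ask would be roughly a change like the following; the slugs and the group label are hypothetical:

```python
from utilities.choices import ChoiceSet


class InterfaceTypeChoices(ChoiceSet):
    # ... existing type constants ...
    TYPE_SAS_SFF8088 = 'sas-sff-8088'  # hypothetical slug
    TYPE_SAS_SFF8644 = 'sas-sff-8644'  # hypothetical slug

    CHOICES = (
        # ... existing groups (Ethernet, Fibre Channel, ...) unchanged ...
        (
            'SAS',
            (
                (TYPE_SAS_SFF8088, 'SFF-8088'),
                (TYPE_SAS_SFF8644, 'SFF-8644'),
            ),
        ),
    )
```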
