Virtual Chassis enhancement - Add the ability to name a virtual chassis #1668

Closed
opened 2025-12-29 16:34:10 +01:00 by adam · 20 comments
Owner

Originally created by @jsenecal on GitHub (Apr 9, 2018).

Originally assigned to: @jeremystretch on GitHub.

Issue type

[X] Feature request
[ ] Bug report
[ ] Documentation

Environment

  • Python version: Python 3.5.2
  • NetBox version: Develop branch [07364ab]

Description

Currently, virtual chassis are named after their master *node*. It would be great if there were a way to name them according to their "virtual" identity.

For instance: A two member firewall virtual chassis could have two separate names and a virtual one representing the two units.

  • node1.fw1.somesite.net
  • node2.fw1.somesite.net

Their virtual name is fw1.somesite.net

See what I did there?

A `name` CharField would need to be added to the [VirtualChassis model](https://github.com/digitalocean/netbox/blob/develop/netbox/dcim/models.py#L1646).
It should be optional but used in the [`__str__` method](https://github.com/digitalocean/netbox/blob/develop/netbox/dcim/models.py#L1664) when set.

I can submit a PR for this functionality if approved.
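A minimal sketch of the proposed behavior (a plain-Python stand-in, not NetBox's actual Django model; the `master` attribute here is just the master device's name string, for illustration):

```python
class VirtualChassis:
    """Hypothetical stand-in for NetBox's VirtualChassis model, illustrating
    the proposed optional `name` field used as a fallback in __str__."""

    def __init__(self, master, name=""):
        self.master = master  # name of the master device, e.g. node1.fw1.somesite.net
        self.name = name      # proposed optional "virtual" identity

    def __str__(self):
        # Prefer the virtual name when set; otherwise fall back to the master.
        return self.name or self.master


print(VirtualChassis("node1.fw1.somesite.net", "fw1.somesite.net"))
# → fw1.somesite.net
print(VirtualChassis("node1.fw1.somesite.net"))
# → node1.fw1.somesite.net
```

In the real model the fallback would live in the Django `__str__` method, with `name` declared as an optional (`blank=True`) CharField.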

adam added the status: accepted and type: feature labels 2025-12-29 16:34:10 +01:00
adam closed this issue 2025-12-29 16:34:10 +01:00

@jeremystretch commented on GitHub (Apr 12, 2018):

This virtual chassis model is intended to replicate real-world implementations, wherein the master device "is" the virtual chassis. Do you need to support an implementation where the virtual chassis has a name which differs from that of its master device?


@jsenecal commented on GitHub (Apr 16, 2018):

Yes, in our use case, the two-node virtual chassis assumes a third "virtual" identity while both units can still be reachable using specific IPs/hostnames, for instance.


@dirtycajunrice commented on GitHub (Apr 16, 2018):

To chime in:
We have it both ways actually.
Example A: two Brocade switches in a "Fabric" setup. The master device takes the name of the pair.
Example B: two Cisco switches in a "VSS" setup. Each device has its own name, but the "HSRP" identity has its own name.


@Chipa001 commented on GitHub (Apr 18, 2018):

Another example: EqualLogic SANs in a group.
Any member can be elected the master.
Each member has its own IPs, and the group has additional IPs.
Naming: 3 members means 4 names: member[1-3], group1.


@Chipa001 commented on GitHub (Apr 18, 2018):

I know it might be a separate issue, but it would also be nice to assign interfaces/IPs directly to the VC object. For us this would mainly apply to my EqualLogic SAN example.


@ghost commented on GitHub (Apr 20, 2018):

Prior to the implementation of virtual chassis in netbox, and in fact still on our pseudo-production environment of netbox, we used the chassis device bays (child/parent) model to represent our virtual chassis. This was of course to the detriment of being able to represent the physical hardware on racks. This essentially created a "Virtual Chassis" which would hold most of the logical configuration, but still allowed the physical hardware to exist within netbox for inventory purposes.

When virtual chassis was originally discussed, I really thought that a rework of chassis would be implemented to support virtual chassis. I was actually quite surprised to see the direction that was taken.

There is talk of firewalls and Active/Standby; I don't really like the idea of representing these types of relationships as virtual chassis, except when looking at the current implementation of Virtual Chassis. I think the current VC implementation is better described as an HA relationship and less like a Virtual Chassis. With that in mind, I believe VC/HA should be allowed within the Virtual world as well.


@deadflanders commented on GitHub (Jun 29, 2018):

I'd vote for this feature as well. We use Comware gear here, and we often IRF multiple chassis into a single virtual chassis. When we do this, we have a name for each physical device that is used just for inventory purposes, we then have a separate name that is used for the stack as a whole.


@mmahacek commented on GitHub (Aug 7, 2018):

The use case sounds more like a cluster than a virtual chassis. In our case, we have two sets of stacked switches, each as their own virtual chassis, that are then clustered together for fail over. I would vote for this enhancement to be a separate object. Kind of a combination of virtual chassis and virtual cluster, but for physical clusters.


@snowie-swe commented on GitHub (Nov 11, 2018):

Virtual cluster naming is also present in Checkpoint firewalls, where you can have, for example, 8 nodes in a VSX cluster running VSLS (load sharing). As clusters are normally made of physical devices that can be located in different datacenters, a virtual cluster name is more or less always given.

Each member of the cluster will have its own IP and name.
The virtual cluster will have its own IP and name; it doesn't inherit from any other member.


@paravoid commented on GitHub (Nov 30, 2018):

This virtual chassis model is intended to replicate real-world implementations, wherein the master device "is" the virtual chassis. Do you need to support an implementation where the virtual chassis has a name which differs from that of its master device?

I'm not sure how familiar you are with Juniper gear; apologies beforehand if you are and this is overly verbose!

Juniper's virtual chassis (= switch stacking) has the concept of ["global management"](https://www.juniper.net/documentation/en_US/junos/topics/concept/virtual-chassis-ex4200-global-management.html), where one assigns an IP to the "virtual management Ethernet" interface, and then SSHes to that IP for management. That IP is automatically assigned by whichever switch happens to be the master ("routing-engine" in Juniper's confusing terminology) at that time.

For all intents and purposes, a VC is "one switch", with one configuration, one hostname (as shown at the prompt), and one IP assigned to its `vme` interface. The individual switches that comprise the stack don't (typically?) have an IP of their own, and no separate configuration of their own either; the interfaces are all renamed as well (prefixed with the member's ID), making their names globally unique across the stack. You can't know which switch you've connected to (i.e. which switch is the master) until you run e.g. `show virtual-chassis`.

Other vendors, and even Juniper in some configurations I believe, may treat a switch stack in a similar fashion, where "secondary" devices are treated as "remote linecards" of a primary device.

A simplified real-world example: we have a switch stack called "asw-esams", with a hostname of "asw-esams.mgmt.esams.wmnet", comprised of two QFX5100 switches, IDs 0 and 3, devices "asw-oe10-esams" and "asw-oe13-esams" in Netbox, in racks OE10 and OE13 respectively. oe10's interfaces are xe-0/N/N and oe13's interfaces are xe-3/N/N. The configuration includes:

```
set system host-name asw-esams
set interfaces vme unit 0 family inet address 10.21.0.104/24
set routing-options static route 0.0.0.0/0 next-hop 10.21.0.1
...
set interfaces xe-0/0/0 description lvs3001
...
set interfaces xe-3/0/0 description lvs3003
...
```

It's not clear to me how one would model this in Netbox, and how one would make this association. It's certainly useful, both for humans and for automated tools (e.g. configuration management), to be able to maintain the association between stack members and the "stack hostname", as well as to be able to define the management IPv4/IPv6 of the stack as a whole in some field that is shared between the members. (Separate from this issue, and probably another can of worms deserving another issue, is the fact that one cannot represent this "global interface (naming) view" in Netbox right now either; one would have to parse interface names and offset them by VC ID to get the configured interface name.)

Hope I'm making sense!


@DanSheps commented on GitHub (Nov 30, 2018):

Just to chime in on this to give a use case, but ultimately express my middling support for this.

We physically name each device (`<room>-<#><sw#>-<4thoctet>`) however the VC is named `<room>-<#>-<4thoctet>`.

I feel this keeps the documentation in Netbox fairly structured, following our physical labelling convention. However, to get around this, I currently use the domain as the Chassis Name.

Something for people to keep in mind, Virtual Chassis is meant, in the current implementation, to replicate actual Virtual Chassis, not virtual clusters (vPC and the like).

To be honest though, I don't know if a separate name is required, as the domain is sufficient for, I would imagine, most use cases. I haven't dived too deep into our VSS setup, but there is no domain in our VC setup and you can't have one (Stackwise doesn't do it; not 100% sure about Virtual Stackwise, however).

I would like to see some support for MLAG, which would support the other Virtual implementations (vPC, etc) but I think that would be out of scope of this FR.


@paravoid commented on GitHub (Dec 8, 2018):

To be honest though, I don't know if a separate name is required, as the domain is sufficient for, I would imagine, most use cases. I haven't dived too deep into our VSS setup, but there is no domain in our VC setup and you can't have one (Stackwise doesn't do it; not 100% sure about Virtual Stackwise, however).

"Domain" is not a concept that exists in a Juniper Virtual Chassis. I suspect it's a Cisco-specific feature.

Besides the confusing terminology issue, even if one were to put the switch's name in that field, it still makes things quite awkward. There is nowhere to put the (primary) IP of the stack, nor its interfaces, and so on and so forth.


@DanSheps commented on GitHub (Dec 8, 2018):

"Domain" is not a concept that exists in a Juniper Virtual Chassis. I suspect it's a Cisco-specific feature.

It is not specific to Cisco, as I have not seen it on any of our gear, currently, with the exception of our Nexus switches which run VPC. It may be a VSS terminology but I haven't looked too much into VSS yet.

Besides the confusing terminology issue, even if one were to put the name of the switch's name in that field, it still makes things quite awkward. There is no place for one to put the (primary) IP of the stack, nor its interfaces and so on and so forth.

If you create a VME interface on each switch and mark it as "management", when you combine the switches into a VC you will only see the single interface from the master unit.

The way you model the interfaces is that the interfaces for each physical unit belong to the individual units in Netbox. The master unit is typically where you model all the "global" configuration. You can also opt to do what I do and include certain global configurations on each unit.

Netbox is not intended to represent a config (even though you can build one from the information contained in Netbox).


@jannooo commented on GitHub (Mar 5, 2019):

I'd love to see this feature, too. We have firewall clusters that consist of two servers. Both nodes are named like this:

  • node1.fwN.domain.tld
  • node2.fwN.domain.tld

Where the cluster has the name fwN.domain.tld.


@paravoid commented on GitHub (Apr 20, 2019):

This came up again recently: we are exploring DHCP option 82, with the intent to map DHCP requests, using the port they're coming from, to the device and serve them the IP recorded in Netbox. With Juniper switches this is awkward:

The "Agent Remote ID" in the DHCP request is e.g. asw-**b**-eqiad:xe-**4**/0/39, but this maps to the device asw-b**4**-eqiad and port xe-**0**/0/39 in Netbox, because asw-b-eqiad is a stack composed of multiple switches, and this is a port on its 4th member.

Basically, the real-world implementation of Juniper virtual chassis can't be represented in Netbox :( We can start exploring what it would take to fix this, but I was wondering if @jeremystretch or others had any direction or guidance to provide before we do so.
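Until such modeling exists in Netbox, the member-ID offset described above has to be undone by hand. A rough sketch of that mapping (the function name and the `members` lookup layout are hypothetical, not a NetBox API):

```python
import re


def resolve_remote_id(remote_id, members):
    """Map a Juniper DHCP option 82 Agent Remote ID such as
    'asw-b-eqiad:xe-4/0/39' to the Netbox member device and its local port.

    `members` maps VC member ID -> Netbox device name (hypothetical layout).
    """
    stack, port = remote_id.split(":")
    # Stack-wide interface names encode the member ID as the FPC number,
    # e.g. xe-4/0/39 is port xe-0/0/39 on member 4.
    m = re.match(r"(\w+)-(\d+)/(\d+)/(\d+)", port)
    prefix, member_id, pic, port_no = m.group(1), int(m.group(2)), m.group(3), m.group(4)
    device = members[member_id]
    # Each member's own interfaces are numbered from FPC 0 in Netbox,
    # so strip the member offset from the stack-wide name.
    local_port = f"{prefix}-0/{pic}/{port_no}"
    return device, local_port


members = {4: "asw-b4-eqiad"}  # illustrative mapping from the example above
print(resolve_remote_id("asw-b-eqiad:xe-4/0/39", members))
# → ('asw-b4-eqiad', 'xe-0/0/39')
```

This only works while the member-ID-to-device mapping is maintained somewhere out of band, which is exactly the gap this feature request is about.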


@mmahacek commented on GitHub (Apr 22, 2019):

@paravoid (slightly off topic) I would be interested in how you are scripting that update on your IP addresses. Are you able to share your script, possibly over at https://github.com/netbox-community/reports?


@crutcha commented on GitHub (Nov 12, 2019):

Is this PR being worked on by anyone? I'd be more than happy to do it but don't want to duplicate efforts.


@DanSheps commented on GitHub (Nov 12, 2019):

@crutcha I do not believe this is currently being actioned by anyone.

If you are willing to take on the work, I can assign this to you.


@jeremystretch commented on GitHub (Jun 18, 2020):

I'm considering bumping this up to v2.9. Adding a name to the VirtualChassis model would allow us to decouple member device assignment from the initial object creation, which would allow us to standardize the relevant views and tests, which is a core goal of #4416 (a current v2.9 milestone).


@jsenecal commented on GitHub (Jun 19, 2020):

@jeremystretch In that case I volunteer to work on this, just let me know what you had in mind exactly :)


Reference: starred/netbox#1668