Ability to link a virtual machine to a specific device in the cluster #5869

Closed
opened 2025-12-29 19:33:39 +01:00 by adam · 8 comments

Originally created by @ITJamie on GitHub (Jan 4, 2022).

Originally assigned to: @jeremystretch on GitHub.

NetBox version

3.1.3

Feature type

Data model extension

Proposed functionality

The ability to optionally link a virtual machine to a specific device within its cluster.

Use case

  • to show where a VM is running (or should be, or used to be); this can be kept up to date via the API or sync scripts
  • to document "pinned" VMs: some VMs, even though they are in a cluster, are pinned to specific hardware
  • to remove some of the need for "false" clusters (right now, showing this data requires either a custom field or a fake cluster containing just that single device)

Database changes

extension to existing models
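To make the proposed extension concrete, here is a minimal, framework-free sketch of the relationship. The names `device`, `cluster`, and `validate_vm` are illustrative assumptions of this sketch, not NetBox's actual implementation; the key idea is an optional link plus a check that the linked device belongs to the VM's cluster:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Device:
    name: str


@dataclass
class Cluster:
    name: str
    devices: list = field(default_factory=list)  # member Devices of this cluster


@dataclass
class VirtualMachine:
    name: str
    cluster: Cluster
    device: Optional[Device] = None  # proposed optional link to a host device


def validate_vm(vm: VirtualMachine) -> None:
    # If set, the linked device should be a member of the VM's own cluster.
    if vm.device is not None and vm.device not in vm.cluster.devices:
        raise ValueError(
            f"Device {vm.device.name} is not a member of cluster {vm.cluster.name}"
        )
```

In NetBox terms this would roughly correspond to a nullable foreign key on the VirtualMachine model, with the membership check enforced during model validation.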

External dependencies

No response

adam added the status: acceptedtype: feature labels 2025-12-29 19:33:39 +01:00
adam closed this issue 2025-12-29 19:33:39 +01:00

@ITJamie commented on GitHub (Jan 4, 2022):

Technically there are two "links": one connection for "compute" and one for "storage".

But I'd happily ignore the storage part; being able to link a VM to its compute device would ease some of our use cases.


@hagbarddenstore commented on GitHub (Jan 5, 2022):

@ITJamie Wouldn't a simple "Pinned to Device"-field suffice?

I'm assuming you're talking about things like Nutanix CVMs on a Nutanix cluster, for which each CVM is pinned to "its" host Device.


@ITJamie commented on GitHub (Jan 5, 2022):

If you mean a custom field for this: yes, that would work as a worst-case workaround. Updating the model would mean having an actual working link to the device running the VM.

It would also make the option much easier for people to choose and configure.


@geor-g commented on GitHub (Jan 5, 2022):

> I'm assuming you're talking about things like Nutanix CVMs on a Nutanix cluster, for which each CVM is pinned to "its" host Device.

In my experience, this feature request is quite common with respect to all kinds of clusters people are running, regardless of the specific implementation: Ganeti, OpenStack, OpenNebula, 'homegrown' stuff via corosync and pacemaker, 'the cloud', etc.

If I understand this feature request correctly, it is about specifying a 'preference' for which physical machine a virtual one should be running on by default.

It might help in situations like rebalancing clusters, or doing (live) migrations to keep virtual machines running while rebooting or powering off physical ones, while still being able, afterwards, to get the virtual machines 'back on track', ensuring they run (again) on their preferred physical hosts.

While implementing this, we could think about doing the opposite as well: having 'exclusion links' modelling relationships which don't work, because, for example, the same service, hosted on two virtual machines, shouldn't or must not run on the same physical host.


@jeremystretch commented on GitHub (Jan 5, 2022):

There's some related discussion under #4482.


@hagbarddenstore commented on GitHub (Jan 5, 2022):

> If you mean a custom field for this: yes, that would work as a worst-case workaround. Updating the model would mean having an actual working link to the device running the VM.
>
> It would also make the option much easier for people to choose and configure.

I didn't have a custom field in mind.

> I'm assuming you're talking about things like Nutanix CVMs on a Nutanix cluster, for which each CVM is pinned to "its" host Device.
>
> In my experience, this feature request is quite common with respect to all kinds of clusters people are running, regardless of the specific implementation: Ganeti, OpenStack, OpenNebula, 'homegrown' stuff via corosync and pacemaker, 'the cloud', etc.
>
> If I understand this feature request correctly, it is about specifying a 'preference' for which physical machine a virtual one should be running on by default.
>
> It might help in situations like rebalancing clusters, or doing (live) migrations to keep virtual machines running while rebooting or powering off physical ones, while still being able, afterwards, to get the virtual machines 'back on track', ensuring they run (again) on their preferred physical hosts.
>
> While implementing this, we could think about doing the opposite as well: having 'exclusion links' modelling relationships which don't work, because, for example, the same service, hosted on two virtual machines, shouldn't or must not run on the same physical host.

Not quite what I was thinking about. A Nutanix Controller VM (CVM) can only exist on the Device it's pinned to. If the Device goes down, that VM goes down as well; it isn't moved anywhere.

If you have a 4-node cluster, each node has a CVM running on it.

Another example is a VM that requires a specific Device because that Device has specific hardware attached to it, so the VM can't function on another Device.

This could be modeled fairly simply by adding a new field to the VirtualMachine model (pinned_to), whereas modelling preference and exclusion groups becomes a lot more complex.
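The complexity gap described above can be illustrated with a rough, purely hypothetical sketch: a pinned-to link is one lookup per VM, while exclusion (anti-affinity) groups require pairwise checks across every group. All names here (`pinned_to`, `check_pinned`, `check_exclusions`) are illustrative, not part of any NetBox API:

```python
from itertools import combinations

# A single "pinned to" link is one entry per VM: vm name -> device name.
pinned_to = {"cvm-a": "host1", "cvm-b": "host2"}


def check_pinned(placements: dict) -> list:
    """Return VMs running somewhere other than the device they are pinned to."""
    return [vm for vm, host in placements.items()
            if vm in pinned_to and pinned_to[vm] != host]


# Exclusion groups instead need pairwise comparisons within each group,
# so the rule set grows combinatorially with group size.
exclusion_groups = [{"web-1", "web-2"}]  # VMs that must not share a host


def check_exclusions(placements: dict) -> list:
    """Return pairs of VMs that violate an anti-affinity group."""
    violations = []
    for group in exclusion_groups:
        for a, b in combinations(sorted(group), 2):
            if placements.get(a) and placements.get(a) == placements.get(b):
                violations.append((a, b))
    return violations
```

The pinning check stays linear in the number of VMs, which is part of why a single `pinned_to` field is an attractive first step.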


@github-actions[bot] commented on GitHub (Mar 7, 2022):

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. NetBox is governed by a small group of core maintainers, which means not all opened issues may receive direct feedback. Please see our contributing guide.


@emersonfelipesp commented on GitHub (Apr 7, 2022):

On Proxmox clusters, whether using High Availability with storage sharing or not, a virtual machine is always running on a specific Device.

This issue will really help me improve the Proxmox Plugin without having to create custom models or custom fields, as I currently do.
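For a sync script like the one described, keeping such a field up to date would amount to a small PATCH against NetBox's REST API. A standard-library sketch follows; the writable `device` field is an assumption here (it depends on this feature landing), and the request is only built, not sent:

```python
import json
import urllib.request


def build_vm_patch(netbox_url: str, token: str, vm_id: int,
                   device_id: int) -> urllib.request.Request:
    """Prepare (but do not send) a PATCH linking a VM to its host device."""
    payload = json.dumps({"device": device_id}).encode()
    return urllib.request.Request(
        url=f"{netbox_url}/api/virtualization/virtual-machines/{vm_id}/",
        data=payload,
        method="PATCH",
        headers={
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
    )
```

Sending the prepared request with `urllib.request.urlopen(req)` (or the equivalent `requests.patch` call) inside the cluster sync loop would keep the VM-to-device link current.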


Reference: starred/netbox#5869