Reference related object as custom field value #5213

Closed
opened 2025-12-29 19:25:30 +01:00 by adam · 2 comments

Originally created by @jeremystretch on GitHub (Aug 20, 2021).

Originally assigned to: @jeremystretch on GitHub.

NetBox version

v2.11.11

Feature type

New functionality

Proposed functionality

Add a new type of custom field which enables referencing a related NetBox object. (This will largely replicate Django's [generic foreign key](https://docs.djangoproject.com/en/3.2/ref/contrib/contenttypes/#generic-relations) mechanism.)
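The mechanics of a generic foreign key can be illustrated without Django: the reference is stored as a (content type, object ID) pair and resolved through a lookup table, rather than through a typed foreign-key column. The sketch below is a minimal stand-alone illustration; all class, registry, and field names are hypothetical and do not reflect NetBox's actual implementation.

```python
# Stand-alone sketch of a "generic foreign key": the custom field value is a
# (content_type, object_id) pair resolved through a registry, mirroring the
# idea behind django.contrib.contenttypes. All names here are hypothetical.
from dataclasses import dataclass, field

REGISTRY: dict = {}  # content_type label -> {object_id: object}

@dataclass
class IPAddress:
    id: int
    address: str

@dataclass
class Cluster:
    id: int
    name: str
    # Custom field values stored as generic references, not typed FKs
    custom_fields: dict = field(default_factory=dict)

def register(content_type: str, obj) -> None:
    """Index an object under its content-type label."""
    REGISTRY.setdefault(content_type, {})[obj.id] = obj

def resolve(content_type: str, object_id: int):
    """Look up the referenced object by (content_type, object_id)."""
    return REGISTRY.get(content_type, {}).get(object_id)

# Usage: a cluster's custom field references an IPAddress object.
vip = IPAddress(id=42, address="192.0.2.10/24")
register("ipam.ipaddress", vip)
cluster = Cluster(id=1, name="k8s-prod",
                  custom_fields={"kubernetes_api_ip": ("ipam.ipaddress", 42)})
ct, oid = cluster.custom_fields["kubernetes_api_ip"]
print(resolve(ct, oid).address)
```

Because the reference is untyped at the schema level, any object type can be targeted without a migration per type, which is what makes this approach attractive for custom fields.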

This was originally proposed by @tyler-8 in #4408.

Use case

Provides the ability to reference one NetBox object from another where no foreign key relation exists in the database.

Database changes

We might need to extend the CustomField model, but specific implementation details are yet to be determined.

External dependencies

No response

adam added the status: accepted and type: feature labels 2025-12-29 19:25:30 +01:00
adam closed this issue 2025-12-29 19:25:30 +01:00

@horazont commented on GitHub (Oct 20, 2021):

I second this.

We are using NetBox as IPAM and DCIM for bare metal Kubernetes clusters. We have [automation in place](https://gitlab.com/yaook/metal-controller) which matches nodes in NetBox with nodes in a bare metal deployment tool (OpenStack Ironic). When a new node is discovered by the bare metal tool and it is found in NetBox and assigned to a Virtualization Cluster, we perform the following steps fully automatically:

  1. Assign a (randomized) hostname
  2. Assign IP addresses from prefixes based on the VLAN objects assigned to the interfaces
  3. Assign an FQDN based on the FQDN of the cluster and an optional infix from the VLAN or Prefix role.
  4. Generate a netplan configuration based on the assigned IP addresses
  5. Deploy the node
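The allocation steps above can be sketched roughly as follows. The helper names are hypothetical; the real logic lives in the linked metal-controller project, and this is only a simplified illustration of steps 1 to 3.

```python
# Simplified sketch of the hostname/IP/FQDN allocation steps described above.
# All function names and the first-free-address strategy are hypothetical.
from __future__ import annotations
import ipaddress
import secrets

def random_hostname(prefix: str = "node") -> str:
    # Step 1: assign a randomized hostname
    return f"{prefix}-{secrets.token_hex(4)}"

def allocate_ip(prefix: str, used: set) -> str:
    # Step 2: pick the first free host address from a prefix,
    # recording it as used to avoid double allocation
    net = ipaddress.ip_network(prefix)
    for host in net.hosts():
        if str(host) not in used:
            used.add(str(host))
            return str(host)
    raise RuntimeError(f"prefix {prefix} exhausted")

def fqdn(hostname: str, cluster_fqdn: str, infix: str = "") -> str:
    # Step 3: build the FQDN from the cluster FQDN and an optional infix
    parts = [hostname] + ([infix] if infix else []) + [cluster_fqdn]
    return ".".join(parts)

used = {"10.0.0.1"}  # addresses already allocated in the IPAM
name = random_hostname()
ip = allocate_ip("10.0.0.0/29", used)
print(fqdn(name, "cluster.example.com", infix="mgmt"), ip)
```

The key property for the VIP problem described below is the `used` set: anything the IPAM does not know about (such as a VIP tracked only as a string) is invisible to this allocation step and can be handed out twice.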

However, there are some bits of information about the Cluster which we cannot easily represent in this way. Most importantly, the virtual IP address used for the Kubernetes API endpoint (the VIP is implemented via VRRP). This is definitely a property of the Virtualization Cluster. At the same time, the IP address should be known to the IPAM to avoid double allocations during the automatic allocation step.

Our preferred solution would be a custom field which links an IP address to a cluster with custom semantics (in our case, "Kubernetes API IP").
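With the feature proposed in this issue, such a link could be expressed as a custom field value referencing an IPAddress object by its primary key, rather than as a free-form string. A hypothetical REST payload builder (the field name, object ID, and PATCH shape are illustrative assumptions, not an existing NetBox API contract):

```python
# Hypothetical sketch: build a PATCH body that sets a cluster's custom field
# to reference an IPAddress object by primary key. The custom field name and
# payload shape are assumptions for illustration only.
def build_cluster_update(custom_field: str, ip_address_id: int) -> dict:
    return {"custom_fields": {custom_field: ip_address_id}}

payload = build_cluster_update("kubernetes_api_ip", 42)
# e.g. requests.patch(f"{API}/virtualization/clusters/1/", json=payload, ...)
print(payload)
```

Because the value is an object reference rather than a string, NetBox could validate that the target IP address exists and surface the relationship in both directions.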

Other solutions we have considered:

  1. Use the configuration context of the cluster, or a custom field holding the IP address as a string: this can quickly get out of sync, and there is no guarantee that the IP address isn't assigned to something else within NetBox.
  2. Assign the IP address to one or more of the control plane nodes of the cluster and tag it as the virtual IP for the API: this pretends to be in sync with the real world (a VIP can reportedly be assigned to multiple devices), but is likely to diverge because it is done manually and not enforced elsewhere. It also requires us to assign the API IP address before the node is picked up by the automation: the automation will not auto-assign addresses if one is already assigned (which could be fixed by adding an exception for the VIP), and the IP must be defined before the first node is deployed.

Clearly, having this on the Cluster would be much nicer.

Thanks for reading. I understand that this is probably a non-trivial change to the data model :)


@jeremystretch commented on GitHub (Jan 5, 2022):

I'm going to extend the scope of this FR to include the assignment of multiple objects as well, since it appears doable with very little additional work.
