Add support for virtual machines #107
Closed · 82 comments
Originally created by @ghost on GitHub (Jun 30, 2016).
Any plans to support virtual devices? e.g. a VM or virtualized network appliance which may have multiple interfaces and be parented to a single 'Device' or a 'Platform' of devices?
@JNR8 commented on GitHub (Jul 15, 2016):
Whilst you could do this by creating a non-racked device type for virtual devices, the issue is that they are not able to be displayed in the location or rack pages. Having some additional support for displaying these types of virtual devices would be very handy.
@jvolt commented on GitHub (Jul 15, 2016):
+1.
We have a lot of VMs and I am creating them as devices on a single rack, but that does not represent reality. I would like to create VMs in a cluster or something...
@joachimtingvold commented on GitHub (Jul 22, 2016):
+1
Virtual devices would be much appreciated. Having lots of VMs and/or containers, it would be practical to have a "virtual device" with one or more interfaces, again having IPs tied to them. The "virtual device" would then be tied to a physical device (e.g. the server hosting those VMs and/or containers). This way, if you move a VM from one physical device to another, all you'd need to do is change the physical/virtual connection, and the IPAM would be updated (i.e. you only have to change one thing, and the IPAM documentation reflects that).
@ryanmerolle commented on GitHub (Jul 22, 2016):
I think this is a great idea that many commercial DCIM products also provide modeling for.
Given the state of the underlay, the documentation/features that have not been focused on much (RPC), and the foundation that needs to be set up to support VMs, this feels like it's some time off in the future, like version 4.0 or later?
VMs are something that can be moved between hypervisors as needed or on demand. Furthermore, under current devops mantras a VM is here today, gone tomorrow, for a number of reasons. I vote to add it, if that is the intent of the project on the roadmap now, but far off in the future.
Given how OpenStack and VMware handle bare metal / VM inventory very differently, there are a host of other items we will have to sort out to possibly develop a model like what is planned for the network in netbox and auditing via RPC.
@JNR8 commented on GitHub (Jul 29, 2016):
I would suggest that this could simply be achieved by creating a new device type that is associated with the location rather than the rack or container device. This way all VMs can be documented and the IPAM would be accurate. Currently IPAM is missing all virtual machines, which account for approximately 90% of all IP allocations for my company. Being able to record these IPs against a virtual device type, regardless of its underlying hypervisor type, would be a great help.
@ryanmerolle commented on GitHub (Jul 29, 2016):
Obviously, this is all opinion here, but looking at the IPAM alternatives and the roadmap for netbox, I would suggest we do not rush to add virtual devices to the current system until we better map this out.
phpIPAM (a very popular IPAM) handles IPs as objects linked to names/descriptions. Yes IPs/interfaces of a VM or server in that case are not related to one another by a common object id, but they can be correlated when searching and using a common naming standard.
It also gets tedious creating all of these VM objects/IT assets manually. Overlays, virtual hosts, and user/server endpoints need some thoughtful consideration over the next few months. It just feels like other parts of the model need to mature before this is tackled. For example, by first instituting DNS lookups, the tedious work of creating better information about IPs can be helped along by leveraging existing records (if you set those up, obviously).
@rfdrake commented on GitHub (Jul 30, 2016):
The "device must have a rack location" immediately struck me as.. limiting. As an ISP, we have devices in places that are mounted to the wall or in a cable doghouse, etc.
I was happy to find the workaround of adding a non-rack device. At that point I decided I could treat "Rack" as a code word for IDF and it fits in 99% of the cases.
The only place it falls apart for me is the aforementioned virtual machine case, where we might spread out hypervisors in a cluster to different places in a datacenter (for power, fiber, environmental redundancy) and then we would need to say this virtual machine is in ... whichever rack. I suppose there are even times when someone might spread out hypervisors to different datacenters, which means you might need to assign a device to a region rather than a site.
The reality is that devices are already tied to a site (which could be considered a location). It would be nice if Rack was considered an optional value so we could say no rack.
Alternatively, if we could generalize locations but make them nested, you could have "3rd floor IDF", or "Datacenter 2nd Floor" as a location, then as a sublocation you could say "Cage Z5", then under that you put "Rack 3"
I guess it all comes down to how people want to shoehorn their data into the places that are provided. You could name your site "Telx Datacenter - 3rd Floor" then setup racks under it as "Cage Z5 - Rack 2", with another site being "Telx Datacenter - 2nd Floor".
I don't like this workaround, but I don't know what a good solution would be.
@Gelob commented on GitHub (Aug 11, 2016):
Noting here that both https://github.com/digitalocean/netbox/issues/180#issuecomment-238247048 & https://github.com/digitalocean/netbox/issues/198 mention that a device & rack will always be considered physical. With that in mind, does anyone have any ideas for how to handle static IP assignments of VMs in netbox beyond just assigning a description to an IP address?
@akhiq commented on GitHub (Aug 11, 2016):
@Gelob , you can add a 'Virtual' form factor interface to the physical host. Although the limitation here is the VM is always expected to be on that same physical host.
@BonzTM commented on GitHub (Aug 15, 2016):
As an environment with mixed cloud and datacenter infrastructure, I'd love to see the ability to define certain devices as VM hosts or guests and link them together for ease of navigation and the ability to quickly see the hosts that hold each guest.
Also, not sure if this should be a separate request or not, but I would love to see the ability to give an AWS credential and poll/query all of the AWS infrastructure for always up-to-date IPAM, Device Management/Documentation, etc.
@joachimtingvold commented on GitHub (Aug 15, 2016):
Until this feature is implemented in some way, I've come up with the following "workaround" (albeit tedious, it serves my purpose well):
We created a manufacturer called Virtual, and two device types using that as the manufacturer; Container and VM (or whatever else you want to call them). Set the height to 0U, and set whatever other parameters you want (e.g. eth[0-1] as interfaces). We use [] ranges to create a list of device bays, using the ID of the container/VM as the name (e.g. [1000-1099] to create 100 device bays).
The "best" benefit I find from using this approach is that the IP use is tied to something trackable, and you don't have to register things twice (i.e. when adding the IP to the host, it's already documented in IPAM). There is also less need to set a description on the IP (as it's tied to a host, which should be sufficient information), which is nice if the host changes hostname or similar (as changing the hostname of the host also changes it in IPAM).
The only "limitation" so far is for VMs/containers that can automatically fail over to other physical servers (so you'd lose control of what physical device the VM/container runs on). There also might be limitations using this approach when having blade servers in blade chassis (i.e. not sure if two layers of device bays is possible).
@JNR8 commented on GitHub (Aug 15, 2016):
Could you extend this by creating a non-racked device that has container bays? If so, then you could create one that represents the cluster of hypervisor hosts and add the VMs to that instead, keeping both physical and virtual logically separated.
@joachimtingvold commented on GitHub (Aug 15, 2016):
Yes, I guess that should work as well.
@aoyawale commented on GitHub (Aug 25, 2016):
1+
@darkstar commented on GitHub (Sep 9, 2016):
This would also be immensely useful for virtualized storage systems like NetApp, for example.
My suggestion for this would be something like this:
Allow creating something like a "Cluster", which is just a container for multiple devices. Then you can create "virtual" devices (devices of a special type, or with a "virtual" checkbox checked) that are not assigned to a rack but to a "Cluster". The "Cluster" could have other properties as well; for example, it could be (logically) linked to another Cluster as a "failover partner" or something.
If these virtual devices are considered devices as well, this could even be done recursively, for example to add KVM machines or Docker containers that are themselves running in a virtual Linux VM.
@Chipa001 commented on GitHub (Oct 5, 2016):
This is the only thing stopping us from completing our migration from Racktables.
This is one thing I liked about how Racktables worked. You could flag a server object as a Hypervisor. Then you could nest VMs against servers with that flag. You could also create Virtual Cluster objects, assign servers to the cluster and then nest the VMs under the cluster.
@Gorian commented on GitHub (Oct 13, 2016):
I also was looking at this as a DCIM solution. We have a majority of our servers as virtual machines, both in Amazon and an internal cloud that we don't control. I would like to use netbox, but the lack of support for virtual machines is frustrating. I see the solutions listed here for adding them as "device bays" on the servers they are hosted on, but as this stuff is all in "clouds", there are no physical servers to track them against...
@aoyawale commented on GitHub (Oct 13, 2016):
@Gorian the way I have been doing it with my VMs for the moment is to use a site like "Virginia AWS" and a device type of "AWS VM". I also add them to a rack group so they are easier to find, but I do not rack them.
@jeremystretch commented on GitHub (Oct 25, 2016):
I'd like to reiterate that the DCIM component of NetBox relates strictly to physical infrastructure only. It has absolutely no support for virtual machines, and any attempt to shoehorn the management of VMs into it will almost certainly end in sadness and drinking. The only overlap with VMs present in the current feature set is on the IPAM side; obviously, people want the ability to track VM IP assignments as well.
It wouldn't make sense to try and extend the current DCIM component to accommodate virtual machines: VMs don't have rack positions, interface form factors, console, power, etc. The ideal approach would almost certainly be to create a new subapplication alongside the DCIM, IPAM, and other components to track virtual machines. But before we can do this, we need to decide what the data model will look like.
Defining a model to represent a VM should be fairly straightforward, as is associating it with a physical hypervisor device. We can also create a model to represent virtual interfaces which can only be associated with virtual machines. The gap left to close, then, is the assignment of IP addresses to these virtual interfaces. Currently, the IPAddress model can be assigned only to physical interfaces. There are a few ways around this, and I'm going to have to give it some more thought. For now I just wanted to let everyone know what I'm thinking regarding this feature.
@drybjed commented on GitHub (Oct 25, 2016):
@jeremystretch What do you think about the fact that virtual machines can migrate between hosts (live or not)? Perhaps the data model could contain information about which physical servers can host a given VM, with optional information about its current placement, which might or might not be true at the moment, but the additional information could point to different machines where the VM might be.
@jeremystretch commented on GitHub (Oct 25, 2016):
@drybjed I want to avoid getting into hypervisor orchestration. I think it would make sense to support some concept of a hypervisor cluster, but that's probably as far as we'll go. I don't have much experience with traditional enterprise VM management though, so I'm open to suggestions.
@Gorian commented on GitHub (Oct 25, 2016):
I guess I don't understand why you need to tie the VMs to a host. Why can't you just support the concept of free-floating VMs in some cloud? At work we have tons of internal clouds, people use VPS or EC2, and may know nothing about the physical hardware, or be able to track the host it's running on. Just let us say "this is a VM, this is its IP" etc. and call it a day. It's still super useful: for cases where we have 200+ virtual machines and no physical hardware to track, we still want something better than Excel sheets to track and manage the servers we do have.
@jeremystretch commented on GitHub (Oct 25, 2016):
I think we have two suitable options.
Invert IPAddress assignment
Currently, the IPAddress model has a nullable ForeignKey pointing to Interface which indicates IP assignment. We could flip this around, adding a ManyToManyField named ip_addresses with a unique constraint to the Interface model. IP assignments to interfaces would then be stored in a separate database table. We could also add an ip_addresses field to a future VMInterface model, allowing an IP to be assigned to either type of interface.
I'm not a huge fan of this approach, as it requires some fairly disruptive data migrations and imposes additional database hits when saving objects (each new IPAddress needs to be saved, and then its interface mapping needs to be saved). However, it is workable.
Convert interface to a GenericForeignKey
This approach would convert the existing interface field from a ForeignKey (pointing to Interface) to a GenericForeignKey indicating both interface type (i.e. Interface or VMInterface) and interface ID. This is arguably the more natural approach in Django. It's also a much simpler migration, as the content type for all existing IP assignments will be Interface.
The downside to this approach is an additional ContentType lookup when evaluating IPAddress objects, although these are typically cached so the additional overhead shouldn't be a concern.
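For illustration only, a minimal Django sketch of what the second option could look like; the class layout and field names here are assumptions, not NetBox's actual code:

```python
# Hypothetical sketch of the GenericForeignKey option; field names are illustrative.
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models


class IPAddress(models.Model):
    # Simplified stand-in for NetBox's custom IP address field
    address = models.GenericIPAddressField()

    # The assigned interface may be a dcim.Interface or a future VMInterface
    interface_type = models.ForeignKey(ContentType, on_delete=models.PROTECT, null=True, blank=True)
    interface_id = models.PositiveIntegerField(null=True, blank=True)
    interface = GenericForeignKey('interface_type', 'interface_id')
```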
@jeremystretch commented on GitHub (Oct 25, 2016):
@Gorian I'd guess that most people who track IP assignments to VMs also want to track where those VMs are physically located. Assignment of a VM to a physical hypervisor can probably be optional, though it would be difficult to organize them otherwise.
@joachimtingvold commented on GitHub (Oct 25, 2016):
Physically located as in "Site"; sure. Physically located as in a specific server and/or rack; not so sure. Some of us have VMs that are confined to one physical server, where the latter could make sense, but for those that have VMs that can roam/failover to other physical servers it doesn't make much sense. And then you have the aspect of geo-redundant VMs spanning sites as well (or even countries or continents). There are many flavors.
Personally the most important part is to have a device that you can assign IPs to, that is not a physical entity (so that you can have a 1:1 relationship between the IP and what device that uses it). How it's organized should probably be up to the user -- add it to a rack, a rack group, site or none of the above.
@darkstar commented on GitHub (Oct 25, 2016):
Physically locating a VM is important. For example you have 3 ESX clusters with 3 hosts each (prod, dev, test for example). You surely want to know if your VM runs in the prod, dev or test cluster. You don't wanna know on which of the 3 hypervisors it runs. So you'll definitely want some sort of host-grouping (i.e. create a "cluster" by combining physical hosts/nodes and assign the VM to the cluster instead of a single host)
@hwinkel commented on GitHub (Nov 2, 2016):
Seems we should have a clear distinction between pet and cattle VMs. Pet VMs might be a candidate for management in Netbox if they are configured manually. Cattle (orchestrated, managed or floating) VMs you should not have in Netbox.
Basically you should have a 'container' object of a given type and platform that can contain other devices. The container types could be a cage, rack, or hypervisor. Based on this type it is virtual, physical, racked, etc.
@Gelob commented on GitHub (Nov 2, 2016):
I just wanted to add my comments and thanks for thinking about this. From my standpoint we don't see any reason to use Netbox to track what host the VM is on, as it can move and that's a job for another tool. We just want to be able to say this IP is assigned to this VM. If we have to give it a datacenter location or cluster that's fine, because that's fairly static in our environment.
@Armadill0 commented on GitHub (Nov 4, 2016):
I totally agree with @jallakim. A VM which is bound to a single physical node is already possible today through parent/child relationships.
More interesting are VMs which are located on a cluster of virtualization nodes. And this is the only information you would need to find this VM e.g. on a vSphere interface.
The possibility to build a cluster of physical devices into a parent object, to which many virtual child devices can be assigned, would be a very good solution for our needs.
@xenuser commented on GitHub (Nov 4, 2016):
First of all, I would like to thank the author for his great work and for considering support for virtual machines.
The most common use case might be that you want to create a virtual "virtualization cluster" resource and assign it to a location/site (e.g. "DC Toronto"). Then you would like to create virtual machines and assign them to a cluster. For those who have floating VMs (able to migrate between clusters), you might want to consider a field for "secondary virtualization cluster" or something like that.
The link host <-> VM is something you rarely have, since VMs usually migrate between nodes within a cluster. But still, one should be able to configure such a thing if required (but something like this is already possible).
In addition, I would like to suggest that one can flag devices/clusters with a staging/environment tag, something like "production" or "devel". I know that this topic might be completely unrelated, but maybe it would be possible to introduce such a thing when adding support for virtualization. I definitely would like to flag my virtualization clusters with staging tags. I know that there are custom fields, but "native support" for such a thing would be great.
@Gorian commented on GitHub (Nov 4, 2016):
All of this discussion, but none of it really addresses large infrastructure where you are unable to track the physical hypervisors AT ALL. We have hundreds of servers, in AWS (they don't even tell you the city the VMs are in, let alone physical hosts - you just know the "availability region"), corporate clouds we deploy VMs on, but know nothing about the underlying infrastructure. These are still production servers that we want to track and know what their IP is, what services they are running, their hostnames, etc. as a central inventory of our infrastructure. Configuring a cluster of physical servers in the application when none exist as far as you are concerned isn't necessarily a decent solution....
@xenuser commented on GitHub (Nov 4, 2016):
@Gorian True. Well, one could still create a "fake" site, like "AWS". But I agree with you, the option to create "cloud or VM pools" without a location would be great.
@darkstar commented on GitHub (Nov 4, 2016):
@Gorian Well we already established that there are 2 use-cases ("pet" vs "cattle" VMs) so ultimately, both of them should be considered.
Right now you can already set up non-racked devices, so I wonder why that will not work for you if you explicitly don't want any sort of "locality" in "clusters" or similar.
For me personally, "pet"-type VMs are more important though, since that's what we're using
@phobiadhs commented on GitHub (Nov 7, 2016):
Simple workaround that has worked for us.
We don't want Netbox to MANAGE anything. It is an information store. That is it. At least that is how I read the planning docs... :)
Because of this, Netbox doesn't need to know WHERE a VM "physically" lives. It shouldn't care. The idea is to document the VM and any IP resources it uses. Use SCVMM or whatever other utility to find physical locations and manage the VM's directly.
With this in mind, we are doing the following to work around this issue, though I think this might just be the way to do it going forward as a rule.
Any management and host location tracking is already being done by other software, so there is no need to reinvent the wheel here.
In short, the only thing I would change in the code for this is possibly to add the manufacturer types, device role, and possibly some common VM device types as well. Beyond that, the tool does exactly what it is supposed to do in its current state.
Cluster awareness is beyond the scope of the project in my view as machines tend to move across different physical resources very frequently inside of a moderate to large sized cluster. Tracking that with Netbox is a losing battle in any deployment larger than just a few VM hosts.
As for tracking the VM hosts and what cluster or group they are a part of, we create new device roles with the Cluster or group ID as the name (or some form of it).
The above paragraph leads us to the problem of nested hosts. Hyper-V 2016, for example, includes options to have VMs act as hosts with additional VMs living beneath those virtual hosts. Once those layers start to become blended, as they seem to be trending towards, tracking the physical location of a VM is REALLY a losing battle in this project scope. Especially considering the fact that the virtual hosts are cluster-aware themselves and resource locations may not actually be within the assumed parent virtual or physical host, depending on your cluster configuration.
Just my two cents. Hope it helps someone out there.
@aoyawale commented on GitHub (Nov 7, 2016):
Thank you for this :) This is how we are doing it using OpenStack.
@candlerb commented on GitHub (Jan 9, 2017):
@jeremystretch:
True. But VMs do have a platform (i.e. OS), and primary IPv4/IPv6 addresses, and you may want to use Secrets with them, and you most certainly have Services with them (a new 1.8.0 feature). And they have interfaces, albeit virtual ones (*).
So I think there is a substantial degree of overlap between a physical machine and a virtual machine; and so it's a question of whether it's better to extend Device to allow virtual devices, or have a different type of database entity which also implements a lot of functionality of Device.
I agree that a VM shouldn't be associated with a single host, and I agree with the suggestion already made that it should optionally be associated to an abstract "cluster", from which you can infer its mobility scope.
In addition, there are multiple levels of virtualization: you can have a running VM which in turn hosts multiple lxc or lxd or docker containers. So the VM in the middle can be both a VM running within a cluster, and in turn the provider of a cluster of lxd services.
This suggests two different relationships to clusters:
There might be better terms: "Cluster server" and "Cluster user" perhaps. The point is that a VM can at the same time be running on a cluster, and also be a cluster host for lower-level containers.
(*) Indeed, when you add a Device today, you can create a "virtual" interface - although I think that's intended for loopbacks as they can't be connected.
I suppose it might also be desirable to have a VM interface attached to a VLAN. But therein lies a can of worms, because a single interface on a VM could carry multiple tagged VLANs, and then you might want to model multiple tagged VLANs on a physical port too.
As far as I can tell, a Netbox "interface" represents a physical port, and is intended for modelling layer 1 connections only. That is:
This seems like a good compromise for a simple DCIM application. It avoids the need to model your whole layer 2 topology and the whole business about subinterfaces and QinQ and so on; that all belongs in a separate application I think.
It does mean that the Netbox model may not be completely accurate: you might associate an IP address with interface eth0, but actually it's eth0.100. But nothing stops you from creating a 'virtual' interface called eth0.100.
@candlerb commented on GitHub (Jan 9, 2017):
Another commonality between Device and VM is "Tenant".
Custom fields might also come into this. Right now we're looking at adding custom fields to Device for "Primary admin contact" and "Secondary admin contact". If we don't use fake devices for VMs then we'll have to make them an attribute of the IP Address, which is icky. If Netbox adds a different (non-Device) entity for VMs we'd need to duplicate these custom fields. Similarly for "OS version", "OS support contract" etc.
But on the flip side, there are some fields you might want for physical hardware but not for VMs - e.g. hardware support contract, additional types of asset tag.
I think where I'm going with this is: if there are separate "Device" and "VM" objects, maybe they should share some sort of common "Entity" base?
@darkstar commented on GitHub (Jan 9, 2017):
Actually "host" and "guest" are IMHO better than "server" and "user", since "server" usually refers to physical boxes and "user" means a person or (login-)account.
@sts commented on GitHub (Jan 14, 2017):
Or just an operating system "Instance", which takes certain properties (parent, disk, vcpus, memory, networking, os).
@f0o commented on GitHub (Jan 21, 2017):
We've solved this by creating special sized Racks where every Unit represents the smallest possible size of a VM (using OpenStack, this would be your flavor). When we add a VM to that rack, the units it takes up are a simplified representation of the resources that VM takes up on the hypervisor.
Each Rack would be one hypervisor and the Site would be the cluster. Moving VMs around would just mean racking them on a different Rack in the same Site.
It's not optimal, but for now it's the best way we could think of to reflect the used capacity per hypervisor and bind networks to VMs.
@candlerb commented on GitHub (Jan 21, 2017):
Interesting approach.
I am about to do a migration into Netbox, and the way I've decided to do it is:
That means that the day-to-day minutiae of migrating VMs around the cluster does not need to be reflected in Netbox, nor the amount of resource used by each VM - there are separate VM management tools for that. As we don't have cross-site mobility, the fact that this "Rack" is tied to a site is not a problem.
The main benefit we get from this approach is to do with tracking administrative responsibility for each VM. We have added custom fields to "Device" to indicate who is the primary and backup responsible person for each device, and by making VMs be Devices we gain that for VMs too.
In future, we may use Services to generate Nagios configs, and may add more Device fields to track how each device is being backed up (and those things apply equally to VMs as to physical devices)
@jeremystretch commented on GitHub (Jan 23, 2017):
I'd like to take this opportunity to reiterate a disclaimer: NetBox does not currently support the concept of virtual machines. Attempting to shoehorn VMs into the DCIM component will inevitably cause problems in the future. I strongly recommend sticking with whatever orchestration system you've been using for VM management until proper support is introduced. You have been warned.
@candlerb commented on GitHub (Jan 24, 2017):
OK, let's get back to what VM support might require.
Something which is a VM, not a physical item
A VM has some of the attributes of a DCIM Device: Name, Management platform and status (Active/Offline); primary IPv4 and IPv6 addresses; Interfaces (which can be linked to IP addresses); Services; Secrets; Comments; possibly Site, Tenant and Role.
It does not have Manufacturer, Device Type, Serial Number, Asset Tag, Location, Rack/Face/Position, Mgmt/Console/Power ports.
Indication of location
Given that VMs may be mobile, we need some more abstract indication of where it's running. This is so that you can say "go to management interface of cluster <X>" or "go to EC2" to find and control it. The suggestion here is that the abstraction is a named "cluster".
The "cluster" attribute could just be a name field, but the cluster itself probably needs some attributes such as a management URL or service provider, and hence is an object in its own right. Possibly it belongs to a "Site".
The cluster itself arguably has a "platform" (e.g. an ESXi cluster, a Ganeti cluster).
Cluster structure
The cluster consists of physical machines, virtual machines (e.g. VMs providing a container service), or some mixture (*).
If you are running your own VM infrastructure rather than using a third-party cloud, it would be helpful to be able to model it, so that e.g. if machine X crashes you at least know which cluster it belongs to.
You could have a "Device Group" analogous to "Rack Group", "VLAN Group" - but that would exclude the possibility of a VM being able to host a container.
Note that even if you have no hardware, you may still create a "docker cluster" or "Kubernetes cluster" which you are running as a bunch of VMs in a third-party cloud.
I think the required relationships are:
Multi-level relationships are allowed: Device A is a member of cluster X, VM B runs on cluster X, VM B is a member of cluster Y, container C runs on cluster Y.
Circular relationships obviously not allowed. Drawing the tree is highly desirable.
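As a toy illustration of those two relationship kinds ("is a member of" vs. "runs on"), with all names invented for this example:

```python
# Illustrative only: the "member of" vs. "runs on" relationships described above.
from dataclasses import dataclass, field


@dataclass
class Device:
    name: str


@dataclass
class Cluster:
    name: str
    members: list = field(default_factory=list)    # Devices and/or VMs that provide this cluster


@dataclass
class VirtualMachine:
    name: str
    runs_on: Cluster | None = None                 # the cluster hosting this VM (or container)


# Device A is a member of cluster X; VM B runs on X and is itself a member of cluster Y;
# container C (modelled here as another VirtualMachine) runs on Y.
x = Cluster('X')
y = Cluster('Y')
a = Device('A')
x.members.append(a)
b = VirtualMachine('B', runs_on=x)
y.members.append(b)
c = VirtualMachine('C', runs_on=y)
```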
Custom fields
For some Custom Fields, it could be helpful to share them with DCIM objects, e.g. things like alert contact person, backup strategy, date of last security review - things which apply equally to both physical and VM, so are really concerned with the OS/Platform level rather than the tin.
This takes us on to:
Commonality between Device and VM
The things which are in common are primarily related to the OS/Platform and the services running on it. Maybe then Platform could become a separate instance object in its own right, with things like OS version, Services, Secrets, primary IPv4/IPv6 management addresses.
Then this can run either directly "on the metal" (linked to a Device), or as a VM (linked to a VM Cluster)
Note: this might also fix an outstanding issue, which is that the "primary" management address of a Device might either refer to the ILO/IPMI (hardware management) or the installed OS (software management). Separating Device and Platform would allow them to be recorded separately. If a Device has no independent ILO then only the Platform has a management address.
An edge case is "interfaces". A physical device has physical interfaces, which can be cabled to other physical interfaces. A virtual device has virtual interfaces, which can still be linked to IP addresses, but not cabled.
This suggests that Device interface and Platform interface are probably different, even though they duplicate some functionality.
In the "run on bare metal" case, the Platform interface and Device interface might actually be referring to the same port - e.g. a physical port on the device is directly presented as eth0 to the OS. OTOH, the device may have another physical port which is for IPMI only and not visible to the OS.
Anti-features
It might be requested to store VM parameters like vCPUs, RAM, virtual disk size etc. However such things are not stored for physical inventory either (unless you model DIMMs and CPUs as modules, or enumerate many different Device Types). In any case, these things can easily be found by going to the VM platform itself.
For people who really want to model these things in Netbox, there are Custom Fields.
In my opinion, the main value that Netbox can provide here is in integration with IPAM and with management concepts like primary IPv4/IPv6 addresses, login secrets, multiple interfaces/addresses, and Services - those things which need to be handled consistently between physical and VM infrastructure. Things which are purely VM-only can be kept in the VM platform.
(*) Example: in a ganeti cluster consisting of two physical nodes, you may have a third virtual node, for acting as a tiebreaker in the master voting process. So the ganeti cluster consists of two physical hosts and a VM.
@candlerb commented on GitHub (Jan 31, 2017):
Also, in our environment there are a number of cases where an IP address is assigned to a cluster, not an individual host. To model this accurately, it would also be useful to be able to associate an IP address to a cluster rather than to a device or a VM. (Example 1: a pair of Windows servers with Windows clustering: each machine has its own IP address but the cluster address is enabled on all of them. Example 2: a CARP address which floats between two hosts)
The available relationships then would be:
@LukeDRussell commented on GitHub (Jan 31, 2017):
VMs do have manufacturers (Microsoft, Red Hat, Cisco, Juniper, etc.) and device types (Server, Workstation, Router, Firewall, Load Balancer, etc.).
@candlerb commented on GitHub (Jan 31, 2017):
In Netbox, "Server, Workstation, Router, Firewall" (etc) are Device Roles, not Device Types. I'd certainly agree that VMs should have Roles, and those would be the same as Device roles.
Device Types are model numbers, e.g. Device Type = R210, Manufacturer = Dell.
When you talk about a VM's 'manufacturer', if you're thinking of the operating system (e.g. Red Hat / RHEL7, Microsoft / Windows Server 2012R2) then the Netbox concept here is "Platform". It's very basic at the moment - you can just select from a list of platforms that you define, and there are no additional parameters. If you want different versions of the same OS, you have to list them as different platforms.
At the moment, Platforms have no explicit information about their supplier/vendor, but it's implied (e.g. you could create one platform called "Microsoft Windows Server 2012R2" and another called "Red Hat Enterprise Linux 7")
I think it's clear that "Platform" would apply equally to device and VM: that is, you can run an OS on a physical device, or you can run an OS on a VM, and the same choices are available. And in both cases, the supplier of the Platform needs no relationship to the manufacturer of the Device.
You might also be thinking of VM appliances, e.g. Sophos UTM as a VM. I personally see this as "Platform" as well. Whether you have a physical Sophos firewall, or a Sophos VM, they both run the same UTM software and that's the "Platform".
It gets more complex when you start thinking about serial numbers and asset tags for software. But this opens a big can of worms: you could be running RHEL, and inside that be running an instance of Oracle, and also running some commercial application which uses Oracle, all inside the same VM, and all with separate licences.
There clearly is a need for tracking licence renewals and support contracts for software, but I think that belongs in a separate feature request, since it is orthogonal to the issue of physical machines versus VMs.
@darkstar commented on GitHub (Jan 31, 2017):
Actually, the manufacturer should probably be "Docker", "VMware ESXi", "Microsoft Hyper-V", etc.
At least for the hypervisors that is what the VM reports as "vendor" instead of the "physical" hardware platform/vendor, so I'd say it makes sense to use that
@candlerb commented on GitHub (Feb 1, 2017):
I would suggest that's a property of where the VM is running, not of the VM itself. In principle you could lift-shift a VM from an ESXi host to a Hyper-V host without changing it.
But I agree that this information may be visible inside the VM, e.g. via dmidecode, and in the database you want to link the VM to information about the environment it's running in. Since VMs are often mobile within a group, rather than associate them with a single host I was thinking of defining a more abstract "cluster".
For example:
With this model, when you look at vm1 you know that it's running on a vSphere-type cluster, and it could be running on either host1 or host2. If you want to know exactly which host it's running on right now, you go to cluster "foo's" management interface.
Possibly 'cluster' may be usable for linking switches in a stack. But a cluster can also be comprised of VMs. Imagine that vm2 and vm3 are docker hosts. You can then have:
@Gorian commented on GitHub (Feb 1, 2017):
I have to agree. Almost all replies here are "just associate the VM with the host", but I manage an environment that is 100% virtual. I have hundreds of VMs distributed between Production, Staging, and Development environments, spread between internal "clouds" managed by other teams that are basically black boxes to us, and servers in AWS. I can't just "add the host", but I still want to maintain an inventory of OUR servers that we use and manage.
@LukeDRussell commented on GitHub (Feb 1, 2017):
From what I see most are in agreement that a "cluster" is the best place for VMs to live. Cluster then could mean a vSphere host cluster, AWS, Kubernetes or whatever else people need it to mean.
@jeremystretch commented on GitHub (Mar 30, 2017):
In the interest of moving this along, I'd like to propose a tentative data model for a new "virtualization" app within NetBox.
Cluster
A mapping of VirtualMachines to Devices (hypervisors). A new cluster ForeignKey would be added to the Device model to allow the assignment of Devices to Clusters. (A ClusterType would be a minimal model similar to Platform, used to indicate the type of VM orchestration in use.)
VirtualMachine
Represents a VM; analogous to a Device.
VMInterface
Similar to a physical Interface.
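As a rough sketch of how those proposed models might hang together in Django (field names here are guesses for illustration, not the final schema):

```python
# Illustrative sketch of the proposed virtualization app; not the shipped schema.
from django.db import models


class ClusterType(models.Model):
    # Minimal model, analogous to Platform (e.g. "VMware vSphere", "Ganeti", "KVM")
    name = models.CharField(max_length=50, unique=True)


class Cluster(models.Model):
    name = models.CharField(max_length=100, unique=True)
    type = models.ForeignKey(ClusterType, on_delete=models.PROTECT)
    # Devices (hypervisors) would gain a nullable `cluster` ForeignKey pointing here.


class VirtualMachine(models.Model):
    cluster = models.ForeignKey(Cluster, on_delete=models.PROTECT, related_name='virtual_machines')
    name = models.CharField(max_length=64)


class VMInterface(models.Model):
    virtual_machine = models.ForeignKey(VirtualMachine, on_delete=models.CASCADE, related_name='interfaces')
    name = models.CharField(max_length=64)
```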
@Armadill0 commented on GitHub (Mar 31, 2017):
This sounds great!
But don't you think it would be better to use the interface implementation from the Device model itself and simply add some new form factors where we could select e.g. "Virtual NIC 1GE", "Virtual NIC 10GE" and so on?
Additionally, it would be great to have the possibility to predefine interfaces, like is already possible for devices via device types.
@jeremystretch commented on GitHub (Mar 31, 2017):
I've opted to introduce a new VMInterface primarily because the selection of form factors and the presence of a lag field don't make sense for virtual interfaces. I'm not sure whether it makes sense to track type/speed for virtual interfaces; I'm open to opinions.
Also, reusing the existing Interface model would mean converting its ForeignKey(Device) relation to a GenericForeignKey (to match both Device and VirtualMachine). I feel it's cleaner and less disruptive to establish this decoupling on the interface field of the IPAddress model.
We could include a VMTemplate model, similar to how DeviceTypes are used today.
@candlerb commented on GitHub (Mar 31, 2017):
Cluster
My main comment is that I would like a Cluster to be composed of Devices and/or Virtual Machines. This is because a VM may itself host other "VMs", e.g.
In each case, a Docker container or an lxc/lxd container is really another type of virtual machine, running inside a virtual machine.
I realise this makes the cluster model recursive, and that may be painful.
However I am deploying a load of lxd containers inside VMs at the moment, and here is a real-world small example:
So I'd like to record that wrn-dns1 itself is a one-node cluster, of cluster type "lxd", on which those containers run.
Without this I'm a bit stuck. I could model wrn-dc1, wrn-ipa1 and wrn-radius1 as running directly on the "wrn-vm" cluster, which does tell me which physical devices they might be on. But they are not VMs in their own right, and are not visible in the ganeti admin interface, so I have no info on where to find them, short of unstructured comments.
I imagine the same would apply to people using Netbox where their infrastructure comprises entirely VMs (e.g. inside a cloud provider), and wish to host containers inside those. For example, it would be very common to fire up a number of hosts in the cloud to form a docker cluster, and then run docker containers on those hosts.
If a recursive structure causes too many problems, I would be OK with a fixed two-level hierarchy. I propose:
This is because containers can run on hardware or on VMs, and indeed there's no reason why you can't have a container hosting environment which mixes physical servers and VMs (e.g. a Kubernetes cluster which has some physical boxes and some VMs) - a likely hybrid cloud setup.
There's no recursion there. VirtualMachine and Device both have a nullable foreign key to ContainerCluster, while only Device has a nullable FK to Cluster. (Or Device has a polymorphic foreign key to either Cluster or ContainerCluster - your choice)
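A compact Django-style sketch of that fixed two-level hierarchy as described (field names are illustrative, and the "runs on" link from VM to Cluster is an assumption added for completeness, not part of the quoted proposal):

```python
# Illustrative only: fixed two-level hierarchy (hypervisor Cluster + ContainerCluster).
from django.db import models


class Cluster(models.Model):             # hypervisor cluster, composed of Devices
    name = models.CharField(max_length=100, unique=True)


class ContainerCluster(models.Model):    # container host environment: Devices and/or VMs
    name = models.CharField(max_length=100, unique=True)


class Device(models.Model):
    name = models.CharField(max_length=64)
    cluster = models.ForeignKey(Cluster, null=True, blank=True, on_delete=models.SET_NULL)
    container_cluster = models.ForeignKey(ContainerCluster, null=True, blank=True, on_delete=models.SET_NULL)


class VirtualMachine(models.Model):
    name = models.CharField(max_length=64)
    # Assumed: the hypervisor cluster this VM runs on
    runs_on = models.ForeignKey(Cluster, null=True, blank=True, on_delete=models.SET_NULL)
    # Per the proposal: a VM may also be a member of a container cluster
    container_cluster = models.ForeignKey(ContainerCluster, null=True, blank=True, on_delete=models.SET_NULL)
```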
Asset_tag
I don't think asset_tag is particularly useful for a VM - but having it doesn't do any harm.
VM interfaces
I don't think it makes sense, since this is entirely in software. A software bridge might bring a VLAN out at one or more physical interfaces, but Netbox doesn't model layer 2 anyway.
There are different types of NIC driver (e.g. emulated Intel NIC, virtio NIC) but I don't think it makes sense to model those either - because Netbox doesn't do that for physical NICs. I consider it a run-time property of how the VM is started, just like amount of RAM/disk/CPU allocated to the VM. That's something you go to the cluster admin interface to see.
I think where Netbox adds value is in tracking the use of IP addresses and being able to record which VM or physical device that IP address is being used on.
Services
You did not mention Services, but clearly these can run on a VM identically to physical hardware.
Separation of hardware asset from Device?
There is another approach which I don't know if you've considered and discounted.
A Device can then represent either a virtual machine, or the configuration of a physical machine (at the level of IP addresses, interfaces and connections). The existing distinction between a "virtual" interface and various types of physical interface is fine for this; a "virtual" interface cannot be connected.
As well as supporting VMs, it makes a lot more sense for tracking hardware lifecycle. For example, it means you can swap out hardware in the way it happens in real life:
Netbox today can't do this. I either have to create a new device for XYZ456 (thereby copying all the IP config over from ABC123); or I have to change the serial number on device "firewall1" (thereby losing all hope of tracking the history of the valuable physical boxes)
@Armadill0 commented on GitHub (Apr 3, 2017):
@jeremystretch
I just talked to our virtualization guys and they share your opinion. It doesn't make sense for them to track the type and/or speed of the interfaces of the virtual machines.
Good point. 😄
This would be awesome!
@jeremystretch commented on GitHub (Apr 3, 2017):
@candlerb
I'd like to avoid recursion in this model, since we're only going to be dealing with a three-level hierarchy at most (hypervisor <- VM <- container). We'll need to work out exactly what the Container model should look like; how does it differ from VirtualMachine? I have no experience with container clustering myself so I'll be relying heavily on community input.
asset_tag probably isn't the best name for it; I was just copying stuff from the Device model. I figured it could be handy to have some analog to the asset tag field in case people needed to do VM inventory of some kind.
Yeah, IPAddresses and Services will both need to be converted to use GenericForeignKeys for interface/device assignment.
Yeah, it's possible using multi-table inheritance, but it really degrades the ORM and poses validation challenges. IMO the added complexity of using multiple tables to represent a single object isn't worth it. (Also, consider that when swapping out one hardware model for another, interfaces and IP assignments are likely to change anyway.)
@specialcircumstances commented on GitHub (Apr 4, 2017):
I was thinking it might be useful to have the concept of a VMNetwork
VMInterface
VMNetwork
As per comments in Issue 150, if the VLAN models were rewritten to a more generic Tagging type (or perhaps Broadcast_Domain), with VLANs being just one sort of Tagged item (perhaps set by the Tagged Group they are a member of). With that the uid could be a foreignkey to the Broadcast Domain.
Also, is there any benefit in considering whether the model can be used to support the concepts of switch and firewall clusters as well as servers? Now would be the time!
@candlerb commented on GitHub (Apr 4, 2017):
I think that if this were done, it would be better done at the level of prefix rather than VM interface; because presumably the VM network type would be shared by all VMs on that subnet, so it's not a property of the interface itself.
There is already an association from an interface to an IP address, from an IP address to a prefix, and (optionally) from a prefix to a VLAN, so as you say, this would be a property of the broadcast domain.
Note that Netbox currently doesn't do any layer 2 connectivity modelling at all. It can tell you that server X is physically cabled to switch Y, but not that the link carries VLANs A, B and C. If you have VLAN subinterfaces on a server, then they would be modelled as "virtual" interfaces, which have no connection. There is just the implicit association between the interface's IP address and the IP address's prefix; it is assumed that the prefix is trunked wherever it needs to be.
I think the same can apply to VMs. A VM has interface A with address a.a.a.a. This isn't a physical port so it isn't connected anywhere, but implicitly the prefix which carries a.a.a.a must be available to that interface somehow. How it's done is down to how the VM infrastructure is built.
Now, this doesn't work when you are trunking a VLAN to a VM, but the VM itself doesn't have an IP address on that prefix (e.g. the VM itself is acting as a switch). There's no way to tell which networks pass through that VM.
However, modelling layer 2 is a huge pain (think about what happens when you get into QinQ), and at the moment I think Netbox has struck the right balance by ignoring it.
I have raised this before. I see a "device" as a logical entity (e.g. a switch stack with a management interface, or a failover firewall pair); and a device could comprise one or more physical assets (e.g. physical switches, or physical firewalls). However this has been rejected.
@candlerb commented on GitHub (Apr 4, 2017):
For this purpose, containers are identical to virtual machines.
In fact, an lxc/lxd container is very much like a lightweight VM: it has its own IP, its own init daemon, you can SSH into it etc. These are usually long-lived and managed in the same way as a VM.
Docker containers can also be used like that; but more usually have a shorter, more dynamic lifecycle. Often they are behind NAT with port forwarding; such uses of Docker probably wouldn't be modelled in Netbox. But for those people who fire up bits of long-lived infrastructure as Docker containers, they would probably want to record them as if they were VMs.
@bpaplow commented on GitHub (Apr 15, 2017):
I have been reading the other issues and this is the closest I have seen; please point me in the right direction if I'm wrong.
Something missing from this, which I have seen in most hypervisors, is the bridge between the VM and the physical hardware. I will use VMware as the example, but most others (Xen, Proxmox, Hyper-V) have similar setups.
vnic represents the physical nic and does not get an ip assignment and connects to a vswitch or dvswitch
vswitch is only present in the hypervisor of the physical server, acts as the switch for the vms to the uplink vnics
vkernel adapters, are assigned ip addresses and are connected to a vswitch for moving traffic from the hypervisor to the physical network(OBM,iscsi,fc)
virtual network adapter, are assigned to VMs and likely receive an IP and are bound only to the vm container and a vswitch
can be tied to a device in a rack
vkernel ---- vswitch ----- vnic
virtual network adapter----vswitch----vnic
vnic-----physical nic-----physical switch
The above can be tied to one server.
Below is the cluster concept
dvswitch is a cluster concept and is a spanned switch across multiple hypervisors to act as one, uplinking like a vswitch to vnics
virtual network adapter----dvswitch----cluster
servers---cluster
@candlerb commented on GitHub (Apr 15, 2017):
You should be aware that Netbox doesn't do any modelling of layer 2, other than associating a prefix with a VLAN. It does not record, for example, which VLANs are active on which switch ports.
So in the case of Netbox and virtual machines, what you can expect is:
This models the normal case of an IP address being configured directly on a virtual network adapter. There is an implicit association from IP address to its parent prefix, and from prefix to VLAN. But actually how that VLAN is presented to that VM is a matter of internal configuration of your VM environment.
The physical servers (Devices) will have connections into your network, e.g. into switches. But Netbox doesn't model which VLANs are carried on those links, nor any internal bridges (vswitches) inside your VM environment.
In many ways, I think this is the right thing. Obviously the right VLAN must be presented somehow to the VM, otherwise it wouldn't work. But the innards of how vswitches/bridges work is very much specific to the configuration of the various platforms. The right place to see the internal configuration of those platforms is in the management interface those platforms provide.
@bpaplow commented on GitHub (Apr 15, 2017):
I agree and disagree. I agree in that a lot of the details of how the vswitches are functioning are best left in the hypervisor manager and may not be suited for netbox. I disagree in that, at its core, a vswitch is a switch connecting A to B and within the purpose of netbox.
Perhaps what I'm thinking of is out of the scope of how VMs specifically get added to netbox, but it is related and I think worth looking at as long as we are talking about how VMs will stack on the rest of the data.
I am working on writing up my specific use case; I will put it in a separate issue to see how the data model can be applied.
@candlerb commented on GitHub (Apr 15, 2017):
Absolutely. For me, the key things about VMs are:
@snazy2000 commented on GitHub (Apr 17, 2017):
Very much looking forward to this feature; it is currently holding us back from fully using Netbox. Hope we can see some work on this soon! 👍
@jeremystretch commented on GitHub (Aug 31, 2017):
For anyone still following along, I've gotten a mostly-complete implementation of this in the virtualization branch, though it's still very much a work in progress. I'm hoping to have the first v2.2 beta out sometime in early September.
@Chris-ZA commented on GitHub (Aug 31, 2017):
That is great news. When the beta drops, I'll be ready to test. I've been waiting for this since the very beginning and it's the only thing holding us back from fully implementing Netbox! 👍🏻
@jeremystretch commented on GitHub (Aug 31, 2017):
@Chris-ZA If you feel like it, you can spin up an instance of the virtualization branch and a copy of your current database to try it out. Should be mostly stable at this point.
@darkstar commented on GitHub (Sep 1, 2017):
@jeremystretch I see a problem when trying to add devices to a cluster (the virtualization hosts)
I need to select a region, but I have no regions defined (it's all on-site so far). I have a few sites that are not assigned to any region, but they cannot be selected in the drop-down list. I'd hate to have to create a "default" region and assign all my sites (and racks) to it, just to be able to select the correct devices here.
@darkstar commented on GitHub (Sep 1, 2017):
@jeremystretch Also, the "Show IPs" button on a Virtual Machine is broken. It redirects to an empty page. No error messages, just an empty page.
@darkstar commented on GitHub (Sep 1, 2017):
@jeremystretch Also, When assigning IP Addresses to VMs, the "create and add another" button redirects to a "generic" IP creation form, that is not in any way connected to the VM anymore. I.e. it doesn't have the "Interface Assignment" form where you can select to which interface to "bind" the IP. And since it seems there is no (?) easy (?) way to assign an IP Address to a VM later (you cannot change the "parent", at least I have not found out how), this is probably not how it should work.
Note that this is also the way it works on "regular" devices, so it's probably the way it is intended. It still feels a bit strange though...
@darkstar commented on GitHub (Sep 1, 2017):
@jeremystretch Can we also get the possibility of adding secrets to a VM, like it works with devices? Would be helpful for the login credentials...
Maybe also for clusters... Not sure (most of the time, the instance managing a cluster is itself a VM of some sort, for example the vCenter server, but there are probably also some cluster types that have the management in-band...)
@jeremystretch commented on GitHub (Sep 1, 2017):
@darkstar Thanks, I'll look into those bugs. But in the future, please try to avoid commenting many times in succession as it generates a lot of noise for people who have subscribed to an issue. You can edit your previous comments on GitHub to add more content.
I'm on the fence about this, and leaning toward "no." I want to maintain NetBox's focus on infrastructure and I feel like extending secrets to VMs crosses too far into systems administration territory.
@darkstar commented on GitHub (Sep 1, 2017):
Okay, I can see how this is an edge case. I'm fine with that. But maybe at least the cluster can have a secret, since it kind of belongs to the "infrastructure" side of things.
Meanwhile, the other two problems I reported were fixed in your last two commits and work fine. Thanks for that quick (and apparently easy ;-) ) fix. Really looking forward to seeing these features in netbox, as it's currently the only remaining issue that keeps me from ditching our Excel list and using netbox exclusively :)
@joachimtingvold commented on GitHub (Sep 1, 2017):
I haven't followed this discussion for a while, nor had the time to test the branch, but I have a quick question; will it be possible to move/migrate/change existing devices into these "VM" devices after upgrading? (All our VMs are implemented as devices attached to "Device Bays").
@darkstar commented on GitHub (Sep 1, 2017):
Probably not. At least I see no possible way right now. You might have success with exporting and re-importing them, maybe after slightly changing the CSV file
@jeremystretch commented on GitHub (Sep 1, 2017):
From my comment almost a year ago:
That said, it should be feasible to migrate devices in bulk via the command shell. Essentially, you'll need to:
(We're reusing the same interface object for VMs so IP assignments will all stay intact through the migration.)
Something like this should work, though obviously it needs to be fleshed out and tested:
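(A rough sketch of that kind of shell migration, assuming the v2.2 model names and a pre-created Cluster called "vm-cluster"; the device filter and field names below are placeholders to adapt to your own data.)

```python
# Run inside `./manage.py shell`. Illustrative only: adjust the filter and
# field names to however you modelled your "fake" VM devices.
from dcim.models import Device
from virtualization.models import Cluster, VirtualMachine

cluster = Cluster.objects.get(name='vm-cluster')   # assumed to exist already

for device in Device.objects.filter(device_type__model='VM'):
    vm = VirtualMachine.objects.create(
        cluster=cluster,
        name=device.name,
        platform=device.platform,
        comments=device.comments,
    )
    # Interfaces are shared between Devices and VMs here, so re-pointing them
    # carries their IP assignments across (assumed field names).
    device.interfaces.all().update(device=None, virtual_machine=vm)
    device.delete()
```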
@candlerb commented on GitHub (Sep 1, 2017):
Thank you for this.
I'd like to see VirtualMachine have "role" (which I'd be happy to have share values with DeviceRole), and, less importantly, "status" - e.g. to plan a VM and reserve an IP for it, or to track VMs which are intentionally shut down.
Well, I'd argue that is what secrets are doing for Devices today. If you're using secrets to store the root password or the SNMP community (for a server anyway), then you're using this to administer the OS which is running on the device, as opposed to the device itself.
If secrets are only being used for IPMI logins, then I'll accept that's infrastructure not system administration :-)
And as for network devices, remember that you can get virtual routers which run as VMs. Infrastructure is both hard and soft these days...
@jeremystretch commented on GitHub (Sep 1, 2017):
FYI I've opened the develop-2.2 branch in preparation for the beta later this month. The virtualization branch has been merged into that one and deleted.
@snazy2000 commented on GitHub (Sep 2, 2017):
Great work!! Will it support Custom Fields like normal devices? I think this is a must, to be able to add custom stuff.
How would IP addresses work in a VM cluster without making them duplicates? Is it even currently possible? Is it possible to assign an IP address to the Cluster of Hosts as well?
@RyanBreaker commented on GitHub (Sep 4, 2017):
A very minor bug I found when testing this: when clicking "Add Devices" with the placeholder ("-----") entry selected, the screen refreshes with an error message as expected and with the previous selections, but without any options listed in the Device window.
Expected behavior would be, of course, for the selections to be reset or for the Device window to be populated with the prior selections.
@jeremystretch commented on GitHub (Sep 12, 2017):
@RyanBreaker This should be addressed in 136d16b7fd.
@jeremystretch commented on GitHub (Sep 14, 2017):
Since the first v2.2 beta has been released, I'm going to close out this issue. 🎉
For any further bugs or related feature requests, please open a separate issue using the normal template. I will label issues with the beta tag accordingly. Thanks everyone!