Mirror of https://github.com/netbox-community/netbox.git (synced 2026-01-27 12:18:15 +01:00)

Compare commits (102 commits)
The comparison spans the following 102 commits (abbreviated SHA1 hashes):

742804ecb8, 2bf20fa501, 685e0ce00d, 6a6b0236a9, 857c70ece9, e68be6f041, 52edeb42b5, c8a8bfd84d, 9f2c4919eb, f56a470cc7,
54ccc705d0, 7e481960f9, 809d9e4697, 79c06442db, 6195fc0d11, 6523334a48, b3cde51590, 6ec296f2a7, cb4392628f, a224e5d470,
7444110c79, fc0c8a160b, 481cc52686, 4273b6e4fb, 5e08b2be37, a665b79f85, fe4de7f929, 0783d57459, 4e1e5bd8c4, b3a14e9a7b,
b725a9bcea, 5c263fac8d, 04c1619eb4, d74dbb722a, 95969c4979, 10c9954ebc, e61b2b1fc5, 46ecb0ac03, 0a0b852f2c, 1658d7ae86,
ca44cda112, 1935f8b27f, d32dba43b4, 8d0a3c8e69, f561b2d955, 8afb7d654d, 32cbc20108, be3cd2a434, ba3ca6b00d, c88dcef900,
3d1e4fde81, 1e02bb5999, bd7bcf8a0b, 1c0f3e1b81, b2b3f388b1, 110a6d11a5, 75faf7d30e, e95a9731be, 5cb5f9a963, 88aa3a4e19,
d34b9ee00e, 103730a642, 84017776ec, 34e673f7d6, 5ac6a307bf, 8c1b681391, da558de769, da1fb4f969, 9046f59b9f, 6c1f9dba52,
ea1df2b5c3, b3423e1722, bfb91fcf10, 44c62f8f44, c8c47961db, 78b0e50742, a7371c048b, f3dfa81811, 5b4793a2d5, b6660c72e1,
a6eeed4061, 239fddcac2, b27f9bf74c, 09b856bf0b, 9954c6a571, 44b24de5d0, 22927bfc76, a39522a25e, ea6c8a1a65, 546bbe5418,
5ca7f375d3, 568148a349, fedf745d25, 8823aeb9d7, dc57332988, 138231059b, 834b233c30, 72d41eac85, 0fec03ad3f, 7dc71f92d0,
f74b47ca16, 4dff20cc8c
.github/ISSUE_TEMPLATE/bug_report.yaml (vendored, 7 lines changed)

@@ -17,7 +17,7 @@ body:
        What version of NetBox are you currently running? (If you don't have access to the most
        recent NetBox release, consider testing on our [demo instance](https://demo.netbox.dev/)
        before opening a bug report to see if your issue has already been addressed.)
      placeholder: v2.11.3
      placeholder: v2.11.7
    validations:
      required: true
  - type: dropdown

@@ -39,8 +39,9 @@ body:
        reproduce this bug using the current stable release of NetBox. Begin with the
        creation of any necessary database objects and call out every operation being
        performed explicitly. If reporting a bug in the REST API, be sure to reconstruct
        the raw HTTP request(s) being made: Don't rely on a client library such as
        pynetbox."
        the raw HTTP request(s) being made: Don't rely on a client library such as
        pynetbox. Additionally, **do not rely on the demo instance** for reproducing
        suspected bugs, as its data is prone to modification or deletion at any time.
      placeholder: |
        1. Click on "create widget"
        2. Set foo to 12 and bar to G
.github/ISSUE_TEMPLATE/config.yml (vendored, 11 lines changed)

@@ -3,7 +3,10 @@ blank_issues_enabled: false
contact_links:
  - name: 📖 Contributing Policy
    url: https://github.com/netbox-community/netbox/blob/develop/CONTRIBUTING.md
    about: Please read through our contributing policy before opening an issue or pull request
  - name: 💬 Discussion Group
    url: https://groups.google.com/g/netbox-discuss
    about: Join our discussion group for assistance with installation issues and other problems
    about: "Please read through our contributing policy before opening an issue or pull request"
  - name: ❓ Discussion
    url: https://github.com/netbox-community/netbox/discussions
    about: "If you're just looking for help, try starting a discussion instead"
  - name: 💬 Community Slack
    url: https://netdev.chat/
    about: "Join #netbox on the NetDev Community Slack for assistance with installation issues and other problems"
.github/ISSUE_TEMPLATE/feature_request.yaml (vendored, 2 lines changed)

@@ -14,7 +14,7 @@ body:
    attributes:
      label: NetBox version
      description: What version of NetBox are you currently running?
      placeholder: v2.11.3
      placeholder: v2.11.7
    validations:
      required: true
  - type: dropdown
@@ -25,7 +25,7 @@ discussions.

### Slack

For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://slack.netbox.dev/).
For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://netdev.chat/).
Unfortunately, the Slack channel does not provide long-term retention of chat
history, so try to avoid it for any discussions would benefit from being
preserved for future reference.
README.md (19 lines changed)

@@ -2,8 +2,10 @@
<img src="https://raw.githubusercontent.com/netbox-community/netbox/develop/docs/netbox_logo.svg" width="400" alt="NetBox logo" />
</div>

NetBox is an IP address management (IPAM) and data center infrastructure
management (DCIM) tool. Initially conceived by the network engineering team at


NetBox is an infrastructure resource modeling (IRM) tool designed to empower
network automation. Initially conceived by the network engineering team at
[DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically
to address the needs of network and infrastructure engineers. It is intended to
function as a domain-specific source of truth for network operations.

@@ -14,16 +16,15 @@ complete list of requirements, see `requirements.txt`. The code is available [on

The complete documentation for NetBox can be found at [Read the Docs](https://netbox.readthedocs.io/en/stable/). A public demo instance is available at https://demo.netbox.dev.

| | status |
|-------------|------------|
| **master** |  |
| **develop** |  |

<div align="center">
<h4>Thank you to our sponsors!</h4>

[](https://ns1.com/)
[](https://try.digitalocean.com/developer-cloud)

[](https://metal.equinix.com/)

[](https://ns1.com/)
<br />
[](https://stellar.tech/)

</div>

@@ -31,7 +32,7 @@ The complete documentation for NetBox can be found at [Read the Docs](https://ne

### Discussion

* [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - Discussion forum hosted by GitHub; ideal for Q&A and other structured discussions
* [Slack](https://slack.netbox.dev/) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
* [Slack](https://netdev.chat/) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
* [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being replaced by GitHub discussions

### Installation
@@ -6,7 +6,7 @@ If a change is made to any of the objects returned by the query within that time

## Invalidating Cached Data

Although caching is performed automatically and rarely requires administrative intervention, NetBox provides the `invalidate` management command to force invalidation of cached results. This command can reference a specific object my its type and numeric ID:
Although caching is performed automatically and rarely requires administrative intervention, NetBox provides the `invalidate` management command to force invalidation of cached results. This command can reference a specific object by its type and numeric ID:

```no-highlight
$ python netbox/manage.py invalidate dcim.Device.34
```
@@ -24,7 +24,7 @@ Marking a field as required will force the user to provide a value for the field

The filter logic controls how values are matched when filtering objects by the custom field. Loose filtering (the default) matches on a partial value, whereas exact matching requires a complete match of the given string to a field's value. For example, exact filtering with the string "red" will only match the exact value "red", whereas loose filtering will match on the values "red", "red-orange", or "bored". Setting the filter logic to "disabled" disables filtering by the field entirely.

A custom field must be assigned to one or object types, or models, in NetBox. Once created, custom fields will automatically appear as part of these models in the web UI and REST API. Note that not all models support custom fields.
A custom field must be assigned to one or more object types, or models, in NetBox. Once created, custom fields will automatically appear as part of these models in the web UI and REST API. Note that not all models support custom fields.

### Custom Field Validation
@@ -175,7 +175,7 @@ A particular object within NetBox. Each ObjectVar must specify a particular mode
* `null_option` - A label representing a "null" or empty choice (optional)

!!! warning
    The `display_field` parameter is now deprecated, and will be removed in NetBox v2.12. All ObjectVar instances will
    The `display_field` parameter is now deprecated, and will be removed in NetBox v3.0. All ObjectVar instances will
    instead use the new standard `display` field for all serializers (introduced in NetBox v2.11).

To limit the selections available within the list, additional query parameters can be passed as the `query_params` dictionary. For example, to show only devices with an "active" status:
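A minimal sketch of such a definition (assuming the `Device` model and the standard `status` query parameter; the concrete example from the documentation is not captured in the hunk above):

```python
from dcim.models import Device
from extras.scripts import ObjectVar

# Sketch only: limits the selectable objects to devices whose status is "active".
device = ObjectVar(
    model=Device,
    query_params={
        'status': 'active'
    }
)
```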
@@ -80,7 +80,7 @@ class DeviceConnectionsReport(Report):
        self.log_success(device)
```

As you can see, reports are completely customizable. Validation logic can be as simple or as complex as needed.
As you can see, reports are completely customizable. Validation logic can be as simple or as complex as needed. Also note that the `description` attribute support markdown syntax. It will be rendered in the report list page.

!!! warning
    Reports should never alter data: If you find yourself using the `create()`, `save()`, `update()`, or `delete()` methods on objects within reports, stop and re-evaluate what you're trying to accomplish. Note that there are no safeguards against the accidental alteration or destruction of data.

@@ -93,7 +93,7 @@ The following methods are available to log results within a report:
* log_warning(object, message)
* log_failure(object, message)

The recording of one or more failure messages will automatically flag a report as failed. It is advised to log a success for each object that is evaluated so that the results will reflect how many objects are being reported on. (The inclusion of a log message is optional for successes.) Messages recorded with `log()` will appear in a report's results but are not associated with a particular object or status.
The recording of one or more failure messages will automatically flag a report as failed. It is advised to log a success for each object that is evaluated so that the results will reflect how many objects are being reported on. (The inclusion of a log message is optional for successes.) Messages recorded with `log()` will appear in a report's results but are not associated with a particular object or status. Log messages also support using markdown syntax and will be rendered on the report result page.

To perform additional tasks, such as sending an email or calling a webhook, after a report has been run, extend the `post_run()` method. The status of the report is available as `self.failed` and the results object is `self.result`.
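The `post_run()` hook is only described in prose here; a hedged sketch of how it might be extended (the endpoint URL and payload are illustrative, not part of NetBox) could look like this:

```python
import requests
from extras.reports import Report


class DeviceConnectionsReport(Report):
    # Markdown in `description` is rendered on the report list page (per #6527 above)
    description = "Validate that every device has **console and power** connections"

    def test_placeholder(self):
        pass  # validation logic goes here; never alter data inside a report

    def post_run(self):
        # Runs after the report completes; self.failed and self.result are available here.
        if self.failed:
            requests.post(
                "https://hooks.example.com/netbox-reports",  # hypothetical endpoint
                json={"failed": self.failed},
            )
```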
@@ -8,7 +8,7 @@ There are several official forums for communication among the developers and com

* [GitHub issues](https://github.com/netbox-community/netbox/issues) - All feature requests, bug reports, and other substantial changes to the code base **must** be documented in an issue.
* [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - The preferred forum for general discussion and support issues. Ideal for shaping a feature request prior to submitting an issue.
* [#netbox on NetDev Community Slack](https://slack.netbox.dev/) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
* [#netbox on NetDev Community Slack](https://netdev.chat/) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
* [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being phased out in favor of GitHub discussions.

## Governance
@@ -2,7 +2,7 @@

# What is NetBox?

NetBox is an open source web application designed to help manage and document computer networks. Initially conceived by the network engineering team at [DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically to address the needs of network and infrastructure engineers. It encompasses the following aspects of network management:
NetBox is an infrastructure resource modeling (IRM) application designed to empower network automation. Initially conceived by the network engineering team at [DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically to address the needs of network and infrastructure engineers. NetBox is made available as open source under the Apache 2 license. It encompasses the following aspects of network management:

* **IP address management (IPAM)** - IP networks and addresses, VRFs, and VLANs
* **Equipment racks** - Organized by group and site
@@ -24,7 +24,7 @@ The video below demonstrates the installation of NetBox v2.10.3 on Ubuntu 20.04
| Redis | 4.0 |

!!! note
    Python 3.7 or later will be required in NetBox v2.12. Users are strongly encouraged to install NetBox using Python 3.7 or later for new deployments.
    Python 3.7 or later will be required in NetBox v3.0. Users are strongly encouraged to install NetBox using Python 3.7 or later for new deployments.

Below is a simplified overview of the NetBox application stack for reference:
@@ -1,6 +1,6 @@
# Power Feed

A power feed represents the distribution of power from a power panel to a particular device, typically a power distribution unit (PDU). The power pot (inlet) on a device can be connected via a cable to a power feed. A power feed may optionally be assigned to a rack to allow more easily tracking the distribution of power among racks.
A power feed represents the distribution of power from a power panel to a particular device, typically a power distribution unit (PDU). The power port (inlet) on a device can be connected via a cable to a power feed. A power feed may optionally be assigned to a rack to allow more easily tracking the distribution of power among racks.

Each power feed is assigned an operational type (primary or redundant) and one of the following statuses:
@@ -1,5 +1,81 @@
# NetBox v2.11

## v2.11.7 (2021-06-16)

### Enhancements

* [#6455](https://github.com/netbox-community/netbox/issues/6455) - Permit /32 IPv4 and /128 IPv6 prefixes
* [#6493](https://github.com/netbox-community/netbox/issues/6493) - Show change log diff for non-atomic (pre-2.11) changes
* [#6564](https://github.com/netbox-community/netbox/issues/6564) - Add N connector type for pass-through ports
* [#6588](https://github.com/netbox-community/netbox/issues/6588) - Add support for webp files as front/rear device type images
* [#6589](https://github.com/netbox-community/netbox/issues/6589) - Standardize breadcrumb navigation for power panels and feeds

### Bug Fixes

* [#6553](https://github.com/netbox-community/netbox/issues/6553) - ProviderNetwork search should match on name
* [#6562](https://github.com/netbox-community/netbox/issues/6562) - Disable ordering of secrets by assigned object
* [#6563](https://github.com/netbox-community/netbox/issues/6563) - Fix filtering by location for cable connection forms
* [#6584](https://github.com/netbox-community/netbox/issues/6584) - Fix ordering of nested inventory items
* [#6602](https://github.com/netbox-community/netbox/issues/6602) - Fix deletion of devices with cables attached

---

## v2.11.6 (2021-06-04)

### Bug Fixes

* [#6544](https://github.com/netbox-community/netbox/issues/6544) - Fix migration error when upgrading with VRF(s) defined

---

## v2.11.5 (2021-06-04)

**NOTE:** This release includes a database migration that calculates and annotates prefix depth. It may impose a noticeable delay on the upgrade process: Users should anticipate roughly one minute of delay per 100 thousand prefixes being updated.

### Enhancements

* [#6087](https://github.com/netbox-community/netbox/issues/6087) - Improved prefix hierarchy rendering
* [#6487](https://github.com/netbox-community/netbox/issues/6487) - Add location filter to cable connection form
* [#6501](https://github.com/netbox-community/netbox/issues/6501) - Expose prefix depth and children on REST API serializer
* [#6527](https://github.com/netbox-community/netbox/issues/6527) - Support Markdown for report descriptions
* [#6540](https://github.com/netbox-community/netbox/issues/6540) - Add a "flat" column to the prefix table

### Bug Fixes

* [#6064](https://github.com/netbox-community/netbox/issues/6064) - Fix object permission assignments for user and group models
* [#6217](https://github.com/netbox-community/netbox/issues/6217) - Disallow passing of string values for integer custom fields
* [#6284](https://github.com/netbox-community/netbox/issues/6284) - Avoid sending redundant webhooks when adding/removing tags
* [#6492](https://github.com/netbox-community/netbox/issues/6492) - Correct tag population in post-change data resulting from REST API changes
* [#6496](https://github.com/netbox-community/netbox/issues/6496) - Fix upgrade script when Python installed in nonstandard path
* [#6502](https://github.com/netbox-community/netbox/issues/6502) - Correct permissions evaluation for running a report via the REST API
* [#6517](https://github.com/netbox-community/netbox/issues/6517) - Fix assignment of user when creating rack reservations via REST API
* [#6525](https://github.com/netbox-community/netbox/issues/6525) - Paginate related IPs table under IP address view

---

## v2.11.4 (2021-05-25)

### Enhancements

* [#5121](https://github.com/netbox-community/netbox/issues/5121) - Add content type filters for tags
* [#6358](https://github.com/netbox-community/netbox/issues/6358) - Add search field for VLAN groups
* [#6393](https://github.com/netbox-community/netbox/issues/6393) - Add `description` filter for IP addresses
* [#6400](https://github.com/netbox-community/netbox/issues/6400) - Add cyan color choice for plugin buttons
* [#6422](https://github.com/netbox-community/netbox/issues/6422) - Enable filtering users by group under admin UI
* [#6441](https://github.com/netbox-community/netbox/issues/6441) - Improve UI paginator to optimize page object count

### Bug Fixes

* [#6376](https://github.com/netbox-community/netbox/issues/6376) - Fix assignment of VLAN groups to clusters, cluster groups via REST API
* [#6398](https://github.com/netbox-community/netbox/issues/6398) - Avoid exception when deleting device connected to self via circuit
* [#6426](https://github.com/netbox-community/netbox/issues/6426) - Allow assigning virtual chassis member interfaces to LAG on VC master
* [#6438](https://github.com/netbox-community/netbox/issues/6438) - Fix missing descriptions and label for device type imports and exports
* [#6465](https://github.com/netbox-community/netbox/issues/6465) - Fix typo in installed plugins REST API endpoint
* [#6467](https://github.com/netbox-community/netbox/issues/6467) - Fix access to metrics on custom `BASE_PATH` when login is required
* [#6468](https://github.com/netbox-community/netbox/issues/6468) - Disable ordering VLAN groups list by scope object

---

## v2.11.3 (2021-05-07)

### Enhancements

@@ -70,7 +146,7 @@

## v2.11.0 (2021-04-16)

**Note:** NetBox v2.11 is the last major release that will support Python 3.6. Beginning with NetBox v2.12, Python 3.7 or later will be required.
**Note:** NetBox v2.11 is the last major release that will support Python 3.6. Beginning with NetBox v3.0, Python 3.7 or later will be required.

### Breaking Changes

@@ -128,7 +204,7 @@ Devices can now be assigned to locations (formerly known as rack groups) within

When exporting a list of objects in NetBox, users now have the option of selecting the "current view". This will render CSV output matching the current configuration of the table being viewed. For example, if you modify the sites list to display only the site name, tenant, and status, the rendered CSV will include only these columns, and they will appear in the order chosen.

The legacy static export behavior has been retained to ensure backward compatibility for dependent integrations. However, users are strongly encouraged to adapt custom export templates where needed as this functionality will be removed in v2.12.
The legacy static export behavior has been retained to ensure backward compatibility for dependent integrations. However, users are strongly encouraged to adapt custom export templates where needed as this functionality will be removed in v3.0.

#### Variable Scope Support for VLAN Groups ([#5284](https://github.com/netbox-community/netbox/issues/5284))
@@ -104,6 +104,7 @@ class ProviderNetworkFilterSet(PrimaryModelFilterSet):
        if not value.strip():
            return queryset
        return queryset.filter(
            Q(name__icontains=value) |
            Q(description__icontains=value) |
            Q(comments__icontains=value)
        ).distinct()
@@ -20,7 +20,7 @@ __all__ = (
)


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Provider(PrimaryModel):
    """
    Each Circuit belongs to a Provider. This is usually a telecommunications company or similar organization. This model

@@ -96,7 +96,7 @@ class Provider(PrimaryModel):
# Provider networks
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class ProviderNetwork(PrimaryModel):
    """
    This represents a provider network which exists outside of NetBox, the details of which are unknown or

@@ -189,7 +189,7 @@ class CircuitType(OrganizationalModel):
)


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Circuit(PrimaryModel):
    """
    A communications circuit connects two points. Each Circuit belongs to a Provider; Providers may have multiple
@@ -246,10 +246,6 @@ class RackReservationViewSet(ModelViewSet):
    serializer_class = serializers.RackReservationSerializer
    filterset_class = filtersets.RackReservationFilterSet

    # Assign user from request
    def perform_create(self, serializer):
        serializer.save(user=self.request.user)


#
# Manufacturers
@@ -924,6 +924,7 @@ class PortTypeChoices(ChoiceSet):
    TYPE_110_PUNCH = '110-punch'
    TYPE_BNC = 'bnc'
    TYPE_F = 'f'
    TYPE_N = 'n'
    TYPE_MRJ21 = 'mrj21'
    TYPE_ST = 'st'
    TYPE_SC = 'sc'

@@ -954,6 +955,7 @@ class PortTypeChoices(ChoiceSet):
        (TYPE_110_PUNCH, '110 Punch'),
        (TYPE_BNC, 'BNC'),
        (TYPE_F, 'F Connector'),
        (TYPE_N, 'N Connector'),
        (TYPE_MRJ21, 'MRJ21'),
    ),
),
@@ -2,6 +2,9 @@ from django.db.models import Q

from .choices import InterfaceTypeChoices

# Exclude SVG images (unsupported by PIL)
DEVICETYPE_IMAGE_FORMATS = 'image/bmp,image/gif,image/jpeg,image/png,image/tiff,image/webp'


#
# Racks
@@ -1172,12 +1172,11 @@ class DeviceTypeForm(BootstrapMixin, CustomFieldModelForm):
        )
        widgets = {
            'subdevice_role': StaticSelect2(),
            # Exclude SVG images (unsupported by PIL)
            'front_image': forms.ClearableFileInput(attrs={
                'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
                'accept': DEVICETYPE_IMAGE_FORMATS
            }),
            'rear_image': forms.ClearableFileInput(attrs={
                'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
                'accept': DEVICETYPE_IMAGE_FORMATS
            })
        }

@@ -1825,7 +1824,7 @@ class ConsolePortTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = ConsolePortTemplate
        fields = [
            'device_type', 'name', 'label', 'type',
            'device_type', 'name', 'label', 'type', 'description',
        ]

@@ -1834,7 +1833,7 @@ class ConsoleServerPortTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = ConsoleServerPortTemplate
        fields = [
            'device_type', 'name', 'label', 'type',
            'device_type', 'name', 'label', 'type', 'description',
        ]

@@ -1843,7 +1842,7 @@ class PowerPortTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = PowerPortTemplate
        fields = [
            'device_type', 'name', 'label', 'type', 'maximum_draw', 'allocated_draw',
            'device_type', 'name', 'label', 'type', 'maximum_draw', 'allocated_draw', 'description',
        ]

@@ -1857,7 +1856,7 @@ class PowerOutletTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = PowerOutletTemplate
        fields = [
            'device_type', 'name', 'label', 'type', 'power_port', 'feed_leg',
            'device_type', 'name', 'label', 'type', 'power_port', 'feed_leg', 'description',
        ]

@@ -1869,7 +1868,7 @@ class InterfaceTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = InterfaceTemplate
        fields = [
            'device_type', 'name', 'label', 'type', 'mgmt_only',
            'device_type', 'name', 'label', 'type', 'mgmt_only', 'description',
        ]

@@ -1886,7 +1885,7 @@ class FrontPortTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = FrontPortTemplate
        fields = [
            'device_type', 'name', 'type', 'rear_port', 'rear_port_position',
            'device_type', 'name', 'type', 'rear_port', 'rear_port_position', 'label', 'description',
        ]

@@ -1898,7 +1897,7 @@ class RearPortTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = RearPortTemplate
        fields = [
            'device_type', 'name', 'type', 'positions',
            'device_type', 'name', 'type', 'positions', 'label', 'description',
        ]

@@ -1907,7 +1906,7 @@ class DeviceBayTemplateImportForm(ComponentTemplateImportForm):
    class Meta:
        model = DeviceBayTemplate
        fields = [
            'device_type', 'name',
            'device_type', 'name', 'label', 'description',
        ]

@@ -3126,9 +3125,13 @@ class InterfaceForm(BootstrapMixin, InterfaceCommonForm, CustomFieldModelForm):
        device = Device.objects.get(pk=self.data['device']) if self.is_bound else self.instance.device

        # Restrict parent/LAG interface assignment by device
        # Restrict parent/LAG interface assignment by device/VC
        self.fields['parent'].widget.add_query_param('device_id', device.pk)
        self.fields['lag'].widget.add_query_param('device_id', device.pk)
        if device.virtual_chassis and device.virtual_chassis.master:
            # Get available LAG interfaces by VirtualChassis master
            self.fields['lag'].widget.add_query_param('device_id', device.virtual_chassis.master.pk)
        else:
            self.fields['lag'].widget.add_query_param('device_id', device.pk)

        # Limit VLAN choices by device
        self.fields['untagged_vlan'].widget.add_query_param('available_on_device', device.pk)

@@ -3919,13 +3922,23 @@ class ConnectCableToDeviceForm(BootstrapMixin, CustomFieldModelForm):
            'group_id': '$termination_b_site_group',
        }
    )
    termination_b_location = DynamicModelChoiceField(
        queryset=Location.objects.all(),
        label='Location',
        required=False,
        null_option='None',
        query_params={
            'site_id': '$termination_b_site'
        }
    )
    termination_b_rack = DynamicModelChoiceField(
        queryset=Rack.objects.all(),
        label='Rack',
        required=False,
        null_option='None',
        query_params={
            'site_id': '$termination_b_site'
            'site_id': '$termination_b_site',
            'location_id': '$termination_b_location',
        }
    )
    termination_b_device = DynamicModelChoiceField(

@@ -3934,6 +3947,7 @@ class ConnectCableToDeviceForm(BootstrapMixin, CustomFieldModelForm):
        required=False,
        query_params={
            'site_id': '$termination_b_site',
            'location_id': '$termination_b_location',
            'rack_id': '$termination_b_rack',
        }
    )
@@ -30,7 +30,7 @@ __all__ = (
# Cables
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Cable(PrimaryModel):
    """
    A physical connection between two endpoints.

@@ -211,7 +211,7 @@ class PathEndpoint(models.Model):
# Console ports
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class ConsolePort(ComponentModel, CableTermination, PathEndpoint):
    """
    A physical console port within a Device. ConsolePorts connect to ConsoleServerPorts.

@@ -254,7 +254,7 @@ class ConsolePort(ComponentModel, CableTermination, PathEndpoint):
# Console server ports
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class ConsoleServerPort(ComponentModel, CableTermination, PathEndpoint):
    """
    A physical port within a Device (typically a designated console server) which provides access to ConsolePorts.

@@ -297,7 +297,7 @@ class ConsoleServerPort(ComponentModel, CableTermination, PathEndpoint):
# Power ports
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class PowerPort(ComponentModel, CableTermination, PathEndpoint):
    """
    A physical power supply (intake) port within a Device. PowerPorts connect to PowerOutlets.

@@ -408,7 +408,7 @@ class PowerPort(ComponentModel, CableTermination, PathEndpoint):
# Power outlets
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class PowerOutlet(ComponentModel, CableTermination, PathEndpoint):
    """
    A physical power outlet (output) within a Device which provides power to a PowerPort.

@@ -512,7 +512,7 @@ class BaseInterface(models.Model):
        return self.ip_addresses.count()


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Interface(ComponentModel, BaseInterface, CableTermination, PathEndpoint):
    """
    A network interface within a Device. A physical Interface can connect to exactly one other Interface.

@@ -683,7 +683,7 @@ class Interface(ComponentModel, BaseInterface, CableTermination, PathEndpoint):
# Pass-through ports
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class FrontPort(ComponentModel, CableTermination):
    """
    A pass-through port on the front of a Device.

@@ -748,7 +748,7 @@ class FrontPort(ComponentModel, CableTermination):
        })


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class RearPort(ComponentModel, CableTermination):
    """
    A pass-through port on the rear of a Device.

@@ -801,7 +801,7 @@ class RearPort(ComponentModel, CableTermination):
# Device bays
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class DeviceBay(ComponentModel):
    """
    An empty space within a Device which can house a child device

@@ -860,7 +860,7 @@ class DeviceBay(ComponentModel):
# Inventory items
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class InventoryItem(MPTTModel, ComponentModel):
    """
    An InventoryItem represents a serialized piece of hardware within a Device, such as a line card or power supply.
@@ -75,7 +75,7 @@ class Manufacturer(OrganizationalModel):
    )


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class DeviceType(PrimaryModel):
    """
    A DeviceType represents a particular make (Manufacturer) and model of device. It specifies rack height and depth, as

@@ -183,6 +183,8 @@ class DeviceType(PrimaryModel):
            {
                'name': c.name,
                'type': c.type,
                'label': c.label,
                'description': c.description,
            }
            for c in self.consoleporttemplates.all()
        ]

@@ -191,6 +193,8 @@ class DeviceType(PrimaryModel):
            {
                'name': c.name,
                'type': c.type,
                'label': c.label,
                'description': c.description,
            }
            for c in self.consoleserverporttemplates.all()
        ]

@@ -201,6 +205,8 @@ class DeviceType(PrimaryModel):
                'type': c.type,
                'maximum_draw': c.maximum_draw,
                'allocated_draw': c.allocated_draw,
                'label': c.label,
                'description': c.description,
            }
            for c in self.powerporttemplates.all()
        ]

@@ -211,6 +217,8 @@ class DeviceType(PrimaryModel):
                'type': c.type,
                'power_port': c.power_port.name if c.power_port else None,
                'feed_leg': c.feed_leg,
                'label': c.label,
                'description': c.description,
            }
            for c in self.poweroutlettemplates.all()
        ]

@@ -220,6 +228,8 @@ class DeviceType(PrimaryModel):
                'name': c.name,
                'type': c.type,
                'mgmt_only': c.mgmt_only,
                'label': c.label,
                'description': c.description,
            }
            for c in self.interfacetemplates.all()
        ]

@@ -230,6 +240,8 @@ class DeviceType(PrimaryModel):
                'type': c.type,
                'rear_port': c.rear_port.name,
                'rear_port_position': c.rear_port_position,
                'label': c.label,
                'description': c.description,
            }
            for c in self.frontporttemplates.all()
        ]

@@ -239,6 +251,8 @@ class DeviceType(PrimaryModel):
                'name': c.name,
                'type': c.type,
                'positions': c.positions,
                'label': c.label,
                'description': c.description,
            }
            for c in self.rearporttemplates.all()
        ]

@@ -246,6 +260,8 @@ class DeviceType(PrimaryModel):
        data['device-bays'] = [
            {
                'name': c.name,
                'label': c.label,
                'description': c.description,
            }
            for c in self.devicebaytemplates.all()
        ]

@@ -452,7 +468,7 @@ class Platform(OrganizationalModel):
    )


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Device(PrimaryModel, ConfigContextModel):
    """
    A Device represents a piece of physical hardware mounted within a Rack. Each Device is assigned a DeviceType,

@@ -906,7 +922,7 @@ class Device(PrimaryModel, ConfigContextModel):
# Virtual chassis
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class VirtualChassis(PrimaryModel):
    """
    A collection of Devices which operate with a shared control plane (e.g. a switch stack).
@@ -21,7 +21,7 @@ __all__ = (
# Power
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class PowerPanel(PrimaryModel):
    """
    A distribution point for electrical power; e.g. a data center RPP.

@@ -71,7 +71,7 @@ class PowerPanel(PrimaryModel):
    )


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class PowerFeed(PrimaryModel, PathEndpoint, CableTermination):
    """
    An electrical circuit delivered from a PowerPanel.
@@ -78,7 +78,7 @@ class RackRole(OrganizationalModel):
    )


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Rack(PrimaryModel):
    """
    Devices are housed within Racks. Each rack has a defined height measured in rack units, and a front and rear face.

@@ -467,7 +467,7 @@ class Rack(PrimaryModel):
        return int(allocated_draw_total / available_power_total * 100)


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class RackReservation(PrimaryModel):
    """
    One or more reserved units within a Rack.
@@ -130,7 +130,7 @@ class SiteGroup(NestedGroupModel):
# Sites
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Site(PrimaryModel):
    """
    A Site represents a geographic location within a network; typically a building or campus. The optional facility
@@ -31,9 +31,10 @@ def rebuild_paths(obj):

    with transaction.atomic():
        for cp in cable_paths:
            invalidate_obj(cp.origin)
            cp.delete()
            create_cablepath(cp.origin)
            if cp.origin:
                invalidate_obj(cp.origin)
                create_cablepath(cp.origin)


#

@@ -145,14 +146,12 @@ def nullify_connected_endpoints(instance, **kwargs):
    # Disassociate the Cable from its termination points
    if instance.termination_a is not None:
        logger.debug(f"Nullifying termination A for cable {instance}")
        instance.termination_a.cable = None
        instance.termination_a._cable_peer = None
        instance.termination_a.save()
        model = instance.termination_a._meta.model
        model.objects.filter(pk=instance.termination_a.pk).update(_cable_peer_type=None, _cable_peer_id=None)
    if instance.termination_b is not None:
        logger.debug(f"Nullifying termination B for cable {instance}")
        instance.termination_b.cable = None
        instance.termination_b._cable_peer = None
        instance.termination_b.save()
        model = instance.termination_b._meta.model
        model.objects.filter(pk=instance.termination_b.pk).update(_cable_peer_type=None, _cable_peer_id=None)

    # Delete and retrace any dependent cable paths
    for cablepath in CablePath.objects.filter(path__contains=instance):
@@ -694,7 +694,7 @@ class InventoryItemTable(DeviceComponentTable):
    )
    cable = None  # Override DeviceComponentTable

    class Meta(DeviceComponentTable.Meta):
    class Meta(BaseTable.Meta):
        model = InventoryItem
        fields = (
            'pk', 'device', 'name', 'label', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'description',

@@ -715,7 +715,7 @@ class DeviceInventoryItemTable(InventoryItemTable):
        buttons=('edit', 'delete')
    )

    class Meta(DeviceComponentTable.Meta):
    class Meta(BaseTable.Meta):
        model = InventoryItem
        fields = (
            'pk', 'name', 'label', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'description', 'discovered',
@@ -349,40 +349,36 @@ class RackReservationTest(APIViewTestCases.APIViewTestCase):
        user = User.objects.create(username='user1', is_active=True)
        site = Site.objects.create(name='Test Site 1', slug='test-site-1')

        cls.racks = (
        racks = (
            Rack(site=site, name='Rack 1'),
            Rack(site=site, name='Rack 2'),
        )
        Rack.objects.bulk_create(cls.racks)
        Rack.objects.bulk_create(racks)

        rack_reservations = (
            RackReservation(rack=cls.racks[0], units=[1, 2, 3], user=user, description='Reservation #1'),
            RackReservation(rack=cls.racks[0], units=[4, 5, 6], user=user, description='Reservation #2'),
            RackReservation(rack=cls.racks[0], units=[7, 8, 9], user=user, description='Reservation #3'),
            RackReservation(rack=racks[0], units=[1, 2, 3], user=user, description='Reservation #1'),
            RackReservation(rack=racks[0], units=[4, 5, 6], user=user, description='Reservation #2'),
            RackReservation(rack=racks[0], units=[7, 8, 9], user=user, description='Reservation #3'),
        )
        RackReservation.objects.bulk_create(rack_reservations)

    def setUp(self):
        super().setUp()

        # We have to set creation data under setUp() because we need access to the test user.
        self.create_data = [
        cls.create_data = [
            {
                'rack': self.racks[1].pk,
                'rack': racks[1].pk,
                'units': [10, 11, 12],
                'user': self.user.pk,
                'user': user.pk,
                'description': 'Reservation #4',
            },
            {
                'rack': self.racks[1].pk,
                'rack': racks[1].pk,
                'units': [13, 14, 15],
                'user': self.user.pk,
                'user': user.pk,
                'description': 'Reservation #5',
            },
            {
                'rack': self.racks[1].pk,
                'rack': racks[1].pk,
                'units': [16, 17, 18],
                'user': self.user.pk,
                'user': user.pk,
                'description': 'Reservation #6',
            },
        ]
@@ -239,7 +239,7 @@ class ReportViewSet(ViewSet):
        Run a Report identified as "<module>.<script>" and return the pending JobResult as the result
        """
        # Check that the user has permission to run reports.
        if not request.user.has_perm('extras.run_script'):
        if not request.user.has_perm('extras.run_report'):
            raise PermissionDenied("This user does not have permission to run reports.")

        # Check that at least one RQ worker is running
@@ -7,5 +7,6 @@ EXTRAS_FEATURES = [
    'custom_links',
    'export_templates',
    'job_results',
    'tags',
    'webhooks'
]
@@ -4,6 +4,7 @@ from django.db.models.signals import m2m_changed, pre_delete, post_save

from extras.signals import _handle_changed_object, _handle_deleted_object
from utilities.utils import curry
from .webhooks import flush_webhooks


@contextmanager

@@ -14,9 +15,11 @@ def change_logging(request):

    :param request: WSGIRequest object with a unique `id` set
    """
    webhook_queue = []

    # Curry signals receivers to pass the current request
    handle_changed_object = curry(_handle_changed_object, request)
    handle_deleted_object = curry(_handle_deleted_object, request)
    handle_changed_object = curry(_handle_changed_object, request, webhook_queue)
    handle_deleted_object = curry(_handle_deleted_object, request, webhook_queue)

    # Connect our receivers to the post_save and post_delete signals.
    post_save.connect(handle_changed_object, dispatch_uid='handle_changed_object')

@@ -30,3 +33,7 @@ def change_logging(request):
    post_save.disconnect(handle_changed_object, dispatch_uid='handle_changed_object')
    m2m_changed.disconnect(handle_changed_object, dispatch_uid='handle_changed_object')
    pre_delete.disconnect(handle_deleted_object, dispatch_uid='handle_deleted_object')

    # Flush queued webhooks to RQ
    flush_webhooks(webhook_queue)
    del webhook_queue
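As a rough illustration of the currying pattern used above, `utilities.utils.curry` behaves like `functools.partial` here: the request and the shared webhook queue are bound up front, and Django's signal dispatcher supplies the remaining arguments when the signal fires. A self-contained sketch (a plain dict stands in for the WSGIRequest):

```python
from functools import partial

# Illustrative only: every invocation of the curried receiver appends to the same
# per-request queue, which change_logging() flushes to RQ when the context exits.
def _handle_changed_object(request, webhook_queue, sender, instance, **kwargs):
    webhook_queue.append({'request_id': request['id'], 'object': instance})

request = {'id': 'abc123'}   # stand-in for the WSGIRequest with a unique `id`
webhook_queue = []
handle_changed_object = partial(_handle_changed_object, request, webhook_queue)

handle_changed_object(sender=None, instance='site-1')
assert webhook_queue[0]['object'] == 'site-1'
```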
@@ -6,7 +6,7 @@ from django.db.models import Q

from dcim.models import DeviceRole, DeviceType, Platform, Region, Site, SiteGroup
from netbox.filtersets import BaseFilterSet, ChangeLoggedModelFilterSet
from tenancy.models import Tenant, TenantGroup
from utilities.filters import ContentTypeFilter
from utilities.filters import ContentTypeFilter, MultiValueCharFilter, MultiValueNumberFilter
from virtualization.models import Cluster, ClusterGroup
from .choices import *
from .models import *

@@ -114,6 +114,12 @@ class TagFilterSet(ChangeLoggedModelFilterSet):
        method='search',
        label='Search',
    )
    content_type = MultiValueCharFilter(
        method='_content_type'
    )
    content_type_id = MultiValueNumberFilter(
        method='_content_type_id'
    )

    class Meta:
        model = Tag

@@ -127,6 +133,32 @@ class TagFilterSet(ChangeLoggedModelFilterSet):
            Q(slug__icontains=value)
        )

    def _content_type(self, queryset, name, values):
        ct_filter = Q()

        # Compile list of app_label & model pairings
        for value in values:
            try:
                app_label, model = value.lower().split('.')
                ct_filter |= Q(
                    app_label=app_label,
                    model=model
                )
            except ValueError:
                pass

        # Get ContentType instances
        content_types = ContentType.objects.filter(ct_filter)

        return queryset.filter(extras_taggeditem_items__content_type__in=content_types).distinct()

    def _content_type_id(self, queryset, name, values):

        # Get ContentType instances
        content_types = ContentType.objects.filter(pk__in=values)

        return queryset.filter(extras_taggeditem_items__content_type__in=content_types).distinct()


class ConfigContextFilterSet(ChangeLoggedModelFilterSet):
    q = django_filters.CharFilter(
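Together with the form change below, these filters let tags be queried by the type of object they are applied to. A small sketch of exercising them directly, mirroring the test case added further down (running it naturally requires a configured NetBox environment):

```python
from django.contrib.contenttypes.models import ContentType
from dcim.models import Site
from extras.filtersets import TagFilterSet
from extras.models import Tag

# Tags applied to sites or providers, matched by "app_label.model" string...
site_provider_tags = TagFilterSet(
    {'content_type': ['dcim.site', 'circuits.provider']},
    queryset=Tag.objects.all(),
).qs

# ...or by ContentType primary key, as the new test_content_type() test does.
site_ct = ContentType.objects.get_for_model(Site).pk
tags_by_ct_id = TagFilterSet({'content_type_id': [site_ct]}, queryset=Tag.objects.all()).qs
```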
@@ -8,12 +8,13 @@ from dcim.models import DeviceRole, DeviceType, Platform, Region, Site, SiteGrou
from tenancy.models import Tenant, TenantGroup
from utilities.forms import (
    add_blank_choice, APISelectMultiple, BootstrapMixin, BulkEditForm, BulkEditNullBooleanSelect, ColorSelect,
    CommentField, CSVModelForm, DateTimePicker, DynamicModelMultipleChoiceField, JSONField, SlugField, StaticSelect2,
    BOOLEAN_WITH_BLANK_CHOICES,
    CommentField, ContentTypeMultipleChoiceField, CSVModelForm, DateTimePicker, DynamicModelMultipleChoiceField,
    JSONField, SlugField, StaticSelect2, BOOLEAN_WITH_BLANK_CHOICES,
)
from virtualization.models import Cluster, ClusterGroup
from .choices import *
from .models import ConfigContext, CustomField, ImageAttachment, JournalEntry, ObjectChange, Tag
from .utils import FeatureQuery


#

@@ -180,6 +181,11 @@ class TagFilterForm(BootstrapMixin, forms.Form):
        required=False,
        label=_('Search')
    )
    content_type_id = ContentTypeMultipleChoiceField(
        queryset=ContentType.objects.filter(FeatureQuery('tags').get_query()),
        required=False,
        label=_('Tagged object type')
    )


class TagBulkEditForm(BootstrapMixin, BulkEditForm):
@@ -286,9 +286,7 @@ class CustomField(BigIDModel):

        # Validate integer
        if self.type == CustomFieldTypeChoices.TYPE_INTEGER:
            try:
                int(value)
            except ValueError:
            if type(value) is not int:
                raise ValidationError("Value must be an integer.")
            if self.validation_minimum is not None and value < self.validation_minimum:
                raise ValidationError(f"Value must be at least {self.validation_minimum}")
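The practical effect of this change (#6217 in the changelog above) is that string representations of integers are no longer coerced. A simplified, standalone sketch of the old versus new check:

```python
# Old behaviour: int("5") succeeds, so the string "5" slipped through validation.
def validate_old(value):
    try:
        int(value)
    except ValueError:
        raise ValueError("Value must be an integer.")

# New behaviour: only true int instances are accepted.
def validate_new(value):
    if type(value) is not int:
        raise ValueError("Value must be an integer.")

validate_old("5")       # passes (undesired)
validate_new(5)         # passes
try:
    validate_new("5")   # now rejected
except ValueError:
    pass
```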
@@ -42,7 +42,7 @@ class InstalledPluginsAPIView(APIView):
            'author': plugin_app_config.author,
            'author_email': plugin_app_config.author_email,
            'description': plugin_app_config.description,
            'verison': plugin_app_config.version
            'version': plugin_app_config.version
        }

    def get(self, request, format=None):
@@ -188,10 +188,10 @@ class ObjectVar(ScriptVariable):

    def __init__(self, model, query_params=None, null_option=None, *args, **kwargs):

        # TODO: Remove display_field in v2.12
        # TODO: Remove display_field in v3.0
        if 'display_field' in kwargs:
            warnings.warn(
                "The 'display_field' parameter has been deprecated, and will be removed in NetBox v2.12. Object "
                "The 'display_field' parameter has been deprecated, and will be removed in NetBox v3.0. Object "
                "variables will now reference the 'display' attribute available on all model serializers by default."
            )
        display_field = kwargs.pop('display_field', 'display')
@@ -12,17 +12,27 @@ from prometheus_client import Counter

from .choices import ObjectChangeActionChoices
from .models import CustomField, ObjectChange
from .webhooks import enqueue_webhooks
from .webhooks import enqueue_object, get_snapshots, serialize_for_webhook


#
# Change logging/webhooks
#

def _handle_changed_object(request, sender, instance, **kwargs):
def _handle_changed_object(request, webhook_queue, sender, instance, **kwargs):
    """
    Fires when an object is created or updated.
    """
    def is_same_object(instance, webhook_data):
        return (
            ContentType.objects.get_for_model(instance) == webhook_data['content_type'] and
            instance.pk == webhook_data['object_id'] and
            request.id == webhook_data['request_id']
        )

    if not hasattr(instance, 'to_objectchange'):
        return

    m2m_changed = False

    # Determine the type of change being made

@@ -53,8 +63,13 @@ def _handle_changed_object(request, sender, instance, **kwargs):
        objectchange.request_id = request.id
        objectchange.save()

    # Enqueue webhooks
    enqueue_webhooks(instance, request.user, request.id, action)
    # If this is an M2M change, update the previously queued webhook (from post_save)
    if m2m_changed and webhook_queue and is_same_object(instance, webhook_queue[-1]):
        instance.refresh_from_db()  # Ensure that we're working with fresh M2M assignments
        webhook_queue[-1]['data'] = serialize_for_webhook(instance)
        webhook_queue[-1]['snapshots']['postchange'] = get_snapshots(instance, action)['postchange']
    else:
        enqueue_object(webhook_queue, instance, request.user, request.id, action)

    # Increment metric counters
    if action == ObjectChangeActionChoices.ACTION_CREATE:

@@ -68,10 +83,13 @@ def _handle_changed_object(request, sender, instance, **kwargs):
        ObjectChange.objects.filter(time__lt=cutoff)._raw_delete(using=DEFAULT_DB_ALIAS)


def _handle_deleted_object(request, sender, instance, **kwargs):
def _handle_deleted_object(request, webhook_queue, sender, instance, **kwargs):
    """
    Fires when an object is deleted.
    """
    if not hasattr(instance, 'to_objectchange'):
        return

    # Record an ObjectChange if applicable
    if hasattr(instance, 'to_objectchange'):
        objectchange = instance.to_objectchange(ObjectChangeActionChoices.ACTION_DELETE)

@@ -80,7 +98,7 @@ def _handle_deleted_object(request, sender, instance, **kwargs):
        objectchange.save()

    # Enqueue webhooks
    enqueue_webhooks(instance, request.user, request.id, ObjectChangeActionChoices.ACTION_DELETE)
    enqueue_object(webhook_queue, instance, request.user, request.id, ObjectChangeActionChoices.ACTION_DELETE)

    # Increment metric counters
    model_deletes.labels(instance._meta.model_name).inc()
@@ -5,6 +5,7 @@ from django.contrib.auth.models import User
from django.contrib.contenttypes.models import ContentType
from django.test import TestCase

from circuits.models import Provider
from dcim.models import DeviceRole, DeviceType, Manufacturer, Platform, Rack, Region, Site, SiteGroup
from extras.choices import JournalEntryKindChoices, ObjectChangeActionChoices
from extras.filtersets import *

@@ -537,6 +538,13 @@ class TagTestCase(TestCase, ChangeLoggedFilterSetTests):
        )
        Tag.objects.bulk_create(tags)

        # Apply some tags so we can filter by content type
        site = Site.objects.create(name='Site 1', slug='site-1')
        provider = Provider.objects.create(name='Provider 1', slug='provider-1')

        site.tags.set(tags[0])
        provider.tags.set(tags[1])

    def test_name(self):
        params = {'name': ['Tag 1', 'Tag 2']}
        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)

@@ -549,6 +557,14 @@ class TagTestCase(TestCase, ChangeLoggedFilterSetTests):
        params = {'color': ['ff0000', '00ff00']}
        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)

    def test_content_type(self):
        params = {'content_type': ['dcim.site', 'circuits.provider']}
        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
        site_ct = ContentType.objects.get_for_model(Site).pk
        provider_ct = ContentType.objects.get_for_model(Provider).pk
        params = {'content_type_id': [site_ct, provider_ct]}
        self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)


class ObjectChangeTestCase(TestCase, BaseFilterSetTests):
    queryset = ObjectChange.objects.all()
@@ -11,8 +11,8 @@ from rest_framework import status
|
||||
|
||||
from dcim.models import Site
|
||||
from extras.choices import ObjectChangeActionChoices
|
||||
from extras.models import Webhook
|
||||
from extras.webhooks import enqueue_webhooks, generate_signature
|
||||
from extras.models import Tag, Webhook
|
||||
from extras.webhooks import enqueue_object, flush_webhooks, generate_signature
|
||||
from extras.webhooks_worker import process_webhook
|
||||
from utilities.testing import APITestCase
|
||||
|
||||
@@ -20,11 +20,10 @@ from utilities.testing import APITestCase
|
||||
class WebhookTest(APITestCase):
|
||||
|
||||
def setUp(self):
|
||||
|
||||
super().setUp()
|
||||
|
||||
self.queue = django_rq.get_queue('default')
|
||||
self.queue.empty() # Begin each test with an empty queue
|
||||
self.queue.empty()
|
||||
|
||||
@classmethod
|
||||
def setUpTestData(cls):
|
||||
@@ -34,38 +33,104 @@ class WebhookTest(APITestCase):
|
||||
DUMMY_SECRET = "LOOKATMEIMASECRETSTRING"
|
||||
|
||||
webhooks = Webhook.objects.bulk_create((
|
||||
Webhook(name='Site Create Webhook', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers='X-Foo: Bar'),
|
||||
Webhook(name='Site Update Webhook', type_update=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
|
||||
Webhook(name='Site Delete Webhook', type_delete=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
|
||||
Webhook(name='Webhook 1', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers='X-Foo: Bar'),
|
||||
Webhook(name='Webhook 2', type_update=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
|
||||
Webhook(name='Webhook 3', type_delete=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
|
||||
))
|
||||
for webhook in webhooks:
|
||||
webhook.content_types.set([site_ct])
|
||||
|
||||
Tag.objects.bulk_create((
|
||||
Tag(name='Foo', slug='foo'),
|
||||
Tag(name='Bar', slug='bar'),
|
||||
Tag(name='Baz', slug='baz'),
|
||||
))
|
||||
|
||||
def test_enqueue_webhook_create(self):
|
||||
# Create an object via the REST API
|
||||
data = {
|
||||
'name': 'Test Site',
|
||||
'slug': 'test-site',
|
||||
'name': 'Site 1',
|
||||
'slug': 'site-1',
|
||||
'tags': [
|
||||
{'name': 'Foo'},
|
||||
{'name': 'Bar'},
|
||||
]
|
||||
}
|
||||
url = reverse('dcim-api:site-list')
|
||||
self.add_permissions('dcim.add_site')
|
||||
response = self.client.post(url, data, format='json', **self.header)
|
||||
self.assertHttpStatus(response, status.HTTP_201_CREATED)
|
||||
self.assertEqual(Site.objects.count(), 1)
|
||||
self.assertEqual(Site.objects.first().tags.count(), 2)
|
||||
|
||||
# Verify that a job was queued for the object creation webhook
|
||||
self.assertEqual(self.queue.count, 1)
|
||||
job = self.queue.jobs[0]
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_create=True))
|
||||
self.assertEqual(job.kwargs['data']['id'], response.data['id'])
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_CREATE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], response.data['id'])
|
||||
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data['tags']))
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], 'Site 1')
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Bar', 'Foo'])
|
||||
|
||||
def test_enqueue_webhook_bulk_create(self):
|
||||
# Create multiple objects via the REST API
|
||||
data = [
|
||||
{
|
||||
'name': 'Site 1',
|
||||
'slug': 'site-1',
|
||||
'tags': [
|
||||
{'name': 'Foo'},
|
||||
{'name': 'Bar'},
|
||||
]
|
||||
},
|
||||
{
|
||||
'name': 'Site 2',
|
||||
'slug': 'site-2',
|
||||
'tags': [
|
||||
{'name': 'Foo'},
|
||||
{'name': 'Bar'},
|
||||
]
|
||||
},
|
||||
{
|
||||
'name': 'Site 3',
|
||||
'slug': 'site-3',
|
||||
'tags': [
|
||||
{'name': 'Foo'},
|
||||
{'name': 'Bar'},
|
||||
]
|
||||
},
|
||||
]
|
||||
url = reverse('dcim-api:site-list')
|
||||
self.add_permissions('dcim.add_site')
|
||||
response = self.client.post(url, data, format='json', **self.header)
|
||||
self.assertHttpStatus(response, status.HTTP_201_CREATED)
|
||||
self.assertEqual(Site.objects.count(), 3)
|
||||
self.assertEqual(Site.objects.first().tags.count(), 2)
|
||||
|
||||
# Verify that a webhook was queued for each object
|
||||
self.assertEqual(self.queue.count, 3)
|
||||
for i, job in enumerate(self.queue.jobs):
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_create=True))
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_CREATE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], response.data[i]['id'])
|
||||
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data[i]['tags']))
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], response.data[i]['name'])
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Bar', 'Foo'])
|
||||
|
||||
def test_enqueue_webhook_update(self):
|
||||
# Update an object via the REST API
|
||||
site = Site.objects.create(name='Site 1', slug='site-1')
|
||||
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
|
||||
|
||||
# Update an object via the REST API
|
||||
data = {
|
||||
'name': 'Site X',
|
||||
'comments': 'Updated the site',
|
||||
'tags': [
|
||||
{'name': 'Baz'}
|
||||
]
|
||||
}
|
||||
url = reverse('dcim-api:site-detail', kwargs={'pk': site.pk})
|
||||
self.add_permissions('dcim.change_site')
|
||||
@@ -76,13 +141,72 @@ class WebhookTest(APITestCase):
|
||||
self.assertEqual(self.queue.count, 1)
|
||||
job = self.queue.jobs[0]
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_update=True))
|
||||
self.assertEqual(job.kwargs['data']['id'], site.pk)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_UPDATE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], site.pk)
|
||||
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data['tags']))
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], 'Site 1')
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], 'Site X')
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Baz'])
|
||||
|
||||
def test_enqueue_webhook_bulk_update(self):
|
||||
sites = (
|
||||
Site(name='Site 1', slug='site-1'),
|
||||
Site(name='Site 2', slug='site-2'),
|
||||
Site(name='Site 3', slug='site-3'),
|
||||
)
|
||||
Site.objects.bulk_create(sites)
|
||||
for site in sites:
|
||||
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
|
||||
|
||||
# Update three objects via the REST API
|
||||
data = [
|
||||
{
|
||||
'id': sites[0].pk,
|
||||
'name': 'Site X',
|
||||
'tags': [
|
||||
{'name': 'Baz'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'id': sites[1].pk,
|
||||
'name': 'Site Y',
|
||||
'tags': [
|
||||
{'name': 'Baz'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'id': sites[2].pk,
|
||||
'name': 'Site Z',
|
||||
'tags': [
|
||||
{'name': 'Baz'}
|
||||
]
|
||||
},
|
||||
]
|
||||
url = reverse('dcim-api:site-list')
|
||||
self.add_permissions('dcim.change_site')
|
||||
response = self.client.patch(url, data, format='json', **self.header)
|
||||
self.assertHttpStatus(response, status.HTTP_200_OK)
|
||||
|
||||
# Verify that a job was queued for the object update webhook
|
||||
self.assertEqual(self.queue.count, 3)
|
||||
for i, job in enumerate(self.queue.jobs):
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_update=True))
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_UPDATE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], data[i]['id'])
|
||||
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data[i]['tags']))
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], sites[i].name)
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], response.data[i]['name'])
|
||||
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Baz'])
|
||||
|
||||
def test_enqueue_webhook_delete(self):
|
||||
# Delete an object via the REST API
|
||||
site = Site.objects.create(name='Site 1', slug='site-1')
|
||||
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
|
||||
|
||||
# Delete an object via the REST API
|
||||
url = reverse('dcim-api:site-detail', kwargs={'pk': site.pk})
|
||||
self.add_permissions('dcim.delete_site')
|
||||
response = self.client.delete(url, **self.header)
|
||||
@@ -92,9 +216,40 @@ class WebhookTest(APITestCase):
|
||||
self.assertEqual(self.queue.count, 1)
|
||||
job = self.queue.jobs[0]
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_delete=True))
|
||||
self.assertEqual(job.kwargs['data']['id'], site.pk)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_DELETE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], site.pk)
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], 'Site 1')
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
|
||||
|
||||
def test_enqueue_webhook_bulk_delete(self):
|
||||
sites = (
|
||||
Site(name='Site 1', slug='site-1'),
|
||||
Site(name='Site 2', slug='site-2'),
|
||||
Site(name='Site 3', slug='site-3'),
|
||||
)
|
||||
Site.objects.bulk_create(sites)
|
||||
for site in sites:
|
||||
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
|
||||
|
||||
# Delete three objects via the REST API
|
||||
data = [
|
||||
{'id': site.pk} for site in sites
|
||||
]
|
||||
url = reverse('dcim-api:site-list')
|
||||
self.add_permissions('dcim.delete_site')
|
||||
response = self.client.delete(url, data, format='json', **self.header)
|
||||
self.assertHttpStatus(response, status.HTTP_204_NO_CONTENT)
|
||||
|
||||
# Verify that a job was queued for the object update webhook
|
||||
self.assertEqual(self.queue.count, 3)
|
||||
for i, job in enumerate(self.queue.jobs):
|
||||
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_delete=True))
|
||||
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_DELETE)
|
||||
self.assertEqual(job.kwargs['model_name'], 'site')
|
||||
self.assertEqual(job.kwargs['data']['id'], sites[i].pk)
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], sites[i].name)
|
||||
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
|
||||
|
||||
def test_webhooks_worker(self):
|
||||
|
||||
@@ -125,13 +280,16 @@ class WebhookTest(APITestCase):
|
||||
return HttpResponse()
|
||||
|
||||
# Enqueue a webhook for processing
|
||||
webhooks_queue = []
|
||||
site = Site.objects.create(name='Site 1', slug='site-1')
|
||||
enqueue_webhooks(
|
||||
enqueue_object(
|
||||
webhooks_queue,
|
||||
instance=site,
|
||||
user=self.user,
|
||||
request_id=request_id,
|
||||
action=ObjectChangeActionChoices.ACTION_CREATE
|
||||
)
|
||||
flush_webhooks(webhooks_queue)
|
||||
|
||||
# Retrieve the job from queue
|
||||
job = self.queue.jobs[0]
|
||||
|
||||
@@ -202,15 +202,22 @@ class ObjectChangeView(generic.ObjectView):
|
||||
next_change = objectchanges.filter(time__gt=instance.time).order_by('time').first()
|
||||
prev_change = objectchanges.filter(time__lt=instance.time).order_by('-time').first()
|
||||
|
||||
if instance.prechange_data and instance.postchange_data:
|
||||
if not instance.prechange_data and instance.action in ['update', 'delete'] and prev_change:
|
||||
non_atomic_change = True
|
||||
prechange_data = prev_change.postchange_data
|
||||
else:
|
||||
non_atomic_change = False
|
||||
prechange_data = instance.prechange_data
|
||||
|
||||
if prechange_data and instance.postchange_data:
|
||||
diff_added = shallow_compare_dict(
|
||||
instance.prechange_data or dict(),
|
||||
prechange_data or dict(),
|
||||
instance.postchange_data or dict(),
|
||||
exclude=['last_updated'],
|
||||
)
|
||||
diff_removed = {
|
||||
x: instance.prechange_data.get(x) for x in diff_added
|
||||
} if instance.prechange_data else {}
|
||||
x: prechange_data.get(x) for x in diff_added
|
||||
} if prechange_data else {}
|
||||
else:
|
||||
diff_added = None
|
||||
diff_removed = None
|
||||
@@ -221,7 +228,8 @@ class ObjectChangeView(generic.ObjectView):
|
||||
'next_change': next_change,
|
||||
'prev_change': prev_change,
|
||||
'related_changes_table': related_changes_table,
|
||||
'related_changes_count': related_changes.count()
|
||||
'related_changes_count': related_changes.count(),
|
||||
'non_atomic_change': non_atomic_change
|
||||
}
|
||||
|
||||
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
import hashlib
|
||||
import hmac
|
||||
from collections import defaultdict
|
||||
|
||||
from django.contrib.contenttypes.models import ContentType
|
||||
from django.utils import timezone
|
||||
@@ -12,6 +13,26 @@ from .models import Webhook
|
||||
from .registry import registry
|
||||
|
||||
|
||||
def serialize_for_webhook(instance):
|
||||
"""
|
||||
Return a serialized representation of the given instance suitable for use in a webhook.
|
||||
"""
|
||||
serializer_class = get_serializer_for_model(instance.__class__)
|
||||
serializer_context = {
|
||||
'request': None,
|
||||
}
|
||||
serializer = serializer_class(instance, context=serializer_context)
|
||||
|
||||
return serializer.data
|
||||
|
||||
|
||||
def get_snapshots(instance, action):
|
||||
return {
|
||||
'prechange': getattr(instance, '_prechange_snapshot', None),
|
||||
'postchange': serialize_object(instance) if action != ObjectChangeActionChoices.ACTION_DELETE else None,
|
||||
}
|
||||
|
||||
|
||||
def generate_signature(request_body, secret):
|
||||
"""
|
||||
Return a cryptographic signature that can be used to verify the authenticity of webhook data.
|
||||
@@ -24,10 +45,10 @@ def generate_signature(request_body, secret):
|
||||
return hmac_prep.hexdigest()
|
||||
|
||||
|
||||
def enqueue_webhooks(instance, user, request_id, action):
|
||||
def enqueue_object(queue, instance, user, request_id, action):
|
||||
"""
|
||||
Find Webhook(s) assigned to this instance + action and enqueue them
|
||||
to be processed
|
||||
Enqueue a serialized representation of a created/updated/deleted object for the processing of
|
||||
webhooks once the request has completed.
|
||||
"""
|
||||
# Determine whether this type of object supports webhooks
|
||||
app_label = instance._meta.app_label
|
||||
@@ -35,41 +56,55 @@ def enqueue_webhooks(instance, user, request_id, action):
|
||||
if model_name not in registry['model_features']['webhooks'].get(app_label, []):
|
||||
return
|
||||
|
||||
# Retrieve any applicable Webhooks
|
||||
content_type = ContentType.objects.get_for_model(instance)
|
||||
action_flag = {
|
||||
ObjectChangeActionChoices.ACTION_CREATE: 'type_create',
|
||||
ObjectChangeActionChoices.ACTION_UPDATE: 'type_update',
|
||||
ObjectChangeActionChoices.ACTION_DELETE: 'type_delete',
|
||||
}[action]
|
||||
webhooks = Webhook.objects.filter(content_types=content_type, enabled=True, **{action_flag: True})
|
||||
queue.append({
|
||||
'content_type': ContentType.objects.get_for_model(instance),
|
||||
'object_id': instance.pk,
|
||||
'event': action,
|
||||
'data': serialize_for_webhook(instance),
|
||||
'snapshots': get_snapshots(instance, action),
|
||||
'username': user.username,
|
||||
'request_id': request_id
|
||||
})
|
||||
|
||||
if webhooks.exists():
|
||||
|
||||
# Get the Model's API serializer class and serialize the object
|
||||
serializer_class = get_serializer_for_model(instance.__class__)
|
||||
serializer_context = {
|
||||
'request': None,
|
||||
}
|
||||
serializer = serializer_class(instance, context=serializer_context)
|
||||
def flush_webhooks(queue):
|
||||
"""
|
||||
Flush a list of object representation to RQ for webhook processing.
|
||||
"""
|
||||
rq_queue = get_queue('default')
|
||||
webhooks_cache = {
|
||||
'type_create': {},
|
||||
'type_update': {},
|
||||
'type_delete': {},
|
||||
}
|
||||
|
||||
# Gather pre- and post-change snapshots
|
||||
snapshots = {
|
||||
'prechange': getattr(instance, '_prechange_snapshot', None),
|
||||
'postchange': serialize_object(instance) if action != ObjectChangeActionChoices.ACTION_DELETE else None,
|
||||
}
|
||||
for data in queue:
|
||||
|
||||
action_flag = {
|
||||
ObjectChangeActionChoices.ACTION_CREATE: 'type_create',
|
||||
ObjectChangeActionChoices.ACTION_UPDATE: 'type_update',
|
||||
ObjectChangeActionChoices.ACTION_DELETE: 'type_delete',
|
||||
}[data['event']]
|
||||
content_type = data['content_type']
|
||||
|
||||
# Cache applicable Webhooks
|
||||
if content_type not in webhooks_cache[action_flag]:
|
||||
webhooks_cache[action_flag][content_type] = Webhook.objects.filter(
|
||||
**{action_flag: True},
|
||||
content_types=content_type,
|
||||
enabled=True
|
||||
)
|
||||
webhooks = webhooks_cache[action_flag][content_type]
|
||||
|
||||
# Enqueue the webhooks
|
||||
webhook_queue = get_queue('default')
|
||||
for webhook in webhooks:
|
||||
webhook_queue.enqueue(
|
||||
rq_queue.enqueue(
|
||||
"extras.webhooks_worker.process_webhook",
|
||||
webhook=webhook,
|
||||
model_name=instance._meta.model_name,
|
||||
event=action,
|
||||
data=serializer.data,
|
||||
snapshots=snapshots,
|
||||
model_name=content_type.model,
|
||||
event=data['event'],
|
||||
data=data['data'],
|
||||
snapshots=data['snapshots'],
|
||||
timestamp=str(timezone.now()),
|
||||
username=user.username,
|
||||
request_id=request_id
|
||||
username=data['username'],
|
||||
request_id=data['request_id']
|
||||
)
|
||||
|
||||
@@ -102,10 +102,11 @@ class NestedVLANSerializer(WritableNestedSerializer):
|
||||
class NestedPrefixSerializer(WritableNestedSerializer):
|
||||
url = serializers.HyperlinkedIdentityField(view_name='ipam-api:prefix-detail')
|
||||
family = serializers.IntegerField(read_only=True)
|
||||
_depth = serializers.IntegerField(read_only=True)
|
||||
|
||||
class Meta:
|
||||
model = models.Prefix
|
||||
fields = ['id', 'url', 'display', 'family', 'prefix']
|
||||
fields = ['id', 'url', 'display', 'family', 'prefix', '_depth']
|
||||
|
||||
|
||||
#
|
||||
|
||||
@@ -7,7 +7,7 @@ from rest_framework.validators import UniqueTogetherValidator
|
||||
|
||||
from dcim.api.nested_serializers import NestedDeviceSerializer, NestedSiteSerializer
|
||||
from ipam.choices import *
|
||||
from ipam.constants import IPADDRESS_ASSIGNMENT_MODELS
|
||||
from ipam.constants import IPADDRESS_ASSIGNMENT_MODELS, VLANGROUP_SCOPE_TYPES
|
||||
from ipam.models import Aggregate, IPAddress, Prefix, RIR, Role, RouteTarget, Service, VLAN, VLANGroup, VRF
|
||||
from netbox.api import ChoiceField, ContentTypeField, SerializedPKRelatedField
|
||||
from netbox.api.serializers import OrganizationalModelSerializer
|
||||
@@ -116,8 +116,7 @@ class VLANGroupSerializer(OrganizationalModelSerializer):
|
||||
url = serializers.HyperlinkedIdentityField(view_name='ipam-api:vlangroup-detail')
|
||||
scope_type = ContentTypeField(
|
||||
queryset=ContentType.objects.filter(
|
||||
app_label='dcim',
|
||||
model__in=['region', 'sitegroup', 'site', 'location', 'rack']
|
||||
model__in=VLANGROUP_SCOPE_TYPES
|
||||
),
|
||||
required=False
|
||||
)
|
||||
@@ -198,12 +197,14 @@ class PrefixSerializer(PrimaryModelSerializer):
|
||||
vlan = NestedVLANSerializer(required=False, allow_null=True)
|
||||
status = ChoiceField(choices=PrefixStatusChoices, required=False)
|
||||
role = NestedRoleSerializer(required=False, allow_null=True)
|
||||
children = serializers.IntegerField(read_only=True)
|
||||
_depth = serializers.IntegerField(read_only=True)
|
||||
|
||||
class Meta:
|
||||
model = Prefix
|
||||
fields = [
|
||||
'id', 'url', 'display', 'family', 'prefix', 'site', 'vrf', 'tenant', 'vlan', 'status', 'role', 'is_pool',
|
||||
'description', 'tags', 'custom_fields', 'created', 'last_updated',
|
||||
'description', 'tags', 'custom_fields', 'created', 'last_updated', 'children', '_depth',
|
||||
]
|
||||
read_only_fields = ['family']
|
||||
|
||||
@@ -273,7 +274,7 @@ class IPAddressSerializer(PrimaryModelSerializer):
|
||||
)
|
||||
assigned_object = serializers.SerializerMethodField(read_only=True)
|
||||
nat_inside = NestedIPAddressSerializer(required=False, allow_null=True)
|
||||
nat_outside = NestedIPAddressSerializer(read_only=True)
|
||||
nat_outside = NestedIPAddressSerializer(required=False, read_only=True)
|
||||
|
||||
class Meta:
|
||||
model = IPAddress
|
||||
@@ -282,7 +283,7 @@ class IPAddressSerializer(PrimaryModelSerializer):
|
||||
'assigned_object_id', 'assigned_object', 'nat_inside', 'nat_outside', 'dns_name', 'description', 'tags',
|
||||
'custom_fields', 'created', 'last_updated',
|
||||
]
|
||||
read_only_fields = ['family']
|
||||
read_only_fields = ['family', 'nat_outside']
|
||||
|
||||
@swagger_serializer_method(serializer_or_field=serializers.DictField)
|
||||
def get_assigned_object(self, obj):
|
||||
|
||||
@@ -209,6 +209,12 @@ class PrefixFilterSet(PrimaryModelFilterSet, TenancyFilterSet):
|
||||
method='search_contains',
|
||||
label='Prefixes which contain this prefix or IP',
|
||||
)
|
||||
depth = MultiValueNumberFilter(
|
||||
field_name='_depth'
|
||||
)
|
||||
children = MultiValueNumberFilter(
|
||||
field_name='_children'
|
||||
)
|
||||
mask_length = django_filters.NumberFilter(
|
||||
field_name='prefix',
|
||||
lookup_expr='net_mask_length'
|
||||
@@ -468,7 +474,7 @@ class IPAddressFilterSet(PrimaryModelFilterSet, TenancyFilterSet):
|
||||
|
||||
class Meta:
|
||||
model = IPAddress
|
||||
fields = ['id', 'dns_name']
|
||||
fields = ['id', 'dns_name', 'description']
|
||||
|
||||
def search(self, queryset, name, value):
|
||||
if not value.strip():
|
||||
@@ -536,6 +542,10 @@ class IPAddressFilterSet(PrimaryModelFilterSet, TenancyFilterSet):
|
||||
|
||||
|
||||
class VLANGroupFilterSet(OrganizationalModelFilterSet):
|
||||
q = django_filters.CharFilter(
|
||||
method='search',
|
||||
label='Search',
|
||||
)
|
||||
scope_type = ContentTypeFilter()
|
||||
region = django_filters.NumberFilter(
|
||||
method='filter_scope'
|
||||
@@ -563,6 +573,15 @@ class VLANGroupFilterSet(OrganizationalModelFilterSet):
|
||||
model = VLANGroup
|
||||
fields = ['id', 'name', 'slug', 'description', 'scope_id']
|
||||
|
||||
def search(self, queryset, name, value):
|
||||
if not value.strip():
|
||||
return queryset
|
||||
qs_filter = (
|
||||
Q(name__icontains=value) |
|
||||
Q(description__icontains=value)
|
||||
)
|
||||
return queryset.filter(qs_filter)
|
||||
|
||||
def filter_scope(self, queryset, name, value):
|
||||
return queryset.filter(
|
||||
scope_type=ContentType.objects.get(model=name),
|
||||
|
||||
@@ -1270,6 +1270,10 @@ class VLANGroupBulkEditForm(BootstrapMixin, CustomFieldBulkEditForm):
|
||||
|
||||
|
||||
class VLANGroupFilterForm(BootstrapMixin, forms.Form):
|
||||
q = forms.CharField(
|
||||
required=False,
|
||||
label=_('Search')
|
||||
)
|
||||
region = DynamicModelMultipleChoiceField(
|
||||
queryset=Region.objects.all(),
|
||||
required=False,
|
||||
|
||||
0
netbox/ipam/management/__init__.py
Normal file
0
netbox/ipam/management/__init__.py
Normal file
0
netbox/ipam/management/commands/__init__.py
Normal file
0
netbox/ipam/management/commands/__init__.py
Normal file
27
netbox/ipam/management/commands/rebuild_prefixes.py
Normal file
27
netbox/ipam/management/commands/rebuild_prefixes.py
Normal file
@@ -0,0 +1,27 @@
|
||||
from django.core.management.base import BaseCommand
|
||||
|
||||
from ipam.models import Prefix, VRF
|
||||
from ipam.utils import rebuild_prefixes
|
||||
|
||||
|
||||
class Command(BaseCommand):
|
||||
help = "Rebuild the prefix hierarchy (depth and children counts)"
|
||||
|
||||
def handle(self, *model_names, **options):
|
||||
self.stdout.write(f'Rebuilding {Prefix.objects.count()} prefixes...')
|
||||
|
||||
# Reset existing counts
|
||||
Prefix.objects.update(_depth=0, _children=0)
|
||||
|
||||
# Rebuild the global table
|
||||
global_count = Prefix.objects.filter(vrf__isnull=True).count()
|
||||
self.stdout.write(f'Global: {global_count} prefixes...')
|
||||
rebuild_prefixes(None)
|
||||
|
||||
# Rebuild each VRF
|
||||
for vrf in VRF.objects.all():
|
||||
vrf_count = Prefix.objects.filter(vrf=vrf).count()
|
||||
self.stdout.write(f'VRF {vrf}: {vrf_count} prefixes...')
|
||||
rebuild_prefixes(vrf.pk)
|
||||
|
||||
self.stdout.write(self.style.SUCCESS('Finished.'))
|
||||
21
netbox/ipam/migrations/0047_prefix_depth_children.py
Normal file
21
netbox/ipam/migrations/0047_prefix_depth_children.py
Normal file
@@ -0,0 +1,21 @@
|
||||
from django.db import migrations, models
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('ipam', '0046_set_vlangroup_scope_types'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.AddField(
|
||||
model_name='prefix',
|
||||
name='_children',
|
||||
field=models.PositiveBigIntegerField(default=0, editable=False),
|
||||
),
|
||||
migrations.AddField(
|
||||
model_name='prefix',
|
||||
name='_depth',
|
||||
field=models.PositiveSmallIntegerField(default=0, editable=False),
|
||||
),
|
||||
]
|
||||
@@ -0,0 +1,37 @@
|
||||
import sys
|
||||
from django.db import migrations
|
||||
|
||||
from ipam.utils import rebuild_prefixes
|
||||
|
||||
|
||||
def populate_prefix_hierarchy(apps, schema_editor):
|
||||
"""
|
||||
Populate _depth and _children attrs for all Prefixes.
|
||||
"""
|
||||
Prefix = apps.get_model('ipam', 'Prefix')
|
||||
VRF = apps.get_model('ipam', 'VRF')
|
||||
|
||||
total_count = Prefix.objects.count()
|
||||
if 'test' not in sys.argv:
|
||||
print(f'\nUpdating {total_count} prefixes...')
|
||||
|
||||
# Rebuild the global table
|
||||
rebuild_prefixes(None)
|
||||
|
||||
# Iterate through all VRFs, rebuilding each
|
||||
for vrf in VRF.objects.all():
|
||||
rebuild_prefixes(vrf.pk)
|
||||
|
||||
|
||||
class Migration(migrations.Migration):
|
||||
|
||||
dependencies = [
|
||||
('ipam', '0047_prefix_depth_children'),
|
||||
]
|
||||
|
||||
operations = [
|
||||
migrations.RunPython(
|
||||
code=populate_prefix_hierarchy,
|
||||
reverse_code=migrations.RunPython.noop
|
||||
),
|
||||
]
|
||||
@@ -77,7 +77,7 @@ class RIR(OrganizationalModel):
|
||||
)
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class Aggregate(PrimaryModel):
|
||||
"""
|
||||
An aggregate exists at the root level of the IP address space hierarchy in NetBox. Aggregates are used to organize
|
||||
@@ -228,7 +228,7 @@ class Role(OrganizationalModel):
|
||||
)
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class Prefix(PrimaryModel):
|
||||
"""
|
||||
A Prefix represents an IPv4 or IPv6 network, including mask length. Prefixes can optionally be assigned to Sites and
|
||||
@@ -293,6 +293,16 @@ class Prefix(PrimaryModel):
|
||||
blank=True
|
||||
)
|
||||
|
||||
# Cached depth & child counts
|
||||
_depth = models.PositiveSmallIntegerField(
|
||||
default=0,
|
||||
editable=False
|
||||
)
|
||||
_children = models.PositiveBigIntegerField(
|
||||
default=0,
|
||||
editable=False
|
||||
)
|
||||
|
||||
objects = PrefixQuerySet.as_manager()
|
||||
|
||||
csv_headers = [
|
||||
@@ -306,6 +316,13 @@ class Prefix(PrimaryModel):
|
||||
ordering = (F('vrf').asc(nulls_first=True), 'prefix', 'pk') # (vrf, prefix) may be non-unique
|
||||
verbose_name_plural = 'prefixes'
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
|
||||
# Cache the original prefix and VRF so we can check if they have changed on post_save
|
||||
self._prefix = self.prefix
|
||||
self._vrf = self.vrf
|
||||
|
||||
def __str__(self):
|
||||
return str(self.prefix)
|
||||
|
||||
@@ -323,16 +340,6 @@ class Prefix(PrimaryModel):
|
||||
'prefix': "Cannot create prefix with /0 mask."
|
||||
})
|
||||
|
||||
# Disallow host masks
|
||||
if self.prefix.version == 4 and self.prefix.prefixlen == 32:
|
||||
raise ValidationError({
|
||||
'prefix': "Cannot create host addresses (/32) as prefixes. Create an IPv4 address instead."
|
||||
})
|
||||
elif self.prefix.version == 6 and self.prefix.prefixlen == 128:
|
||||
raise ValidationError({
|
||||
'prefix': "Cannot create host addresses (/128) as prefixes. Create an IPv6 address instead."
|
||||
})
|
||||
|
||||
# Enforce unique IP space (if applicable)
|
||||
if (self.vrf is None and settings.ENFORCE_GLOBAL_UNIQUE) or (self.vrf and self.vrf.enforce_unique):
|
||||
duplicate_prefixes = self.get_duplicates()
|
||||
@@ -373,6 +380,14 @@ class Prefix(PrimaryModel):
|
||||
return self.prefix.version
|
||||
return None
|
||||
|
||||
@property
|
||||
def depth(self):
|
||||
return self._depth
|
||||
|
||||
@property
|
||||
def children(self):
|
||||
return self._children
|
||||
|
||||
def _set_prefix_length(self, value):
|
||||
"""
|
||||
Expose the IPNetwork object's prefixlen attribute on the parent model so that it can be manipulated directly,
|
||||
@@ -385,6 +400,26 @@ class Prefix(PrimaryModel):
|
||||
def get_status_class(self):
|
||||
return PrefixStatusChoices.CSS_CLASSES.get(self.status)
|
||||
|
||||
def get_parents(self, include_self=False):
|
||||
"""
|
||||
Return all containing Prefixes in the hierarchy.
|
||||
"""
|
||||
lookup = 'net_contains_or_equals' if include_self else 'net_contains'
|
||||
return Prefix.objects.filter(**{
|
||||
'vrf': self.vrf,
|
||||
f'prefix__{lookup}': self.prefix
|
||||
})
|
||||
|
||||
def get_children(self, include_self=False):
|
||||
"""
|
||||
Return all covered Prefixes in the hierarchy.
|
||||
"""
|
||||
lookup = 'net_contained_or_equal' if include_self else 'net_contained'
|
||||
return Prefix.objects.filter(**{
|
||||
'vrf': self.vrf,
|
||||
f'prefix__{lookup}': self.prefix
|
||||
})
|
||||
|
||||
def get_duplicates(self):
|
||||
return Prefix.objects.filter(vrf=self.vrf, prefix=str(self.prefix)).exclude(pk=self.pk)
|
||||
|
||||
@@ -426,8 +461,8 @@ class Prefix(PrimaryModel):
|
||||
child_ips = netaddr.IPSet([ip.address.ip for ip in self.get_child_ips()])
|
||||
available_ips = prefix - child_ips
|
||||
|
||||
# IPv6, pool, or IPv4 /31 sets are fully usable
|
||||
if self.family == 6 or self.is_pool or self.prefix.prefixlen == 31:
|
||||
# IPv6, pool, or IPv4 /31-/32 sets are fully usable
|
||||
if self.family == 6 or self.is_pool or (self.family == 4 and self.prefix.prefixlen >= 31):
|
||||
return available_ips
|
||||
|
||||
# For "normal" IPv4 prefixes, omit first and last addresses
|
||||
@@ -477,7 +512,7 @@ class Prefix(PrimaryModel):
|
||||
return int(float(child_count) / prefix_size * 100)
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class IPAddress(PrimaryModel):
|
||||
"""
|
||||
An IPAddress represents an individual IPv4 or IPv6 address and its mask. The mask length should match what is
|
||||
|
||||
@@ -17,7 +17,7 @@ __all__ = (
|
||||
)
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class Service(PrimaryModel):
|
||||
"""
|
||||
A Service represents a layer-four service (e.g. HTTP or SSH) running on a Device or VirtualMachine. A Service may
|
||||
|
||||
@@ -100,7 +100,7 @@ class VLANGroup(OrganizationalModel):
|
||||
return None
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class VLAN(PrimaryModel):
|
||||
"""
|
||||
A VLAN is a distinct layer two forwarding domain identified by a 12-bit integer (1-4094). Each VLAN must be assigned
|
||||
|
||||
@@ -13,7 +13,7 @@ __all__ = (
|
||||
)
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class VRF(PrimaryModel):
|
||||
"""
|
||||
A virtual routing and forwarding (VRF) table represents a discrete layer three forwarding domain (e.g. a routing
|
||||
@@ -92,7 +92,7 @@ class VRF(PrimaryModel):
|
||||
return self.name
|
||||
|
||||
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
|
||||
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
|
||||
class RouteTarget(PrimaryModel):
|
||||
"""
|
||||
A BGP extended community used to control the redistribution of routes among VRFs, as defined in RFC 4364.
|
||||
|
||||
@@ -1,27 +1,32 @@
|
||||
from django.contrib.contenttypes.models import ContentType
|
||||
from django.db.models import Q
|
||||
from django.db.models.expressions import RawSQL
|
||||
|
||||
from utilities.querysets import RestrictedQuerySet
|
||||
|
||||
|
||||
class PrefixQuerySet(RestrictedQuerySet):
|
||||
|
||||
def annotate_tree(self):
|
||||
def annotate_hierarchy(self):
|
||||
"""
|
||||
Annotate the number of parent and child prefixes for each Prefix. Raw SQL is needed for these subqueries
|
||||
because we need to cast NULL VRF values to integers for comparison. (NULL != NULL).
|
||||
Annotate the depth and number of child prefixes for each Prefix. Cast null VRF values to zero for
|
||||
comparison. (NULL != NULL).
|
||||
"""
|
||||
return self.extra(
|
||||
select={
|
||||
'parents': 'SELECT COUNT(U0."prefix") AS "c" '
|
||||
'FROM "ipam_prefix" U0 '
|
||||
'WHERE (U0."prefix" >> "ipam_prefix"."prefix" '
|
||||
'AND COALESCE(U0."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
|
||||
'children': 'SELECT COUNT(U1."prefix") AS "c" '
|
||||
'FROM "ipam_prefix" U1 '
|
||||
'WHERE (U1."prefix" << "ipam_prefix"."prefix" '
|
||||
'AND COALESCE(U1."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
|
||||
}
|
||||
return self.annotate(
|
||||
hierarchy_depth=RawSQL(
|
||||
'SELECT COUNT(DISTINCT U0."prefix") AS "c" '
|
||||
'FROM "ipam_prefix" U0 '
|
||||
'WHERE (U0."prefix" >> "ipam_prefix"."prefix" '
|
||||
'AND COALESCE(U0."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
|
||||
()
|
||||
),
|
||||
hierarchy_children=RawSQL(
|
||||
'SELECT COUNT(U1."prefix") AS "c" '
|
||||
'FROM "ipam_prefix" U1 '
|
||||
'WHERE (U1."prefix" << "ipam_prefix"."prefix" '
|
||||
'AND COALESCE(U1."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
|
||||
()
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -1,9 +1,52 @@
|
||||
from django.db.models.signals import pre_delete
|
||||
from django.db.models.signals import post_delete, post_save, pre_delete
|
||||
from django.dispatch import receiver
|
||||
|
||||
from dcim.models import Device
|
||||
from virtualization.models import VirtualMachine
|
||||
from .models import IPAddress
|
||||
from .models import IPAddress, Prefix
|
||||
|
||||
|
||||
def update_parents_children(prefix):
|
||||
"""
|
||||
Update depth on prefix & containing prefixes
|
||||
"""
|
||||
parents = prefix.get_parents(include_self=True).annotate_hierarchy()
|
||||
for parent in parents:
|
||||
parent._children = parent.hierarchy_children
|
||||
Prefix.objects.bulk_update(parents, ['_children'], batch_size=100)
|
||||
|
||||
|
||||
def update_children_depth(prefix):
|
||||
"""
|
||||
Update children count on prefix & contained prefixes
|
||||
"""
|
||||
children = prefix.get_children(include_self=True).annotate_hierarchy()
|
||||
for child in children:
|
||||
child._depth = child.hierarchy_depth
|
||||
Prefix.objects.bulk_update(children, ['_depth'], batch_size=100)
|
||||
|
||||
|
||||
@receiver(post_save, sender=Prefix)
|
||||
def handle_prefix_saved(instance, created, **kwargs):
|
||||
|
||||
# Prefix has changed (or new instance has been created)
|
||||
if created or instance.vrf != instance._vrf or instance.prefix != instance._prefix:
|
||||
|
||||
update_parents_children(instance)
|
||||
update_children_depth(instance)
|
||||
|
||||
# If this is not a new prefix, clean up parent/children of previous prefix
|
||||
if not created:
|
||||
old_prefix = Prefix(vrf=instance._vrf, prefix=instance._prefix)
|
||||
update_parents_children(old_prefix)
|
||||
update_children_depth(old_prefix)
|
||||
|
||||
|
||||
@receiver(post_delete, sender=Prefix)
|
||||
def handle_prefix_deleted(instance, **kwargs):
|
||||
|
||||
update_parents_children(instance)
|
||||
update_children_depth(instance)
|
||||
|
||||
|
||||
@receiver(pre_delete, sender=IPAddress)
|
||||
|
||||
@@ -15,7 +15,7 @@ AVAILABLE_LABEL = mark_safe('<span class="label label-success">Available</span>'
|
||||
|
||||
PREFIX_LINK = """
|
||||
{% load helpers %}
|
||||
{% for i in record.parents|as_range %}
|
||||
{% for i in record.depth|as_range %}
|
||||
<i class="mdi mdi-circle-small"></i>
|
||||
{% endfor %}
|
||||
<a href="{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if object.vrf %}&vrf={{ object.vrf.pk }}{% endif %}{% if object.site %}&site={{ object.site.pk }}{% endif %}{% if object.tenant %}&tenant_group={{ object.tenant.group.pk }}&tenant={{ object.tenant.pk }}{% endif %}{% endif %}">{{ record.prefix }}</a>
|
||||
@@ -262,6 +262,24 @@ class PrefixTable(BaseTable):
|
||||
template_code=PREFIX_LINK,
|
||||
attrs={'td': {'class': 'text-nowrap'}}
|
||||
)
|
||||
prefix_flat = tables.Column(
|
||||
accessor=Accessor('prefix'),
|
||||
linkify=True,
|
||||
verbose_name='Prefix (Flat)'
|
||||
)
|
||||
depth = tables.Column(
|
||||
accessor=Accessor('_depth'),
|
||||
verbose_name='Depth'
|
||||
)
|
||||
children = LinkedCountColumn(
|
||||
accessor=Accessor('_children'),
|
||||
viewname='ipam:prefix_list',
|
||||
url_params={
|
||||
'vrf_id': 'vrf_id',
|
||||
'within': 'prefix',
|
||||
},
|
||||
verbose_name='Children'
|
||||
)
|
||||
status = ChoiceFieldColumn(
|
||||
default=AVAILABLE_LABEL
|
||||
)
|
||||
@@ -287,7 +305,8 @@ class PrefixTable(BaseTable):
|
||||
class Meta(BaseTable.Meta):
|
||||
model = Prefix
|
||||
fields = (
|
||||
'pk', 'prefix', 'status', 'children', 'vrf', 'tenant', 'site', 'vlan', 'role', 'is_pool', 'description',
|
||||
'pk', 'prefix', 'prefix_flat', 'status', 'depth', 'children', 'vrf', 'tenant', 'site', 'vlan', 'role',
|
||||
'is_pool', 'description',
|
||||
)
|
||||
default_columns = ('pk', 'prefix', 'status', 'vrf', 'tenant', 'site', 'vlan', 'role', 'description')
|
||||
row_attrs = {
|
||||
@@ -300,15 +319,14 @@ class PrefixDetailTable(PrefixTable):
|
||||
accessor='get_utilization',
|
||||
orderable=False
|
||||
)
|
||||
tenant = TenantColumn()
|
||||
tags = TagColumn(
|
||||
url_name='ipam:prefix_list'
|
||||
)
|
||||
|
||||
class Meta(PrefixTable.Meta):
|
||||
fields = (
|
||||
'pk', 'prefix', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role', 'is_pool',
|
||||
'description', 'tags',
|
||||
'pk', 'prefix', 'prefix_flat', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role',
|
||||
'is_pool', 'description', 'tags',
|
||||
)
|
||||
default_columns = (
|
||||
'pk', 'prefix', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role', 'description',
|
||||
@@ -430,7 +448,8 @@ class VLANGroupTable(BaseTable):
|
||||
name = tables.Column(linkify=True)
|
||||
scope_type = ContentTypeColumn()
|
||||
scope = tables.Column(
|
||||
linkify=True
|
||||
linkify=True,
|
||||
orderable=False
|
||||
)
|
||||
vlan_count = LinkedCountColumn(
|
||||
viewname='ipam:vlan_list',
|
||||
|
||||
@@ -186,7 +186,7 @@ class RoleTest(APIViewTestCases.APIViewTestCase):
|
||||
|
||||
class PrefixTest(APIViewTestCases.APIViewTestCase):
|
||||
model = Prefix
|
||||
brief_fields = ['display', 'family', 'id', 'prefix', 'url']
|
||||
brief_fields = ['_depth', 'display', 'family', 'id', 'prefix', 'url']
|
||||
create_data = [
|
||||
{
|
||||
'prefix': '192.168.4.0/24',
|
||||
|
||||
@@ -400,7 +400,8 @@ class PrefixTestCase(TestCase, ChangeLoggedFilterSetTests):
|
||||
Prefix(prefix='10.0.0.0/16'),
|
||||
Prefix(prefix='2001:db8::/32'),
|
||||
)
|
||||
Prefix.objects.bulk_create(prefixes)
|
||||
for prefix in prefixes:
|
||||
prefix.save()
|
||||
|
||||
def test_family(self):
|
||||
params = {'family': '6'}
|
||||
@@ -431,6 +432,18 @@ class PrefixTestCase(TestCase, ChangeLoggedFilterSetTests):
|
||||
params = {'contains': '2001:db8:0:1::/64'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||
|
||||
def test_depth(self):
|
||||
params = {'depth': '0'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 8)
|
||||
params = {'depth__gt': '0'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||
|
||||
def test_children(self):
|
||||
params = {'children': '0'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 8)
|
||||
params = {'children__gt': '0'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||
|
||||
def test_mask_length(self):
|
||||
params = {'mask_length': '24'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
|
||||
@@ -571,12 +584,12 @@ class IPAddressTestCase(TestCase, ChangeLoggedFilterSetTests):
|
||||
Tenant.objects.bulk_create(tenants)
|
||||
|
||||
ipaddresses = (
|
||||
IPAddress(address='10.0.0.1/24', tenant=None, vrf=None, assigned_object=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
|
||||
IPAddress(address='10.0.0.1/24', tenant=None, vrf=None, assigned_object=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a', description='foobar1'),
|
||||
IPAddress(address='10.0.0.2/24', tenant=tenants[0], vrf=vrfs[0], assigned_object=interfaces[0], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
|
||||
IPAddress(address='10.0.0.3/24', tenant=tenants[1], vrf=vrfs[1], assigned_object=interfaces[1], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
|
||||
IPAddress(address='10.0.0.4/24', tenant=tenants[2], vrf=vrfs[2], assigned_object=interfaces[2], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
|
||||
IPAddress(address='10.0.0.1/25', tenant=None, vrf=None, assigned_object=None, status=IPAddressStatusChoices.STATUS_ACTIVE),
|
||||
IPAddress(address='2001:db8::1/64', tenant=None, vrf=None, assigned_object=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a'),
|
||||
IPAddress(address='2001:db8::1/64', tenant=None, vrf=None, assigned_object=None, status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-a', description='foobar2'),
|
||||
IPAddress(address='2001:db8::2/64', tenant=tenants[0], vrf=vrfs[0], assigned_object=vminterfaces[0], status=IPAddressStatusChoices.STATUS_ACTIVE, dns_name='ipaddress-b'),
|
||||
IPAddress(address='2001:db8::3/64', tenant=tenants[1], vrf=vrfs[1], assigned_object=vminterfaces[1], status=IPAddressStatusChoices.STATUS_RESERVED, role=IPAddressRoleChoices.ROLE_VIP, dns_name='ipaddress-c'),
|
||||
IPAddress(address='2001:db8::4/64', tenant=tenants[2], vrf=vrfs[2], assigned_object=vminterfaces[2], status=IPAddressStatusChoices.STATUS_DEPRECATED, role=IPAddressRoleChoices.ROLE_SECONDARY, dns_name='ipaddress-d'),
|
||||
@@ -592,6 +605,10 @@ class IPAddressTestCase(TestCase, ChangeLoggedFilterSetTests):
|
||||
params = {'dns_name': ['ipaddress-a', 'ipaddress-b']}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)
|
||||
|
||||
def test_description(self):
|
||||
params = {'description': ['foobar1', 'foobar2']}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
|
||||
|
||||
def test_parent(self):
|
||||
params = {'parent': '10.0.0.0/24'}
|
||||
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 5)
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
import netaddr
|
||||
from netaddr import IPNetwork, IPSet
|
||||
from django.core.exceptions import ValidationError
|
||||
from django.test import TestCase, override_settings
|
||||
|
||||
@@ -10,27 +10,27 @@ class TestAggregate(TestCase):
|
||||
|
||||
def test_get_utilization(self):
|
||||
rir = RIR.objects.create(name='RIR 1', slug='rir-1')
|
||||
aggregate = Aggregate(prefix=netaddr.IPNetwork('10.0.0.0/8'), rir=rir)
|
||||
aggregate = Aggregate(prefix=IPNetwork('10.0.0.0/8'), rir=rir)
|
||||
aggregate.save()
|
||||
|
||||
# 25% utilization
|
||||
Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/12')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.16.0.0/12')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.32.0.0/12')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.48.0.0/12')),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/12')),
|
||||
Prefix(prefix=IPNetwork('10.16.0.0/12')),
|
||||
Prefix(prefix=IPNetwork('10.32.0.0/12')),
|
||||
Prefix(prefix=IPNetwork('10.48.0.0/12')),
|
||||
))
|
||||
self.assertEqual(aggregate.get_utilization(), 25)
|
||||
|
||||
# 50% utilization
|
||||
Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.64.0.0/10')),
|
||||
Prefix(prefix=IPNetwork('10.64.0.0/10')),
|
||||
))
|
||||
self.assertEqual(aggregate.get_utilization(), 50)
|
||||
|
||||
# 100% utilization
|
||||
Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.128.0.0/9')),
|
||||
Prefix(prefix=IPNetwork('10.128.0.0/9')),
|
||||
))
|
||||
self.assertEqual(aggregate.get_utilization(), 100)
|
||||
|
||||
@@ -39,9 +39,9 @@ class TestPrefix(TestCase):
|
||||
|
||||
def test_get_duplicates(self):
|
||||
prefixes = Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
|
||||
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
|
||||
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
|
||||
Prefix(prefix=IPNetwork('192.0.2.0/24')),
|
||||
Prefix(prefix=IPNetwork('192.0.2.0/24')),
|
||||
Prefix(prefix=IPNetwork('192.0.2.0/24')),
|
||||
))
|
||||
duplicate_prefix_pks = [p.pk for p in prefixes[0].get_duplicates()]
|
||||
|
||||
@@ -54,11 +54,11 @@ class TestPrefix(TestCase):
|
||||
VRF(name='VRF 3'),
|
||||
))
|
||||
prefixes = Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/24'), vrf=None),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.1.0/24'), vrf=vrfs[0]),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.2.0/24'), vrf=vrfs[1]),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.3.0/24'), vrf=vrfs[2]),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/24'), vrf=None),
|
||||
Prefix(prefix=IPNetwork('10.0.1.0/24'), vrf=vrfs[0]),
|
||||
Prefix(prefix=IPNetwork('10.0.2.0/24'), vrf=vrfs[1]),
|
||||
Prefix(prefix=IPNetwork('10.0.3.0/24'), vrf=vrfs[2]),
|
||||
))
|
||||
child_prefix_pks = {p.pk for p in prefixes[0].get_child_prefixes()}
|
||||
|
||||
@@ -79,13 +79,13 @@ class TestPrefix(TestCase):
|
||||
VRF(name='VRF 3'),
|
||||
))
|
||||
parent_prefix = Prefix.objects.create(
|
||||
prefix=netaddr.IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
|
||||
prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
|
||||
)
|
||||
ips = IPAddress.objects.bulk_create((
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.1/24'), vrf=None),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
|
||||
IPAddress(address=IPNetwork('10.0.0.1/24'), vrf=None),
|
||||
IPAddress(address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
|
||||
IPAddress(address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
|
||||
IPAddress(address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
|
||||
))
|
||||
child_ip_pks = {p.pk for p in parent_prefix.get_child_ips()}
|
||||
|
||||
@@ -102,16 +102,16 @@ class TestPrefix(TestCase):
|
||||
def test_get_available_prefixes(self):
|
||||
|
||||
prefixes = Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16')), # Parent prefix
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/20')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.32.0/20')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.128.0/18')),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/16')), # Parent prefix
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/20')),
|
||||
Prefix(prefix=IPNetwork('10.0.32.0/20')),
|
||||
Prefix(prefix=IPNetwork('10.0.128.0/18')),
|
||||
))
|
||||
missing_prefixes = netaddr.IPSet([
|
||||
netaddr.IPNetwork('10.0.16.0/20'),
|
||||
netaddr.IPNetwork('10.0.48.0/20'),
|
||||
netaddr.IPNetwork('10.0.64.0/18'),
|
||||
netaddr.IPNetwork('10.0.192.0/18'),
|
||||
missing_prefixes = IPSet([
|
||||
IPNetwork('10.0.16.0/20'),
|
||||
IPNetwork('10.0.48.0/20'),
|
||||
IPNetwork('10.0.64.0/18'),
|
||||
IPNetwork('10.0.192.0/18'),
|
||||
])
|
||||
available_prefixes = prefixes[0].get_available_prefixes()
|
||||
|
||||
@@ -119,17 +119,17 @@ class TestPrefix(TestCase):
|
||||
|
||||
def test_get_available_ips(self):
|
||||
|
||||
parent_prefix = Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.0.0/28'))
|
||||
parent_prefix = Prefix.objects.create(prefix=IPNetwork('10.0.0.0/28'))
|
||||
IPAddress.objects.bulk_create((
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.1/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.3/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.5/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.7/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.9/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.11/26')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.13/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.1/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.3/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.5/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.7/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.9/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.11/26')),
|
||||
IPAddress(address=IPNetwork('10.0.0.13/26')),
|
||||
))
|
||||
missing_ips = netaddr.IPSet([
|
||||
missing_ips = IPSet([
|
||||
'10.0.0.2/32',
|
||||
'10.0.0.4/32',
|
||||
'10.0.0.6/32',
|
||||
@@ -145,39 +145,39 @@ class TestPrefix(TestCase):
|
||||
def test_get_first_available_prefix(self):
|
||||
|
||||
prefixes = Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16')), # Parent prefix
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/24')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.1.0/24')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.2.0/24')),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/16')), # Parent prefix
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/24')),
|
||||
Prefix(prefix=IPNetwork('10.0.1.0/24')),
|
||||
Prefix(prefix=IPNetwork('10.0.2.0/24')),
|
||||
))
|
||||
self.assertEqual(prefixes[0].get_first_available_prefix(), netaddr.IPNetwork('10.0.3.0/24'))
|
||||
self.assertEqual(prefixes[0].get_first_available_prefix(), IPNetwork('10.0.3.0/24'))
|
||||
|
||||
Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.3.0/24'))
|
||||
self.assertEqual(prefixes[0].get_first_available_prefix(), netaddr.IPNetwork('10.0.4.0/22'))
|
||||
Prefix.objects.create(prefix=IPNetwork('10.0.3.0/24'))
|
||||
self.assertEqual(prefixes[0].get_first_available_prefix(), IPNetwork('10.0.4.0/22'))
|
||||
|
||||
def test_get_first_available_ip(self):
|
||||
|
||||
parent_prefix = Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.0.0/24'))
|
||||
parent_prefix = Prefix.objects.create(prefix=IPNetwork('10.0.0.0/24'))
|
||||
IPAddress.objects.bulk_create((
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.1/24')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.2/24')),
|
||||
IPAddress(address=netaddr.IPNetwork('10.0.0.3/24')),
|
||||
IPAddress(address=IPNetwork('10.0.0.1/24')),
|
||||
IPAddress(address=IPNetwork('10.0.0.2/24')),
|
||||
IPAddress(address=IPNetwork('10.0.0.3/24')),
|
||||
))
|
||||
self.assertEqual(parent_prefix.get_first_available_ip(), '10.0.0.4/24')
|
||||
|
||||
IPAddress.objects.create(address=netaddr.IPNetwork('10.0.0.4/24'))
|
||||
IPAddress.objects.create(address=IPNetwork('10.0.0.4/24'))
|
||||
self.assertEqual(parent_prefix.get_first_available_ip(), '10.0.0.5/24')
|
||||
|
||||
def test_get_utilization(self):
|
||||
|
||||
# Container Prefix
|
||||
prefix = Prefix.objects.create(
|
||||
prefix=netaddr.IPNetwork('10.0.0.0/24'),
|
||||
prefix=IPNetwork('10.0.0.0/24'),
|
||||
status=PrefixStatusChoices.STATUS_CONTAINER
|
||||
)
|
||||
Prefix.objects.bulk_create((
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/26')),
|
||||
Prefix(prefix=netaddr.IPNetwork('10.0.0.128/26')),
|
||||
Prefix(prefix=IPNetwork('10.0.0.0/26')),
|
||||
Prefix(prefix=IPNetwork('10.0.0.128/26')),
|
||||
))
|
||||
self.assertEqual(prefix.get_utilization(), 50)
|
||||
|
||||
@@ -186,7 +186,7 @@ class TestPrefix(TestCase):
|
||||
prefix.save()
|
||||
IPAddress.objects.bulk_create(
|
||||
# Create 32 IPAddresses within the Prefix
|
||||
[IPAddress(address=netaddr.IPNetwork('10.0.0.{}/24'.format(i))) for i in range(1, 33)]
|
||||
[IPAddress(address=IPNetwork('10.0.0.{}/24'.format(i))) for i in range(1, 33)]
|
||||
)
|
||||
self.assertEqual(prefix.get_utilization(), 12) # ~= 12%
|
||||
|
||||
@@ -196,36 +196,234 @@ class TestPrefix(TestCase):
|
||||
|
||||
@override_settings(ENFORCE_GLOBAL_UNIQUE=False)
|
||||
def test_duplicate_global(self):
|
||||
Prefix.objects.create(prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
Prefix.objects.create(prefix=IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(prefix=IPNetwork('192.0.2.0/24'))
|
||||
self.assertIsNone(duplicate_prefix.clean())
|
||||
|
||||
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
|
||||
def test_duplicate_global_unique(self):
|
||||
Prefix.objects.create(prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
Prefix.objects.create(prefix=IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(prefix=IPNetwork('192.0.2.0/24'))
|
||||
self.assertRaises(ValidationError, duplicate_prefix.clean)
|
||||
|
||||
def test_duplicate_vrf(self):
|
||||
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=False)
|
||||
Prefix.objects.create(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
Prefix.objects.create(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
|
||||
self.assertIsNone(duplicate_prefix.clean())
|
||||
|
||||
def test_duplicate_vrf_unique(self):
|
||||
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=True)
|
||||
Prefix.objects.create(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
|
||||
Prefix.objects.create(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
|
||||
duplicate_prefix = Prefix(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
|
||||
self.assertRaises(ValidationError, duplicate_prefix.clean)
|
||||
|
||||
|
||||
class TestPrefixHierarchy(TestCase):
"""
Test the automatic updating of depth and child count in response to changes made within
the prefix hierarchy.
"""
@classmethod
def setUpTestData(cls):

prefixes = (

# IPv4
Prefix(prefix='10.0.0.0/8', _depth=0, _children=2),
Prefix(prefix='10.0.0.0/16', _depth=1, _children=1),
Prefix(prefix='10.0.0.0/24', _depth=2, _children=0),

# IPv6
Prefix(prefix='2001:db8::/32', _depth=0, _children=2),
Prefix(prefix='2001:db8::/40', _depth=1, _children=1),
Prefix(prefix='2001:db8::/48', _depth=2, _children=0),

)
Prefix.objects.bulk_create(prefixes)

def test_create_prefix4(self):
# Create 10.0.0.0/12
Prefix(prefix='10.0.0.0/12').save()

prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)

def test_create_prefix6(self):
# Create 2001:db8::/36
Prefix(prefix='2001:db8::/36').save()

prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)

def test_update_prefix4(self):
# Change 10.0.0.0/24 to 10.0.0.0/12
p = Prefix.objects.get(prefix='10.0.0.0/24')
p.prefix = '10.0.0.0/12'
p.save()

prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)

def test_update_prefix6(self):
# Change 2001:db8::/48 to 2001:db8::/36
p = Prefix.objects.get(prefix='2001:db8::/48')
p.prefix = '2001:db8::/36'
p.save()

prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)

def test_update_prefix_vrf4(self):
vrf = VRF(name='VRF A')
vrf.save()

# Move 10.0.0.0/16 to a VRF
p = Prefix.objects.get(prefix='10.0.0.0/16')
p.vrf = vrf
p.save()

prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)

prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)

def test_update_prefix_vrf6(self):
vrf = VRF(name='VRF A')
vrf.save()

# Move 2001:db8::/40 to a VRF
p = Prefix.objects.get(prefix='2001:db8::/40')
p.vrf = vrf
p.save()

prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)

prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)

def test_delete_prefix4(self):
# Delete 10.0.0.0/16
Prefix.objects.filter(prefix='10.0.0.0/16').delete()

prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)

def test_delete_prefix6(self):
# Delete 2001:db8::/40
Prefix.objects.filter(prefix='2001:db8::/40').delete()

prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)

def test_duplicate_prefix4(self):
# Duplicate 10.0.0.0/16
Prefix(prefix='10.0.0.0/16').save()

prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)

def test_duplicate_prefix6(self):
# Duplicate 2001:db8::/40
Prefix(prefix='2001:db8::/40').save()

prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)

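For reference when reading the assertions above: `_depth` is the number of enclosing prefixes above a prefix in the same table, and `_children` is the number of prefixes nested beneath it. A minimal pure-Python sketch of that bookkeeping (illustrative only, not part of the diff), assuming netaddr-style containment:

```python
# Recompute depth/child counts for a flat list of prefixes, mirroring the
# _depth/_children values asserted in the tests above (illustrative only).
from netaddr import IPNetwork

def depth_and_children(prefixes):
    nets = [IPNetwork(p) for p in prefixes]
    tree = {}
    for net in nets:
        parents = [o for o in nets if net in o and net != o]
        children = [o for o in nets if o in net and o != net]
        tree[str(net)] = {'_depth': len(parents), '_children': len(children)}
    return tree

# depth_and_children(['10.0.0.0/8', '10.0.0.0/16', '10.0.0.0/24'])
# -> {'10.0.0.0/8': {'_depth': 0, '_children': 2},
#     '10.0.0.0/16': {'_depth': 1, '_children': 1},
#     '10.0.0.0/24': {'_depth': 2, '_children': 0}}
```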
class TestIPAddress(TestCase):

def test_get_duplicates(self):
ips = IPAddress.objects.bulk_create((
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
))
duplicate_ip_pks = [p.pk for p in ips[0].get_duplicates()]

@@ -237,44 +435,44 @@ class TestIPAddress(TestCase):

@override_settings(ENFORCE_GLOBAL_UNIQUE=False)
def test_duplicate_global(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertIsNone(duplicate_ip.clean())

@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_global_unique(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)

def test_duplicate_vrf(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=False)
IPAddress.objects.create(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertIsNone(duplicate_ip.clean())

def test_duplicate_vrf_unique(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=True)
IPAddress.objects.create(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)

@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_nonrole_role(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
self.assertRaises(ValidationError, duplicate_ip.clean)

@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_role_nonrole(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)

@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_role(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)


class TestVLANGroup(TestCase):

@@ -91,3 +91,63 @@ def add_available_vlans(vlan_group, vlans):
vlans.sort(key=lambda v: v.vid if type(v) == VLAN else v['vid'])

return vlans


def rebuild_prefixes(vrf):
"""
Rebuild the prefix hierarchy for all prefixes in the specified VRF (or global table).
"""
def contains(parent, child):
return child in parent and child != parent

def push_to_stack(prefix):
# Increment child count on parent nodes
for n in stack:
n['children'] += 1
stack.append({
'pk': [prefix['pk']],
'prefix': prefix['prefix'],
'children': 0,
})

stack = []
update_queue = []
prefixes = Prefix.objects.filter(vrf=vrf).values('pk', 'prefix')

# Iterate through all Prefixes in the VRF, growing and shrinking the stack as we go
for i, p in enumerate(prefixes):

# Grow the stack if this is a child of the most recent prefix
if not stack or contains(stack[-1]['prefix'], p['prefix']):
push_to_stack(p)

# Handle duplicate prefixes
elif stack[-1]['prefix'] == p['prefix']:
stack[-1]['pk'].append(p['pk'])

# If this is a sibling or parent of the most recent prefix, pop nodes from the
# stack until we reach a parent prefix (or the root)
else:
while stack and not contains(stack[-1]['prefix'], p['prefix']):
node = stack.pop()
for pk in node['pk']:
update_queue.append(
Prefix(pk=pk, _depth=len(stack), _children=node['children'])
)
push_to_stack(p)

# Flush the update queue once it reaches 100 Prefixes
if len(update_queue) >= 100:
Prefix.objects.bulk_update(update_queue, ['_depth', '_children'])
update_queue = []

# Clear out any prefixes remaining in the stack
while stack:
node = stack.pop()
for pk in node['pk']:
update_queue.append(
Prefix(pk=pk, _depth=len(stack), _children=node['children'])
)

# Final flush of any remaining Prefixes
Prefix.objects.bulk_update(update_queue, ['_depth', '_children'])

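The single pass above works because `Prefix.objects.filter(vrf=vrf)` returns prefixes in network order, so every child follows its parent and the stack only ever holds the current chain of ancestors. A hedged usage sketch from a Django shell (the `ipam.utils` import path is assumed from the surrounding hunk):

```python
# Illustrative only: rebuild the cached hierarchy for the global table and
# then for every VRF, using the helper defined in the hunk above.
from ipam.models import VRF
from ipam.utils import rebuild_prefixes  # assumed location of the new helper

rebuild_prefixes(None)            # prefixes with no VRF (the "global" table)
for vrf in VRF.objects.all():     # each VRF is rebuilt independently
    rebuild_prefixes(vrf)
```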
@@ -238,7 +238,7 @@ class AggregateView(generic.ObjectView):
'site', 'role'
).order_by(
'prefix'
).annotate_tree()
)

# Add available prefixes to the table if requested
if request.GET.get('show_available', 'true') == 'true':
@@ -352,7 +352,7 @@ class RoleBulkDeleteView(generic.BulkDeleteView):
#

class PrefixListView(generic.ObjectListView):
queryset = Prefix.objects.annotate_tree()
queryset = Prefix.objects.all()
filterset = filtersets.PrefixFilterSet
filterset_form = forms.PrefixFilterForm
table = tables.PrefixDetailTable
@@ -377,7 +377,7 @@ class PrefixView(generic.ObjectView):
prefix__net_contains=str(instance.prefix)
).prefetch_related(
'site', 'role'
).annotate_tree()
)
parent_prefix_table = tables.PrefixTable(list(parent_prefixes), orderable=False)
parent_prefix_table.exclude = ('vrf',)

@@ -407,7 +407,7 @@ class PrefixPrefixesView(generic.ObjectView):
# Child prefixes table
child_prefixes = instance.get_child_prefixes().restrict(request.user, 'view').prefetch_related(
'site', 'vlan', 'role',
).annotate_tree()
)

# Add available prefixes to the table if requested
if child_prefixes and request.GET.get('show_available', 'true') == 'true':
@@ -522,7 +522,7 @@ class IPAddressView(generic.ObjectView):
# Parent prefixes table
parent_prefixes = Prefix.objects.restrict(request.user, 'view').filter(
vrf=instance.vrf,
prefix__net_contains=str(instance.address.ip)
prefix__net_contains_or_equals=str(instance.address.ip)
).prefetch_related(
'site', 'role'
)
@@ -551,6 +551,7 @@ class IPAddressView(generic.ObjectView):
vrf=instance.vrf, address__net_contained_or_equal=str(instance.address)
)
related_ips_table = tables.IPAddressTable(related_ips, orderable=False)
paginate_table(related_ips_table, request)

return {
'parent_prefixes_table': parent_prefixes_table,

@@ -20,17 +20,20 @@ class LoginRequiredMiddleware(object):
self.get_response = get_response

def __call__(self, request):
# Redirect unauthenticated requests (except those exempted) to the login page if LOGIN_REQUIRED is true
if settings.LOGIN_REQUIRED and not request.user.is_authenticated:
# Redirect unauthenticated requests to the login page. API requests are exempt from redirection as the API
# performs its own authentication. Also metrics can be read without login.
api_path = reverse('api-root')
if not request.path_info.startswith((api_path, '/metrics')) and request.path_info != settings.LOGIN_URL:
return HttpResponseRedirect(
'{}?next={}'.format(
settings.LOGIN_URL,
parse.quote(request.get_full_path_info())
)
)
# Determine exempt paths
exempt_paths = [
reverse('api-root')
]
if settings.METRICS_ENABLED:
exempt_paths.append(reverse('prometheus-django-metrics'))

# Redirect unauthenticated requests
if not request.path_info.startswith(tuple(exempt_paths)) and request.path_info != settings.LOGIN_URL:
login_url = f'{settings.LOGIN_URL}?next={parse.quote(request.get_full_path_info())}'
return HttpResponseRedirect(login_url)

return self.get_response(request)

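The rewrite replaces the hard-coded `/metrics` prefix with a list of exempt paths built from reversed URL names, adding the Prometheus endpoint only when `METRICS_ENABLED` is set. A minimal standalone sketch of the same redirect decision (hypothetical helper, not part of the change):

```python
# Sketch of the exempt-path check above, outside of any middleware.
from urllib import parse

def login_redirect(path_info, full_path, login_url, exempt_paths):
    """Return a login redirect URL for an unauthenticated request, or None if the path is exempt."""
    if path_info.startswith(tuple(exempt_paths)) or path_info == login_url:
        return None
    return f'{login_url}?next={parse.quote(full_path)}'

# login_redirect('/dcim/devices/', '/dcim/devices/?q=core', '/login/', ['/api/'])
# -> '/login/?next=/dcim/devices/%3Fq%3Dcore'
```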
@@ -16,7 +16,7 @@ from django.core.validators import URLValidator
# Environment setup
#

VERSION = '2.11.3'
VERSION = '2.11.7'

# Hostname
HOSTNAME = platform.node()
@@ -29,10 +29,10 @@ if platform.python_version_tuple() < ('3', '6'):
raise RuntimeError(
"NetBox requires Python 3.6 or higher (current: Python {})".format(platform.python_version())
)
# TODO: Remove in NetBox v2.12
# TODO: Remove in NetBox v3.0
if platform.python_version_tuple() < ('3', '7'):
warnings.warn(
"Support for Python 3.6 will be dropped in NetBox v2.12. Please upgrade to Python 3.7 or later at your "
"Support for Python 3.6 will be dropped in NetBox v3.0. Please upgrade to Python 3.7 or later at your "
"earliest convenience."
)

@@ -774,9 +774,7 @@ class BulkEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):

# If we are editing *all* objects in the queryset, replace the PK list with all matched objects.
if request.POST.get('_all') and self.filterset is not None:
pk_list = [
obj.pk for obj in self.filterset(request.GET, self.queryset.only('pk')).qs
]
pk_list = self.filterset(request.GET, self.queryset.values_list('pk', flat=True)).qs
else:
pk_list = request.POST.getlist('pk')

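The new `pk_list` avoids building model instances just to read their primary keys: `values_list('pk', flat=True)` keeps the queryset lazy and yields bare PKs for the filterset to work on. A rough sketch, assuming an arbitrary NetBox model such as `Device`:

```python
# Hedged illustration of the difference; Device is just an example model.
from dcim.models import Device

pks_lazy = Device.objects.values_list('pk', flat=True)  # no Device instances are created
first_three = list(pks_lazy[:3])                         # e.g. [1, 2, 3] once evaluated
```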
@@ -273,7 +273,7 @@ class SecretRole(OrganizationalModel):
)


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Secret(PrimaryModel):
"""
A Secret stores an AES256-encrypted copy of sensitive data, such as passwords or secret keys. An irreversible

@@ -37,6 +37,7 @@ class SecretTable(BaseTable):
)
assigned_object = tables.Column(
linkify=True,
orderable=False,
verbose_name='Assigned object'
)
role = tables.Column(

@@ -86,6 +86,18 @@ class SecretRoleBulkDeleteView(generic.BulkDeleteView):
# Secrets
#

def inject_deprecation_warning(request):
"""
Inject a warning message notifying the user of the pending removal of secrets functionality.
"""
messages.warning(
request,
mark_safe('<i class="mdi mdi-alert"></i> The secrets functionality will be moved to a plugin in NetBox v3.0. '
'Please see <a href="https://github.com/netbox-community/netbox/issues/5278">issue #5278</a> for '
'more information.')
)


class SecretListView(generic.ObjectListView):
queryset = Secret.objects.all()
filterset = filtersets.SecretFilterSet
@@ -93,10 +105,18 @@ class SecretListView(generic.ObjectListView):
table = tables.SecretTable
action_buttons = ('import', 'export')

def get(self, request):
inject_deprecation_warning(request)
return super().get(request)


class SecretView(generic.ObjectView):
queryset = Secret.objects.all()

def get(self, request, *args, **kwargs):
inject_deprecation_warning(request)
return super().get(request, *args, **kwargs)


class SecretEditView(generic.ObjectEditView):
queryset = Secret.objects.all()

@@ -74,7 +74,7 @@
<i class="mdi mdi-book-open-page-variant text-primary"></i> <a href="https://netbox.readthedocs.io/">Docs</a> ·
<i class="mdi mdi-cloud-braces text-primary"></i> <a href="{% url 'api_docs' %}">API</a> ·
<i class="mdi mdi-xml text-primary"></i> <a href="https://github.com/netbox-community/netbox">Code</a> ·
<i class="mdi mdi-lifebuoy text-primary"></i> <a href="https://github.com/netbox-community/netbox/wiki">Help</a>
<i class="mdi mdi-slack text-primary"></i> <a href="https://netdev.chat/">Community</a>
</p>
</div>
</div>

@@ -35,13 +35,13 @@
<div class="form-group">
<label class="col-md-3 control-label required">Region</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.site.region }}</p>
<p class="form-control-static">{{ termination_a.device.site.region|placeholder }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Site Group</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.site.group }}</p>
<p class="form-control-static">{{ termination_a.device.site.group|placeholder }}</p>
</div>
</div>
<div class="form-group">
@@ -50,10 +50,16 @@
<p class="form-control-static">{{ termination_a.device.site }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Location</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.location|placeholder }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Rack</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.rack|default:"None" }}</p>
<p class="form-control-static">{{ termination_a.device.rack|placeholder }}</p>
</div>
</div>
<div class="form-group">

@@ -7,10 +7,10 @@

{% block breadcrumbs %}
<li><a href="{% url 'dcim:powerfeed_list' %}">Power Feeds</a></li>
<li><a href="{{ object.power_panel.site.get_absolute_url }}">{{ object.power_panel.site }}</a></li>
<li><a href="{{ object.power_panel.get_absolute_url }}">{{ object.power_panel }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?site_id={{ object.power_panel.site.pk }}">{{ object.power_panel.site }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?power_panel_id={{ object.power_panel.pk }}">{{ object.power_panel }}</a></li>
{% if object.rack %}
<li><a href="{{ object.rack.get_absolute_url }}">{{ object.rack }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?rack_id={{ object.rack.pk }}">{{ object.rack }}</a></li>
{% endif %}
<li>{{ object }}</li>
{% endblock %}

@@ -5,7 +5,7 @@

{% block breadcrumbs %}
<li><a href="{% url 'dcim:powerpanel_list' %}">Power Panels</a></li>
<li><a href="{{ object.site.get_absolute_url }}">{{ object.site }}</a></li>
<li><a href="{% url 'dcim:powerpanel_list' %}?site_id={{ object.site.pk }}">{{ object.site }}</a></li>
{% if object.location %}
<li><a href="{{ object.location.get_absolute_url }}">{{ object.location }}</a></li>
{% endif %}

@@ -128,6 +128,8 @@
<span{% if k in diff_removed %} style="background-color: #ffdce0"{% endif %}>{{ k }}: {{ v|render_json }}</span>
{% endspaceless %}
{% endfor %}</pre>
{% elif non_atomic_change %}
Warning: Comparing non-atomic change to previous change record (<a href="{% url 'extras:objectchange' pk=prev_change.pk %}">{{ prev_change.pk }}</a>)
{% else %}
<span class="text-muted">None</span>
{% endif %}

@@ -29,7 +29,7 @@
{% endif %}
<h1 class="title">{{ report.name }}</h1>
{% if report.description %}
<p class="lead">{{ report.description }}</p>
<p class="lead">{{ report.description|render_markdown }}</p>
{% endif %}
{% endblock %}


@@ -29,7 +29,7 @@
<td>
{% include 'extras/inc/job_label.html' with result=report.result %}
</td>
<td>{{ report.description|placeholder }}</td>
<td class="rendered-markdown">{{ report.description|render_markdown|placeholder }}</td>
<td class="text-right">
{% if report.result %}
<a href="{% url 'extras:report_result' job_result_pk=report.result.pk %}">{{ report.result.created }}</a>

@@ -29,58 +29,58 @@
{% block sidebar %}{% endblock %}
</div>
{% endif %}
{% with bulk_edit_url=content_type.model_class|validated_viewname:"bulk_edit" bulk_delete_url=content_type.model_class|validated_viewname:"bulk_delete" %}
{% if permissions.change or permissions.delete %}
<form method="post" class="form form-horizontal">
{% csrf_token %}
<input type="hidden" name="return_url" value="{% if return_url %}{{ return_url }}{% else %}{{ request.path }}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}{% endif %}" />
{% if table.paginator.num_pages > 1 %}
<div id="select_all_box" class="hidden panel panel-default noprint">
<div class="panel-body">
<div class="checkbox-inline">
<label for="select_all">
<input type="checkbox" id="select_all" name="_all" />
Select <strong>all {{ table.rows|length }} {{ table.data.verbose_name_plural }}</strong> matching query
</label>
</div>
<div class="pull-right">
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit All
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete All
</button>
{% endif %}
<div class="table-responsive">
{% with bulk_edit_url=content_type.model_class|validated_viewname:"bulk_edit" bulk_delete_url=content_type.model_class|validated_viewname:"bulk_delete" %}
{% if permissions.change or permissions.delete %}
<form method="post" class="form form-horizontal">
{% csrf_token %}
<input type="hidden" name="return_url" value="{% if return_url %}{{ return_url }}{% else %}{{ request.path }}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}{% endif %}" />
{% if table.paginator.num_pages > 1 %}
<div id="select_all_box" class="hidden panel panel-default noprint">
<div class="panel-body">
<div class="checkbox-inline">
<label for="select_all">
<input type="checkbox" id="select_all" name="_all" />
Select <strong>all {{ table.rows|length }} {{ table.data.verbose_name_plural }}</strong> matching query
</label>
</div>
<div class="pull-right">
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit All
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete All
</button>
{% endif %}
</div>
</div>
</div>
{% endif %}
{% render_table table 'inc/table.html' %}
<div class="pull-left noprint">
{% block bulk_buttons %}{% endblock %}
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit Selected
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete Selected
</button>
{% endif %}
</div>
{% endif %}
</form>
{% else %}
<div class="table-responsive">
{% render_table table 'inc/table.html' %}
</div>
<div class="pull-left noprint">
{% block bulk_buttons %}{% endblock %}
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit Selected
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete Selected
</button>
{% endif %}
</div>
</form>
{% else %}
<div class="table-responsive">
{% render_table table 'inc/table.html' %}
</div>
{% endif %}
{% endwith %}
{% endif %}
{% endwith %}
</div>
{% include 'inc/paginator.html' with paginator=table.paginator page=table.page %}
<div class="clearfix"></div>
</div>

@@ -2,6 +2,26 @@
{% load helpers %}

{% block buttons %}
<div class="btn-group" role="group">
<div class="dropdown">
<button class="btn btn-default dropdown-toggle" type="button" id="max_length" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">
Max Depth{% if "depth__lte" in request.GET %}: {{ request.GET.depth__lte }}{% endif %}
<span class="caret"></span>
</button>
<ul class="dropdown-menu" aria-labelledby="max_length">
{% if request.GET.depth__lte %}
<li>
<a href="{% url 'ipam:prefix_list' %}{% querystring request depth__lte=None page=1 %}">Clear</a>
</li>
{% endif %}
{% for i in 16|as_range %}
<li><a href="{% url 'ipam:prefix_list' %}{% querystring request depth__lte=i page=1 %}">
{{ i }} {% if request.GET.depth__lte == i %}<i class="mdi mdi-check-bold"></i>{% endif %}
</a></li>
{% endfor %}
</ul>
</div>
</div>
<div class="btn-group" role="group">
<div class="dropdown">
<button class="btn btn-default dropdown-toggle" type="button" id="max_length" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">

@@ -57,7 +57,7 @@ class TenantGroup(NestedGroupModel):
)


@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Tenant(PrimaryModel):
"""
A Tenant represents an organization served by the NetBox owner. This is typically a customer or an internal

@@ -7,7 +7,7 @@ from django.core.exceptions import FieldError, ValidationError

from utilities.forms.fields import ContentTypeMultipleChoiceField
from .constants import *
from .models import AdminGroup, AdminUser, ObjectPermission, Token, UserConfig
from .models import ObjectPermission, Token, UserConfig


#
@@ -39,11 +39,11 @@ class ObjectPermissionInline(admin.TabularInline):


class GroupObjectPermissionInline(ObjectPermissionInline):
model = AdminGroup.object_permissions.through
model = Group.object_permissions.through


class UserObjectPermissionInline(ObjectPermissionInline):
model = AdminUser.object_permissions.through
model = User.object_permissions.through


class UserConfigInline(admin.TabularInline):
@@ -62,7 +62,7 @@ admin.site.unregister(Group)
admin.site.unregister(User)


@admin.register(AdminGroup)
@admin.register(Group)
class GroupAdmin(admin.ModelAdmin):
fields = ('name',)
list_display = ('name', 'user_count')
@@ -75,7 +75,7 @@ class GroupAdmin(admin.ModelAdmin):
return obj.user_set.count()


@admin.register(AdminUser)
@admin.register(User)
class UserAdmin(UserAdmin_):
list_display = [
'username', 'email', 'first_name', 'last_name', 'is_superuser', 'is_staff', 'is_active'
@@ -89,6 +89,7 @@ class UserAdmin(UserAdmin_):
('Important dates', {'fields': ('last_login', 'date_joined')}),
)
filter_horizontal = ('groups',)
list_filter = ('is_active', 'is_staff', 'is_superuser', 'groups__name')

def get_inlines(self, request, obj):
if obj is not None:

@@ -17,8 +17,6 @@ from .constants import *


__all__ = (
'AdminGroup',
'AdminUser',
'ObjectPermission',
'Token',
'UserConfig',
@@ -163,7 +161,6 @@ class UserConfig(models.Model):


@receiver(post_save, sender=User)
@receiver(post_save, sender=AdminUser)
def create_userconfig(instance, created, **kwargs):
"""
Automatically create a new UserConfig when a new User is created.

@@ -130,22 +130,24 @@ class ColorChoices(ChoiceSet):

class ButtonColorChoices(ChoiceSet):
"""
Map standard button color choices to Bootstrap color classes
Map standard button color choices to Bootstrap 3 button classes
"""
DEFAULT = 'default'
BLUE = 'primary'
GREY = 'secondary'
CYAN = 'info'
GREEN = 'success'
RED = 'danger'
YELLOW = 'warning'
GREY = 'secondary'
BLACK = 'dark'

CHOICES = (
(DEFAULT, 'Default'),
(BLUE, 'Blue'),
(GREY, 'Grey'),
(CYAN, 'Cyan'),
(GREEN, 'Green'),
(RED, 'Red'),
(YELLOW, 'Yellow'),
(GREY, 'Grey'),
(BLACK, 'Black')
)

@@ -338,7 +338,7 @@ class DynamicModelChoiceMixin:
filter = django_filters.ModelChoiceFilter
widget = widgets.APISelect

# TODO: Remove display_field in v2.12
# TODO: Remove display_field in v3.0
def __init__(self, display_field='display', query_params=None, initial_params=None, null_option=None,
disabled_indicator=None, *args, **kwargs):
self.display_field = display_field

@@ -4,7 +4,9 @@ from django.core.paginator import Paginator, Page

class EnhancedPaginator(Paginator):

def __init__(self, object_list, per_page, **kwargs):
def __init__(self, object_list, per_page, orphans=None, **kwargs):

# Determine the page size
try:
per_page = int(per_page)
if per_page < 1:
@@ -12,7 +14,13 @@ class EnhancedPaginator(Paginator):
except ValueError:
per_page = settings.PAGINATE_COUNT

super().__init__(object_list, per_page, **kwargs)
# Set orphans count based on page size
if orphans is None and per_page <= 50:
orphans = 5
elif orphans is None:
orphans = 10

super().__init__(object_list, per_page, orphans=orphans, **kwargs)

def _get_page(self, *args, **kwargs):
return EnhancedPage(*args, **kwargs)

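The `orphans` default folds a short trailing page into the previous one: up to 5 stray rows for page sizes of 50 or fewer, up to 10 for larger pages. A quick illustration with Django's stock `Paginator`:

```python
# 53 objects at 50 per page: orphans=5 absorbs the 3-item trailing page.
from django.core.paginator import Paginator

objects = list(range(53))
print(Paginator(objects, 50).num_pages)             # 2
print(Paginator(objects, 50, orphans=5).num_pages)  # 1
```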
@@ -1,11 +1,11 @@
import django_tables2 as tables
from django.conf import settings
from django.contrib.auth.models import AnonymousUser
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldDoesNotExist
from django.db.models.fields.related import RelatedField
from django.urls import reverse
from django.utils.html import strip_tags
from django.utils.safestring import mark_safe
from django_tables2 import RequestConfig
from django_tables2.data import TableQuerysetData
@@ -15,19 +15,6 @@ from extras.models import CustomField
from .paginator import EnhancedPaginator, get_paginate_count


def stripped_value(self, **kwargs):
"""
Replaces TemplateColumn's value() method to both strip HTML tags and remove any leading/trailing whitespace.
"""
html = super(tables.TemplateColumn, self).value(**kwargs)
return strip_tags(html).strip() if isinstance(html, str) else html


# TODO: We're monkey-patching TemplateColumn here to strip leading/trailing whitespace. This will no longer
# be necessary under django-tables2 v2.3.5+. (See #5926)
tables.TemplateColumn.value = stripped_value


class BaseTable(tables.Table):
"""
Default table for object lists
@@ -298,7 +285,10 @@ class LinkedCountColumn(tables.Column):
if value:
url = reverse(self.viewname, kwargs=self.view_kwargs)
if self.url_params:
url += '?' + '&'.join([f'{k}={getattr(record, v)}' for k, v in self.url_params.items()])
url += '?' + '&'.join([
f'{k}={getattr(record, v) or settings.FILTERS_NULL_CHOICE_VALUE}'
for k, v in self.url_params.items()
])
return mark_safe(f'<a href="{url}">{value}</a>')
return value

@@ -350,8 +340,11 @@ class MPTTColumn(tables.TemplateColumn):
"""
Display a nested hierarchy for MPTT-enabled models.
"""
template_code = """{% for i in record.get_ancestors %}<i class="mdi mdi-circle-small"></i>{% endfor %}""" \
"""<a href="{{ record.get_absolute_url }}">{{ record.name }}</a>"""
template_code = """
{% load helpers %}
{% for i in record.level|as_range %}<i class="mdi mdi-circle-small"></i>{% endfor %}
<a href="{{ record.get_absolute_url }}">{{ record.name }}</a>
"""

def __init__(self, *args, **kwargs):
super().__init__(

@@ -105,7 +105,7 @@ def serialize_object(obj, extra=None):

# Include any tags. Check for tags cached on the instance; fall back to using the manager.
if is_taggable(obj):
tags = getattr(obj, '_tags', obj.tags.all())
tags = getattr(obj, '_tags', None) or obj.tags.all()
data['tags'] = [tag.name for tag in tags]

# Append any extra data

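The changed fallback matters when an instance carries a cached `_tags` attribute that is empty or `None`: the old expression returned the empty cache, while the new one falls back to the tag manager. A pure-Python illustration with a stub object (not NetBox code):

```python
# The getattr default only applies when _tags is missing entirely; the `or`
# also covers the case where it exists but is empty.
class Stub:
    _tags = []                     # cache is present but empty

    class tags:                    # stand-in for the Django related manager
        @staticmethod
        def all():
            return ['prod']

old = getattr(Stub, '_tags', Stub.tags.all())            # [] -- empty cache wins
new = getattr(Stub, '_tags', None) or Stub.tags.all()    # ['prod'] -- falls back to the manager
```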
@@ -116,7 +116,7 @@ class ClusterGroup(OrganizationalModel):
# Clusters
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class Cluster(PrimaryModel):
"""
A cluster of VirtualMachines. Each Cluster may optionally be associated with one or more Devices.
@@ -199,7 +199,7 @@ class Cluster(PrimaryModel):
# Virtual machines
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class VirtualMachine(PrimaryModel, ConfigContextModel):
"""
A virtual machine which runs inside a Cluster.
@@ -380,7 +380,7 @@ class VirtualMachine(PrimaryModel, ConfigContextModel):
# Interfaces
#

@extras_features('custom_fields', 'custom_links', 'export_templates', 'webhooks')
@extras_features('custom_fields', 'custom_links', 'export_templates', 'tags', 'webhooks')
class VMInterface(PrimaryModel, BaseInterface):
virtual_machine = models.ForeignKey(
to='virtualization.VirtualMachine',

@@ -1,4 +1,4 @@
Django==3.2.2
Django==3.2.4
django-cacheops==6.0
django-cors-headers==3.7.0
django-debug-toolbar==3.2.1
@@ -7,17 +7,17 @@ django-mptt==0.12.0
django-pglocks==1.0.4
django-prometheus==2.1.0
django-rq==2.4.1
django-tables2==2.3.4
django-tables2==2.4.0
django-taggit==1.4.0
django-timezone-field==4.1.2
djangorestframework==3.12.4
drf-yasg[validation]==1.20.0
gunicorn==20.1.0
Jinja2==2.11.3
Jinja2==3.0.1
Markdown==3.3.4
netaddr==0.8.0
Pillow==8.2.0
psycopg2-binary==2.8.6
psycopg2-binary==2.9
pycryptodome==3.10.1
PyYAML==5.4.1
svgwrite==1.4.1

@@ -15,7 +15,7 @@ else
fi

# Create a new virtual environment
COMMAND="/usr/bin/python3 -m venv ${VIRTUALENV}"
COMMAND="python3 -m venv ${VIRTUALENV}"
echo "Creating a new virtual environment at ${VIRTUALENV}..."
eval $COMMAND || {
echo "--------------------------------------------------------------------"