Compare commits


99 Commits

Author SHA1 Message Date
Jeremy Stretch
8f6b71df46 Merge pull request #6723 from netbox-community/develop
Release v2.11.9
2021-07-08 09:18:11 -04:00
jeremystretch
e8e3e9b0be Release v2.11.9 2021-07-08 09:01:40 -04:00
jeremystretch
28ca815c88 Fixes #6456: API schema type should be boolean for _occupied on cable termination models 2021-07-08 08:41:59 -04:00
jeremystretch
54dfa6cb7f Fixes #6714: Fix rendering of device type component creation forms 2021-07-07 15:38:59 -04:00
jeremystretch
7c667f3485 Fixes #6710: Fix assignment of VM interface parent via REST API 2021-07-07 11:55:20 -04:00
jeremystretch
c585175214 PRVB 2021-07-06 11:35:03 -04:00
Jeremy Stretch
a5b95728bf Merge pull request #6700 from netbox-community/develop
Release v2.11.8
2021-07-06 11:28:19 -04:00
Jeremy Stretch
c742501b80 Merge branch 'master' into develop 2021-07-06 11:13:29 -04:00
jeremystretch
9c1de27562 Release v2.11.8 2021-07-06 11:10:02 -04:00
jeremystretch
fc15ef6967 Changelog & cleanup for #5503 2021-07-06 10:43:08 -04:00
Jeremy Stretch
eaf0259c3d Merge pull request #5764 from ypid/feature/5503-ui-iso-date-with-tooltip
Closes #5503: ISO 8601 date in UI and alternative format as tooltip
2021-07-06 10:35:21 -04:00
jeremystretch
fe2ce03ac1 Closes #6200: Add rack reservations to global search 2021-07-06 10:17:16 -04:00
jeremystretch
70585ff32e Fixes #6695: Fix exception when importing device type with invalid front port definition 2021-07-05 09:30:52 -04:00
Robin Schneider
a479c867c4 Do not use annotated_date on custom date fields to avoid date parsing
@jeremystretch:

> It'd be better to have the custom field return a date object than to
> accommodate string values in the template filter. Let's just omit custom
> field dates for now to keep this from getting any more complex.
2021-07-02 22:30:11 +02:00
Robin Schneider
74f1b51b38 Use annotated_date also for updated datetimes
This changes the text from: Updated 5 months, 1 week ago
to: Updated 2021-01-24 00:33 (5 months, 1 week ago)

Co-authored-by: Jeremy Stretch <jstretch@ns1.com>
2021-07-02 22:22:38 +02:00
Robin Schneider
0ad9b83623 Closes #5503: ISO 8601 date in UI and alternative format as tooltip
With this commit all dates in the UI are now consistently displayed.

I changed the long date format as suggested by @xkilian and confirmed by my own
research.

* DATETIME_FORMAT
 * Before July 20, 2020 4:52 p.m.
 * Now 20th July, 2020 16:52

"20th July, 2020" would be spoken as "the 20th of July, 2020" but the "the" and
"of" are never written.

The only exception is `object_list.html`. I tried it but there it does not
work so easily because the dates are passed to Jinja as SafeString.
2021-07-02 22:22:37 +02:00
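For context, a minimal sketch of the Django date/time format strings that would produce the output described in the commit message above. Only the DATETIME_FORMAT change is confirmed by the message; the other settings shown are illustrative assumptions.

```python
# Illustrative Django settings sketch; only DATETIME_FORMAT is confirmed by
# the commit message above, the other values are assumptions.
# Format characters: j = day of month, S = English ordinal suffix,
# F = full month name, Y = four-digit year, H:i = 24-hour time.
DATE_FORMAT = 'Y-m-d'                # e.g. "2020-07-20"
SHORT_DATETIME_FORMAT = 'Y-m-d H:i'  # e.g. "2020-07-20 16:52"
DATETIME_FORMAT = 'jS F, Y H:i'      # e.g. "20th July, 2020 16:52"
```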
jeremystretch
631d991d8d Closes #6368: Enable virtual chassis assignment during bulk import of devices 2021-07-01 15:49:05 -04:00
jeremystretch
1be4a57bd4 Closes #6345: Introduce PermissionsViolation exception for use in generic views 2021-07-01 15:33:39 -04:00
jeremystretch
76a6119584 Closes #6138: Add an 'empty' filter modifier for character fields 2021-07-01 15:17:46 -04:00
jeremystretch
add95292ce Fixes #6680: Allow setting custom field values for VM interfaces on initial creation 2021-07-01 10:48:24 -04:00
jeremystretch
18a9e39be6 Closes #6667: Display VM memory as GB/TB as appropriate 2021-06-29 14:00:16 -04:00
jeremystretch
18934bcc69 Closes #6666: Show management-only status under interface detail view 2021-06-29 13:47:44 -04:00
jeremystretch
98ff00bc62 Fixes #6676: Fix device/VM counts per cluster under cluster type/group views 2021-06-29 13:44:46 -04:00
jeremystretch
4292d88a92 Closes #6620: Show assigned VMs count under device role view 2021-06-22 14:21:41 -04:00
jeremystretch
a8af24d7ca Fixes #6637: Fix group assignment in 'available VLANs' link under VLAN group view 2021-06-22 14:16:16 -04:00
jeremystretch
efa0fc2b09 Fixes #6640: Disallow numeric values in custom text fields 2021-06-22 14:00:54 -04:00
jeremystretch
ebb2918a88 Fixes #6652: Fix exception when adding components in bulk to multiple devices 2021-06-22 13:54:03 -04:00
jeremystretch
607039f043 Cleanup for #5139 2021-06-21 08:46:20 -04:00
jeremystretch
fb379b63ec Fixes #6626: Fix site field on VM search form; add site group 2021-06-21 08:38:46 -04:00
jeremystretch
697161beb1 PRVB 2021-06-16 16:21:19 -04:00
Jeremy Stretch
742804ecb8 Merge pull request #6616 from netbox-community/develop
Release v2.11.7
2021-06-16 16:18:52 -04:00
jeremystretch
2bf20fa501 Release NetBox v2.11.7 2021-06-16 15:59:46 -04:00
jeremystretch
685e0ce00d Closes #6588: Add support for webp files as front/rear device type images 2021-06-16 14:01:30 -04:00
jeremystretch
6a6b0236a9 Closes #6589: Standardize breadcrumb navigation for power panels and feeds 2021-06-16 13:50:35 -04:00
jeremystretch
857c70ece9 Closes #6564: Add N connector type for pass-through ports 2021-06-16 13:43:38 -04:00
jeremystretch
e68be6f041 Add Equinix Metal as a sponsor 2021-06-15 15:22:20 -04:00
Jeremy Stretch
52edeb42b5 Merge pull request #6604 from bluikko/patch-2
custom fields documentation missing word "more"
2021-06-15 10:57:58 -04:00
bluikko
c8a8bfd84d custom fields documentation missing word "more"
The "one or object types" looks like it is missing the word "more".
2021-06-15 15:05:37 +07:00
jeremystretch
9f2c4919eb Update NetDev Slack links 2021-06-14 16:41:10 -04:00
jeremystretch
f56a470cc7 Fixes #6602: Fix deletion of devices with cables attached 2021-06-14 16:38:19 -04:00
jeremystretch
54ccc705d0 Adopt IRM terminology 2021-06-14 14:08:55 -04:00
jeremystretch
7e481960f9 Optimize MPTTColumn rendering 2021-06-14 09:19:05 -04:00
jeremystretch
809d9e4697 Fixes #6584: Fix ordering of nested inventory items 2021-06-10 14:27:42 -04:00
jeremystretch
79c06442db Changelog for #6455, 6493 2021-06-08 15:39:39 -04:00
Jeremy Stretch
6195fc0d11 Merge pull request #6552 from drmsoffall/6493-diff-legacy-changes
Show change log diff for non-atomic changes
2021-06-08 15:24:22 -04:00
Jeremy Stretch
6523334a48 Merge pull request #6545 from crafty-ua/Add_ipv4_32_and_ipv6_128_prefix_support_#6455
Add ipv4 /32 and ipv6 /128 prefix support #6455
2021-06-08 15:12:25 -04:00
jeremystretch
b3cde51590 Fixes #6562: Disable ordering of secrets by assigned object 2021-06-08 14:18:24 -04:00
jeremystretch
6ec296f2a7 Fixes #6563: Fix filtering by location for cable connection forms 2021-06-08 14:15:06 -04:00
jeremystretch
cb4392628f Fixes #6553: ProviderNetwork search should match on name 2021-06-08 14:06:17 -04:00
drmsoffall
a224e5d470 Closes #6493: show ObjectChange diff for non-atomic changes 2021-06-05 19:15:25 +00:00
jeremystretch
7444110c79 PRVB 2021-06-04 11:15:12 -04:00
Jeremy Stretch
fc0c8a160b Merge pull request #6548 from netbox-community/develop
Release v2.11.6
2021-06-04 11:13:26 -04:00
Jeremy Stretch
481cc52686 Merge branch 'master' into develop 2021-06-04 11:01:33 -04:00
jeremystretch
4273b6e4fb Release v2.11.6 2021-06-04 10:59:36 -04:00
jeremystretch
5e08b2be37 Fixes #6544: Fix migration error when upgrading with VRF(s) defined 2021-06-04 10:53:13 -04:00
Your Name
a665b79f85 #6455 - initial 2021-06-04 16:46:02 +02:00
Jeremy Stretch
fe4de7f929 Merge pull request #6542 from netbox-community/develop
Release v2.11.5
2021-06-04 09:29:32 -04:00
jeremystretch
0783d57459 Release v2.11.5 2021-06-04 09:09:56 -04:00
jeremystretch
4e1e5bd8c4 Fix "select all" box (again) 2021-06-04 09:01:58 -04:00
jeremystretch
b3a14e9a7b Improve performance when fetching objects for bulk edit 2021-06-03 21:11:45 -04:00
jeremystretch
b725a9bcea Closes #6495: Replace 'help' link in footer with 'community' 2021-06-03 20:35:53 -04:00
jeremystretch
5c263fac8d Closes #6540: Add a 'flat' column to the prefix table 2021-06-03 20:31:09 -04:00
jeremystretch
04c1619eb4 Remove unused function 2021-06-03 20:27:24 -04:00
jeremystretch
d74dbb722a Changelog for #6527 2021-06-03 17:20:24 -04:00
Jeremy Stretch
95969c4979 Merge pull request #6537 from maximumG/6527-report-description-makdown
feat: markdown support in report's description
2021-06-03 17:18:22 -04:00
maximumG
10c9954ebc fix: remove call-outs regarding markdown support 2021-06-03 20:36:52 +02:00
maxime-gerges-external
e61b2b1fc5 feat: markdown support in report's description
* markdown support in report list and report result pages
* Add notes in the documentation regarding markdown
2021-06-03 14:48:18 +02:00
Daniel Sheppard
46ecb0ac03 Fixes: #6432 - Properly mark nat_outside as read-only and not-required. 2021-06-02 22:45:17 -05:00
jeremystretch
0a0b852f2c Fixes #6492: Correct tag population in post-change data resulting from REST API changes 2021-06-02 17:02:44 -04:00
jeremystretch
1658d7ae86 Fixes #6217: Disallow passing of string values for integer custom fields 2021-06-02 16:12:11 -04:00
jeremystretch
ca44cda112 Suppress migration output during testing 2021-06-02 16:02:38 -04:00
jeremystretch
1935f8b27f Fixes #6517: Fix assignment of user when creating rack reservations via REST API 2021-06-02 16:02:22 -04:00
jeremystretch
d32dba43b4 Fixes #6525: Paginate related IPs table under IP address view 2021-06-02 15:48:15 -04:00
jeremystretch
8d0a3c8e69 Closes #6519: Avoid querying applicable webhooks for every object 2021-06-01 13:55:17 -04:00
Jeremy Stretch
f561b2d955 Merge pull request #6516 from netbox-community/6284-m2m-webhooks
Closes #6284: Fix redundant webhooks
2021-06-01 13:09:21 -04:00
jeremystretch
8afb7d654d Changelog for #6284 2021-06-01 12:57:31 -04:00
jeremystretch
32cbc20108 Restore webhooks worker test 2021-06-01 12:52:25 -04:00
jeremystretch
be3cd2a434 Add bulk operation tests for webhooks 2021-06-01 09:50:38 -04:00
jeremystretch
ba3ca6b00d Update post-change snapshot for M2M changes 2021-06-01 09:30:54 -04:00
jeremystretch
c88dcef900 Extend webhook create/update/delete tests 2021-06-01 09:04:01 -04:00
jeremystretch
3d1e4fde81 Initial work on #6284 2021-05-28 16:07:27 -04:00
jeremystretch
1e02bb5999 Fixes #6064: Fix object permission assignments for user and group models 2021-05-28 13:27:05 -04:00
jeremystretch
bd7bcf8a0b Fixes #6496: Fix upgrade script when Python installed in nonstandard path 2021-05-28 13:18:50 -04:00
jeremystretch
1c0f3e1b81 Fixes #6502: Correct permissions evaluation for running a report via the REST API 2021-05-28 13:16:25 -04:00
jeremystretch
b2b3f388b1 Correct Prefix REST API test case 2021-05-28 11:15:45 -04:00
jeremystretch
110a6d11a5 Closes #6487: Add location filter to cable connection form 2021-05-28 09:09:59 -04:00
jeremystretch
75faf7d30e Closes #6501: Expose prefix depth and children on REST API serializer 2021-05-28 08:56:55 -04:00
Jeremy Stretch
e95a9731be Merge pull request #6488 from netbox-community/6087-prefix-depth-children
Closes #6087: Cache prefix depth & children count
2021-05-28 08:37:45 -04:00
jeremystretch
5cb5f9a963 Linkify prefix children count 2021-05-27 15:40:55 -04:00
jeremystretch
88aa3a4e19 Specify batch size for bulk_update() 2021-05-27 15:25:40 -04:00
jeremystretch
d34b9ee00e Add max depth selector 2021-05-27 13:24:31 -04:00
jeremystretch
103730a642 Extend depth & children filters 2021-05-27 12:54:41 -04:00
jeremystretch
84017776ec Fix handling of duplicate prefixes 2021-05-27 10:03:00 -04:00
jeremystretch
34e673f7d6 Introduce rebuild_prefixes management command 2021-05-27 09:24:29 -04:00
jeremystretch
5ac6a307bf Rearrange contact links 2021-05-26 21:45:18 -04:00
jeremystretch
8c1b681391 Add GitHub discussions link; replace Google Group with netdev.chat 2021-05-26 21:43:32 -04:00
jeremystretch
da558de769 Initial work on #6087 2021-05-26 16:06:03 -04:00
jeremystretch
da1fb4f969 Replace references to v2.12 with v3.0 2021-05-25 15:05:02 -04:00
jeremystretch
9046f59b9f PRVB 2021-05-25 12:12:08 -04:00
103 changed files with 1452 additions and 513 deletions

View File

@@ -17,7 +17,7 @@ body:
What version of NetBox are you currently running? (If you don't have access to the most
recent NetBox release, consider testing on our [demo instance](https://demo.netbox.dev/)
before opening a bug report to see if your issue has already been addressed.)
placeholder: v2.11.4
placeholder: v2.11.9
validations:
required: true
- type: dropdown

View File

@@ -3,7 +3,10 @@ blank_issues_enabled: false
contact_links:
- name: 📖 Contributing Policy
url: https://github.com/netbox-community/netbox/blob/develop/CONTRIBUTING.md
about: Please read through our contributing policy before opening an issue or pull request
- name: 💬 Discussion Group
url: https://groups.google.com/g/netbox-discuss
about: Join our discussion group for assistance with installation issues and other problems
about: "Please read through our contributing policy before opening an issue or pull request"
- name: Discussion
url: https://github.com/netbox-community/netbox/discussions
about: "If you're just looking for help, try starting a discussion instead"
- name: 💬 Community Slack
url: https://netdev.chat/
about: "Join #netbox on the NetDev Community Slack for assistance with installation issues and other problems"

View File

@@ -14,7 +14,7 @@ body:
attributes:
label: NetBox version
description: What version of NetBox are you currently running?
placeholder: v2.11.4
placeholder: v2.11.9
validations:
required: true
- type: dropdown

View File

@@ -25,7 +25,7 @@ discussions.
### Slack
For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://slack.netbox.dev/).
For real-time chat, you can join the **#netbox** Slack channel on [NetDev Community](https://netdev.chat/).
Unfortunately, the Slack channel does not provide long-term retention of chat
history, so try to avoid it for any discussions that would benefit from being
preserved for future reference.

View File

@@ -2,8 +2,10 @@
<img src="https://raw.githubusercontent.com/netbox-community/netbox/develop/docs/netbox_logo.svg" width="400" alt="NetBox logo" />
</div>
NetBox is an IP address management (IPAM) and data center infrastructure
management (DCIM) tool. Initially conceived by the network engineering team at
![Master branch build status](https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master)
NetBox is an infrastructure resource modeling (IRM) tool designed to empower
network automation. Initially conceived by the network engineering team at
[DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically
to address the needs of network and infrastructure engineers. It is intended to
function as a domain-specific source of truth for network operations.
@@ -14,18 +16,15 @@ complete list of requirements, see `requirements.txt`. The code is available [on
The complete documentation for NetBox can be found at [Read the Docs](https://netbox.readthedocs.io/en/stable/). A public demo instance is available at https://demo.netbox.dev.
| | status |
|-------------|------------|
| **master** | ![Build status](https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=master) |
| **develop** | ![Build status](https://github.com/netbox-community/netbox/workflows/CI/badge.svg?branch=develop) |
<div align="center">
<h4>Thank you to our sponsors!</h4>
[![DigitalOcean](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/digitalocean.png)](https://try.digitalocean.com/developer-cloud)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![NS1](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/ns1.png)](https://ns1.com/)
[![Equinix Metal](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/equinix.png)](https://metal.equinix.com/)
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
[![NS1](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/ns1.png)](https://ns1.com/)
<br />
[![Stellar Technologies](https://raw.githubusercontent.com/wiki/netbox-community/netbox/images/sponsors/stellar.png)](https://stellar.tech/)
</div>
@@ -33,7 +32,7 @@ The complete documentation for NetBox can be found at [Read the Docs](https://ne
### Discussion
* [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - Discussion forum hosted by GitHub; ideal for Q&A and other structured discussions
* [Slack](https://slack.netbox.dev/) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
* [Slack](https://netdev.chat/) - Real-time chat hosted by the NetDev Community; best for unstructured discussion or just hanging out
* [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being replaced by GitHub discussions
### Installation

View File

@@ -24,7 +24,7 @@ Marking a field as required will force the user to provide a value for the field
The filter logic controls how values are matched when filtering objects by the custom field. Loose filtering (the default) matches on a partial value, whereas exact matching requires a complete match of the given string to a field's value. For example, exact filtering with the string "red" will only match the exact value "red", whereas loose filtering will match on the values "red", "red-orange", or "bored". Setting the filter logic to "disabled" disables filtering by the field entirely.
A custom field must be assigned to one or object types, or models, in NetBox. Once created, custom fields will automatically appear as part of these models in the web UI and REST API. Note that not all models support custom fields.
A custom field must be assigned to one or more object types, or models, in NetBox. Once created, custom fields will automatically appear as part of these models in the web UI and REST API. Note that not all models support custom fields.
### Custom Field Validation

View File

@@ -175,7 +175,7 @@ A particular object within NetBox. Each ObjectVar must specify a particular mode
* `null_option` - A label representing a "null" or empty choice (optional)
!!! warning
The `display_field` parameter is now deprecated, and will be removed in NetBox v2.12. All ObjectVar instances will
The `display_field` parameter is now deprecated, and will be removed in NetBox v3.0. All ObjectVar instances will
instead use the new standard `display` field for all serializers (introduced in NetBox v2.11).
To limit the selections available within the list, additional query parameters can be passed as the `query_params` dictionary. For example, to show only devices with an "active" status:
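The code example that follows this sentence falls outside the diff context shown here; a minimal sketch of such a declaration, assuming the standard custom-script `ObjectVar` usage (the variable name is illustrative):

```python
from dcim.models import Device
from extras.scripts import ObjectVar

# Limit the selectable devices to those with an "active" status by passing
# query_params to the ObjectVar.
device = ObjectVar(
    model=Device,
    query_params={
        'status': 'active'
    }
)
```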

View File

@@ -80,7 +80,7 @@ class DeviceConnectionsReport(Report):
self.log_success(device)
```
As you can see, reports are completely customizable. Validation logic can be as simple or as complex as needed.
As you can see, reports are completely customizable. Validation logic can be as simple or as complex as needed. Also note that the `description` attribute supports Markdown syntax; it will be rendered on the report list page.
!!! warning
Reports should never alter data: If you find yourself using the `create()`, `save()`, `update()`, or `delete()` methods on objects within reports, stop and re-evaluate what you're trying to accomplish. Note that there are no safeguards against the accidental alteration or destruction of data.
@@ -93,7 +93,7 @@ The following methods are available to log results within a report:
* log_warning(object, message)
* log_failure(object, message)
The recording of one or more failure messages will automatically flag a report as failed. It is advised to log a success for each object that is evaluated so that the results will reflect how many objects are being reported on. (The inclusion of a log message is optional for successes.) Messages recorded with `log()` will appear in a report's results but are not associated with a particular object or status.
The recording of one or more failure messages will automatically flag a report as failed. It is advised to log a success for each object that is evaluated so that the results will reflect how many objects are being reported on. (The inclusion of a log message is optional for successes.) Messages recorded with `log()` will appear in a report's results but are not associated with a particular object or status. Log messages also support Markdown syntax and will be rendered on the report result page.
To perform additional tasks, such as sending an email or calling a webhook, after a report has been run, extend the `post_run()` method. The status of the report is available as `self.failed` and the results object is `self.result`.
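To illustrate the logging methods and Markdown support described above, a minimal report sketch (the check itself is hypothetical):

```python
from dcim.choices import DeviceStatusChoices
from dcim.models import Device
from extras.reports import Report


class DeviceStatusReport(Report):
    # The description is rendered as Markdown on the report list page.
    description = "Verify that every device has an **active** status"

    def test_device_status(self):
        for device in Device.objects.all():
            if device.status == DeviceStatusChoices.STATUS_ACTIVE:
                # Logging a success per object keeps the result counts meaningful.
                self.log_success(device)
            else:
                # Log messages are rendered as Markdown on the result page.
                self.log_failure(device, f"Unexpected status: `{device.status}`")
```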

View File

@@ -8,7 +8,7 @@ There are several official forums for communication among the developers and com
* [GitHub issues](https://github.com/netbox-community/netbox/issues) - All feature requests, bug reports, and other substantial changes to the code base **must** be documented in an issue.
* [GitHub Discussions](https://github.com/netbox-community/netbox/discussions) - The preferred forum for general discussion and support issues. Ideal for shaping a feature request prior to submitting an issue.
* [#netbox on NetDev Community Slack](https://slack.netbox.dev/) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
* [#netbox on NetDev Community Slack](https://netdev.chat/) - Good for quick chats. Avoid any discussion that might need to be referenced later on, as the chat history is not retained long.
* [Google Group](https://groups.google.com/g/netbox-discuss) - Legacy mailing list; slowly being phased out in favor of GitHub discussions.
## Governance

View File

@@ -2,7 +2,7 @@
# What is NetBox?
NetBox is an open source web application designed to help manage and document computer networks. Initially conceived by the network engineering team at [DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically to address the needs of network and infrastructure engineers. It encompasses the following aspects of network management:
NetBox is an infrastructure resource modeling (IRM) application designed to empower network automation. Initially conceived by the network engineering team at [DigitalOcean](https://www.digitalocean.com/), NetBox was developed specifically to address the needs of network and infrastructure engineers. NetBox is made available as open source under the Apache 2 license. It encompasses the following aspects of network management:
* **IP address management (IPAM)** - IP networks and addresses, VRFs, and VLANs
* **Equipment racks** - Organized by group and site

View File

@@ -24,7 +24,7 @@ The video below demonstrates the installation of NetBox v2.10.3 on Ubuntu 20.04
| Redis | 4.0 |
!!! note
Python 3.7 or later will be required in NetBox v2.12. Users are strongly encouraged to install NetBox using Python 3.7 or later for new deployments.
Python 3.7 or later will be required in NetBox v3.0. Users are strongly encouraged to install NetBox using Python 3.7 or later for new deployments.
Below is a simplified overview of the NetBox application stack for reference:

View File

@@ -1,5 +1,92 @@
# NetBox v2.11
## v2.11.9 (2021-07-08)
### Bug Fixes
* [#6456](https://github.com/netbox-community/netbox/issues/6456) - API schema type should be boolean for `_occupied` on cable termination models
* [#6710](https://github.com/netbox-community/netbox/issues/6710) - Fix assignment of VM interface parent via REST API
* [#6714](https://github.com/netbox-community/netbox/issues/6714) - Fix rendering of device type component creation forms
---
## v2.11.8 (2021-07-06)
### Enhancements
* [#5503](https://github.com/netbox-community/netbox/issues/5503) - Annotate short date & time fields with their longer form
* [#6138](https://github.com/netbox-community/netbox/issues/6138) - Add an `empty` filter modifier for character fields
* [#6200](https://github.com/netbox-community/netbox/issues/6200) - Add rack reservations to global search
* [#6368](https://github.com/netbox-community/netbox/issues/6368) - Enable virtual chassis assignment during bulk import of devices
* [#6620](https://github.com/netbox-community/netbox/issues/6620) - Show assigned VMs count under device role view
* [#6666](https://github.com/netbox-community/netbox/issues/6666) - Show management-only status under interface detail view
* [#6667](https://github.com/netbox-community/netbox/issues/6667) - Display VM memory as GB/TB as appropriate
### Bug Fixes
* [#6626](https://github.com/netbox-community/netbox/issues/6626) - Fix site field on VM search form; add site group
* [#6637](https://github.com/netbox-community/netbox/issues/6637) - Fix group assignment in "available VLANs" link under VLAN group view
* [#6640](https://github.com/netbox-community/netbox/issues/6640) - Disallow numeric values in custom text fields
* [#6652](https://github.com/netbox-community/netbox/issues/6652) - Fix exception when adding components in bulk to multiple devices
* [#6676](https://github.com/netbox-community/netbox/issues/6676) - Fix device/VM counts per cluster under cluster type/group views
* [#6680](https://github.com/netbox-community/netbox/issues/6680) - Allow setting custom field values for VM interfaces on initial creation
* [#6695](https://github.com/netbox-community/netbox/issues/6695) - Fix exception when importing device type with invalid front port definition
---
## v2.11.7 (2021-06-16)
### Enhancements
* [#6455](https://github.com/netbox-community/netbox/issues/6455) - Permit /32 IPv4 and /128 IPv6 prefixes
* [#6493](https://github.com/netbox-community/netbox/issues/6493) - Show change log diff for non-atomic (pre-2.11) changes
* [#6564](https://github.com/netbox-community/netbox/issues/6564) - Add N connector type for pass-through ports
* [#6588](https://github.com/netbox-community/netbox/issues/6588) - Add support for webp files as front/rear device type images
* [#6589](https://github.com/netbox-community/netbox/issues/6589) - Standardize breadcrumb navigation for power panels and feeds
### Bug Fixes
* [#6553](https://github.com/netbox-community/netbox/issues/6553) - ProviderNetwork search should match on name
* [#6562](https://github.com/netbox-community/netbox/issues/6562) - Disable ordering of secrets by assigned object
* [#6563](https://github.com/netbox-community/netbox/issues/6563) - Fix filtering by location for cable connection forms
* [#6584](https://github.com/netbox-community/netbox/issues/6584) - Fix ordering of nested inventory items
* [#6602](https://github.com/netbox-community/netbox/issues/6602) - Fix deletion of devices with cables attached
---
## v2.11.6 (2021-06-04)
### Bug Fixes
* [#6544](https://github.com/netbox-community/netbox/issues/6544) - Fix migration error when upgrading with VRF(s) defined
---
## v2.11.5 (2021-06-04)
**NOTE:** This release includes a database migration that calculates and annotates prefix depth. It may impose a noticeable delay on the upgrade process: Users should anticipate roughly one minute of delay per 100 thousand prefixes being updated.
### Enhancements
* [#6087](https://github.com/netbox-community/netbox/issues/6087) - Improved prefix hierarchy rendering
* [#6487](https://github.com/netbox-community/netbox/issues/6487) - Add location filter to cable connection form
* [#6501](https://github.com/netbox-community/netbox/issues/6501) - Expose prefix depth and children on REST API serializer
* [#6527](https://github.com/netbox-community/netbox/issues/6527) - Support Markdown for report descriptions
* [#6540](https://github.com/netbox-community/netbox/issues/6540) - Add a "flat" column to the prefix table
### Bug Fixes
* [#6064](https://github.com/netbox-community/netbox/issues/6064) - Fix object permission assignments for user and group models
* [#6217](https://github.com/netbox-community/netbox/issues/6217) - Disallow passing of string values for integer custom fields
* [#6284](https://github.com/netbox-community/netbox/issues/6284) - Avoid sending redundant webhooks when adding/removing tags
* [#6492](https://github.com/netbox-community/netbox/issues/6492) - Correct tag population in post-change data resulting from REST API changes
* [#6496](https://github.com/netbox-community/netbox/issues/6496) - Fix upgrade script when Python installed in nonstandard path
* [#6502](https://github.com/netbox-community/netbox/issues/6502) - Correct permissions evaluation for running a report via the REST API
* [#6517](https://github.com/netbox-community/netbox/issues/6517) - Fix assignment of user when creating rack reservations via REST API
* [#6525](https://github.com/netbox-community/netbox/issues/6525) - Paginate related IPs table under IP address view
---
## v2.11.4 (2021-05-25)
### Enhancements
@@ -93,7 +180,7 @@
## v2.11.0 (2021-04-16)
**Note:** NetBox v2.11 is the last major release that will support Python 3.6. Beginning with NetBox v2.12, Python 3.7 or later will be required.
**Note:** NetBox v2.11 is the last major release that will support Python 3.6. Beginning with NetBox v3.0, Python 3.7 or later will be required.
### Breaking Changes
@@ -151,7 +238,7 @@ Devices can now be assigned to locations (formerly known as rack groups) within
When exporting a list of objects in NetBox, users now have the option of selecting the "current view". This will render CSV output matching the current configuration of the table being viewed. For example, if you modify the sites list to display only the site name, tenant, and status, the rendered CSV will include only these columns, and they will appear in the order chosen.
The legacy static export behavior has been retained to ensure backward compatibility for dependent integrations. However, users are strongly encouraged to adapt custom export templates where needed as this functionality will be removed in v2.12.
The legacy static export behavior has been retained to ensure backward compatibility for dependent integrations. However, users are strongly encouraged to adapt custom export templates where needed as this functionality will be removed in v3.0.
#### Variable Scope Support for VLAN Groups ([#5284](https://github.com/netbox-community/netbox/issues/5284))

View File

@@ -61,25 +61,30 @@ These lookup expressions can be applied by adding a suffix to the desired field'
Numeric based fields (ASN, VLAN ID, etc) support these lookup expressions:
- `n` - not equal to (negation)
- `lt` - less than
- `lte` - less than or equal
- `gt` - greater than
- `gte` - greater than or equal
| Filter | Description |
|--------|-------------|
| `n` | Not equal to |
| `lt` | Less than |
| `lte` | Less than or equal to |
| `gt` | Greater than |
| `gte` | Greater than or equal to |
### String Fields
String based (char) fields (Name, Address, etc) support these lookup expressions:
- `n` - not equal to (negation)
- `ic` - case insensitive contains
- `nic` - negated case insensitive contains
- `isw` - case insensitive starts with
- `nisw` - negated case insensitive starts with
- `iew` - case insensitive ends with
- `niew` - negated case insensitive ends with
- `ie` - case insensitive exact match
- `nie` - negated case insensitive exact match
| Filter | Description |
|--------|-------------|
| `n` | Not equal to |
| `ic` | Contains (case-insensitive) |
| `nic` | Does not contain (case-insensitive) |
| `isw` | Starts with (case-insensitive) |
| `nisw` | Does not start with (case-insensitive) |
| `iew` | Ends with (case-insensitive) |
| `niew` | Does not end with (case-insensitive) |
| `ie` | Exact match (case-insensitive) |
| `nie` | Inverse exact match (case-insensitive) |
| `empty` | Is empty (boolean) |
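A brief sketch of how these lookup suffixes are applied in practice against the REST API; the host, token, and field values below are placeholders:

```python
import requests

# Hypothetical query: sites whose name starts with "dc" (case-insensitive)
# and whose ASN is at least 65000. Suffixes are appended to the field name
# with a double underscore.
response = requests.get(
    'https://netbox.example.com/api/dcim/sites/',
    headers={'Authorization': 'Token 0123456789abcdef'},
    params={
        'name__isw': 'dc',
        'asn__gte': 65000,
    },
)
print(response.json()['count'])
```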
### Foreign Keys & Other Fields

View File

@@ -104,6 +104,7 @@ class ProviderNetworkFilterSet(PrimaryModelFilterSet):
if not value.strip():
return queryset
return queryset.filter(
Q(name__icontains=value) |
Q(description__icontains=value) |
Q(comments__icontains=value)
).distinct()

View File

@@ -25,6 +25,7 @@ from .nested_serializers import *
class CableTerminationSerializer(serializers.ModelSerializer):
cable_peer_type = serializers.SerializerMethodField(read_only=True)
cable_peer = serializers.SerializerMethodField(read_only=True)
_occupied = serializers.SerializerMethodField(read_only=True)
def get_cable_peer_type(self, obj):
if obj._cable_peer is not None:
@@ -42,6 +43,10 @@ class CableTerminationSerializer(serializers.ModelSerializer):
return serializer(obj._cable_peer, context=context).data
return None
@swagger_serializer_method(serializer_or_field=serializers.BooleanField)
def get__occupied(self, obj):
return obj._occupied
class ConnectedEndpointSerializer(serializers.ModelSerializer):
connected_endpoint_type = serializers.SerializerMethodField(read_only=True)

View File

@@ -246,10 +246,6 @@ class RackReservationViewSet(ModelViewSet):
serializer_class = serializers.RackReservationSerializer
filterset_class = filtersets.RackReservationFilterSet
# Assign user from request
def perform_create(self, serializer):
serializer.save(user=self.request.user)
#
# Manufacturers

View File

@@ -924,6 +924,7 @@ class PortTypeChoices(ChoiceSet):
TYPE_110_PUNCH = '110-punch'
TYPE_BNC = 'bnc'
TYPE_F = 'f'
TYPE_N = 'n'
TYPE_MRJ21 = 'mrj21'
TYPE_ST = 'st'
TYPE_SC = 'sc'
@@ -954,6 +955,7 @@ class PortTypeChoices(ChoiceSet):
(TYPE_110_PUNCH, '110 Punch'),
(TYPE_BNC, 'BNC'),
(TYPE_F, 'F Connector'),
(TYPE_N, 'N Connector'),
(TYPE_MRJ21, 'MRJ21'),
),
),

View File

@@ -2,6 +2,9 @@ from django.db.models import Q
from .choices import InterfaceTypeChoices
# Exclude SVG images (unsupported by PIL)
DEVICETYPE_IMAGE_FORMATS = 'image/bmp,image/gif,image/jpeg,image/png,image/tiff,image/webp'
#
# Racks

View File

@@ -1172,12 +1172,11 @@ class DeviceTypeForm(BootstrapMixin, CustomFieldModelForm):
)
widgets = {
'subdevice_role': StaticSelect2(),
# Exclude SVG images (unsupported by PIL)
'front_image': forms.ClearableFileInput(attrs={
'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
'accept': DEVICETYPE_IMAGE_FORMATS
}),
'rear_image': forms.ClearableFileInput(attrs={
'accept': 'image/bmp,image/gif,image/jpeg,image/png,image/tiff'
'accept': DEVICETYPE_IMAGE_FORMATS
})
}
@@ -1879,8 +1878,7 @@ class FrontPortTemplateImportForm(ComponentTemplateImportForm):
)
rear_port = forms.ModelChoiceField(
queryset=RearPortTemplate.objects.all(),
to_field_name='name',
required=False
to_field_name='name'
)
class Meta:
@@ -2237,6 +2235,12 @@ class BaseDeviceCSVForm(CustomFieldModelCSVForm):
choices=DeviceStatusChoices,
help_text='Operational status'
)
virtual_chassis = CSVModelChoiceField(
queryset=VirtualChassis.objects.all(),
to_field_name='name',
required=False,
help_text='Virtual chassis'
)
cluster = CSVModelChoiceField(
queryset=Cluster.objects.all(),
to_field_name='name',
@@ -2247,6 +2251,10 @@ class BaseDeviceCSVForm(CustomFieldModelCSVForm):
class Meta:
fields = []
model = Device
help_texts = {
'vc_position': 'Virtual chassis position',
'vc_priority': 'Virtual chassis priority',
}
def __init__(self, data=None, *args, **kwargs):
super().__init__(data, *args, **kwargs)
@@ -2285,7 +2293,8 @@ class DeviceCSVForm(BaseDeviceCSVForm):
class Meta(BaseDeviceCSVForm.Meta):
fields = [
'name', 'device_role', 'tenant', 'manufacturer', 'device_type', 'platform', 'serial', 'asset_tag', 'status',
'site', 'location', 'rack', 'position', 'face', 'cluster', 'comments',
'site', 'location', 'rack', 'position', 'face', 'virtual_chassis', 'vc_position', 'vc_priority', 'cluster',
'comments',
]
def __init__(self, data=None, *args, **kwargs):
@@ -2320,7 +2329,7 @@ class ChildDeviceCSVForm(BaseDeviceCSVForm):
class Meta(BaseDeviceCSVForm.Meta):
fields = [
'name', 'device_role', 'tenant', 'manufacturer', 'device_type', 'platform', 'serial', 'asset_tag', 'status',
'parent', 'device_bay', 'cluster', 'comments',
'parent', 'device_bay', 'virtual_chassis', 'vc_position', 'vc_priority', 'cluster', 'comments',
]
def __init__(self, data=None, *args, **kwargs):
@@ -3923,13 +3932,23 @@ class ConnectCableToDeviceForm(BootstrapMixin, CustomFieldModelForm):
'group_id': '$termination_b_site_group',
}
)
termination_b_location = DynamicModelChoiceField(
queryset=Location.objects.all(),
label='Location',
required=False,
null_option='None',
query_params={
'site_id': '$termination_b_site'
}
)
termination_b_rack = DynamicModelChoiceField(
queryset=Rack.objects.all(),
label='Rack',
required=False,
null_option='None',
query_params={
'site_id': '$termination_b_site'
'site_id': '$termination_b_site',
'location_id': '$termination_b_location',
}
)
termination_b_device = DynamicModelChoiceField(
@@ -3938,6 +3957,7 @@ class ConnectCableToDeviceForm(BootstrapMixin, CustomFieldModelForm):
required=False,
query_params={
'site_id': '$termination_b_site',
'location_id': '$termination_b_location',
'rack_id': '$termination_b_rack',
}
)

View File

@@ -290,19 +290,24 @@ class FrontPortTemplate(ComponentTemplateModel):
def clean(self):
super().clean()
# Validate rear port assignment
if self.rear_port.device_type != self.device_type:
raise ValidationError(
"Rear port ({}) must belong to the same device type".format(self.rear_port)
)
try:
# Validate rear port position assignment
if self.rear_port_position > self.rear_port.positions:
raise ValidationError(
"Invalid rear port position ({}); rear port {} has only {} positions".format(
self.rear_port_position, self.rear_port.name, self.rear_port.positions
# Validate rear port assignment
if self.rear_port.device_type != self.device_type:
raise ValidationError(
"Rear port ({}) must belong to the same device type".format(self.rear_port)
)
)
# Validate rear port position assignment
if self.rear_port_position > self.rear_port.positions:
raise ValidationError(
"Invalid rear port position ({}); rear port {} has only {} positions".format(
self.rear_port_position, self.rear_port.name, self.rear_port.positions
)
)
except RearPortTemplate.DoesNotExist:
pass
def instantiate(self, device):
if self.rear_port:

View File

@@ -146,14 +146,12 @@ def nullify_connected_endpoints(instance, **kwargs):
# Disassociate the Cable from its termination points
if instance.termination_a is not None:
logger.debug(f"Nullifying termination A for cable {instance}")
instance.termination_a.cable = None
instance.termination_a._cable_peer = None
instance.termination_a.save()
model = instance.termination_a._meta.model
model.objects.filter(pk=instance.termination_a.pk).update(_cable_peer_type=None, _cable_peer_id=None)
if instance.termination_b is not None:
logger.debug(f"Nullifying termination B for cable {instance}")
instance.termination_b.cable = None
instance.termination_b._cable_peer = None
instance.termination_b.save()
model = instance.termination_b._meta.model
model.objects.filter(pk=instance.termination_b.pk).update(_cable_peer_type=None, _cable_peer_id=None)
# Delete and retrace any dependent cable paths
for cablepath in CablePath.objects.filter(path__contains=instance):

View File

@@ -694,7 +694,7 @@ class InventoryItemTable(DeviceComponentTable):
)
cable = None # Override DeviceComponentTable
class Meta(DeviceComponentTable.Meta):
class Meta(BaseTable.Meta):
model = InventoryItem
fields = (
'pk', 'device', 'name', 'label', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'description',
@@ -715,7 +715,7 @@ class DeviceInventoryItemTable(InventoryItemTable):
buttons=('edit', 'delete')
)
class Meta(DeviceComponentTable.Meta):
class Meta(BaseTable.Meta):
model = InventoryItem
fields = (
'pk', 'name', 'label', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'description', 'discovered',

View File

@@ -349,40 +349,36 @@ class RackReservationTest(APIViewTestCases.APIViewTestCase):
user = User.objects.create(username='user1', is_active=True)
site = Site.objects.create(name='Test Site 1', slug='test-site-1')
cls.racks = (
racks = (
Rack(site=site, name='Rack 1'),
Rack(site=site, name='Rack 2'),
)
Rack.objects.bulk_create(cls.racks)
Rack.objects.bulk_create(racks)
rack_reservations = (
RackReservation(rack=cls.racks[0], units=[1, 2, 3], user=user, description='Reservation #1'),
RackReservation(rack=cls.racks[0], units=[4, 5, 6], user=user, description='Reservation #2'),
RackReservation(rack=cls.racks[0], units=[7, 8, 9], user=user, description='Reservation #3'),
RackReservation(rack=racks[0], units=[1, 2, 3], user=user, description='Reservation #1'),
RackReservation(rack=racks[0], units=[4, 5, 6], user=user, description='Reservation #2'),
RackReservation(rack=racks[0], units=[7, 8, 9], user=user, description='Reservation #3'),
)
RackReservation.objects.bulk_create(rack_reservations)
def setUp(self):
super().setUp()
# We have to set creation data under setUp() because we need access to the test user.
self.create_data = [
cls.create_data = [
{
'rack': self.racks[1].pk,
'rack': racks[1].pk,
'units': [10, 11, 12],
'user': self.user.pk,
'user': user.pk,
'description': 'Reservation #4',
},
{
'rack': self.racks[1].pk,
'rack': racks[1].pk,
'units': [13, 14, 15],
'user': self.user.pk,
'user': user.pk,
'description': 'Reservation #5',
},
{
'rack': self.racks[1].pk,
'rack': racks[1].pk,
'units': [16, 17, 18],
'user': self.user.pk,
'user': user.pk,
'description': 'Reservation #6',
},
]
@@ -1215,8 +1211,9 @@ class InterfaceTest(Mixins.ComponentTraceMixin, APIViewTestCases.APIViewTestCase
{
'device': device.pk,
'name': 'Interface 6',
'type': '1000base-t',
'type': 'virtual',
'mode': InterfaceModeChoices.MODE_TAGGED,
'parent': interfaces[0].pk,
'tagged_vlans': [vlans[0].pk, vlans[1].pk],
'untagged_vlan': vlans[2].pk,
},

View File

@@ -1023,6 +1023,8 @@ class DeviceTestCase(ViewTestCases.PrimaryObjectViewTestCase):
tags = create_tags('Alpha', 'Bravo', 'Charlie')
VirtualChassis.objects.create(name='Virtual Chassis 1')
cls.form_data = {
'device_type': devicetypes[1].pk,
'device_role': deviceroles[1].pk,
@@ -1048,10 +1050,10 @@ class DeviceTestCase(ViewTestCases.PrimaryObjectViewTestCase):
}
cls.csv_data = (
"device_role,manufacturer,device_type,status,name,site,location,rack,position,face",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 4,Site 1,Location 1,Rack 1,10,front",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 5,Site 1,Location 1,Rack 1,20,front",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 6,Site 1,Location 1,Rack 1,30,front",
"device_role,manufacturer,device_type,status,name,site,location,rack,position,face,virtual_chassis,vc_position,vc_priority",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 4,Site 1,Location 1,Rack 1,10,front,Virtual Chassis 1,1,10",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 5,Site 1,Location 1,Rack 1,20,front,Virtual Chassis 1,2,20",
"Device Role 1,Manufacturer 1,Device Type 1,active,Device 6,Site 1,Location 1,Rack 1,30,front,Virtual Chassis 1,3,30",
)
cls.bulk_edit_data = {

View File

@@ -1169,6 +1169,8 @@ class DeviceRoleView(generic.ObjectView):
return {
'devices_table': devices_table,
'device_count': Device.objects.filter(device_role=instance).count(),
'virtualmachine_count': VirtualMachine.objects.filter(role=instance).count(),
}

View File

@@ -239,7 +239,7 @@ class ReportViewSet(ViewSet):
Run a Report identified as "<module>.<script>" and return the pending JobResult as the result
"""
# Check that the user has permission to run reports.
if not request.user.has_perm('extras.run_script'):
if not request.user.has_perm('extras.run_report'):
raise PermissionDenied("This user does not have permission to run reports.")
# Check that at least one RQ worker is running

View File

@@ -5,4 +5,5 @@ class ExtrasConfig(AppConfig):
name = "extras"
def ready(self):
import extras.lookups
import extras.signals

View File

@@ -4,6 +4,7 @@ from django.db.models.signals import m2m_changed, pre_delete, post_save
from extras.signals import _handle_changed_object, _handle_deleted_object
from utilities.utils import curry
from .webhooks import flush_webhooks
@contextmanager
@@ -14,9 +15,11 @@ def change_logging(request):
:param request: WSGIRequest object with a unique `id` set
"""
webhook_queue = []
# Curry signals receivers to pass the current request
handle_changed_object = curry(_handle_changed_object, request)
handle_deleted_object = curry(_handle_deleted_object, request)
handle_changed_object = curry(_handle_changed_object, request, webhook_queue)
handle_deleted_object = curry(_handle_deleted_object, request, webhook_queue)
# Connect our receivers to the post_save and post_delete signals.
post_save.connect(handle_changed_object, dispatch_uid='handle_changed_object')
@@ -30,3 +33,7 @@ def change_logging(request):
post_save.disconnect(handle_changed_object, dispatch_uid='handle_changed_object')
m2m_changed.disconnect(handle_changed_object, dispatch_uid='handle_changed_object')
pre_delete.disconnect(handle_deleted_object, dispatch_uid='handle_deleted_object')
# Flush queued webhooks to RQ
flush_webhooks(webhook_queue)
del webhook_queue
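The `curry` helper used above is assumed to behave like `functools.partial`, pre-binding per-request state onto the signal receivers; a standalone sketch of the pattern (not NetBox code):

```python
from functools import partial
from types import SimpleNamespace

def _handle_changed_object(request, webhook_queue, sender, instance, **kwargs):
    # In NetBox this records an ObjectChange and queues a webhook; here it
    # simply notes which request touched which instance.
    webhook_queue.append((request.id, instance))

request = SimpleNamespace(id='req-1234')
webhook_queue = []

# Bind the per-request state onto the receiver before connecting it, so the
# signal framework only needs to supply sender/instance.
handle_changed_object = partial(_handle_changed_object, request, webhook_queue)
handle_changed_object(sender=None, instance='Site 1')
print(webhook_queue)  # [('req-1234', 'Site 1')]
```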

netbox/extras/lookups.py (new file, 17 lines added)
View File

@@ -0,0 +1,17 @@
from django.db.models import CharField, Lookup
class Empty(Lookup):
"""
Filter on whether a string is empty.
"""
lookup_name = 'empty'
def as_sql(self, qn, connection):
lhs, lhs_params = self.process_lhs(qn, connection)
rhs, rhs_params = self.process_rhs(qn, connection)
params = lhs_params + rhs_params
return 'CAST(LENGTH(%s) AS BOOLEAN) != %s' % (lhs, rhs), params
CharField.register_lookup(Empty)
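Once registered on `CharField`, the lookup behaves like any built-in lookup; a hedged sketch of possible usage (the model and field are illustrative, and the `empty` filter modifier documented above maps onto it):

```python
from dcim.models import Site

# Illustrative ORM usage of the custom lookup registered above, assuming a
# CharField-backed field such as Site.description.
sites_without_description = Site.objects.filter(description__empty=True)
sites_with_description = Site.objects.filter(description__empty=False)
```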

View File

@@ -280,15 +280,15 @@ class CustomField(BigIDModel):
if value not in [None, '']:
# Validate text field
if self.type == CustomFieldTypeChoices.TYPE_TEXT and self.validation_regex:
if not re.match(self.validation_regex, value):
if self.type == CustomFieldTypeChoices.TYPE_TEXT:
if type(value) is not str:
raise ValidationError(f"Value must be a string.")
if self.validation_regex and not re.match(self.validation_regex, value):
raise ValidationError(f"Value must match regex '{self.validation_regex}'")
# Validate integer
if self.type == CustomFieldTypeChoices.TYPE_INTEGER:
try:
int(value)
except ValueError:
if type(value) is not int:
raise ValidationError("Value must be an integer.")
if self.validation_minimum is not None and value < self.validation_minimum:
raise ValidationError(f"Value must be at least {self.validation_minimum}")

View File

@@ -431,9 +431,8 @@ class JournalEntry(ChangeLoggedModel):
verbose_name_plural = 'journal entries'
def __str__(self):
created_date = timezone.localdate(self.created)
created_time = timezone.localtime(self.created)
return f"{date_format(created_date)} - {time_format(created_time)} ({self.get_kind_display()})"
created = timezone.localtime(self.created)
return f"{date_format(created, format='SHORT_DATETIME_FORMAT')} ({self.get_kind_display()})"
def get_absolute_url(self):
return reverse('extras:journalentry', args=[self.pk])

View File

@@ -188,10 +188,10 @@ class ObjectVar(ScriptVariable):
def __init__(self, model, query_params=None, null_option=None, *args, **kwargs):
# TODO: Remove display_field in v2.12
# TODO: Remove display_field in v3.0
if 'display_field' in kwargs:
warnings.warn(
"The 'display_field' parameter has been deprecated, and will be removed in NetBox v2.12. Object "
"The 'display_field' parameter has been deprecated, and will be removed in NetBox v3.0. Object "
"variables will now reference the 'display' attribute available on all model serializers by default."
)
display_field = kwargs.pop('display_field', 'display')

View File

@@ -12,17 +12,27 @@ from prometheus_client import Counter
from .choices import ObjectChangeActionChoices
from .models import CustomField, ObjectChange
from .webhooks import enqueue_webhooks
from .webhooks import enqueue_object, get_snapshots, serialize_for_webhook
#
# Change logging/webhooks
#
def _handle_changed_object(request, sender, instance, **kwargs):
def _handle_changed_object(request, webhook_queue, sender, instance, **kwargs):
"""
Fires when an object is created or updated.
"""
def is_same_object(instance, webhook_data):
return (
ContentType.objects.get_for_model(instance) == webhook_data['content_type'] and
instance.pk == webhook_data['object_id'] and
request.id == webhook_data['request_id']
)
if not hasattr(instance, 'to_objectchange'):
return
m2m_changed = False
# Determine the type of change being made
@@ -53,8 +63,13 @@ def _handle_changed_object(request, sender, instance, **kwargs):
objectchange.request_id = request.id
objectchange.save()
# Enqueue webhooks
enqueue_webhooks(instance, request.user, request.id, action)
# If this is an M2M change, update the previously queued webhook (from post_save)
if m2m_changed and webhook_queue and is_same_object(instance, webhook_queue[-1]):
instance.refresh_from_db() # Ensure that we're working with fresh M2M assignments
webhook_queue[-1]['data'] = serialize_for_webhook(instance)
webhook_queue[-1]['snapshots']['postchange'] = get_snapshots(instance, action)['postchange']
else:
enqueue_object(webhook_queue, instance, request.user, request.id, action)
# Increment metric counters
if action == ObjectChangeActionChoices.ACTION_CREATE:
@@ -68,10 +83,13 @@ def _handle_changed_object(request, sender, instance, **kwargs):
ObjectChange.objects.filter(time__lt=cutoff)._raw_delete(using=DEFAULT_DB_ALIAS)
def _handle_deleted_object(request, sender, instance, **kwargs):
def _handle_deleted_object(request, webhook_queue, sender, instance, **kwargs):
"""
Fires when an object is deleted.
"""
if not hasattr(instance, 'to_objectchange'):
return
# Record an ObjectChange if applicable
if hasattr(instance, 'to_objectchange'):
objectchange = instance.to_objectchange(ObjectChangeActionChoices.ACTION_DELETE)
@@ -80,7 +98,7 @@ def _handle_deleted_object(request, sender, instance, **kwargs):
objectchange.save()
# Enqueue webhooks
enqueue_webhooks(instance, request.user, request.id, ObjectChangeActionChoices.ACTION_DELETE)
enqueue_object(webhook_queue, instance, request.user, request.id, ObjectChangeActionChoices.ACTION_DELETE)
# Increment metric counters
model_deletes.labels(instance._meta.model_name).inc()

View File

@@ -11,8 +11,8 @@ from rest_framework import status
from dcim.models import Site
from extras.choices import ObjectChangeActionChoices
from extras.models import Webhook
from extras.webhooks import enqueue_webhooks, generate_signature
from extras.models import Tag, Webhook
from extras.webhooks import enqueue_object, flush_webhooks, generate_signature
from extras.webhooks_worker import process_webhook
from utilities.testing import APITestCase
@@ -20,11 +20,10 @@ from utilities.testing import APITestCase
class WebhookTest(APITestCase):
def setUp(self):
super().setUp()
self.queue = django_rq.get_queue('default')
self.queue.empty() # Begin each test with an empty queue
self.queue.empty()
@classmethod
def setUpTestData(cls):
@@ -34,38 +33,104 @@ class WebhookTest(APITestCase):
DUMMY_SECRET = "LOOKATMEIMASECRETSTRING"
webhooks = Webhook.objects.bulk_create((
Webhook(name='Site Create Webhook', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers='X-Foo: Bar'),
Webhook(name='Site Update Webhook', type_update=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
Webhook(name='Site Delete Webhook', type_delete=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
Webhook(name='Webhook 1', type_create=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET, additional_headers='X-Foo: Bar'),
Webhook(name='Webhook 2', type_update=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
Webhook(name='Webhook 3', type_delete=True, payload_url=DUMMY_URL, secret=DUMMY_SECRET),
))
for webhook in webhooks:
webhook.content_types.set([site_ct])
Tag.objects.bulk_create((
Tag(name='Foo', slug='foo'),
Tag(name='Bar', slug='bar'),
Tag(name='Baz', slug='baz'),
))
def test_enqueue_webhook_create(self):
# Create an object via the REST API
data = {
'name': 'Test Site',
'slug': 'test-site',
'name': 'Site 1',
'slug': 'site-1',
'tags': [
{'name': 'Foo'},
{'name': 'Bar'},
]
}
url = reverse('dcim-api:site-list')
self.add_permissions('dcim.add_site')
response = self.client.post(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_201_CREATED)
self.assertEqual(Site.objects.count(), 1)
self.assertEqual(Site.objects.first().tags.count(), 2)
# Verify that a job was queued for the object creation webhook
self.assertEqual(self.queue.count, 1)
job = self.queue.jobs[0]
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_create=True))
self.assertEqual(job.kwargs['data']['id'], response.data['id'])
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_CREATE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], response.data['id'])
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data['tags']))
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], 'Site 1')
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Bar', 'Foo'])
def test_enqueue_webhook_bulk_create(self):
# Create multiple objects via the REST API
data = [
{
'name': 'Site 1',
'slug': 'site-1',
'tags': [
{'name': 'Foo'},
{'name': 'Bar'},
]
},
{
'name': 'Site 2',
'slug': 'site-2',
'tags': [
{'name': 'Foo'},
{'name': 'Bar'},
]
},
{
'name': 'Site 3',
'slug': 'site-3',
'tags': [
{'name': 'Foo'},
{'name': 'Bar'},
]
},
]
url = reverse('dcim-api:site-list')
self.add_permissions('dcim.add_site')
response = self.client.post(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_201_CREATED)
self.assertEqual(Site.objects.count(), 3)
self.assertEqual(Site.objects.first().tags.count(), 2)
# Verify that a webhook was queued for each object
self.assertEqual(self.queue.count, 3)
for i, job in enumerate(self.queue.jobs):
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_create=True))
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_CREATE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], response.data[i]['id'])
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data[i]['tags']))
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], response.data[i]['name'])
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Bar', 'Foo'])
def test_enqueue_webhook_update(self):
# Update an object via the REST API
site = Site.objects.create(name='Site 1', slug='site-1')
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
# Update an object via the REST API
data = {
'name': 'Site X',
'comments': 'Updated the site',
'tags': [
{'name': 'Baz'}
]
}
url = reverse('dcim-api:site-detail', kwargs={'pk': site.pk})
self.add_permissions('dcim.change_site')
@@ -76,13 +141,72 @@ class WebhookTest(APITestCase):
self.assertEqual(self.queue.count, 1)
job = self.queue.jobs[0]
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_update=True))
self.assertEqual(job.kwargs['data']['id'], site.pk)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], site.pk)
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data['tags']))
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], 'Site 1')
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], 'Site X')
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Baz'])
def test_enqueue_webhook_bulk_update(self):
sites = (
Site(name='Site 1', slug='site-1'),
Site(name='Site 2', slug='site-2'),
Site(name='Site 3', slug='site-3'),
)
Site.objects.bulk_create(sites)
for site in sites:
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
# Update three objects via the REST API
data = [
{
'id': sites[0].pk,
'name': 'Site X',
'tags': [
{'name': 'Baz'}
]
},
{
'id': sites[1].pk,
'name': 'Site Y',
'tags': [
{'name': 'Baz'}
]
},
{
'id': sites[2].pk,
'name': 'Site Z',
'tags': [
{'name': 'Baz'}
]
},
]
url = reverse('dcim-api:site-list')
self.add_permissions('dcim.change_site')
response = self.client.patch(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
# Verify that a job was queued for the object update webhook
self.assertEqual(self.queue.count, 3)
for i, job in enumerate(self.queue.jobs):
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_update=True))
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_UPDATE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], data[i]['id'])
self.assertEqual(len(job.kwargs['data']['tags']), len(response.data[i]['tags']))
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], sites[i].name)
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
self.assertEqual(job.kwargs['snapshots']['postchange']['name'], response.data[i]['name'])
self.assertEqual(job.kwargs['snapshots']['postchange']['tags'], ['Baz'])
def test_enqueue_webhook_delete(self):
# Delete an object via the REST API
site = Site.objects.create(name='Site 1', slug='site-1')
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
# Delete an object via the REST API
url = reverse('dcim-api:site-detail', kwargs={'pk': site.pk})
self.add_permissions('dcim.delete_site')
response = self.client.delete(url, **self.header)
@@ -92,9 +216,40 @@ class WebhookTest(APITestCase):
self.assertEqual(self.queue.count, 1)
job = self.queue.jobs[0]
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_delete=True))
self.assertEqual(job.kwargs['data']['id'], site.pk)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_DELETE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], site.pk)
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], 'Site 1')
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
def test_enqueue_webhook_bulk_delete(self):
sites = (
Site(name='Site 1', slug='site-1'),
Site(name='Site 2', slug='site-2'),
Site(name='Site 3', slug='site-3'),
)
Site.objects.bulk_create(sites)
for site in sites:
site.tags.set(*Tag.objects.filter(name__in=['Foo', 'Bar']))
# Delete three objects via the REST API
data = [
{'id': site.pk} for site in sites
]
url = reverse('dcim-api:site-list')
self.add_permissions('dcim.delete_site')
response = self.client.delete(url, data, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_204_NO_CONTENT)
# Verify that a job was queued for the deletion of each object
self.assertEqual(self.queue.count, 3)
for i, job in enumerate(self.queue.jobs):
self.assertEqual(job.kwargs['webhook'], Webhook.objects.get(type_delete=True))
self.assertEqual(job.kwargs['event'], ObjectChangeActionChoices.ACTION_DELETE)
self.assertEqual(job.kwargs['model_name'], 'site')
self.assertEqual(job.kwargs['data']['id'], sites[i].pk)
self.assertEqual(job.kwargs['snapshots']['prechange']['name'], sites[i].name)
self.assertEqual(job.kwargs['snapshots']['prechange']['tags'], ['Bar', 'Foo'])
def test_webhooks_worker(self):
@@ -125,13 +280,16 @@ class WebhookTest(APITestCase):
return HttpResponse()
# Enqueue a webhook for processing
webhooks_queue = []
site = Site.objects.create(name='Site 1', slug='site-1')
enqueue_webhooks(
enqueue_object(
webhooks_queue,
instance=site,
user=self.user,
request_id=request_id,
action=ObjectChangeActionChoices.ACTION_CREATE
)
flush_webhooks(webhooks_queue)
# Retrieve the job from queue
job = self.queue.jobs[0]

View File

@@ -202,15 +202,22 @@ class ObjectChangeView(generic.ObjectView):
next_change = objectchanges.filter(time__gt=instance.time).order_by('time').first()
prev_change = objectchanges.filter(time__lt=instance.time).order_by('-time').first()
if instance.prechange_data and instance.postchange_data:
if not instance.prechange_data and instance.action in ['update', 'delete'] and prev_change:
non_atomic_change = True
prechange_data = prev_change.postchange_data
else:
non_atomic_change = False
prechange_data = instance.prechange_data
if prechange_data and instance.postchange_data:
diff_added = shallow_compare_dict(
instance.prechange_data or dict(),
prechange_data or dict(),
instance.postchange_data or dict(),
exclude=['last_updated'],
)
diff_removed = {
x: instance.prechange_data.get(x) for x in diff_added
} if instance.prechange_data else {}
x: prechange_data.get(x) for x in diff_added
} if prechange_data else {}
else:
diff_added = None
diff_removed = None
@@ -221,7 +228,8 @@ class ObjectChangeView(generic.ObjectView):
'next_change': next_change,
'prev_change': prev_change,
'related_changes_table': related_changes_table,
'related_changes_count': related_changes.count()
'related_changes_count': related_changes.count(),
'non_atomic_change': non_atomic_change
}
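For context, the view above now builds diff_added from the post-change snapshot and diff_removed from whichever pre-change data is available, falling back to the previous change's post-change data when the change was non-atomic. A minimal, self-contained sketch of that shallow comparison; the real helper is NetBox's shallow_compare_dict, so treat this stand-in as illustrative only:

    # Illustration only: a stand-in for the shallow comparison the view relies on.
    def shallow_compare(prechange, postchange, exclude=()):
        """Return the keys whose top-level values differ, as {key: new_value}."""
        return {
            key: value
            for key, value in postchange.items()
            if key not in exclude and prechange.get(key) != value
        }

    prechange_data = {'name': 'Site 1', 'status': 'active', 'last_updated': '2021-07-01'}
    postchange_data = {'name': 'Site X', 'status': 'active', 'last_updated': '2021-07-02'}

    diff_added = shallow_compare(prechange_data, postchange_data, exclude=['last_updated'])
    diff_removed = {key: prechange_data.get(key) for key in diff_added}

    assert diff_added == {'name': 'Site X'}
    assert diff_removed == {'name': 'Site 1'}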

View File

@@ -1,5 +1,6 @@
import hashlib
import hmac
from collections import defaultdict
from django.contrib.contenttypes.models import ContentType
from django.utils import timezone
@@ -12,6 +13,26 @@ from .models import Webhook
from .registry import registry
def serialize_for_webhook(instance):
"""
Return a serialized representation of the given instance suitable for use in a webhook.
"""
serializer_class = get_serializer_for_model(instance.__class__)
serializer_context = {
'request': None,
}
serializer = serializer_class(instance, context=serializer_context)
return serializer.data
def get_snapshots(instance, action):
return {
'prechange': getattr(instance, '_prechange_snapshot', None),
'postchange': serialize_object(instance) if action != ObjectChangeActionChoices.ACTION_DELETE else None,
}
def generate_signature(request_body, secret):
"""
Return a cryptographic signature that can be used to verify the authenticity of webhook data.
@@ -24,10 +45,10 @@ def generate_signature(request_body, secret):
return hmac_prep.hexdigest()
def enqueue_webhooks(instance, user, request_id, action):
def enqueue_object(queue, instance, user, request_id, action):
"""
Find Webhook(s) assigned to this instance + action and enqueue them
to be processed
Enqueue a serialized representation of a created/updated/deleted object for the processing of
webhooks once the request has completed.
"""
# Determine whether this type of object supports webhooks
app_label = instance._meta.app_label
@@ -35,41 +56,55 @@ def enqueue_webhooks(instance, user, request_id, action):
if model_name not in registry['model_features']['webhooks'].get(app_label, []):
return
# Retrieve any applicable Webhooks
content_type = ContentType.objects.get_for_model(instance)
action_flag = {
ObjectChangeActionChoices.ACTION_CREATE: 'type_create',
ObjectChangeActionChoices.ACTION_UPDATE: 'type_update',
ObjectChangeActionChoices.ACTION_DELETE: 'type_delete',
}[action]
webhooks = Webhook.objects.filter(content_types=content_type, enabled=True, **{action_flag: True})
queue.append({
'content_type': ContentType.objects.get_for_model(instance),
'object_id': instance.pk,
'event': action,
'data': serialize_for_webhook(instance),
'snapshots': get_snapshots(instance, action),
'username': user.username,
'request_id': request_id
})
if webhooks.exists():
# Get the Model's API serializer class and serialize the object
serializer_class = get_serializer_for_model(instance.__class__)
serializer_context = {
'request': None,
}
serializer = serializer_class(instance, context=serializer_context)
def flush_webhooks(queue):
"""
Flush a list of object representations to RQ for webhook processing.
"""
rq_queue = get_queue('default')
webhooks_cache = {
'type_create': {},
'type_update': {},
'type_delete': {},
}
# Gather pre- and post-change snapshots
snapshots = {
'prechange': getattr(instance, '_prechange_snapshot', None),
'postchange': serialize_object(instance) if action != ObjectChangeActionChoices.ACTION_DELETE else None,
}
for data in queue:
action_flag = {
ObjectChangeActionChoices.ACTION_CREATE: 'type_create',
ObjectChangeActionChoices.ACTION_UPDATE: 'type_update',
ObjectChangeActionChoices.ACTION_DELETE: 'type_delete',
}[data['event']]
content_type = data['content_type']
# Cache applicable Webhooks
if content_type not in webhooks_cache[action_flag]:
webhooks_cache[action_flag][content_type] = Webhook.objects.filter(
**{action_flag: True},
content_types=content_type,
enabled=True
)
webhooks = webhooks_cache[action_flag][content_type]
# Enqueue the webhooks
webhook_queue = get_queue('default')
for webhook in webhooks:
webhook_queue.enqueue(
rq_queue.enqueue(
"extras.webhooks_worker.process_webhook",
webhook=webhook,
model_name=instance._meta.model_name,
event=action,
data=serializer.data,
snapshots=snapshots,
model_name=content_type.model,
event=data['event'],
data=data['data'],
snapshots=data['snapshots'],
timestamp=str(timezone.now()),
username=user.username,
request_id=request_id
username=data['username'],
request_id=data['request_id']
)
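The refactor above splits webhook handling into two phases: enqueue_object() appends a lightweight record to a per-request list as each object is created, updated, or deleted, and flush_webhooks() pushes the whole batch to RQ once the request completes (the test near the top of this compare calls exactly that pair). A NetBox-free sketch of the same accumulate-then-flush pattern; the real flush_webhooks() additionally filters applicable Webhook objects per content type, so the dispatch callable here is only a placeholder:

    def enqueue(queue, event, data):
        # Mirrors enqueue_object(): record just enough to process the webhook later,
        # without touching the task queue during the request itself.
        queue.append({'event': event, 'data': data})

    def flush(queue, dispatch):
        # Mirrors flush_webhooks(): hand every accumulated record to the background
        # worker in one pass once the request has completed.
        for record in queue:
            dispatch(record['event'], record['data'])
        queue.clear()

    webhooks_queue = []                                   # one list per request
    enqueue(webhooks_queue, 'created', {'id': 1, 'name': 'Site 1'})
    enqueue(webhooks_queue, 'updated', {'id': 1, 'name': 'Site X'})
    flush(webhooks_queue, dispatch=lambda event, data: print(event, data))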

View File

@@ -102,10 +102,11 @@ class NestedVLANSerializer(WritableNestedSerializer):
class NestedPrefixSerializer(WritableNestedSerializer):
url = serializers.HyperlinkedIdentityField(view_name='ipam-api:prefix-detail')
family = serializers.IntegerField(read_only=True)
_depth = serializers.IntegerField(read_only=True)
class Meta:
model = models.Prefix
fields = ['id', 'url', 'display', 'family', 'prefix']
fields = ['id', 'url', 'display', 'family', 'prefix', '_depth']
#

View File

@@ -197,12 +197,14 @@ class PrefixSerializer(PrimaryModelSerializer):
vlan = NestedVLANSerializer(required=False, allow_null=True)
status = ChoiceField(choices=PrefixStatusChoices, required=False)
role = NestedRoleSerializer(required=False, allow_null=True)
children = serializers.IntegerField(read_only=True)
_depth = serializers.IntegerField(read_only=True)
class Meta:
model = Prefix
fields = [
'id', 'url', 'display', 'family', 'prefix', 'site', 'vrf', 'tenant', 'vlan', 'status', 'role', 'is_pool',
'description', 'tags', 'custom_fields', 'created', 'last_updated',
'description', 'tags', 'custom_fields', 'created', 'last_updated', 'children', '_depth',
]
read_only_fields = ['family']
@@ -272,7 +274,7 @@ class IPAddressSerializer(PrimaryModelSerializer):
)
assigned_object = serializers.SerializerMethodField(read_only=True)
nat_inside = NestedIPAddressSerializer(required=False, allow_null=True)
nat_outside = NestedIPAddressSerializer(read_only=True)
nat_outside = NestedIPAddressSerializer(required=False, read_only=True)
class Meta:
model = IPAddress
@@ -281,7 +283,7 @@ class IPAddressSerializer(PrimaryModelSerializer):
'assigned_object_id', 'assigned_object', 'nat_inside', 'nat_outside', 'dns_name', 'description', 'tags',
'custom_fields', 'created', 'last_updated',
]
read_only_fields = ['family']
read_only_fields = ['family', 'nat_outside']
@swagger_serializer_method(serializer_or_field=serializers.DictField)
def get_assigned_object(self, obj):

View File

@@ -209,6 +209,12 @@ class PrefixFilterSet(PrimaryModelFilterSet, TenancyFilterSet):
method='search_contains',
label='Prefixes which contain this prefix or IP',
)
depth = MultiValueNumberFilter(
field_name='_depth'
)
children = MultiValueNumberFilter(
field_name='_children'
)
mask_length = django_filters.NumberFilter(
field_name='prefix',
lookup_expr='net_mask_length'
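The two new filters expose the cached _depth and _children columns under the public names depth and children. A quick sketch of querying them over the REST API; the URL and token are placeholders, and the __gt modifier shown is the one exercised by the filterset tests further down in this compare:

    import requests

    NETBOX_URL = "https://netbox.example.com"    # placeholder
    TOKEN = "0123456789abcdef"                   # placeholder API token
    headers = {"Authorization": f"Token {TOKEN}"}

    # Top-level prefixes only (depth of 0)
    top_level = requests.get(
        f"{NETBOX_URL}/api/ipam/prefixes/",
        params={"depth": 0},
        headers=headers,
    ).json()

    # Prefixes that contain at least one child prefix
    parents = requests.get(
        f"{NETBOX_URL}/api/ipam/prefixes/",
        params={"children__gt": 0},
        headers=headers,
    ).json()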

View File

View File

@@ -0,0 +1,27 @@
from django.core.management.base import BaseCommand
from ipam.models import Prefix, VRF
from ipam.utils import rebuild_prefixes
class Command(BaseCommand):
help = "Rebuild the prefix hierarchy (depth and child counts)"
def handle(self, *model_names, **options):
self.stdout.write(f'Rebuilding {Prefix.objects.count()} prefixes...')
# Reset existing counts
Prefix.objects.update(_depth=0, _children=0)
# Rebuild the global table
global_count = Prefix.objects.filter(vrf__isnull=True).count()
self.stdout.write(f'Global: {global_count} prefixes...')
rebuild_prefixes(None)
# Rebuild each VRF
for vrf in VRF.objects.all():
vrf_count = Prefix.objects.filter(vrf=vrf).count()
self.stdout.write(f'VRF {vrf}: {vrf_count} prefixes...')
rebuild_prefixes(vrf.pk)
self.stdout.write(self.style.SUCCESS('Finished.'))
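The command can also be invoked programmatically, e.g. from a maintenance script. The command name below is an assumption, since the file path is not shown in this diff; substitute whatever name the module is saved under in ipam/management/commands/:

    from django.core.management import call_command

    # Assumed command name -- adjust to match the actual module filename
    call_command('rebuild_prefixes')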

View File

@@ -0,0 +1,21 @@
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ipam', '0046_set_vlangroup_scope_types'),
]
operations = [
migrations.AddField(
model_name='prefix',
name='_children',
field=models.PositiveBigIntegerField(default=0, editable=False),
),
migrations.AddField(
model_name='prefix',
name='_depth',
field=models.PositiveSmallIntegerField(default=0, editable=False),
),
]

View File

@@ -0,0 +1,37 @@
import sys
from django.db import migrations
from ipam.utils import rebuild_prefixes
def populate_prefix_hierarchy(apps, schema_editor):
"""
Populate _depth and _children attrs for all Prefixes.
"""
Prefix = apps.get_model('ipam', 'Prefix')
VRF = apps.get_model('ipam', 'VRF')
total_count = Prefix.objects.count()
if 'test' not in sys.argv:
print(f'\nUpdating {total_count} prefixes...')
# Rebuild the global table
rebuild_prefixes(None)
# Iterate through all VRFs, rebuilding each
for vrf in VRF.objects.all():
rebuild_prefixes(vrf.pk)
class Migration(migrations.Migration):
dependencies = [
('ipam', '0047_prefix_depth_children'),
]
operations = [
migrations.RunPython(
code=populate_prefix_hierarchy,
reverse_code=migrations.RunPython.noop
),
]

View File

@@ -293,6 +293,16 @@ class Prefix(PrimaryModel):
blank=True
)
# Cached depth & child counts
_depth = models.PositiveSmallIntegerField(
default=0,
editable=False
)
_children = models.PositiveBigIntegerField(
default=0,
editable=False
)
objects = PrefixQuerySet.as_manager()
csv_headers = [
@@ -306,6 +316,13 @@ class Prefix(PrimaryModel):
ordering = (F('vrf').asc(nulls_first=True), 'prefix', 'pk') # (vrf, prefix) may be non-unique
verbose_name_plural = 'prefixes'
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Cache the original prefix and VRF so we can check if they have changed on post_save
self._prefix = self.prefix
self._vrf = self.vrf
def __str__(self):
return str(self.prefix)
@@ -323,16 +340,6 @@ class Prefix(PrimaryModel):
'prefix': "Cannot create prefix with /0 mask."
})
# Disallow host masks
if self.prefix.version == 4 and self.prefix.prefixlen == 32:
raise ValidationError({
'prefix': "Cannot create host addresses (/32) as prefixes. Create an IPv4 address instead."
})
elif self.prefix.version == 6 and self.prefix.prefixlen == 128:
raise ValidationError({
'prefix': "Cannot create host addresses (/128) as prefixes. Create an IPv6 address instead."
})
# Enforce unique IP space (if applicable)
if (self.vrf is None and settings.ENFORCE_GLOBAL_UNIQUE) or (self.vrf and self.vrf.enforce_unique):
duplicate_prefixes = self.get_duplicates()
@@ -373,6 +380,14 @@ class Prefix(PrimaryModel):
return self.prefix.version
return None
@property
def depth(self):
return self._depth
@property
def children(self):
return self._children
def _set_prefix_length(self, value):
"""
Expose the IPNetwork object's prefixlen attribute on the parent model so that it can be manipulated directly,
@@ -385,6 +400,26 @@ class Prefix(PrimaryModel):
def get_status_class(self):
return PrefixStatusChoices.CSS_CLASSES.get(self.status)
def get_parents(self, include_self=False):
"""
Return all containing Prefixes in the hierarchy.
"""
lookup = 'net_contains_or_equals' if include_self else 'net_contains'
return Prefix.objects.filter(**{
'vrf': self.vrf,
f'prefix__{lookup}': self.prefix
})
def get_children(self, include_self=False):
"""
Return all covered Prefixes in the hierarchy.
"""
lookup = 'net_contained_or_equal' if include_self else 'net_contained'
return Prefix.objects.filter(**{
'vrf': self.vrf,
f'prefix__{lookup}': self.prefix
})
def get_duplicates(self):
return Prefix.objects.filter(vrf=self.vrf, prefix=str(self.prefix)).exclude(pk=self.pk)
@@ -426,8 +461,8 @@ class Prefix(PrimaryModel):
child_ips = netaddr.IPSet([ip.address.ip for ip in self.get_child_ips()])
available_ips = prefix - child_ips
# IPv6, pool, or IPv4 /31 sets are fully usable
if self.family == 6 or self.is_pool or self.prefix.prefixlen == 31:
# IPv6, pool, or IPv4 /31-/32 sets are fully usable
if self.family == 6 or self.is_pool or (self.family == 4 and self.prefix.prefixlen >= 31):
return available_ips
# For "normal" IPv4 prefixes, omit first and last addresses
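The last hunk widens the "fully usable" rule from IPv4 /31 only to /31 and /32, so point-to-point and host prefixes no longer lose addresses to network/broadcast reservations. A small netaddr-only illustration of what that means for a /31; it mirrors the set arithmetic in get_available_ips() above rather than calling into NetBox:

    from netaddr import IPAddress, IPNetwork, IPSet

    prefix = IPNetwork('192.0.2.0/31')
    child_ips = IPSet()                              # no IPs assigned yet

    # /31 (and /32) prefixes are treated as fully usable: nothing is reserved
    # for network or broadcast addresses.
    available = IPSet([prefix]) - child_ips
    assert available.size == 2                       # both 192.0.2.0 and 192.0.2.1
    assert IPAddress('192.0.2.1') in available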

View File

@@ -1,27 +1,32 @@
from django.contrib.contenttypes.models import ContentType
from django.db.models import Q
from django.db.models.expressions import RawSQL
from utilities.querysets import RestrictedQuerySet
class PrefixQuerySet(RestrictedQuerySet):
def annotate_tree(self):
def annotate_hierarchy(self):
"""
Annotate the number of parent and child prefixes for each Prefix. Raw SQL is needed for these subqueries
because we need to cast NULL VRF values to integers for comparison. (NULL != NULL).
Annotate the depth and number of child prefixes for each Prefix. Cast null VRF values to zero for
comparison. (NULL != NULL).
"""
return self.extra(
select={
'parents': 'SELECT COUNT(U0."prefix") AS "c" '
'FROM "ipam_prefix" U0 '
'WHERE (U0."prefix" >> "ipam_prefix"."prefix" '
'AND COALESCE(U0."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
'children': 'SELECT COUNT(U1."prefix") AS "c" '
'FROM "ipam_prefix" U1 '
'WHERE (U1."prefix" << "ipam_prefix"."prefix" '
'AND COALESCE(U1."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
}
return self.annotate(
hierarchy_depth=RawSQL(
'SELECT COUNT(DISTINCT U0."prefix") AS "c" '
'FROM "ipam_prefix" U0 '
'WHERE (U0."prefix" >> "ipam_prefix"."prefix" '
'AND COALESCE(U0."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
()
),
hierarchy_children=RawSQL(
'SELECT COUNT(U1."prefix") AS "c" '
'FROM "ipam_prefix" U1 '
'WHERE (U1."prefix" << "ipam_prefix"."prefix" '
'AND COALESCE(U1."vrf_id", 0) = COALESCE("ipam_prefix"."vrf_id", 0))',
()
)
)
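annotate_hierarchy() is consumed by the signal handlers in the next file to refresh the cached columns; the annotations it adds are plain integers on each row. A minimal usage sketch, assuming it is run inside a NetBox shell (e.g. manage.py nbshell or manage.py shell):

    from ipam.models import Prefix

    for prefix in Prefix.objects.filter(vrf__isnull=True).annotate_hierarchy():
        # hierarchy_depth: number of distinct containing prefixes in the same VRF
        # hierarchy_children: number of contained prefixes in the same VRF
        print(prefix.prefix, prefix.hierarchy_depth, prefix.hierarchy_children)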

View File

@@ -1,9 +1,52 @@
from django.db.models.signals import pre_delete
from django.db.models.signals import post_delete, post_save, pre_delete
from django.dispatch import receiver
from dcim.models import Device
from virtualization.models import VirtualMachine
from .models import IPAddress
from .models import IPAddress, Prefix
def update_parents_children(prefix):
"""
Update the child count on the prefix & its containing (parent) prefixes
"""
parents = prefix.get_parents(include_self=True).annotate_hierarchy()
for parent in parents:
parent._children = parent.hierarchy_children
Prefix.objects.bulk_update(parents, ['_children'], batch_size=100)
def update_children_depth(prefix):
"""
Update the depth on the prefix & its contained (child) prefixes
"""
children = prefix.get_children(include_self=True).annotate_hierarchy()
for child in children:
child._depth = child.hierarchy_depth
Prefix.objects.bulk_update(children, ['_depth'], batch_size=100)
@receiver(post_save, sender=Prefix)
def handle_prefix_saved(instance, created, **kwargs):
# Prefix has changed (or new instance has been created)
if created or instance.vrf != instance._vrf or instance.prefix != instance._prefix:
update_parents_children(instance)
update_children_depth(instance)
# If this is not a new prefix, clean up parent/children of previous prefix
if not created:
old_prefix = Prefix(vrf=instance._vrf, prefix=instance._prefix)
update_parents_children(old_prefix)
update_children_depth(old_prefix)
@receiver(post_delete, sender=Prefix)
def handle_prefix_deleted(instance, **kwargs):
update_parents_children(instance)
update_children_depth(instance)
@receiver(pre_delete, sender=IPAddress)

View File

@@ -15,7 +15,7 @@ AVAILABLE_LABEL = mark_safe('<span class="label label-success">Available</span>'
PREFIX_LINK = """
{% load helpers %}
{% for i in record.parents|as_range %}
{% for i in record.depth|as_range %}
<i class="mdi mdi-circle-small"></i>
{% endfor %}
<a href="{% if record.pk %}{% url 'ipam:prefix' pk=record.pk %}{% else %}{% url 'ipam:prefix_add' %}?prefix={{ record }}{% if object.vrf %}&vrf={{ object.vrf.pk }}{% endif %}{% if object.site %}&site={{ object.site.pk }}{% endif %}{% if object.tenant %}&tenant_group={{ object.tenant.group.pk }}&tenant={{ object.tenant.pk }}{% endif %}{% endif %}">{{ record.prefix }}</a>
@@ -65,7 +65,7 @@ VLAN_LINK = """
{% if record.pk %}
<a href="{{ record.get_absolute_url }}">{{ record.vid }}</a>
{% elif perms.ipam.add_vlan %}
<a href="{% url 'ipam:vlan_add' %}?vid={{ record.vid }}&group={{ vlan_group.pk }}{% if vlan_group.site %}&site={{ vlan_group.site.pk }}{% endif %}" class="btn btn-xs btn-success">{{ record.available }} VLAN{{ record.available|pluralize }} available</a>
<a href="{% url 'ipam:vlan_add' %}?vid={{ record.vid }}{% if record.vlan_group %}&group={{ record.vlan_group.pk }}{% endif %}" class="btn btn-xs btn-success">{{ record.available }} VLAN{{ record.available|pluralize }} available</a>
{% else %}
{{ record.available }} VLAN{{ record.available|pluralize }} available
{% endif %}
@@ -262,6 +262,24 @@ class PrefixTable(BaseTable):
template_code=PREFIX_LINK,
attrs={'td': {'class': 'text-nowrap'}}
)
prefix_flat = tables.Column(
accessor=Accessor('prefix'),
linkify=True,
verbose_name='Prefix (Flat)'
)
depth = tables.Column(
accessor=Accessor('_depth'),
verbose_name='Depth'
)
children = LinkedCountColumn(
accessor=Accessor('_children'),
viewname='ipam:prefix_list',
url_params={
'vrf_id': 'vrf_id',
'within': 'prefix',
},
verbose_name='Children'
)
status = ChoiceFieldColumn(
default=AVAILABLE_LABEL
)
@@ -287,7 +305,8 @@ class PrefixTable(BaseTable):
class Meta(BaseTable.Meta):
model = Prefix
fields = (
'pk', 'prefix', 'status', 'children', 'vrf', 'tenant', 'site', 'vlan', 'role', 'is_pool', 'description',
'pk', 'prefix', 'prefix_flat', 'status', 'depth', 'children', 'vrf', 'tenant', 'site', 'vlan', 'role',
'is_pool', 'description',
)
default_columns = ('pk', 'prefix', 'status', 'vrf', 'tenant', 'site', 'vlan', 'role', 'description')
row_attrs = {
@@ -300,15 +319,14 @@ class PrefixDetailTable(PrefixTable):
accessor='get_utilization',
orderable=False
)
tenant = TenantColumn()
tags = TagColumn(
url_name='ipam:prefix_list'
)
class Meta(PrefixTable.Meta):
fields = (
'pk', 'prefix', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role', 'is_pool',
'description', 'tags',
'pk', 'prefix', 'prefix_flat', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role',
'is_pool', 'description', 'tags',
)
default_columns = (
'pk', 'prefix', 'status', 'children', 'vrf', 'utilization', 'tenant', 'site', 'vlan', 'role', 'description',

View File

@@ -186,7 +186,7 @@ class RoleTest(APIViewTestCases.APIViewTestCase):
class PrefixTest(APIViewTestCases.APIViewTestCase):
model = Prefix
brief_fields = ['display', 'family', 'id', 'prefix', 'url']
brief_fields = ['_depth', 'display', 'family', 'id', 'prefix', 'url']
create_data = [
{
'prefix': '192.168.4.0/24',

View File

@@ -400,7 +400,8 @@ class PrefixTestCase(TestCase, ChangeLoggedFilterSetTests):
Prefix(prefix='10.0.0.0/16'),
Prefix(prefix='2001:db8::/32'),
)
Prefix.objects.bulk_create(prefixes)
for prefix in prefixes:
prefix.save()
def test_family(self):
params = {'family': '6'}
@@ -431,6 +432,18 @@ class PrefixTestCase(TestCase, ChangeLoggedFilterSetTests):
params = {'contains': '2001:db8:0:1::/64'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_depth(self):
params = {'depth': '0'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 8)
params = {'depth__gt': '0'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_children(self):
params = {'children': '0'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 8)
params = {'children__gt': '0'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_mask_length(self):
params = {'mask_length': '24'}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 4)

View File

@@ -1,4 +1,4 @@
import netaddr
from netaddr import IPNetwork, IPSet
from django.core.exceptions import ValidationError
from django.test import TestCase, override_settings
@@ -10,27 +10,27 @@ class TestAggregate(TestCase):
def test_get_utilization(self):
rir = RIR.objects.create(name='RIR 1', slug='rir-1')
aggregate = Aggregate(prefix=netaddr.IPNetwork('10.0.0.0/8'), rir=rir)
aggregate = Aggregate(prefix=IPNetwork('10.0.0.0/8'), rir=rir)
aggregate.save()
# 25% utilization
Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/12')),
Prefix(prefix=netaddr.IPNetwork('10.16.0.0/12')),
Prefix(prefix=netaddr.IPNetwork('10.32.0.0/12')),
Prefix(prefix=netaddr.IPNetwork('10.48.0.0/12')),
Prefix(prefix=IPNetwork('10.0.0.0/12')),
Prefix(prefix=IPNetwork('10.16.0.0/12')),
Prefix(prefix=IPNetwork('10.32.0.0/12')),
Prefix(prefix=IPNetwork('10.48.0.0/12')),
))
self.assertEqual(aggregate.get_utilization(), 25)
# 50% utilization
Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.64.0.0/10')),
Prefix(prefix=IPNetwork('10.64.0.0/10')),
))
self.assertEqual(aggregate.get_utilization(), 50)
# 100% utilization
Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.128.0.0/9')),
Prefix(prefix=IPNetwork('10.128.0.0/9')),
))
self.assertEqual(aggregate.get_utilization(), 100)
@@ -39,9 +39,9 @@ class TestPrefix(TestCase):
def test_get_duplicates(self):
prefixes = Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24')),
Prefix(prefix=IPNetwork('192.0.2.0/24')),
Prefix(prefix=IPNetwork('192.0.2.0/24')),
Prefix(prefix=IPNetwork('192.0.2.0/24')),
))
duplicate_prefix_pks = [p.pk for p in prefixes[0].get_duplicates()]
@@ -54,11 +54,11 @@ class TestPrefix(TestCase):
VRF(name='VRF 3'),
))
prefixes = Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER),
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/24'), vrf=None),
Prefix(prefix=netaddr.IPNetwork('10.0.1.0/24'), vrf=vrfs[0]),
Prefix(prefix=netaddr.IPNetwork('10.0.2.0/24'), vrf=vrfs[1]),
Prefix(prefix=netaddr.IPNetwork('10.0.3.0/24'), vrf=vrfs[2]),
Prefix(prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER),
Prefix(prefix=IPNetwork('10.0.0.0/24'), vrf=None),
Prefix(prefix=IPNetwork('10.0.1.0/24'), vrf=vrfs[0]),
Prefix(prefix=IPNetwork('10.0.2.0/24'), vrf=vrfs[1]),
Prefix(prefix=IPNetwork('10.0.3.0/24'), vrf=vrfs[2]),
))
child_prefix_pks = {p.pk for p in prefixes[0].get_child_prefixes()}
@@ -79,13 +79,13 @@ class TestPrefix(TestCase):
VRF(name='VRF 3'),
))
parent_prefix = Prefix.objects.create(
prefix=netaddr.IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
)
ips = IPAddress.objects.bulk_create((
IPAddress(address=netaddr.IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(address=netaddr.IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(address=netaddr.IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(address=netaddr.IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
IPAddress(address=IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
))
child_ip_pks = {p.pk for p in parent_prefix.get_child_ips()}
@@ -102,16 +102,16 @@ class TestPrefix(TestCase):
def test_get_available_prefixes(self):
prefixes = Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16')), # Parent prefix
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/20')),
Prefix(prefix=netaddr.IPNetwork('10.0.32.0/20')),
Prefix(prefix=netaddr.IPNetwork('10.0.128.0/18')),
Prefix(prefix=IPNetwork('10.0.0.0/16')), # Parent prefix
Prefix(prefix=IPNetwork('10.0.0.0/20')),
Prefix(prefix=IPNetwork('10.0.32.0/20')),
Prefix(prefix=IPNetwork('10.0.128.0/18')),
))
missing_prefixes = netaddr.IPSet([
netaddr.IPNetwork('10.0.16.0/20'),
netaddr.IPNetwork('10.0.48.0/20'),
netaddr.IPNetwork('10.0.64.0/18'),
netaddr.IPNetwork('10.0.192.0/18'),
missing_prefixes = IPSet([
IPNetwork('10.0.16.0/20'),
IPNetwork('10.0.48.0/20'),
IPNetwork('10.0.64.0/18'),
IPNetwork('10.0.192.0/18'),
])
available_prefixes = prefixes[0].get_available_prefixes()
@@ -119,17 +119,17 @@ class TestPrefix(TestCase):
def test_get_available_ips(self):
parent_prefix = Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.0.0/28'))
parent_prefix = Prefix.objects.create(prefix=IPNetwork('10.0.0.0/28'))
IPAddress.objects.bulk_create((
IPAddress(address=netaddr.IPNetwork('10.0.0.1/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.3/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.5/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.7/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.9/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.11/26')),
IPAddress(address=netaddr.IPNetwork('10.0.0.13/26')),
IPAddress(address=IPNetwork('10.0.0.1/26')),
IPAddress(address=IPNetwork('10.0.0.3/26')),
IPAddress(address=IPNetwork('10.0.0.5/26')),
IPAddress(address=IPNetwork('10.0.0.7/26')),
IPAddress(address=IPNetwork('10.0.0.9/26')),
IPAddress(address=IPNetwork('10.0.0.11/26')),
IPAddress(address=IPNetwork('10.0.0.13/26')),
))
missing_ips = netaddr.IPSet([
missing_ips = IPSet([
'10.0.0.2/32',
'10.0.0.4/32',
'10.0.0.6/32',
@@ -145,39 +145,39 @@ class TestPrefix(TestCase):
def test_get_first_available_prefix(self):
prefixes = Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/16')), # Parent prefix
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/24')),
Prefix(prefix=netaddr.IPNetwork('10.0.1.0/24')),
Prefix(prefix=netaddr.IPNetwork('10.0.2.0/24')),
Prefix(prefix=IPNetwork('10.0.0.0/16')), # Parent prefix
Prefix(prefix=IPNetwork('10.0.0.0/24')),
Prefix(prefix=IPNetwork('10.0.1.0/24')),
Prefix(prefix=IPNetwork('10.0.2.0/24')),
))
self.assertEqual(prefixes[0].get_first_available_prefix(), netaddr.IPNetwork('10.0.3.0/24'))
self.assertEqual(prefixes[0].get_first_available_prefix(), IPNetwork('10.0.3.0/24'))
Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.3.0/24'))
self.assertEqual(prefixes[0].get_first_available_prefix(), netaddr.IPNetwork('10.0.4.0/22'))
Prefix.objects.create(prefix=IPNetwork('10.0.3.0/24'))
self.assertEqual(prefixes[0].get_first_available_prefix(), IPNetwork('10.0.4.0/22'))
def test_get_first_available_ip(self):
parent_prefix = Prefix.objects.create(prefix=netaddr.IPNetwork('10.0.0.0/24'))
parent_prefix = Prefix.objects.create(prefix=IPNetwork('10.0.0.0/24'))
IPAddress.objects.bulk_create((
IPAddress(address=netaddr.IPNetwork('10.0.0.1/24')),
IPAddress(address=netaddr.IPNetwork('10.0.0.2/24')),
IPAddress(address=netaddr.IPNetwork('10.0.0.3/24')),
IPAddress(address=IPNetwork('10.0.0.1/24')),
IPAddress(address=IPNetwork('10.0.0.2/24')),
IPAddress(address=IPNetwork('10.0.0.3/24')),
))
self.assertEqual(parent_prefix.get_first_available_ip(), '10.0.0.4/24')
IPAddress.objects.create(address=netaddr.IPNetwork('10.0.0.4/24'))
IPAddress.objects.create(address=IPNetwork('10.0.0.4/24'))
self.assertEqual(parent_prefix.get_first_available_ip(), '10.0.0.5/24')
def test_get_utilization(self):
# Container Prefix
prefix = Prefix.objects.create(
prefix=netaddr.IPNetwork('10.0.0.0/24'),
prefix=IPNetwork('10.0.0.0/24'),
status=PrefixStatusChoices.STATUS_CONTAINER
)
Prefix.objects.bulk_create((
Prefix(prefix=netaddr.IPNetwork('10.0.0.0/26')),
Prefix(prefix=netaddr.IPNetwork('10.0.0.128/26')),
Prefix(prefix=IPNetwork('10.0.0.0/26')),
Prefix(prefix=IPNetwork('10.0.0.128/26')),
))
self.assertEqual(prefix.get_utilization(), 50)
@@ -186,7 +186,7 @@ class TestPrefix(TestCase):
prefix.save()
IPAddress.objects.bulk_create(
# Create 32 IPAddresses within the Prefix
[IPAddress(address=netaddr.IPNetwork('10.0.0.{}/24'.format(i))) for i in range(1, 33)]
[IPAddress(address=IPNetwork('10.0.0.{}/24'.format(i))) for i in range(1, 33)]
)
self.assertEqual(prefix.get_utilization(), 12) # ~= 12%
@@ -196,36 +196,234 @@ class TestPrefix(TestCase):
@override_settings(ENFORCE_GLOBAL_UNIQUE=False)
def test_duplicate_global(self):
Prefix.objects.create(prefix=netaddr.IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24'))
Prefix.objects.create(prefix=IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(prefix=IPNetwork('192.0.2.0/24'))
self.assertIsNone(duplicate_prefix.clean())
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_global_unique(self):
Prefix.objects.create(prefix=netaddr.IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(prefix=netaddr.IPNetwork('192.0.2.0/24'))
Prefix.objects.create(prefix=IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(prefix=IPNetwork('192.0.2.0/24'))
self.assertRaises(ValidationError, duplicate_prefix.clean)
def test_duplicate_vrf(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=False)
Prefix.objects.create(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
Prefix.objects.create(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
self.assertIsNone(duplicate_prefix.clean())
def test_duplicate_vrf_unique(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=True)
Prefix.objects.create(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(vrf=vrf, prefix=netaddr.IPNetwork('192.0.2.0/24'))
Prefix.objects.create(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
duplicate_prefix = Prefix(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
self.assertRaises(ValidationError, duplicate_prefix.clean)
class TestPrefixHierarchy(TestCase):
"""
Test the automatic updating of depth and child count in response to changes made within
the prefix hierarchy.
"""
@classmethod
def setUpTestData(cls):
prefixes = (
# IPv4
Prefix(prefix='10.0.0.0/8', _depth=0, _children=2),
Prefix(prefix='10.0.0.0/16', _depth=1, _children=1),
Prefix(prefix='10.0.0.0/24', _depth=2, _children=0),
# IPv6
Prefix(prefix='2001:db8::/32', _depth=0, _children=2),
Prefix(prefix='2001:db8::/40', _depth=1, _children=1),
Prefix(prefix='2001:db8::/48', _depth=2, _children=0),
)
Prefix.objects.bulk_create(prefixes)
def test_create_prefix4(self):
# Create 10.0.0.0/12
Prefix(prefix='10.0.0.0/12').save()
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)
def test_create_prefix6(self):
# Create 2001:db8::/36
Prefix(prefix='2001:db8::/36').save()
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)
def test_update_prefix4(self):
# Change 10.0.0.0/24 to 10.0.0.0/12
p = Prefix.objects.get(prefix='10.0.0.0/24')
p.prefix = '10.0.0.0/12'
p.save()
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)
def test_update_prefix6(self):
# Change 2001:db8::/48 to 2001:db8::/36
p = Prefix.objects.get(prefix='2001:db8::/48')
p.prefix = '2001:db8::/36'
p.save()
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)
def test_update_prefix_vrf4(self):
vrf = VRF(name='VRF A')
vrf.save()
# Move 10.0.0.0/16 to a VRF
p = Prefix.objects.get(prefix='10.0.0.0/16')
p.vrf = vrf
p.save()
prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)
def test_update_prefix_vrf6(self):
vrf = VRF(name='VRF A')
vrf.save()
# Move 2001:db8::/40 to a VRF
p = Prefix.objects.get(prefix='2001:db8::/40')
p.vrf = vrf
p.save()
prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)
def test_delete_prefix4(self):
# Delete 10.0.0.0/16
Prefix.objects.filter(prefix='10.0.0.0/16').delete()
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
def test_delete_prefix6(self):
# Delete 2001:db8::/40
Prefix.objects.filter(prefix='2001:db8::/40').delete()
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
def test_duplicate_prefix4(self):
# Duplicate 10.0.0.0/16
Prefix(prefix='10.0.0.0/16').save()
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)
def test_duplicate_prefix6(self):
# Duplicate 2001:db8::/40
Prefix(prefix='2001:db8::/40').save()
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)
class TestIPAddress(TestCase):
def test_get_duplicates(self):
ips = IPAddress.objects.bulk_create((
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=netaddr.IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
IPAddress(address=IPNetwork('192.0.2.1/24')),
))
duplicate_ip_pks = [p.pk for p in ips[0].get_duplicates()]
@@ -237,44 +435,44 @@ class TestIPAddress(TestCase):
@override_settings(ENFORCE_GLOBAL_UNIQUE=False)
def test_duplicate_global(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertIsNone(duplicate_ip.clean())
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_global_unique(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)
def test_duplicate_vrf(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=False)
IPAddress.objects.create(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertIsNone(duplicate_ip.clean())
def test_duplicate_vrf_unique(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=True)
IPAddress.objects.create(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_nonrole_role(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
self.assertRaises(ValidationError, duplicate_ip.clean)
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_role_nonrole(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
duplicate_ip = IPAddress(address=netaddr.IPNetwork('192.0.2.1/24'))
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
duplicate_ip = IPAddress(address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)
@override_settings(ENFORCE_GLOBAL_UNIQUE=True)
def test_duplicate_nonunique_role(self):
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=netaddr.IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
IPAddress.objects.create(address=IPNetwork('192.0.2.1/24'), role=IPAddressRoleChoices.ROLE_VIP)
class TestVLANGroup(TestCase):

View File

@@ -68,26 +68,102 @@ def add_available_ipaddresses(prefix, ipaddress_list, is_pool=False):
return output
def add_available_vlans(vlan_group, vlans):
def add_available_vlans(vlans, vlan_group=None):
"""
Create fake records for all gaps between used VLANs
"""
if not vlans:
return [{'vid': VLAN_VID_MIN, 'available': VLAN_VID_MAX - VLAN_VID_MIN + 1}]
return [{
'vid': VLAN_VID_MIN,
'vlan_group': vlan_group,
'available': VLAN_VID_MAX - VLAN_VID_MIN + 1
}]
prev_vid = VLAN_VID_MAX
new_vlans = []
for vlan in vlans:
if vlan.vid - prev_vid > 1:
new_vlans.append({'vid': prev_vid + 1, 'available': vlan.vid - prev_vid - 1})
new_vlans.append({
'vid': prev_vid + 1,
'vlan_group': vlan_group,
'available': vlan.vid - prev_vid - 1,
})
prev_vid = vlan.vid
if vlans[0].vid > VLAN_VID_MIN:
new_vlans.append({'vid': VLAN_VID_MIN, 'available': vlans[0].vid - VLAN_VID_MIN})
new_vlans.append({
'vid': VLAN_VID_MIN,
'vlan_group': vlan_group,
'available': vlans[0].vid - VLAN_VID_MIN,
})
if prev_vid < VLAN_VID_MAX:
new_vlans.append({'vid': prev_vid + 1, 'available': VLAN_VID_MAX - prev_vid})
new_vlans.append({
'vid': prev_vid + 1,
'vlan_group': vlan_group,
'available': VLAN_VID_MAX - prev_vid,
})
vlans = list(vlans) + new_vlans
vlans.sort(key=lambda v: v.vid if type(v) == VLAN else v['vid'])
return vlans
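As a concrete illustration of the gap-filling above: with VLANs 1 and 5 defined in a group, two placeholder records are interleaved. The snippet below is a simplified standalone recomputation (the real records also carry the vlan_group key added in this change), assuming the usual bounds VLAN_VID_MIN = 1 and VLAN_VID_MAX = 4094:

    VLAN_VID_MIN, VLAN_VID_MAX = 1, 4094
    used_vids = [1, 5]

    gaps = []
    prev = None
    for vid in used_vids:
        if prev is not None and vid - prev > 1:
            gaps.append({'vid': prev + 1, 'available': vid - prev - 1})
        prev = vid
    if used_vids[0] > VLAN_VID_MIN:
        gaps.append({'vid': VLAN_VID_MIN, 'available': used_vids[0] - VLAN_VID_MIN})
    if prev < VLAN_VID_MAX:
        gaps.append({'vid': prev + 1, 'available': VLAN_VID_MAX - prev})

    assert gaps == [
        {'vid': 2, 'available': 3},       # VIDs 2-4 are free
        {'vid': 6, 'available': 4089},    # VIDs 6-4094 are free
    ]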
def rebuild_prefixes(vrf):
"""
Rebuild the prefix hierarchy for all prefixes in the specified VRF (or global table).
"""
def contains(parent, child):
return child in parent and child != parent
def push_to_stack(prefix):
# Increment child count on parent nodes
for n in stack:
n['children'] += 1
stack.append({
'pk': [prefix['pk']],
'prefix': prefix['prefix'],
'children': 0,
})
stack = []
update_queue = []
prefixes = Prefix.objects.filter(vrf=vrf).values('pk', 'prefix')
# Iterate through all Prefixes in the VRF, growing and shrinking the stack as we go
for i, p in enumerate(prefixes):
# Grow the stack if this is a child of the most recent prefix
if not stack or contains(stack[-1]['prefix'], p['prefix']):
push_to_stack(p)
# Handle duplicate prefixes
elif stack[-1]['prefix'] == p['prefix']:
stack[-1]['pk'].append(p['pk'])
# If this is a sibling or parent of the most recent prefix, pop nodes from the
# stack until we reach a parent prefix (or the root)
else:
while stack and not contains(stack[-1]['prefix'], p['prefix']):
node = stack.pop()
for pk in node['pk']:
update_queue.append(
Prefix(pk=pk, _depth=len(stack), _children=node['children'])
)
push_to_stack(p)
# Flush the update queue once it reaches 100 Prefixes
if len(update_queue) >= 100:
Prefix.objects.bulk_update(update_queue, ['_depth', '_children'])
update_queue = []
# Clear out any prefixes remaining in the stack
while stack:
node = stack.pop()
for pk in node['pk']:
update_queue.append(
Prefix(pk=pk, _depth=len(stack), _children=node['children'])
)
# Final flush of any remaining Prefixes
Prefix.objects.bulk_update(update_queue, ['_depth', '_children'])
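For intuition, the single pass above walks prefixes in (vrf, prefix) order, pushing each prefix onto the stack while it is contained by the stack top and popping (recording depth = stack size, children = accumulated descendant count) once a non-contained prefix appears. A standalone sketch of that walk over plain netaddr networks, reproducing the cached values used by the TestPrefixHierarchy IPv4 fixtures further down; it is an illustration of the algorithm, not the production code path:

    from netaddr import IPNetwork

    prefixes = [IPNetwork(p) for p in ('10.0.0.0/8', '10.0.0.0/16', '10.0.0.0/24')]

    stack, results = [], {}
    for p in prefixes:
        # Pop until the stack top contains (and is not equal to) the current prefix
        while stack and not (p in stack[-1]['prefix'] and p != stack[-1]['prefix']):
            node = stack.pop()
            results[str(node['prefix'])] = (len(stack), node['children'])
        # Every prefix still on the stack gains one more descendant
        for node in stack:
            node['children'] += 1
        stack.append({'prefix': p, 'children': 0})
    while stack:
        node = stack.pop()
        results[str(node['prefix'])] = (len(stack), node['children'])

    assert results == {
        '10.0.0.0/8': (0, 2),    # depth 0, two descendants
        '10.0.0.0/16': (1, 1),
        '10.0.0.0/24': (2, 0),
    }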

View File

@@ -145,7 +145,6 @@ class RIRListView(generic.ObjectListView):
filterset = filtersets.RIRFilterSet
filterset_form = forms.RIRFilterForm
table = tables.RIRTable
template_name = 'ipam/rir_list.html'
class RIRView(generic.ObjectView):
@@ -238,7 +237,7 @@ class AggregateView(generic.ObjectView):
'site', 'role'
).order_by(
'prefix'
).annotate_tree()
)
# Add available prefixes to the table if requested
if request.GET.get('show_available', 'true') == 'true':
@@ -352,7 +351,7 @@ class RoleBulkDeleteView(generic.BulkDeleteView):
#
class PrefixListView(generic.ObjectListView):
queryset = Prefix.objects.annotate_tree()
queryset = Prefix.objects.all()
filterset = filtersets.PrefixFilterSet
filterset_form = forms.PrefixFilterForm
table = tables.PrefixDetailTable
@@ -377,7 +376,7 @@ class PrefixView(generic.ObjectView):
prefix__net_contains=str(instance.prefix)
).prefetch_related(
'site', 'role'
).annotate_tree()
)
parent_prefix_table = tables.PrefixTable(list(parent_prefixes), orderable=False)
parent_prefix_table.exclude = ('vrf',)
@@ -407,7 +406,7 @@ class PrefixPrefixesView(generic.ObjectView):
# Child prefixes table
child_prefixes = instance.get_child_prefixes().restrict(request.user, 'view').prefetch_related(
'site', 'vlan', 'role',
).annotate_tree()
)
# Add available prefixes to the table if requested
if child_prefixes and request.GET.get('show_available', 'true') == 'true':
@@ -522,7 +521,7 @@ class IPAddressView(generic.ObjectView):
# Parent prefixes table
parent_prefixes = Prefix.objects.restrict(request.user, 'view').filter(
vrf=instance.vrf,
prefix__net_contains=str(instance.address.ip)
prefix__net_contains_or_equals=str(instance.address.ip)
).prefetch_related(
'site', 'role'
)
@@ -551,6 +550,7 @@ class IPAddressView(generic.ObjectView):
vrf=instance.vrf, address__net_contained_or_equal=str(instance.address)
)
related_ips_table = tables.IPAddressTable(related_ips, orderable=False)
paginate_table(related_ips_table, request)
return {
'parent_prefixes_table': parent_prefixes_table,
@@ -675,7 +675,7 @@ class VLANGroupView(generic.ObjectView):
Prefetch('prefixes', queryset=Prefix.objects.restrict(request.user))
).order_by('vid')
vlans_count = vlans.count()
vlans = add_available_vlans(instance, vlans)
vlans = add_available_vlans(vlans, vlan_group=instance)
vlans_table = tables.VLANDetailTable(vlans)
if request.user.has_perm('ipam.change_vlan') or request.user.has_perm('ipam.delete_vlan'):

View File

@@ -4,12 +4,12 @@ from circuits.filtersets import CircuitFilterSet, ProviderFilterSet, ProviderNet
from circuits.models import Circuit, ProviderNetwork, Provider
from circuits.tables import CircuitTable, ProviderNetworkTable, ProviderTable
from dcim.filtersets import (
CableFilterSet, DeviceFilterSet, DeviceTypeFilterSet, PowerFeedFilterSet, RackFilterSet, LocationFilterSet,
SiteFilterSet, VirtualChassisFilterSet,
CableFilterSet, DeviceFilterSet, DeviceTypeFilterSet, PowerFeedFilterSet, RackFilterSet, RackReservationFilterSet,
LocationFilterSet, SiteFilterSet, VirtualChassisFilterSet,
)
from dcim.models import Cable, Device, DeviceType, PowerFeed, Rack, Location, Site, VirtualChassis
from dcim.models import Cable, Device, DeviceType, Location, PowerFeed, Rack, RackReservation, Site, VirtualChassis
from dcim.tables import (
CableTable, DeviceTable, DeviceTypeTable, PowerFeedTable, RackTable, LocationTable, SiteTable,
CableTable, DeviceTable, DeviceTypeTable, PowerFeedTable, RackTable, RackReservationTable, LocationTable, SiteTable,
VirtualChassisTable,
)
from ipam.filtersets import AggregateFilterSet, IPAddressFilterSet, PrefixFilterSet, VLANFilterSet, VRFFilterSet
@@ -64,6 +64,12 @@ SEARCH_TYPES = OrderedDict((
'table': RackTable,
'url': 'dcim:rack_list',
}),
('rackreservation', {
'queryset': RackReservation.objects.prefetch_related('site', 'rack', 'user'),
'filterset': RackReservationFilterSet,
'table': RackReservationTable,
'url': 'dcim:rackreservation_list',
}),
('location', {
'queryset': Location.objects.add_related_count(
Location.objects.all(),

View File

@@ -89,13 +89,13 @@ class BaseFilterSet(django_filters.FilterSet):
filters.MultiValueNumberFilter,
filters.MultiValueTimeFilter
)):
lookup_map = FILTER_NUMERIC_BASED_LOOKUP_MAP
return FILTER_NUMERIC_BASED_LOOKUP_MAP
elif isinstance(existing_filter, (
filters.TreeNodeMultipleChoiceFilter,
)):
# TreeNodeMultipleChoiceFilter supports only negation but must maintain the `in` lookup expression
lookup_map = FILTER_TREENODE_NEGATION_LOOKUP_MAP
return FILTER_TREENODE_NEGATION_LOOKUP_MAP
elif isinstance(existing_filter, (
django_filters.ModelChoiceFilter,
@@ -103,7 +103,7 @@ class BaseFilterSet(django_filters.FilterSet):
TagFilter
)) or existing_filter.extra.get('choices'):
# These filter types support only negation
lookup_map = FILTER_NEGATION_LOOKUP_MAP
return FILTER_NEGATION_LOOKUP_MAP
elif isinstance(existing_filter, (
django_filters.filters.CharFilter,
@@ -111,12 +111,9 @@ class BaseFilterSet(django_filters.FilterSet):
filters.MultiValueCharFilter,
filters.MultiValueMACAddressFilter
)):
lookup_map = FILTER_CHAR_BASED_LOOKUP_MAP
return FILTER_CHAR_BASED_LOOKUP_MAP
else:
lookup_map = None
return lookup_map
return None
@classmethod
def get_filters(cls):

View File

@@ -11,12 +11,13 @@ OBJ_TYPE_CHOICES = (
('DCIM', (
('site', 'Sites'),
('rack', 'Racks'),
('rackreservation', 'Rack reservations'),
('location', 'Locations'),
('devicetype', 'Device types'),
('device', 'Devices'),
('virtualchassis', 'Virtual Chassis'),
('virtualchassis', 'Virtual chassis'),
('cable', 'Cables'),
('powerfeed', 'Power Feeds'),
('powerfeed', 'Power feeds'),
)),
('IPAM', (
('vrf', 'VRFs'),

View File

@@ -16,7 +16,7 @@ from django.core.validators import URLValidator
# Environment setup
#
VERSION = '2.11.4'
VERSION = '2.11.9'
# Hostname
HOSTNAME = platform.node()
@@ -29,10 +29,10 @@ if platform.python_version_tuple() < ('3', '6'):
raise RuntimeError(
"NetBox requires Python 3.6 or higher (current: Python {})".format(platform.python_version())
)
# TODO: Remove in NetBox v2.12
# TODO: Remove in NetBox v3.0
if platform.python_version_tuple() < ('3', '7'):
warnings.warn(
"Support for Python 3.6 will be dropped in NetBox v2.12. Please upgrade to Python 3.7 or later at your "
"Support for Python 3.6 will be dropped in NetBox v3.0. Please upgrade to Python 3.7 or later at your "
"earliest convenience."
)

View File

@@ -18,7 +18,7 @@ from django_tables2.export import TableExport
from extras.models import CustomField, ExportTemplate
from utilities.error_handlers import handle_protectederror
from utilities.exceptions import AbortTransaction
from utilities.exceptions import AbortTransaction, PermissionsViolation
from utilities.forms import (
BootstrapMixin, BulkRenameForm, ConfirmationForm, CSVDataField, ImportForm, TableConfigForm, restrict_form_fields,
)
@@ -290,7 +290,8 @@ class ObjectEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
obj = form.save()
# Check that the new object conforms with any assigned object-level permissions
self.queryset.get(pk=obj.pk)
if not self.queryset.filter(pk=obj.pk).first():
raise PermissionsViolation()
msg = '{} {}'.format(
'Created' if object_created else 'Modified',
@@ -318,7 +319,7 @@ class ObjectEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
else:
return redirect(self.get_return_url(request, obj))
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object save failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -480,7 +481,7 @@ class BulkCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
# Enforce object-level permissions
if self.queryset.filter(pk__in=[obj.pk for obj in new_objs]).count() != len(new_objs):
raise ObjectDoesNotExist
raise PermissionsViolation
# If we make it to this point, validation has succeeded on all new objects.
msg = "Added {} {}".format(len(new_objs), model._meta.verbose_name_plural)
@@ -494,7 +495,7 @@ class BulkCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
except IntegrityError:
pass
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object creation failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -565,7 +566,8 @@ class ObjectImportView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
obj = model_form.save()
# Enforce object-level permissions
self.queryset.get(pk=obj.pk)
if not self.queryset.filter(pk=obj.pk).first():
raise PermissionsViolation()
logger.debug(f"Created {obj} (PK: {obj.pk})")
@@ -601,7 +603,7 @@ class ObjectImportView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
except AbortTransaction:
pass
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object creation failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -712,7 +714,7 @@ class BulkImportView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
# Enforce object-level permissions
if self.queryset.filter(pk__in=[obj.pk for obj in new_objs]).count() != len(new_objs):
raise ObjectDoesNotExist
raise PermissionsViolation
# Compile a table containing the imported objects
obj_table = self.table(new_objs)
@@ -730,7 +732,7 @@ class BulkImportView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
except ValidationError:
pass
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object import failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -774,9 +776,7 @@ class BulkEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
# If we are editing *all* objects in the queryset, replace the PK list with all matched objects.
if request.POST.get('_all') and self.filterset is not None:
pk_list = [
obj.pk for obj in self.filterset(request.GET, self.queryset.only('pk')).qs
]
pk_list = self.filterset(request.GET, self.queryset.values_list('pk', flat=True)).qs
else:
pk_list = request.POST.getlist('pk')
@@ -847,7 +847,7 @@ class BulkEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
# Enforce object-level permissions
if self.queryset.filter(pk__in=[obj.pk for obj in updated_objects]).count() != len(updated_objects):
raise ObjectDoesNotExist
raise PermissionsViolation
if updated_objects:
msg = 'Updated {} {}'.format(len(updated_objects), model._meta.verbose_name_plural)
@@ -859,7 +859,7 @@ class BulkEditView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
except ValidationError as e:
messages.error(self.request, "{} failed validation: {}".format(obj, e))
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object update failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -954,7 +954,7 @@ class BulkRenameView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
# Enforce constrained permissions
if self.queryset.filter(pk__in=renamed_pks).count() != len(selected_objects):
raise ObjectDoesNotExist
raise PermissionsViolation
messages.success(request, "Renamed {} {}".format(
len(selected_objects),
@@ -962,7 +962,7 @@ class BulkRenameView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View):
))
return redirect(self.get_return_url(request))
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Object update failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -1148,7 +1148,7 @@ class ComponentCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View
# Enforce object-level permissions
if self.queryset.filter(pk__in=[obj.pk for obj in new_objs]).count() != len(new_objs):
raise ObjectDoesNotExist
raise PermissionsViolation
messages.success(request, "Added {} {}".format(
len(new_components), self.queryset.model._meta.verbose_name_plural
@@ -1158,7 +1158,7 @@ class ComponentCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin, View
else:
return redirect(self.get_return_url(request))
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Component creation failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
@@ -1231,7 +1231,7 @@ class BulkComponentCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin,
component_form = self.model_form(component_data)
if component_form.is_valid():
instance = component_form.save()
logger.debug(f"Created {instance} on {instance.parent}")
logger.debug(f"Created {instance} on {instance.parent_object}")
new_components.append(instance)
else:
for field, errors in component_form.errors.as_data().items():
@@ -1240,12 +1240,12 @@ class BulkComponentCreateView(GetReturnURLMixin, ObjectPermissionRequiredMixin,
# Enforce object-level permissions
if self.queryset.filter(pk__in=[obj.pk for obj in new_components]).count() != len(new_components):
raise ObjectDoesNotExist
raise PermissionsViolation
except IntegrityError:
pass
except ObjectDoesNotExist:
except PermissionsViolation:
msg = "Component creation failed due to object-level permissions violation"
logger.debug(msg)
form.add_error(None, msg)
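A minimal sketch of the pattern these view hunks converge on, assuming a configured Django environment: objects are saved inside an atomic transaction, the permission-restricted queryset is re-queried, and the new PermissionsViolation exception (replacing the previous use of ObjectDoesNotExist) is raised to roll everything back when any saved object is not visible. The function name and arguments below are illustrative, not NetBox's actual API.

    from django.db import transaction


    class PermissionsViolation(Exception):
        """Raised when a save would exceed the user's object-level permissions."""


    def save_all_or_abort(restricted_queryset, new_objs):
        # Hypothetical helper mirroring the checks in the hunks above.
        with transaction.atomic():
            for obj in new_objs:
                obj.save()
            # Objects the user may not create are excluded from the restricted
            # queryset, so a count mismatch signals a violation; raising here
            # aborts the transaction and discards every save.
            visible = restricted_queryset.filter(pk__in=[o.pk for o in new_objs]).count()
            if visible != len(new_objs):
                raise PermissionsViolation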

View File

@@ -37,6 +37,7 @@ class SecretTable(BaseTable):
)
assigned_object = tables.Column(
linkify=True,
orderable=False,
verbose_name='Assigned object'
)
role = tables.Column(

View File

@@ -92,7 +92,7 @@ def inject_deprecation_warning(request):
"""
messages.warning(
request,
mark_safe('<i class="mdi mdi-alert"></i> The secrets functionality will be moved to a plugin in NetBox v2.12. '
mark_safe('<i class="mdi mdi-alert"></i> The secrets functionality will be moved to a plugin in NetBox v3.0. '
'Please see <a href="https://github.com/netbox-community/netbox/issues/5278">issue #5278</a> for '
'more information.')
)

View File

@@ -67,14 +67,14 @@
<p class="text-muted">{{ settings.HOSTNAME }} (v{{ settings.VERSION }})</p>
</div>
<div class="col-xs-4 text-center">
<p class="text-muted">{% now 'Y-m-d H:i:s T' %}</p>
<p class="text-muted">{% annotated_now %} {% now 'T' %}</p>
</div>
<div class="col-xs-4 text-right noprint">
<p class="text-muted">
<i class="mdi mdi-book-open-page-variant text-primary"></i> <a href="https://netbox.readthedocs.io/">Docs</a> &middot;
<i class="mdi mdi-cloud-braces text-primary"></i> <a href="{% url 'api_docs' %}">API</a> &middot;
<i class="mdi mdi-xml text-primary"></i> <a href="https://github.com/netbox-community/netbox">Code</a> &middot;
<i class="mdi mdi-lifebuoy text-primary"></i> <a href="https://github.com/netbox-community/netbox/wiki">Help</a>
<i class="mdi mdi-slack text-primary"></i> <a href="https://netdev.chat/">Community</a>
</p>
</div>
</div>

View File

@@ -51,7 +51,7 @@
</tr>
<tr>
<td>Install Date</td>
<td>{{ object.install_date|placeholder }}</td>
<td>{{ object.install_date|annotated_date|placeholder }}</td>
</tr>
<tr>
<td>Commit Rate</td>

View File

@@ -35,13 +35,13 @@
<div class="form-group">
<label class="col-md-3 control-label required">Region</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.site.region }}</p>
<p class="form-control-static">{{ termination_a.device.site.region|placeholder }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Site Group</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.site.group }}</p>
<p class="form-control-static">{{ termination_a.device.site.group|placeholder }}</p>
</div>
</div>
<div class="form-group">
@@ -50,10 +50,16 @@
<p class="form-control-static">{{ termination_a.device.site }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Location</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.location|placeholder }}</p>
</div>
</div>
<div class="form-group">
<label class="col-md-3 control-label required">Rack</label>
<div class="col-md-9">
<p class="form-control-static">{{ termination_a.device.rack|default:"None" }}</p>
<p class="form-control-static">{{ termination_a.device.rack|placeholder }}</p>
</div>
</div>
<div class="form-group">

View File

@@ -1,4 +1,5 @@
{% extends 'base.html' %}
{% load helpers %}
{% load form_helpers %}
{% block title %}Create {{ component_type }}{% endblock %}
@@ -18,19 +19,34 @@
{% endif %}
<div class="panel panel-default">
<div class="panel-heading">
<strong>{{ component_type|title }}</strong>
<strong>{{ component_type|bettertitle }}</strong>
</div>
<div class="panel-body">
{% render_form form %}
{% for field in form.hidden_fields %}
{{ field }}
{% endfor %}
{% for field in form.visible_fields %}
{% if not form.custom_fields or field.name not in form.custom_fields %}
{% render_field field %}
{% endif %}
{% endfor %}
</div>
</div>
<div class="form-group">
{% if form.custom_fields %}
<div class="panel panel-default">
<div class="panel-heading"><strong>Custom Fields</strong></div>
<div class="panel-body">
{% render_custom_fields form %}
</div>
</div>
{% endif %}
<div class="form-group">
<div class="col-md-9 col-md-offset-3">
<button type="submit" name="_create" class="btn btn-primary">Create</button>
<button type="submit" name="_addanother" class="btn btn-primary">Create and Add More</button>
<a href="{{ return_url }}" class="btn btn-default">Cancel</a>
</div>
</div>
</div>
</div>
</div>
</form>

View File

@@ -42,7 +42,17 @@
<tr>
<td>Devices</td>
<td>
<a href="{% url 'dcim:device_list' %}?role_id={{ object.pk }}">{{ devices_table.rows|length }}</a>
<a href="{% url 'dcim:device_list' %}?role_id={{ object.pk }}">{{ device_count }}</a>
</td>
</tr>
<tr>
<td>Virtual Machines</td>
<td>
{% if object.vm_role %}
<a href="{% url 'virtualization:virtualmachine_list' %}?role_id={{ object.pk }}">{{ virtualmachine_count }}</a>
{% else %}
&mdash;
{% endif %}
</td>
</tr>
</table>

View File

@@ -54,6 +54,16 @@
{% endif %}
</td>
</tr>
<tr>
<td>Management Only</td>
<td>
{% if object.mgmt_only %}
<span class="text-success"><i class="mdi mdi-check-bold"></i></span>
{% else %}
<span class="text-danger"><i class="mdi mdi-close"></i></span>
{% endif %}
</td>
</tr>
<tr>
<td>Parent</td>
<td>
@@ -88,7 +98,7 @@
</tr>
<tr>
<td>802.1Q Mode</td>
<td>{{ object.get_mode_display }}</td>
<td>{{ object.get_mode_display|placeholder }}</td>
</tr>
</table>
</div>

View File

@@ -7,10 +7,10 @@
{% block breadcrumbs %}
<li><a href="{% url 'dcim:powerfeed_list' %}">Power Feeds</a></li>
<li><a href="{{ object.power_panel.site.get_absolute_url }}">{{ object.power_panel.site }}</a></li>
<li><a href="{{ object.power_panel.get_absolute_url }}">{{ object.power_panel }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?site_id={{ object.power_panel.site.pk }}">{{ object.power_panel.site }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?power_panel_id={{ object.power_panel.pk }}">{{ object.power_panel }}</a></li>
{% if object.rack %}
<li><a href="{{ object.rack.get_absolute_url }}">{{ object.rack }}</a></li>
<li><a href="{% url 'dcim:powerfeed_list' %}?rack_id={{ object.rack.pk }}">{{ object.rack }}</a></li>
{% endif %}
<li>{{ object }}</li>
{% endblock %}

View File

@@ -5,7 +5,7 @@
{% block breadcrumbs %}
<li><a href="{% url 'dcim:powerpanel_list' %}">Power Panels</a></li>
<li><a href="{{ object.site.get_absolute_url }}">{{ object.site }}</a></li>
<li><a href="{% url 'dcim:powerpanel_list' %}?site_id={{ object.site.pk }}">{{ object.site }}</a></li>
{% if object.location %}
<li><a href="{{ object.location.get_absolute_url }}">{{ object.location }}</a></li>
{% endif %}

View File

@@ -268,7 +268,7 @@
</td>
<td>
{{ resv.description }}<br />
<small>{{ resv.user }} &middot; {{ resv.created }}</small>
<small>{{ resv.user }} &middot; {{ resv.created|annotated_date }}</small>
</td>
<td class="text-right noprint">
{% if perms.dcim.change_rackreservation %}

View File

@@ -80,7 +80,11 @@
<td>
{% if object.time_zone %}
{{ object.time_zone }} (UTC {{ object.time_zone|tzoffset }})<br />
<small class="text-muted">Site time: {% timezone object.time_zone %}{% now "SHORT_DATETIME_FORMAT" %}{% endtimezone %}</small>
<small class="text-muted">Site time:
{% timezone object.time_zone %}
{% annotated_now %}
{% endtimezone %}
</small>
{% else %}
<span class="text-muted">&mdash;</span>
{% endif %}

View File

@@ -25,7 +25,7 @@
<tr>
<td>Created</td>
<td>
{{ object.created }}
{{ object.created|annotated_date }}
</td>
</tr>
<tr>

View File

@@ -44,7 +44,7 @@
<tr>
<td>Time</td>
<td>
{{ object.time }}
{{ object.time|annotated_date }}
</td>
</tr>
<tr>
@@ -128,6 +128,8 @@
<span{% if k in diff_removed %} style="background-color: #ffdce0"{% endif %}>{{ k }}: {{ v|render_json }}</span>
{% endspaceless %}
{% endfor %}</pre>
{% elif non_atomic_change %}
Warning: Comparing non-atomic change to previous change record (<a href="{% url 'extras:objectchange' pk=prev_change.pk %}">{{ prev_change.pk }}</a>)
{% else %}
<span class="text-muted">None</span>
{% endif %}

View File

@@ -29,7 +29,7 @@
{% endif %}
<h1 class="title">{{ report.name }}</h1>
{% if report.description %}
<p class="lead">{{ report.description }}</p>
<p class="lead">{{ report.description|render_markdown }}</p>
{% endif %}
{% endblock %}
@@ -38,7 +38,7 @@
<div class="col-md-12">
{% if report.result %}
Last run: <a href="{% url 'extras:report_result' job_result_pk=report.result.pk %}">
<strong>{{ report.result.created }}</strong>
<strong>{{ report.result.created|annotated_date }}</strong>
</a>
{% endif %}
</div>

View File

@@ -29,10 +29,10 @@
<td>
{% include 'extras/inc/job_label.html' with result=report.result %}
</td>
<td>{{ report.description|placeholder }}</td>
<td class="rendered-markdown">{{ report.description|render_markdown|placeholder }}</td>
<td class="text-right">
{% if report.result %}
<a href="{% url 'extras:report_result' job_result_pk=report.result.pk %}">{{ report.result.created }}</a>
<a href="{% url 'extras:report_result' job_result_pk=report.result.pk %}">{{ report.result.created|annotated_date }}</a>
{% else %}
<span class="text-muted">Never</span>
{% endif %}

View File

@@ -8,7 +8,7 @@
<div class="row">
<div class="col-md-12">
<p>
Run: <strong>{{ result.created }}</strong>
Run: <strong>{{ result.created|annotated_date }}</strong>
{% if result.completed %}
Duration: <strong>{{ result.duration }}</strong>
{% else %}

View File

@@ -29,7 +29,7 @@
<td>{{ script.Meta.description|render_markdown }}</td>
{% if script.result %}
<td class="text-right">
<a href="{% url 'extras:script_result' job_result_pk=script.result.pk %}">{{ script.result.created }}</a>
<a href="{% url 'extras:script_result' job_result_pk=script.result.pk %}">{{ script.result.created|annotated_date }}</a>
</td>
{% else %}
<td class="text-right text-muted">Never</td>

View File

@@ -13,7 +13,7 @@
<li><a href="{% url 'extras:script_list' %}">Scripts</a></li>
<li><a href="{% url 'extras:script_list' %}#module.{{ script.module }}">{{ script.module|bettertitle }}</a></li>
<li><a href="{% url 'extras:script' module=script.module name=class_name %}">{{ script }}</a></li>
<li>{{ result.created }}</li>
<li>{{ result.created|annotated_date }}</li>
</ol>
</div>
</div>
@@ -32,7 +32,7 @@
</ul>
<div class="tab-content">
<p>
Run: <strong>{{ result.created }}</strong>
Run: <strong>{{ result.created|annotated_date }}</strong>
{% if result.completed %}
Duration: <strong>{{ result.duration }}</strong>
{% else %}

View File

@@ -42,8 +42,8 @@
<h1 class="title">{% block title %}{{ object }}{% endblock %}</h1>
<p>
<small class="text-muted">
Created {{ object.created }} &middot;
Updated <span title="{{ object.last_updated }}">{{ object.last_updated|timesince }}</span> ago
Created {{ object.created|annotated_date }} &middot;
Updated {{ object.last_updated|annotated_date }} ({{ object.last_updated|timesince }} ago)
</small>
<span class="label label-default">{{ object|meta:"app_label" }}.{{ object|meta:"model_name" }}:{{ object.pk }}</span>
</p>

View File

@@ -29,58 +29,58 @@
{% block sidebar %}{% endblock %}
</div>
{% endif %}
{% with bulk_edit_url=content_type.model_class|validated_viewname:"bulk_edit" bulk_delete_url=content_type.model_class|validated_viewname:"bulk_delete" %}
{% if permissions.change or permissions.delete %}
<form method="post" class="form form-horizontal">
{% csrf_token %}
<input type="hidden" name="return_url" value="{% if return_url %}{{ return_url }}{% else %}{{ request.path }}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}{% endif %}" />
{% if table.paginator.num_pages > 1 %}
<div id="select_all_box" class="hidden panel panel-default noprint">
<div class="panel-body">
<div class="checkbox-inline">
<label for="select_all">
<input type="checkbox" id="select_all" name="_all" />
Select <strong>all {{ table.rows|length }} {{ table.data.verbose_name_plural }}</strong> matching query
</label>
</div>
<div class="pull-right">
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit All
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete All
</button>
{% endif %}
<div class="table-responsive">
{% with bulk_edit_url=content_type.model_class|validated_viewname:"bulk_edit" bulk_delete_url=content_type.model_class|validated_viewname:"bulk_delete" %}
{% if permissions.change or permissions.delete %}
<form method="post" class="form form-horizontal">
{% csrf_token %}
<input type="hidden" name="return_url" value="{% if return_url %}{{ return_url }}{% else %}{{ request.path }}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}{% endif %}" />
{% if table.paginator.num_pages > 1 %}
<div id="select_all_box" class="hidden panel panel-default noprint">
<div class="panel-body">
<div class="checkbox-inline">
<label for="select_all">
<input type="checkbox" id="select_all" name="_all" />
Select <strong>all {{ table.rows|length }} {{ table.data.verbose_name_plural }}</strong> matching query
</label>
</div>
<div class="pull-right">
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm" disabled="disabled">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit All
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm" disabled="disabled">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete All
</button>
{% endif %}
</div>
</div>
</div>
{% endif %}
{% render_table table 'inc/table.html' %}
<div class="pull-left noprint">
{% block bulk_buttons %}{% endblock %}
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit Selected
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete Selected
</button>
{% endif %}
</div>
{% endif %}
</form>
{% else %}
<div class="table-responsive">
{% render_table table 'inc/table.html' %}
</div>
<div class="pull-left noprint">
{% block bulk_buttons %}{% endblock %}
{% if bulk_edit_url and permissions.change %}
<button type="submit" name="_edit" formaction="{% url bulk_edit_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-warning btn-sm">
<span class="mdi mdi-pencil" aria-hidden="true"></span> Edit Selected
</button>
{% endif %}
{% if bulk_delete_url and permissions.delete %}
<button type="submit" name="_delete" formaction="{% url bulk_delete_url %}{% if request.GET %}?{{ request.GET.urlencode }}{% endif %}" class="btn btn-danger btn-sm">
<span class="mdi mdi-trash-can-outline" aria-hidden="true"></span> Delete Selected
</button>
{% endif %}
</div>
</form>
{% else %}
<div class="table-responsive">
{% render_table table 'inc/table.html' %}
</div>
{% endif %}
{% endwith %}
{% endif %}
{% endwith %}
</div>
{% include 'inc/paginator.html' with paginator=table.paginator page=table.page %}
<div class="clearfix"></div>
</div>

View File

@@ -291,7 +291,7 @@
{% for result in report_results %}
<tr>
<td><a href="{% url 'extras:report_result' job_result_pk=result.pk %}">{{ result.name }}</a></td>
<td class="text-right"><span title="{{ result.created }}">{% include 'extras/inc/job_label.html' %}</span></td>
<td class="text-right"><span title="{{ result.created|date:'SHORT_DATETIME_FORMAT' }}">{% include 'extras/inc/job_label.html' %}</span></td>
</tr>
{% endfor %}
</table>

View File

@@ -1,3 +1,4 @@
{% load helpers %}
{% if images %}
<table class="table table-hover panel-body">
<tr>
@@ -13,7 +14,7 @@
<a class="image-preview" href="{{ attachment.image.url }}" target="_blank">{{ attachment }}</a>
</td>
<td>{{ attachment.size|filesizeformat }}</td>
<td>{{ attachment.created }}</td>
<td>{{ attachment.created|annotated_date }}</td>
<td class="text-right noprint">
{% if perms.extras.change_imageattachment %}
<a href="{% url 'extras:imageattachment_edit' pk=attachment.pk %}" class="btn btn-warning btn-xs" title="Edit image">

View File

@@ -54,7 +54,7 @@
</tr>
<tr>
<td>Date Added</td>
<td>{{ object.date_added|placeholder }}</td>
<td>{{ object.date_added|annotated_date|placeholder }}</td>
</tr>
<tr>
<td>Description</td>

View File

@@ -2,6 +2,26 @@
{% load helpers %}
{% block buttons %}
<div class="btn-group" role="group">
<div class="dropdown">
<button class="btn btn-default dropdown-toggle" type="button" id="max_length" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">
Max Depth{% if "depth__lte" in request.GET %}: {{ request.GET.depth__lte }}{% endif %}
<span class="caret"></span>
</button>
<ul class="dropdown-menu" aria-labelledby="max_length">
{% if request.GET.depth__lte %}
<li>
<a href="{% url 'ipam:prefix_list' %}{% querystring request depth__lte=None page=1 %}">Clear</a>
</li>
{% endif %}
{% for i in 16|as_range %}
<li><a href="{% url 'ipam:prefix_list' %}{% querystring request depth__lte=i page=1 %}">
{{ i }} {% if request.GET.depth__lte == i %}<i class="mdi mdi-check-bold"></i>{% endif %}
</a></li>
{% endfor %}
</ul>
</div>
</div>
<div class="btn-group" role="group">
<div class="dropdown">
<button class="btn btn-default dropdown-toggle" type="button" id="max_length" data-toggle="dropdown" aria-haspopup="true" aria-expanded="true">

View File

@@ -1,23 +0,0 @@
{% extends 'generic/object_list.html' %}
{% block buttons %}
{% if request.GET.family == '6' %}
<a href="{% url 'ipam:rir_list' %}" class="btn btn-default">
<span class="mdi mdi-table" aria-hidden="true"></span>
IPv4 Stats
</a>
{% else %}
<a href="{% url 'ipam:rir_list' %}?family=6{% if request.GET %}&{{ request.GET.urlencode }}{% endif %}" class="btn btn-default">
<span class="mdi mdi-table" aria-hidden="true"></span>
IPv6 Stats
</a>
{% endif %}
{% endblock %}
{% block sidebar %}
{% if request.GET.family == '6' %}
<div class="alert alert-info">
<i class="mdi mdi-information-outline"></i> Numbers shown indicate /64 prefixes.
</div>
{% endif %}
{% endblock %}

View File

@@ -24,12 +24,12 @@
<div class="row">
<div class="col-md-4">
<small class="text-muted">Created</small><br />
<span title="{{ token.created }}">{{ token.created|date }}</span>
{{ token.created|annotated_date }}
</div>
<div class="col-md-4">
<small class="text-muted">Expires</small><br />
{% if token.expires %}
<span title="{{ token.expires }}">{{ token.expires|date }}</span>
{{ token.expires|annotated_date }}
{% else %}
<span>Never</span>
{% endif %}

View File

@@ -11,7 +11,7 @@
<small class="text-muted">Email</small>
<h5>{{ request.user.email }}</h5>
<small class="text-muted">Registered</small>
<h5>{{ request.user.date_joined }}</h5>
<h5>{{ request.user.date_joined|annotated_date }}</h5>
<small class="text-muted">Groups</small>
<h5>{{ request.user.groups.all|join:', ' }}</h5>
<small class="text-muted">Admin access</small>

View File

@@ -1,4 +1,5 @@
{% extends 'users/base.html' %}
{% load helpers %}
{% block title %}User Key{% endblock %}
@@ -19,7 +20,9 @@
{% endif %}
</h4>
<p>
<small class="text-muted">Created {{ object.created }} &middot; Updated <span title="{{ object.last_updated }}">{{ object.last_updated|timesince }}</span> ago</small>
<small class="text-muted">
Created {{ object.created|annotated_date }} &middot;
Updated {{ object.last_updated|annotated_date }} ({{ object.last_updated|timesince }} ago)
</p>
{% if not object.is_active %}
<div class="alert alert-warning" role="alert">
@@ -37,7 +40,7 @@
</a>
</div>
<h4>Session key: <span class="label label-success">Active</span></h4>
<small class="text-muted">Created {{ object.session_key.created }}</small>
<small class="text-muted">Created {{ object.session_key.created|annotated_date }}</small>
{% else %}
<h4>No active session key</h4>
{% endif %}

View File

@@ -138,7 +138,7 @@
<td><i class="mdi mdi-chip"></i> Memory</td>
<td>
{% if object.memory %}
{{ object.memory }} MB
<span title="{{ object.memory }} MB">{{ object.memory|humanize_megabytes }}</span>
{% else %}
<span class="text-muted">&mdash;</span>
{% endif %}

View File

@@ -1,38 +0,0 @@
{% extends 'base.html' %}
{% load helpers %}
{% load form_helpers %}
{% block title %}Create {{ component_type }}{% endblock %}
{% block content %}
<form action="" method="post" class="form form-horizontal">
{% csrf_token %}
<div class="row">
<div class="col-md-6 col-md-offset-3">
{% if form.non_field_errors %}
<div class="panel panel-danger">
<div class="panel-heading"><strong>Errors</strong></div>
<div class="panel-body">
{{ form.non_field_errors }}
</div>
</div>
{% endif %}
<div class="panel panel-default">
<div class="panel-heading">
<strong>{{ component_type|bettertitle }}</strong>
</div>
<div class="panel-body">
{% render_form form %}
</div>
</div>
<div class="form-group">
<div class="col-md-9 col-md-offset-3">
<button type="submit" name="_create" class="btn btn-primary">Create</button>
<button type="submit" name="_addanother" class="btn btn-primary">Create and Add More</button>
<a href="{{ return_url }}" class="btn btn-default">Cancel</a>
</div>
</div>
</div>
</div>
</form>
{% endblock %}

View File

@@ -7,7 +7,7 @@ from django.core.exceptions import FieldError, ValidationError
from utilities.forms.fields import ContentTypeMultipleChoiceField
from .constants import *
from .models import AdminGroup, AdminUser, ObjectPermission, Token, UserConfig
from .models import ObjectPermission, Token, UserConfig
#
@@ -39,11 +39,11 @@ class ObjectPermissionInline(admin.TabularInline):
class GroupObjectPermissionInline(ObjectPermissionInline):
model = AdminGroup.object_permissions.through
model = Group.object_permissions.through
class UserObjectPermissionInline(ObjectPermissionInline):
model = AdminUser.object_permissions.through
model = User.object_permissions.through
class UserConfigInline(admin.TabularInline):
@@ -62,7 +62,7 @@ admin.site.unregister(Group)
admin.site.unregister(User)
@admin.register(AdminGroup)
@admin.register(Group)
class GroupAdmin(admin.ModelAdmin):
fields = ('name',)
list_display = ('name', 'user_count')
@@ -75,7 +75,7 @@ class GroupAdmin(admin.ModelAdmin):
return obj.user_set.count()
@admin.register(AdminUser)
@admin.register(User)
class UserAdmin(UserAdmin_):
list_display = [
'username', 'email', 'first_name', 'last_name', 'is_superuser', 'is_staff', 'is_active'

View File

@@ -17,8 +17,6 @@ from .constants import *
__all__ = (
'AdminGroup',
'AdminUser',
'ObjectPermission',
'Token',
'UserConfig',
@@ -163,7 +161,6 @@ class UserConfig(models.Model):
@receiver(post_save, sender=User)
@receiver(post_save, sender=AdminUser)
def create_userconfig(instance, created, **kwargs):
"""
Automatically create a new UserConfig when a new User is created.

View File

@@ -11,7 +11,8 @@ FILTER_CHAR_BASED_LOOKUP_MAP = dict(
isw='istartswith',
nisw='istartswith',
ie='iexact',
nie='iexact'
nie='iexact',
empty='empty',
)
FILTER_NUMERIC_BASED_LOOKUP_MAP = dict(
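The added 'empty' entry extends the suffix-to-lookup map from which NetBox's filterset machinery derives extra per-field filters for character fields (that expansion code is not part of this diff). A rough, self-contained illustration of how such a map fans out into query parameter names; expanded_params is a made-up helper, not NetBox code:

    FILTER_CHAR_BASED_LOOKUP_MAP = dict(
        ie='iexact',
        nie='iexact',
        empty='empty',
    )

    def expanded_params(field_name):
        # e.g. 'name' -> ['name__ie', 'name__nie', 'name__empty']
        return [f'{field_name}__{suffix}' for suffix in FILTER_CHAR_BASED_LOOKUP_MAP]

    print(expanded_params('name'))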

View File

@@ -9,6 +9,14 @@ class AbortTransaction(Exception):
pass
class PermissionsViolation(Exception):
"""
Raised when an operation was prevented because it would violate the
allowed permissions.
"""
pass
class RQWorkerNotRunningException(APIException):
"""
Indicates the temporary inability to enqueue a new task (e.g. custom script execution) because no RQ worker

View File

@@ -338,7 +338,7 @@ class DynamicModelChoiceMixin:
filter = django_filters.ModelChoiceFilter
widget = widgets.APISelect
# TODO: Remove display_field in v2.12
# TODO: Remove display_field in v3.0
def __init__(self, display_field='display', query_params=None, initial_params=None, null_option=None,
disabled_indicator=None, *args, **kwargs):
self.display_field = display_field

View File

@@ -1,4 +1,5 @@
import django_tables2 as tables
from django.conf import settings
from django.contrib.auth.models import AnonymousUser
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
@@ -284,7 +285,10 @@ class LinkedCountColumn(tables.Column):
if value:
url = reverse(self.viewname, kwargs=self.view_kwargs)
if self.url_params:
url += '?' + '&'.join([f'{k}={getattr(record, v)}' for k, v in self.url_params.items()])
url += '?' + '&'.join([
f'{k}={getattr(record, v) or settings.FILTERS_NULL_CHOICE_VALUE}'
for k, v in self.url_params.items()
])
return mark_safe(f'<a href="{url}">{value}</a>')
return value
@@ -336,8 +340,11 @@ class MPTTColumn(tables.TemplateColumn):
"""
Display a nested hierarchy for MPTT-enabled models.
"""
template_code = """{% for i in record.get_ancestors %}<i class="mdi mdi-circle-small"></i>{% endfor %}""" \
"""<a href="{{ record.get_absolute_url }}">{{ record.name }}</a>"""
template_code = """
{% load helpers %}
{% for i in record.level|as_range %}<i class="mdi mdi-circle-small"></i>{% endfor %}
<a href="{{ record.get_absolute_url }}">{{ record.name }}</a>
"""
def __init__(self, *args, **kwargs):
super().__init__(
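The LinkedCountColumn change above substitutes settings.FILTERS_NULL_CHOICE_VALUE for a falsy attribute value when building the link's query string, so the filtered list matches "no value" instead of the literal text "None". A stand-alone reproduction with the setting hard-coded to 'null' purely for illustration (the real value is read from Django settings):

    FILTERS_NULL_CHOICE_VALUE = 'null'  # assumed value for this sketch

    def build_query_string(record, url_params):
        # Mirrors the joined f-string in the hunk: falsy attributes fall back to
        # the null sentinel rather than rendering as the string 'None'.
        return '?' + '&'.join(
            f'{k}={getattr(record, v) or FILTERS_NULL_CHOICE_VALUE}'
            for k, v in url_params.items()
        )

    class FakeRecord:
        region_id = None

    print(build_query_string(FakeRecord(), {'region_id': 'region_id'}))  # ?region_id=null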

View File

@@ -5,7 +5,9 @@ import re
import yaml
from django import template
from django.conf import settings
from django.template.defaultfilters import date
from django.urls import NoReverseMatch, reverse
from django.utils import timezone
from django.utils.html import strip_tags
from django.utils.safestring import mark_safe
from markdown import markdown
@@ -129,6 +131,20 @@ def humanize_speed(speed):
return '{} Kbps'.format(speed)
@register.filter()
def humanize_megabytes(mb):
"""
Express a number of megabytes in the most suitable unit (e.g. gigabytes or terabytes).
"""
if not mb:
return ''
if mb >= 1048576:
return f'{int(mb / 1048576)} TB'
if mb >= 1024:
return f'{int(mb / 1024)} GB'
return f'{mb} MB'
@register.filter()
def tzoffset(value):
"""
@@ -137,6 +153,36 @@ def tzoffset(value):
return datetime.datetime.now(value).strftime('%z')
@register.filter(expects_localtime=True)
def annotated_date(date_value):
"""
Returns date as HTML span with short date format as the content and the
(long) date format as the title.
"""
if not date_value:
return ''
if type(date_value) == datetime.date:
long_ts = date(date_value, 'DATE_FORMAT')
short_ts = date(date_value, 'SHORT_DATE_FORMAT')
else:
long_ts = date(date_value, 'DATETIME_FORMAT')
short_ts = date(date_value, 'SHORT_DATETIME_FORMAT')
span = f'<span title="{long_ts}">{short_ts}</span>'
return mark_safe(span)
@register.simple_tag
def annotated_now():
"""
Returns the current date piped through the annotated_date filter.
"""
tzinfo = timezone.get_current_timezone() if settings.USE_TZ else None
return annotated_date(datetime.datetime.now(tz=tzinfo))
@register.filter()
def fgcolor(value):
"""

View File

@@ -105,7 +105,7 @@ def serialize_object(obj, extra=None):
# Include any tags. Check for tags cached on the instance; fall back to using the manager.
if is_taggable(obj):
tags = getattr(obj, '_tags', obj.tags.all())
tags = getattr(obj, '_tags', None) or obj.tags.all()
data['tags'] = [tag.name for tag in tags]
# Append any extra data
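The one-line change above alters when the cached tag list is trusted: the old expression used obj.tags.all() only if the _tags attribute was missing entirely, while the new one also falls back whenever the cache is falsy (None or empty). A toy illustration with stand-in classes, no Django required:

    class FakeManager:
        def all(self):
            return ['tag-from-db']

    class FakeInstance:
        _tags = None            # cache attribute exists but holds nothing
        tags = FakeManager()

    obj = FakeInstance()
    old_style = getattr(obj, '_tags', obj.tags.all())          # -> None (tags lost)
    new_style = getattr(obj, '_tags', None) or obj.tags.all()  # -> ['tag-from-db']
    print(old_style, new_style)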

View File

@@ -1,8 +1,7 @@
from rest_framework import serializers
from dcim.models import Interface
from netbox.api import WritableNestedSerializer
from virtualization.models import Cluster, ClusterGroup, ClusterType, VirtualMachine
from virtualization.models import Cluster, ClusterGroup, ClusterType, VirtualMachine, VMInterface
__all__ = [
'NestedClusterGroupSerializer',
@@ -61,5 +60,5 @@ class NestedVMInterfaceSerializer(WritableNestedSerializer):
virtual_machine = NestedVirtualMachineSerializer(read_only=True)
class Meta:
model = Interface
model = VMInterface
fields = ['id', 'url', 'display', 'virtual_machine', 'name']

View File

@@ -8,7 +8,8 @@ from dcim.constants import INTERFACE_MTU_MAX, INTERFACE_MTU_MIN
from dcim.forms import InterfaceCommonForm, INTERFACE_MODE_HELP_TEXT
from dcim.models import Device, DeviceRole, Platform, Rack, Region, Site, SiteGroup
from extras.forms import (
AddRemoveTagsForm, CustomFieldBulkEditForm, CustomFieldModelCSVForm, CustomFieldModelForm, CustomFieldFilterForm,
AddRemoveTagsForm, CustomFieldForm, CustomFieldBulkEditForm, CustomFieldModelCSVForm, CustomFieldModelForm,
CustomFieldFilterForm,
)
from extras.models import Tag
from ipam.models import IPAddress, VLAN
@@ -527,8 +528,8 @@ class VirtualMachineBulkEditForm(BootstrapMixin, AddRemoveTagsForm, CustomFieldB
class VirtualMachineFilterForm(BootstrapMixin, TenancyFilterForm, CustomFieldFilterForm):
model = VirtualMachine
field_order = [
'q', 'cluster_group_id', 'cluster_type_id', 'cluster_id', 'status', 'role_id', 'region_id', 'site_id',
'tenant_group_id', 'tenant_id', 'platform_id', 'mac_address',
'q', 'cluster_group_id', 'cluster_type_id', 'cluster_id', 'status', 'role_id', 'region_id', 'site_group_id',
'site_id', 'tenant_group_id', 'tenant_id', 'platform_id', 'mac_address',
]
q = forms.CharField(
required=False,
@@ -556,14 +557,20 @@ class VirtualMachineFilterForm(BootstrapMixin, TenancyFilterForm, CustomFieldFil
required=False,
label=_('Region')
)
site_group_id = DynamicModelMultipleChoiceField(
queryset=SiteGroup.objects.all(),
required=False,
label=_('Site group')
)
site_id = DynamicModelMultipleChoiceField(
queryset=Site.objects.all(),
required=False,
null_option='None',
query_params={
'region_id': '$region_id'
'region_id': '$region_id',
'group_id': '$site_group_id',
},
label=_('Cluster')
label=_('Site')
)
role_id = DynamicModelMultipleChoiceField(
queryset=DeviceRole.objects.all(),
@@ -653,7 +660,8 @@ class VMInterfaceForm(BootstrapMixin, InterfaceCommonForm, CustomFieldModelForm)
self.fields['tagged_vlans'].widget.add_query_param('available_on_virtualmachine', vm_id)
class VMInterfaceCreateForm(BootstrapMixin, InterfaceCommonForm):
class VMInterfaceCreateForm(BootstrapMixin, CustomFieldForm, InterfaceCommonForm):
model = VMInterface
virtual_machine = DynamicModelChoiceField(
queryset=VirtualMachine.objects.all()
)
@@ -717,7 +725,7 @@ class VMInterfaceCreateForm(BootstrapMixin, InterfaceCommonForm):
self.fields['tagged_vlans'].widget.add_query_param('available_on_virtualmachine', vm_id)
class VMInterfaceCSVForm(CSVModelForm):
class VMInterfaceCSVForm(CustomFieldModelCSVForm):
virtual_machine = CSVModelChoiceField(
queryset=VirtualMachine.objects.all(),
to_field_name='name'
@@ -740,7 +748,7 @@ class VMInterfaceCSVForm(CSVModelForm):
return self.cleaned_data['enabled']
class VMInterfaceBulkEditForm(BootstrapMixin, AddRemoveTagsForm, BulkEditForm):
class VMInterfaceBulkEditForm(BootstrapMixin, AddRemoveTagsForm, CustomFieldBulkEditForm):
pk = forms.ModelMultipleChoiceField(
queryset=VMInterface.objects.all(),
widget=forms.MultipleHiddenInput()

View File

@@ -251,6 +251,7 @@ class VMInterfaceTest(APIViewTestCases.APIViewTestCase):
{
'virtual_machine': virtualmachine.pk,
'name': 'Interface 6',
'parent': interfaces[0].pk,
'mode': InterfaceModeChoices.MODE_TAGGED,
'tagged_vlans': [vlans[0].pk, vlans[1].pk],
'untagged_vlan': vlans[2].pk,

Some files were not shown because too many files have changed in this diff.