Compare commits


51 Commits

Author SHA1 Message Date
Daniel Sheppard
289cb4e1bc Simplify find_parent_prefix method 2026-02-10 08:42:21 -06:00
Daniel Sheppard
6e24ce6a1a Fix test failure by casting to IPAddress 2026-02-10 08:29:03 -06:00
Daniel Sheppard
c6bcfea429 Fine-tune forms, tests, and some model save() functions 2026-02-09 14:57:24 -06:00
Daniel Sheppard
d18d7369e6 Add missing migration 2026-02-08 17:18:49 -06:00
Daniel Sheppard
42e2fd7fb3 Update Triggers and add new functions to triggers to handle certain cases 2026-02-06 15:31:42 -06:00
Daniel Sheppard
cd636168ee Add ignore for migration files to ignore line length requirements as per our developer docs 2026-02-06 15:31:22 -06:00
Daniel Sheppard
b45da7b1e4 Merge branch 'feature' into feature-ip-prefix-link 2026-02-06 10:14:22 -06:00
bctiemann
de1c5120dd Merge pull request #21346 from netbox-community/release-v4.5.2
Release v4.5.2
2026-02-03 08:42:21 -05:00
Jeremy Stretch
87d2e02c85 Release v4.5.2 2026-02-03 08:09:14 -05:00
github-actions
cbbc4f74b8 Update source translation strings 2026-02-03 05:22:13 +00:00
Martin Hauser
be5bd74d4e feat(ipam): Add parent object fields for Services
Include `parent_object_type` and `parent_object_id` in `clone_fields`
for services. This improves cloning behavior for models using parent
object references, ensuring more accurate data duplication.

Fixes #21168
2026-02-02 16:05:09 -05:00
Jason Novinger
cf12bb5bf5 Fixes #20902: Avoid conflict when Git URL contains embedded username (#21252) 2026-02-02 11:16:32 -08:00
Jeremy Stretch
c060eef1d8 Closes #21300: Cache model-specific custom field lookups for the duration of a request (#21334) 2026-02-02 10:58:12 -08:00
bctiemann
96f0debe6e Merge pull request #21328 from netbox-community/21327-ContentTypeField-caching
Closes #21327: Leverage `get_by_natural_key()` to resolve ContentTypes
2026-02-02 13:46:04 -05:00
Martin Hauser
b26c7f34cd feat(models): Handle GFK attributes in CloningMixin
Extend the CloningMixin to inject GenericForeignKey (GFK) attributes
when both content type and ID fields are present. Improves support for
models using GFK fields during cloning operations.

Fixes #21201
2026-02-02 13:02:32 -05:00
bctiemann
d6428c6aa4 Merge pull request #21314 from marsteel/21233-UI-Add-horizontal-padding-to-Release-info-section
Fixes #21233: UI Add horizontal padding to Release info section in Navigation menu
2026-02-02 11:17:30 -05:00
github-actions
e3eca98897 Update source translation strings 2026-01-31 05:14:50 +00:00
Jeremy Stretch
cdc735fe41 Closes #21302: Avoid redundant uniqueness checks in REST API serializers 2026-01-30 19:36:42 -05:00
Jeremy Stretch
aa4a9da955 Closes #21303: Cache serialized post-change data on object (#21325)
* Closes #21303: Cache serialized post-change data on object

* Set to_objectchange.alters_data

* Restructure logic for determining post-change snapshot
2026-01-30 14:49:12 -05:00
Jeremy Stretch
5c6fc2fb6f Closes #21110: Support for cursor-based pagination in GraphQL API (#21322) 2026-01-30 11:45:35 -08:00
Jeremy Stretch
ad29cb2d66 Closes #21263: Prefetch related objects after creating/updating objects via REST API (#21329)
* Closes #21263: Prefetch related objects after creating/updating objects via REST API

* Add comment re: ordering by PK
2026-01-30 14:13:05 -05:00
Aditya Sharma
bec5ecf6a9 Closes #21209: Accept case-insensitive model names in configuration (#21275)
NetBox now accepts case-insensitive model identifiers in configuration, allowing
both lowercase (e.g. "dcim.site") and PascalCase (e.g. "dcim.Site") for
DEFAULT_DASHBOARD, CUSTOM_VALIDATORS, and PROTECTION_RULES.
This makes model name handling consistent with FIELD_CHOICES.

- Add a shared case-insensitive config lookup helper (get_config_value_ci())
- Use the helper in extras/signals.py and core/signals.py
- Update FIELD_CHOICES ChoiceSetMeta to support case-insensitive replace/extend
  (only compute extend choices if no replacement is defined)
- Add unit tests for get_config_value_ci()
- Add integration tests for case-insensitive FIELD_CHOICES replacement/extension
- Update documentation examples to use PascalCase consistently
2026-01-30 13:48:38 +01:00
github-actions
c98f55dbd2 Update source translation strings 2026-01-30 05:18:59 +00:00
Jeremy Stretch
dfe20532a1 Closes #21327: Leverage get_by_natural_key() to resolve ContentTypes 2026-01-29 19:46:22 -05:00
MA Gang
43ae52089f Add padding to release info div
Add padding to release info div in layout.html
2026-01-28 14:29:38 +01:00
Daniel Sheppard
4c35630434 Reorder migrations 2026-01-15 16:57:28 -06:00
Daniel Sheppard
3cc15ecaf0 Merge branch 'feature' into feature-ip-prefix-link 2025-12-28 11:41:39 -06:00
Daniel Sheppard
5ada585129 Remove unrelated development path 2025-11-25 08:16:46 -06:00
Daniel Sheppard
b03158f1de Add pgtrigger as dependency 2025-11-25 00:06:35 -06:00
Daniel Sheppard
bdde4b7e94 Switch to using triggers
Still outstanding:

* IPAddress and IPRange triggers
* Triggers for VRF changes on Prefix
* Triggers for changing to "container" on Prefix
* Rework logic for saving on all models
2025-11-25 00:01:23 -06:00
Daniel Sheppard
905656f13e Add migration 2025-11-07 09:10:51 -06:00
Daniel Sheppard
42c2dc57f8 Develop triggers for setting parents 2025-11-07 09:02:30 -06:00
Daniel Sheppard
56673f4d88 Signal optimizations 2025-11-05 18:10:38 -06:00
Daniel Sheppard
955c64b68c Re-order migrations 2025-09-14 10:49:44 -05:00
Daniel Sheppard
912e6e4fb1 Merge branch 'feature' of https://github.com/netbox-community/netbox into feature-ip-prefix-link 2025-09-11 20:46:47 -05:00
Daniel Sheppard
90d277610c Merge branch 'feature' of https://github.com/netbox-community/netbox into feature-ip-prefix-link 2025-09-03 22:15:38 -05:00
Daniel Sheppard
b1bc933e98 Clean up Prefix TODOs 2025-08-08 09:29:12 -05:00
Daniel Sheppard
b54196f595 Merge branch 'feature' into feature-ip-prefix-link 2025-08-07 21:33:58 -05:00
Daniel Sheppard
0d31449df8 Optimize prefix assignment. Fix tests 2025-07-10 12:59:59 -05:00
Daniel Sheppard
76e85683ac Re-apply de-duplication to IPRangeSerializer 2025-07-09 13:18:28 -05:00
Daniel Sheppard
f844ec5703 Merge branch 'feature' into feature-ip-prefix-link 2025-07-09 13:00:51 -05:00
Daniel Sheppard
7eb3a8d379 Fix some tests 2025-07-09 12:56:05 -05:00
Daniel Sheppard
ade4354ca4 Fix some test errors 2025-07-09 11:12:52 -05:00
Daniel Sheppard
697d5bd876 Slightly DRY the migration 2025-07-09 10:52:24 -05:00
Daniel Sheppard
b19f81cede More work on IP Address/Range and Prefix relationship 2025-07-09 10:36:41 -05:00
Daniel Sheppard
c5e7b21147 Add additional FKs 2025-06-05 09:43:18 -05:00
Daniel Sheppard
c211b624d0 Merge branch 'feature' of https://github.com/netbox-community/netbox into feature-ip-prefix-link
# Conflicts:
#	netbox/ipam/forms/bulk_import.py
2025-05-15 08:40:58 -05:00
Daniel Sheppard
4c8301b3a5 Update migration 2025-05-15 08:38:03 -05:00
Daniel Sheppard
68d0b58293 Update migration 2025-04-10 08:22:23 -05:00
Daniel Sheppard
738ef63527 Update from feature 2025-04-09 10:24:56 -05:00
Daniel Sheppard
747fef0bc2 Work on IP to Prefix ForeignKey relationship 2025-02-24 14:03:18 -06:00
102 changed files with 30165 additions and 22648 deletions

View File

@@ -15,7 +15,7 @@ body:
attributes:
label: NetBox version
description: What version of NetBox are you currently running?
placeholder: v4.5.1
placeholder: v4.5.2
validations:
required: true
- type: dropdown

View File

@@ -27,7 +27,7 @@ body:
attributes:
label: NetBox Version
description: What version of NetBox are you currently running?
placeholder: v4.5.1
placeholder: v4.5.2
validations:
required: true
- type: dropdown

View File

@@ -8,7 +8,7 @@ body:
attributes:
label: NetBox Version
description: What version of NetBox are you currently running?
placeholder: v4.5.1
placeholder: v4.5.2
validations:
required: true
- type: dropdown

View File

@@ -35,6 +35,11 @@ django-mptt==0.17.0
# https://github.com/Xof/django-pglocks/blob/master/CHANGES.txt
django-pglocks
# Manager for managing PostgreSQL triggers
# https://github.com/AmbitionEng/django-pgtrigger/blob/main/CHANGELOG.md
django-pgtrigger
# Prometheus metrics library for Django
# https://github.com/korfuri/django-prometheus/blob/master/CHANGELOG.md
django-prometheus
@@ -85,7 +90,7 @@ drf-spectacular-sidecar
feedparser
# WSGI HTTP server
# https://docs.gunicorn.org/en/latest/news.html
# https://gunicorn.org/news/
gunicorn
# Platform-agnostic template rendering engine

File diff suppressed because it is too large

View File

@@ -8,7 +8,7 @@ This is a mapping of models to [custom validators](../customization/custom-valid
```python
CUSTOM_VALIDATORS = {
"dcim.site": [
"dcim.Site": [
{
"name": {
"min_length": 5,
@@ -17,12 +17,15 @@ CUSTOM_VALIDATORS = {
},
"my_plugin.validators.Validator1"
],
"dcim.device": [
"dcim.Device": [
"my_plugin.validators.Validator1"
]
}
```
!!! info "Case-Insensitive Model Names"
Model identifiers are case-insensitive. Both `dcim.site` and `dcim.Site` are valid and equivalent.
---
## FIELD_CHOICES
@@ -53,6 +56,9 @@ FIELD_CHOICES = {
}
```
!!! info "Case-Insensitive Field Identifiers"
Field identifiers are case-insensitive. Both `dcim.Site.status` and `dcim.site.status` are valid and equivalent.
The following model fields support configurable choices:
* `circuits.Circuit.status`
@@ -98,7 +104,7 @@ This is a mapping of models to [custom validators](../customization/custom-valid
```python
PROTECTION_RULES = {
"dcim.site": [
"dcim.Site": [
{
"status": {
"eq": "decommissioning"
@@ -108,3 +114,6 @@ PROTECTION_RULES = {
]
}
```
!!! info "Case-Insensitive Model Names"
Model identifiers are case-insensitive. Both `dcim.site` and `dcim.Site` are valid and equivalent.

View File

@@ -144,7 +144,7 @@ Then, compile these portable (`.po`) files for use in the application:
* Update the version number and published date in `netbox/release.yaml`. Add or remove the designation (e.g. `beta1`) if applicable.
* Copy the version number from `release.yaml` to `pyproject.toml` in the project root.
* Update the example version numbers in the feature request and bug report templates under `.github/ISSUE_TEMPLATES/`.
* Update the example version numbers in the feature request, bug report, and performance templates under `.github/ISSUE_TEMPLATES/`.
* Add a section for this release at the top of the changelog page for the minor version (e.g. `docs/release-notes/version-4.2.md`) listing all relevant changes made in this release.
!!! tip

View File

@@ -133,23 +133,67 @@ The field "class_type" is an easy way to distinguish what type of object it is w
## Pagination
Queries can be paginated by specifying pagination in the query and supplying an offset and optionally a limit in the query. If no limit is given, a default of 100 is used. Queries are not paginated unless requested in the query. An example paginated query is shown below:
The GraphQL API supports two types of pagination. Offset-based pagination operates using an offset relative to the first record in a set, specified by the `offset` parameter. For example, the response to a request specifying an offset of 100 will contain the 101st and later matching records. Offset-based pagination feels very natural, but its performance can suffer when dealing with large data sets due to the overhead involved in calculating the relative offset.
The alternative approach is cursor-based pagination, which operates using absolute (rather than relative) primary key values. (These are the numeric IDs assigned to each object in the database.) When using cursor-based pagination, the response will contain records with a primary key greater than or equal to the specified start value, up to the maximum number of results. This strategy requires keeping track of the last seen primary key from each response when paginating through data, but is extremely performant. The cursor is specified by passing the starting object ID via the `start` parameter.
To ensure consistent ordering, objects will always be ordered by their primary keys when cursor-based pagination is used.
!!! note "Cursor-based pagination was introduced in NetBox v4.5.2."
Both pagination strategies support passing an optional `limit` parameter. In both approaches, this specifies the maximum number of objects to include in the response. If no limit is specified, a default value of 100 is used.
### Offset Pagination
The first page will have an `offset` of zero, or the `offset` parameter will be omitted:
```
query {
device_list(pagination: { offset: 0, limit: 20 }) {
device_list(pagination: {offset: 0, limit: 20}) {
id
}
}
```
The second page will have an offset equal to the size of the first page. If the number of records is less than the specified limit, there are no more records to process. For example, if a request specifies a `limit` of 20 but returns only 13 records, we can conclude that this is the final page of records.
```
query {
device_list(pagination: {offset: 20, limit: 20}) {
id
}
}
```
### Cursor Pagination
Set the `start` value to zero to fetch the first page. Note that if the `start` parameter is omitted, offset-based pagination will be used by default.
```
query {
device_list(pagination: {start: 0, limit: 20}) {
id
}
}
```
To determine the `start` value for the next page, add 1 to the primary key (`id`) of the last record in the previous page.
For example, if the ID of the last record in the previous response was 123, we would specify a `start` value of 124:
```
query {
device_list(pagination: {start: 124, limit: 20}) {
id
}
}
```
This will return up to 20 records with an ID greater than or equal to 124.
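As a practical illustration, here is a hedged sketch of a client-side loop that pages through all devices using the cursor strategy described above. The host and token are placeholders; only the `pagination` arguments and the short-page termination rule come from this documentation.
```python
import requests

API_URL = "https://netbox.example.com/graphql/"  # placeholder host
TOKEN = "abcdef0123456789"                       # placeholder token

start, limit = 0, 20
while True:
    # Build the query inline; `start` advances past the last seen ID
    query = f"query {{ device_list(pagination: {{start: {start}, limit: {limit}}}) {{ id }} }}"
    resp = requests.post(
        API_URL,
        json={"query": query},
        headers={"Authorization": f"Token {TOKEN}"},
    )
    devices = resp.json()["data"]["device_list"]
    for device in devices:
        ...  # process each record
    if len(devices) < limit:
        break  # a short page indicates no further records
    start = int(devices[-1]["id"]) + 1  # next cursor: last ID plus one
```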
## Authentication
NetBox's GraphQL API uses the same API authentication tokens as its REST API. Authentication tokens are included with requests by attaching an `Authorization` HTTP header in the following form:
```
Authorization: Token $TOKEN
```
NetBox's GraphQL API uses the same API authentication tokens as its REST API. See the [REST API authentication](./rest-api.md#authentication) documentation for further detail.
## Disabling the GraphQL API

View File

@@ -1,5 +1,53 @@
# NetBox v4.5
## v4.5.2 (2026-02-03)
### Enhancements
* [#15801](https://github.com/netbox-community/netbox/issues/15801) - Add link peer and connection columns to the VLAN device interfaces table
* [#19221](https://github.com/netbox-community/netbox/issues/19221) - Truncate long image attachment filenames in the UI
* [#19869](https://github.com/netbox-community/netbox/issues/19869) - Display peer connections for LAG member interfaces
* [#20052](https://github.com/netbox-community/netbox/issues/20052) - Increase logging level of error message when a custom script fails to load
* [#20172](https://github.com/netbox-community/netbox/issues/20172) - Add `cabled` filter for interfaces in GraphQL API
* [#21081](https://github.com/netbox-community/netbox/issues/21081) - Add owner group table columns & filters across all supported object list views
* [#21088](https://github.com/netbox-community/netbox/issues/21088) - Add max depth and max length dropdowns for child prefix views
* [#21110](https://github.com/netbox-community/netbox/issues/21110) - Support cursor-based pagination in GraphQL API
* [#21201](https://github.com/netbox-community/netbox/issues/21201) - Pre-populate GenericForeignKey form fields when cloning
* [#21209](https://github.com/netbox-community/netbox/issues/21209) - Ignore case sensitivity for configuration parameters which specify an app label and model name
* [#21228](https://github.com/netbox-community/netbox/issues/21228) - Support image attachments for rack types
* [#21244](https://github.com/netbox-community/netbox/issues/21244) - Enable omitting specific fields from REST API responses with `?omit=` parameter
### Performance Improvements
* [#21249](https://github.com/netbox-community/netbox/issues/21249) - Avoid extraneous user query when no event rules are present
* [#21259](https://github.com/netbox-community/netbox/issues/21259) - Cache ObjectType lookups for the duration of a request
* [#21260](https://github.com/netbox-community/netbox/issues/21260) - Defer object serialization for events pipeline processing
* [#21263](https://github.com/netbox-community/netbox/issues/21263) - Prefetch related objects after creating/updating objects via REST API
* [#21300](https://github.com/netbox-community/netbox/issues/21300) - Cache custom field lookups for the duration of a request
* [#21302](https://github.com/netbox-community/netbox/issues/21302) - Avoid redundant uniqueness checks in ValidatedModelSerializer
* [#21303](https://github.com/netbox-community/netbox/issues/21303) - Cache post-change snapshot on each instance after serialization
* [#21327](https://github.com/netbox-community/netbox/issues/21327) - Always leverage `get_by_natural_key()` to resolve ContentTypes
### Bug Fixes
* [#20212](https://github.com/netbox-community/netbox/issues/20212) - Fix support for image attachment thumbnails when using S3 storage
* [#20383](https://github.com/netbox-community/netbox/issues/20383) - When editing a device, clearing the assigned unit should also clear the rack face selection
* [#20902](https://github.com/netbox-community/netbox/issues/20902) - Avoid `SyncError` exception when Git URL contains an embedded username
* [#20977](https://github.com/netbox-community/netbox/issues/20977) - "Run again" button should respect script variable defaults
* [#21115](https://github.com/netbox-community/netbox/issues/21115) - Include `attribute_data` in ModuleType YAML export
* [#21129](https://github.com/netbox-community/netbox/issues/21129) - Store queue name on the Job model to ensure deletion of associated RQ task when a non-default queue is used
* [#21168](https://github.com/netbox-community/netbox/issues/21168) - Fix Application Service cloning to preserve parent object
* [#21173](https://github.com/netbox-community/netbox/issues/21173) - Ensure all plugin menu items are registered regardless of initialization order
* [#21176](https://github.com/netbox-community/netbox/issues/21176) - Remove checkboxes from IP ranges in mixed-type tables
* [#21202](https://github.com/netbox-community/netbox/issues/21202) - Fix scoped form cloning clearing the `scope` field when `scope_type` changes
* [#21214](https://github.com/netbox-community/netbox/issues/21214) - Clean up AutoSyncRecord when detaching from DataSource
* [#21242](https://github.com/netbox-community/netbox/issues/21242) - Navigation menu items for authentication should not require `staff_only` permission
* [#21254](https://github.com/netbox-community/netbox/issues/21254) - Fix `AttributeError` exception when checking for latest release
* [#21262](https://github.com/netbox-community/netbox/issues/21262) - Assigned scope should be replicated when cloning a prefix
* [#21269](https://github.com/netbox-community/netbox/issues/21269) - Fix replication of front/rear port assignments from the module type when installing a module
---
## v4.5.1 (2026-01-20)
### Enhancements

View File

@@ -21,11 +21,24 @@ __all__ = (
'GitBackend',
'LocalBackend',
'S3Backend',
'url_has_embedded_credentials',
)
logger = logging.getLogger('netbox.data_backends')
def url_has_embedded_credentials(url):
"""
Check if a URL contains embedded credentials (username in the URL).
URLs like 'https://user@bitbucket.org/...' have embedded credentials.
This is used to avoid passing explicit credentials to dulwich when the
URL already contains them, which would cause authentication conflicts.
"""
parsed = urlparse(url)
return bool(parsed.username)
@register_data_backend()
class LocalBackend(DataBackend):
name = 'local'
@@ -102,7 +115,9 @@ class GitBackend(DataBackend):
clone_args['pool_manager'] = ProxyPoolManager(self.socks_proxy)
if self.url_scheme in ('http', 'https'):
if self.params.get('username'):
# Only pass explicit credentials if URL doesn't already contain embedded username
# to avoid credential conflicts (see #20902)
if not url_has_embedded_credentials(self.url) and self.params.get('username'):
clone_args.update(
{
"username": self.params.get('username'),

View File

@@ -18,6 +18,7 @@ from extras.events import enqueue_event
from extras.models import Tag
from extras.utils import run_validators
from netbox.config import get_config
from utilities.data import get_config_value_ci
from netbox.context import current_request, events_queue
from netbox.models.features import ChangeLoggingMixin, get_model_features, model_is_public
from utilities.exceptions import AbortRequest
@@ -168,7 +169,7 @@ def handle_deleted_object(sender, instance, **kwargs):
# to queueing any events for the object being deleted, in case a validation error is
# raised, causing the deletion to fail.
model_name = f'{sender._meta.app_label}.{sender._meta.model_name}'
validators = get_config().PROTECTION_RULES.get(model_name, [])
validators = get_config_value_ci(get_config().PROTECTION_RULES, model_name, default=[])
try:
run_validators(instance, validators)
except ValidationError as e:

View File

@@ -0,0 +1,116 @@
from unittest import skipIf
from unittest.mock import patch
from django.test import TestCase
from core.data_backends import url_has_embedded_credentials
try:
import dulwich # noqa: F401
DULWICH_AVAILABLE = True
except ImportError:
DULWICH_AVAILABLE = False
class URLEmbeddedCredentialsTests(TestCase):
def test_url_with_embedded_username(self):
self.assertTrue(url_has_embedded_credentials('https://myuser@bitbucket.org/workspace/repo.git'))
def test_url_without_embedded_username(self):
self.assertFalse(url_has_embedded_credentials('https://bitbucket.org/workspace/repo.git'))
def test_url_with_username_and_password(self):
self.assertTrue(url_has_embedded_credentials('https://user:pass@bitbucket.org/workspace/repo.git'))
def test_various_providers_with_embedded_username(self):
urls = [
'https://user@bitbucket.org/workspace/repo.git',
'https://user@github.com/owner/repo.git',
'https://deploy-key@gitlab.com/group/project.git',
'http://user@internal-git.example.com/repo.git',
]
for url in urls:
with self.subTest(url=url):
self.assertTrue(url_has_embedded_credentials(url))
def test_various_providers_without_embedded_username(self):
"""Various Git providers without embedded usernames."""
urls = [
'https://bitbucket.org/workspace/repo.git',
'https://github.com/owner/repo.git',
'https://gitlab.com/group/project.git',
'http://internal-git.example.com/repo.git',
]
for url in urls:
with self.subTest(url=url):
self.assertFalse(url_has_embedded_credentials(url))
def test_ssh_url(self):
# git@host:path format doesn't parse as having a username in the traditional sense
self.assertFalse(url_has_embedded_credentials('git@github.com:owner/repo.git'))
def test_file_url(self):
self.assertFalse(url_has_embedded_credentials('file:///path/to/repo'))
@skipIf(not DULWICH_AVAILABLE, "dulwich is not installed")
class GitBackendCredentialIntegrationTests(TestCase):
"""
Integration tests that verify GitBackend correctly applies credential logic.
These tests require dulwich to be installed and verify the full integration
of the credential handling in GitBackend.fetch().
"""
def _get_clone_kwargs(self, url, **params):
from core.data_backends import GitBackend
backend = GitBackend(url=url, **params)
with patch('dulwich.porcelain.clone') as mock_clone, \
patch('dulwich.porcelain.NoneStream'):
try:
with backend.fetch():
pass
except Exception:
pass
if mock_clone.called:
return mock_clone.call_args.kwargs
return {}
def test_url_with_embedded_username_skips_explicit_credentials(self):
kwargs = self._get_clone_kwargs(
url='https://myuser@bitbucket.org/workspace/repo.git',
username='myuser',
password='my-api-key'
)
self.assertEqual(kwargs.get('username'), None)
self.assertEqual(kwargs.get('password'), None)
def test_url_without_embedded_username_passes_explicit_credentials(self):
kwargs = self._get_clone_kwargs(
url='https://bitbucket.org/workspace/repo.git',
username='myuser',
password='my-api-key'
)
self.assertEqual(kwargs.get('username'), 'myuser')
self.assertEqual(kwargs.get('password'), 'my-api-key')
def test_url_with_embedded_username_no_explicit_credentials(self):
kwargs = self._get_clone_kwargs(
url='https://myuser@bitbucket.org/workspace/repo.git'
)
self.assertEqual(kwargs.get('username'), None)
self.assertEqual(kwargs.get('password'), None)
def test_public_repo_no_credentials(self):
kwargs = self._get_clone_kwargs(
url='https://github.com/public/repo.git'
)
self.assertEqual(kwargs.get('username'), None)
self.assertEqual(kwargs.get('password'), None)

View File

@@ -4,7 +4,6 @@ from drf_spectacular.utils import extend_schema_field
from rest_framework.fields import Field
from rest_framework.serializers import ValidationError
from core.models import ObjectType
from extras.choices import CustomFieldTypeChoices
from extras.constants import CUSTOMFIELD_EMPTY_VALUES
from extras.models import CustomField
@@ -24,13 +23,9 @@ class CustomFieldDefaultValues:
def __call__(self, serializer_field):
self.model = serializer_field.parent.Meta.model
# Retrieve the CustomFields for the parent model
object_type = ObjectType.objects.get_for_model(self.model)
fields = CustomField.objects.filter(object_types=object_type)
# Populate the default value for each CustomField
# Populate the default value for each CustomField on the model
value = {}
for field in fields:
for field in CustomField.objects.get_for_model(self.model):
if field.default is not None:
value[field.name] = field.default
else:
@@ -47,8 +42,7 @@ class CustomFieldsDataField(Field):
Cache CustomFields assigned to this model to avoid redundant database queries
"""
if not hasattr(self, '_custom_fields'):
object_type = ObjectType.objects.get_for_model(self.parent.Meta.model)
self._custom_fields = CustomField.objects.filter(object_types=object_type)
self._custom_fields = CustomField.objects.get_for_model(self.parent.Meta.model)
return self._custom_fields
def to_representation(self, obj):

View File

@@ -75,10 +75,11 @@ def get_bookmarks_object_type_choices():
def get_models_from_content_types(content_types):
"""
Return a list of models corresponding to the given content types, identified by natural key.
Accepts both lowercase (e.g. "dcim.site") and PascalCase (e.g. "dcim.Site") model names.
"""
models = []
for content_type_id in content_types:
app_label, model_name = content_type_id.split('.')
app_label, model_name = content_type_id.lower().split('.')
try:
content_type = ObjectType.objects.get_by_natural_key(app_label, model_name)
if content_type.model_class():

View File

@@ -51,18 +51,26 @@ def serialize_for_event(instance):
def get_snapshots(instance, event_type):
snapshots = {
'prechange': getattr(instance, '_prechange_snapshot', None),
'postchange': None,
}
if event_type != OBJECT_DELETED:
# Use model's serialize_object() method if defined; fall back to serialize_object() utility function
if hasattr(instance, 'serialize_object'):
snapshots['postchange'] = instance.serialize_object()
else:
snapshots['postchange'] = serialize_object(instance)
"""
Return a dictionary of pre- and post-change snapshots for the given instance.
"""
if event_type == OBJECT_DELETED:
# Post-change snapshot must be empty for deleted objects
postchange_snapshot = None
elif hasattr(instance, '_postchange_snapshot'):
# Use the cached post-change snapshot if one is available
postchange_snapshot = instance._postchange_snapshot
elif hasattr(instance, 'serialize_object'):
# Use model's serialize_object() method if defined
postchange_snapshot = instance.serialize_object()
else:
# Fall back to the serialize_object() utility function
postchange_snapshot = serialize_object(instance)
return snapshots
return {
'prechange': getattr(instance, '_prechange_snapshot', None),
'postchange': postchange_snapshot,
}
def enqueue_event(queue, instance, request, event_type):

View File

@@ -19,6 +19,7 @@ from django.utils.translation import gettext_lazy as _
from core.models import ObjectType
from extras.choices import *
from extras.data import CHOICE_SETS
from netbox.context import query_cache
from netbox.models import ChangeLoggedModel
from netbox.models.features import CloningMixin, ExportTemplatesMixin
from netbox.models.mixins import OwnerMixin
@@ -58,8 +59,20 @@ class CustomFieldManager(models.Manager.from_queryset(RestrictedQuerySet)):
"""
Return all CustomFields assigned to the given model.
"""
# Check the request cache before hitting the database
cache = query_cache.get()
if cache is not None:
if custom_fields := cache['custom_fields'].get(model._meta.model):
return custom_fields
content_type = ObjectType.objects.get_for_model(model._meta.concrete_model)
return self.get_queryset().filter(object_types=content_type)
custom_fields = self.get_queryset().filter(object_types=content_type)
# Populate the request cache to avoid redundant lookups
if cache is not None:
cache['custom_fields'][model._meta.model] = custom_fields
return custom_fields
def get_defaults_for_model(self, model):
"""

View File

@@ -9,6 +9,7 @@ from extras.models import EventRule, Notification, Subscription
from netbox.config import get_config
from netbox.models.features import has_feature
from netbox.signals import post_clean
from utilities.data import get_config_value_ci
from utilities.exceptions import AbortRequest
from .models import CustomField, TaggedItem
from .utils import run_validators
@@ -65,7 +66,7 @@ def run_save_validators(sender, instance, **kwargs):
Run any custom validation rules for the model prior to calling save().
"""
model_name = f'{sender._meta.app_label}.{sender._meta.model_name}'
validators = get_config().CUSTOM_VALIDATORS.get(model_name, [])
validators = get_config_value_ci(get_config().CUSTOM_VALIDATORS, model_name, default=[])
run_validators(instance, validators)
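Both signal handlers now resolve validators through `get_config_value_ci()`, whose implementation is not included in this diff. A minimal sketch of what such a case-insensitive lookup helper could look like, assuming it only needs to normalize `app_label.model` keys (the real helper in `utilities/data.py` may differ):
```python
def get_config_value_ci(config, key, default=None):
    """
    Fetch `key` from the mapping `config`, ignoring case, so that
    "dcim.Site" and "dcim.site" resolve to the same entry.
    """
    key = key.lower()
    for candidate, value in config.items():
        if candidate.lower() == key:
            return value
    return default
```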

View File

@@ -60,18 +60,24 @@ class PrefixSerializer(PrimaryModelSerializer):
vlan = VLANSerializer(nested=True, required=False, allow_null=True)
status = ChoiceField(choices=PrefixStatusChoices, required=False)
role = RoleSerializer(nested=True, required=False, allow_null=True)
children = serializers.IntegerField(read_only=True)
_children = serializers.IntegerField(read_only=True)
_depth = serializers.IntegerField(read_only=True)
prefix = IPNetworkField()
class Meta:
model = Prefix
fields = [
'id', 'url', 'display_url', 'display', 'family', 'prefix', 'vrf', 'scope_type', 'scope_id', 'scope',
'tenant', 'vlan', 'status', 'role', 'is_pool', 'mark_utilized', 'description', 'owner', 'comments', 'tags',
'custom_fields', 'created', 'last_updated', 'children', '_depth',
'id', 'url', 'display_url', 'display', 'family', 'aggregate', 'parent', 'prefix', 'vrf', 'scope_type',
'scope_id', 'scope', 'tenant', 'vlan', 'status', 'role', 'is_pool', 'mark_utilized', 'description',
'owner', 'comments', 'tags', 'custom_fields', 'created', 'last_updated', '_children', '_depth',
]
brief_fields = ('id', 'url', 'display', 'family', 'prefix', 'description', '_depth')
brief_fields = ('id', 'url', 'display', 'family', 'aggregate', 'parent', 'prefix', 'description', '_depth')
def get_fields(self):
fields = super(PrefixSerializer, self).get_fields()
fields['parent'] = PrefixSerializer(nested=True, read_only=True)
return fields
class PrefixLengthSerializer(serializers.Serializer):
@@ -125,7 +131,9 @@ class AvailablePrefixSerializer(serializers.Serializer):
# IP ranges
#
class IPRangeSerializer(PrimaryModelSerializer):
prefix = PrefixSerializer(nested=True, required=False, allow_null=True)
family = ChoiceField(choices=IPAddressFamilyChoices, read_only=True)
start_address = IPAddressField()
end_address = IPAddressField()
@@ -137,11 +145,11 @@ class IPRangeSerializer(PrimaryModelSerializer):
class Meta:
model = IPRange
fields = [
'id', 'url', 'display_url', 'display', 'family', 'start_address', 'end_address', 'size', 'vrf', 'tenant',
'status', 'role', 'description', 'owner', 'comments', 'tags', 'custom_fields', 'created', 'last_updated',
'mark_populated', 'mark_utilized',
'id', 'url', 'display_url', 'display', 'family', 'prefix', 'start_address', 'end_address', 'size', 'vrf',
'tenant', 'status', 'role', 'description', 'owner', 'comments', 'tags', 'custom_fields', 'created',
'last_updated', 'mark_populated', 'mark_utilized',
]
brief_fields = ('id', 'url', 'display', 'family', 'start_address', 'end_address', 'description')
brief_fields = ('id', 'url', 'display', 'family', 'prefix', 'start_address', 'end_address', 'description')
#
@@ -186,6 +194,7 @@ class AvailableIPRequestSerializer(serializers.Serializer):
class IPAddressSerializer(PrimaryModelSerializer):
prefix = PrefixSerializer(nested=True, required=False, allow_null=True)
family = ChoiceField(choices=IPAddressFamilyChoices, read_only=True)
address = IPAddressField()
vrf = VRFSerializer(nested=True, required=False, allow_null=True)
@@ -204,11 +213,11 @@ class IPAddressSerializer(PrimaryModelSerializer):
class Meta:
model = IPAddress
fields = [
'id', 'url', 'display_url', 'display', 'family', 'address', 'vrf', 'tenant', 'status', 'role',
'id', 'url', 'display_url', 'display', 'family', 'prefix', 'address', 'vrf', 'tenant', 'status', 'role',
'assigned_object_type', 'assigned_object_id', 'assigned_object', 'nat_inside', 'nat_outside',
'dns_name', 'description', 'owner', 'comments', 'tags', 'custom_fields', 'created', 'last_updated',
]
brief_fields = ('id', 'url', 'display', 'family', 'address', 'description')
brief_fields = ('id', 'url', 'display', 'family', 'prefix', 'address', 'description')
class AvailableIPSerializer(serializers.Serializer):

View File

@@ -340,6 +340,26 @@ class PrefixFilterSet(PrimaryModelFilterSet, ScopedFilterSet, TenancyFilterSet,
field_name='prefix',
lookup_expr='net_mask_length__lte'
)
aggregate_id = django_filters.ModelMultipleChoiceFilter(
queryset=Aggregate.objects.all(),
label=_('Aggregate'),
)
aggregate = django_filters.ModelMultipleChoiceFilter(
field_name='aggregate__prefix',
queryset=Aggregate.objects.all(),
to_field_name='prefix',
label=_('Aggregate (Prefix)'),
)
parent_id = django_filters.ModelMultipleChoiceFilter(
queryset=Prefix.objects.all(),
label=_('Parent Prefix'),
)
parent = django_filters.ModelMultipleChoiceFilter(
field_name='parent__prefix',
queryset=Prefix.objects.all(),
to_field_name='prefix',
label=_('Parent Prefix (Prefix)'),
)
vrf_id = django_filters.ModelMultipleChoiceFilter(
queryset=VRF.objects.all(),
label=_('VRF'),
@@ -484,6 +504,16 @@ class IPRangeFilterSet(PrimaryModelFilterSet, TenancyFilterSet, ContactModelFilt
method='search_contains',
label=_('Ranges which contain this prefix or IP'),
)
prefix_id = django_filters.ModelMultipleChoiceFilter(
queryset=Prefix.objects.all(),
label=_('Prefix (ID)'),
)
prefix = django_filters.ModelMultipleChoiceFilter(
field_name='prefix__prefix',
queryset=Prefix.objects.all(),
to_field_name='prefix',
label=_('Prefix'),
)
vrf_id = django_filters.ModelMultipleChoiceFilter(
queryset=VRF.objects.all(),
label=_('VRF'),
@@ -569,6 +599,16 @@ class IPAddressFilterSet(PrimaryModelFilterSet, TenancyFilterSet, ContactModelFi
method='search_by_parent',
label=_('Parent prefix'),
)
prefix_id = django_filters.ModelMultipleChoiceFilter(
queryset=Prefix.objects.all(),
label=_('Prefix (ID)'),
)
prefix = django_filters.ModelMultipleChoiceFilter(
field_name='prefix__prefix',
queryset=Prefix.objects.all(),
to_field_name='prefix',
label=_('Prefix (prefix)'),
)
address = MultiValueCharFilter(
method='filter_address',
label=_('Address'),

View File

@@ -168,6 +168,11 @@ class RoleBulkEditForm(OrganizationalModelBulkEditForm):
class PrefixBulkEditForm(ScopedBulkEditForm, PrimaryModelBulkEditForm):
parent = DynamicModelChoiceField(
queryset=Prefix.objects.all(),
required=False,
label=_('Parent Prefix')
)
vlan_group = DynamicModelChoiceField(
queryset=VLANGroup.objects.all(),
required=False,
@@ -221,7 +226,7 @@ class PrefixBulkEditForm(ScopedBulkEditForm, PrimaryModelBulkEditForm):
model = Prefix
fieldsets = (
FieldSet('tenant', 'status', 'role', 'description'),
FieldSet('vrf', 'prefix_length', 'is_pool', 'mark_utilized', name=_('Addressing')),
FieldSet('parent', 'vrf', 'prefix_length', 'is_pool', 'mark_utilized', name=_('Addressing')),
FieldSet('scope_type', 'scope', name=_('Scope')),
FieldSet('vlan_group', 'vlan', name=_('VLAN Assignment')),
)
@@ -231,6 +236,11 @@ class PrefixBulkEditForm(ScopedBulkEditForm, PrimaryModelBulkEditForm):
class IPRangeBulkEditForm(PrimaryModelBulkEditForm):
prefix = DynamicModelChoiceField(
queryset=Prefix.objects.all(),
required=False,
label=_('Prefix')
)
vrf = DynamicModelChoiceField(
queryset=VRF.objects.all(),
required=False,
@@ -272,6 +282,16 @@ class IPRangeBulkEditForm(PrimaryModelBulkEditForm):
class IPAddressBulkEditForm(PrimaryModelBulkEditForm):
prefix = DynamicModelChoiceField(
queryset=Prefix.objects.all(),
required=False,
label=_('Prefix')
)
vrf = DynamicModelChoiceField(
queryset=VRF.objects.all(),
required=False,
@@ -307,10 +327,10 @@ class IPAddressBulkEditForm(PrimaryModelBulkEditForm):
model = IPAddress
fieldsets = (
FieldSet('status', 'role', 'tenant', 'description'),
FieldSet('vrf', 'mask_length', 'dns_name', name=_('Addressing')),
FieldSet('prefix', 'vrf', 'mask_length', 'dns_name', name=_('Addressing')),
)
nullable_fields = (
'vrf', 'role', 'tenant', 'dns_name', 'description', 'comments',
'prefix', 'vrf', 'role', 'tenant', 'dns_name', 'description', 'comments',
)

View File

@@ -343,8 +343,8 @@ class IPAddressImportForm(PrimaryModelImportForm):
class Meta:
model = IPAddress
fields = [
'address', 'vrf', 'tenant', 'status', 'role', 'device', 'virtual_machine', 'interface', 'fhrp_group',
'is_primary', 'is_oob', 'dns_name', 'description', 'owner', 'comments', 'tags',
'prefix', 'address', 'vrf', 'tenant', 'status', 'role', 'device', 'virtual_machine', 'interface',
'fhrp_group', 'is_primary', 'is_oob', 'dns_name', 'owner', 'description', 'comments', 'tags',
]
def __init__(self, data=None, *args, **kwargs):

View File

@@ -219,6 +219,12 @@ class PrefixFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryModelFi
choices=PREFIX_MASK_LENGTH_CHOICES,
label=_('Mask length')
)
aggregate_id = DynamicModelMultipleChoiceField(
queryset=Aggregate.objects.all(),
required=False,
label=_('Aggregate'),
null_option='Global'
)
vrf_id = DynamicModelMultipleChoiceField(
queryset=VRF.objects.all(),
required=False,
@@ -292,12 +298,20 @@ class PrefixFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryModelFi
class IPRangeFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryModelFilterSetForm):
model = IPRange
fieldsets = (
FieldSet('q', 'filter_id', 'tag'),
FieldSet('family', 'vrf_id', 'status', 'role_id', 'mark_populated', 'mark_utilized', name=_('Attributes')),
FieldSet('q', 'filter_id', 'tag', 'owner_id'),
FieldSet(
'prefix', 'family', 'vrf_id', 'status', 'role_id', 'mark_populated', 'mark_utilized', name=_('Attributes')
),
FieldSet('tenant_group_id', 'tenant_id', name=_('Tenant')),
FieldSet('owner_group_id', 'owner_id', name=_('Ownership')),
FieldSet('contact', 'contact_role', 'contact_group', name=_('Contacts')),
)
prefix = DynamicModelMultipleChoiceField(
queryset=Prefix.objects.all(),
required=False,
label=_('Prefix'),
null_option='None'
)
family = forms.ChoiceField(
required=False,
choices=add_blank_choice(IPAddressFamilyChoices),
@@ -342,7 +356,7 @@ class IPAddressFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryMode
fieldsets = (
FieldSet('q', 'filter_id', 'tag'),
FieldSet(
'parent', 'family', 'status', 'role', 'mask_length', 'assigned_to_interface', 'dns_name',
'prefix', 'parent', 'family', 'status', 'role', 'mask_length', 'assigned_to_interface', 'dns_name',
name=_('Attributes')
),
FieldSet('vrf_id', 'present_in_vrf_id', name=_('VRF')),
@@ -351,7 +365,7 @@ class IPAddressFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryMode
FieldSet('owner_group_id', 'owner_id', name=_('Ownership')),
FieldSet('contact', 'contact_role', 'contact_group', name=_('Contacts')),
)
selector_fields = ('filter_id', 'q', 'region_id', 'group_id', 'parent', 'status', 'role')
selector_fields = ('filter_id', 'q', 'region_id', 'group_id', 'prefix_id', 'parent', 'status', 'role')
parent = forms.CharField(
required=False,
widget=forms.TextInput(
@@ -371,6 +385,11 @@ class IPAddressFilterForm(ContactModelFilterForm, TenancyFilterForm, PrimaryMode
choices=IPADDRESS_MASK_LENGTH_CHOICES,
label=_('Mask length')
)
prefix_id = DynamicModelMultipleChoiceField(
queryset=Prefix.objects.all(),
required=False,
label=_('Prefix'),
)
vrf_id = DynamicModelMultipleChoiceField(
queryset=VRF.objects.all(),
required=False,

View File

@@ -255,8 +255,8 @@ class IPRangeForm(TenancyForm, PrimaryModelForm):
fieldsets = (
FieldSet(
'vrf', 'start_address', 'end_address', 'role', 'status', 'mark_populated', 'mark_utilized', 'description',
'tags', name=_('IP Range')
'vrf', 'start_address', 'end_address', 'role', 'status', 'mark_populated', 'mark_utilized',
'description', 'tags', name=_('IP Range')
),
FieldSet('tenant_group', 'tenant', name=_('Tenancy')),
)
@@ -264,8 +264,8 @@ class IPRangeForm(TenancyForm, PrimaryModelForm):
class Meta:
model = IPRange
fields = [
'vrf', 'start_address', 'end_address', 'status', 'role', 'tenant_group', 'tenant', 'mark_populated',
'mark_utilized', 'description', 'owner', 'comments', 'tags',
'vrf', 'start_address', 'end_address', 'status', 'role', 'tenant_group', 'tenant',
'mark_populated', 'mark_utilized', 'description', 'owner', 'comments', 'tags',
]
@@ -331,8 +331,8 @@ class IPAddressForm(TenancyForm, PrimaryModelForm):
class Meta:
model = IPAddress
fields = [
'address', 'vrf', 'status', 'role', 'dns_name', 'primary_for_parent', 'oob_for_parent', 'nat_inside',
'tenant_group', 'tenant', 'description', 'owner', 'comments', 'tags',
'address', 'vrf', 'status', 'role', 'dns_name', 'primary_for_parent', 'oob_for_parent',
'nat_inside', 'tenant_group', 'tenant', 'description', 'owner', 'comments', 'tags',
]
def __init__(self, *args, **kwargs):

View File

@@ -170,6 +170,7 @@ class FHRPGroupAssignmentFilter(ChangeLoggedModelFilter):
@strawberry_django.filter_type(models.IPAddress, lookups=True)
class IPAddressFilter(ContactFilterMixin, TenancyFilterMixin, PrimaryModelFilter):
prefix: Annotated['PrefixFilter', strawberry.lazy('ipam.graphql.filters')] | None = strawberry_django.filter_field()
address: FilterLookup[str] | None = strawberry_django.filter_field()
vrf: Annotated['VRFFilter', strawberry.lazy('ipam.graphql.filters')] | None = strawberry_django.filter_field()
vrf_id: ID | None = strawberry_django.filter_field()
@@ -221,6 +222,7 @@ class IPAddressFilter(ContactFilterMixin, TenancyFilterMixin, PrimaryModelFilter
@strawberry_django.filter_type(models.IPRange, lookups=True)
class IPRangeFilter(ContactFilterMixin, TenancyFilterMixin, PrimaryModelFilter):
prefix: Annotated['PrefixFilter', strawberry.lazy('ipam.graphql.filters')] | None = strawberry_django.filter_field()
start_address: FilterLookup[str] | None = strawberry_django.filter_field()
end_address: FilterLookup[str] | None = strawberry_django.filter_field()
size: Annotated['IntegerLookup', strawberry.lazy('netbox.graphql.filter_lookups')] | None = (
@@ -275,6 +277,10 @@ class IPRangeFilter(ContactFilterMixin, TenancyFilterMixin, PrimaryModelFilter):
@strawberry_django.filter_type(models.Prefix, lookups=True)
class PrefixFilter(ContactFilterMixin, ScopedFilterMixin, TenancyFilterMixin, PrimaryModelFilter):
aggregate: Annotated['AggregateFilter', strawberry.lazy('ipam.graphql.filters')] | None = (
strawberry_django.filter_field()
)
parent: Annotated['PrefixFilter', strawberry.lazy('ipam.graphql.filters')] | None = strawberry_django.filter_field()
prefix: FilterLookup[str] | None = strawberry_django.filter_field()
vrf: Annotated['VRFFilter', strawberry.lazy('ipam.graphql.filters')] | None = strawberry_django.filter_field()
vrf_id: ID | None = strawberry_django.filter_field()

View File

@@ -143,6 +143,7 @@ class FHRPGroupAssignmentType(BaseObjectType):
)
class IPAddressType(ContactsMixin, BaseIPAddressFamilyType, PrimaryObjectType):
address: str
prefix: Annotated["PrefixType", strawberry.lazy('ipam.graphql.types')] | None
vrf: Annotated["VRFType", strawberry.lazy('ipam.graphql.types')] | None
tenant: Annotated["TenantType", strawberry.lazy('tenancy.graphql.types')] | None
nat_inside: Annotated["IPAddressType", strawberry.lazy('ipam.graphql.types')] | None
@@ -167,6 +168,7 @@ class IPAddressType(ContactsMixin, BaseIPAddressFamilyType, PrimaryObjectType):
pagination=True
)
class IPRangeType(ContactsMixin, PrimaryObjectType):
prefix: Annotated["PrefixType", strawberry.lazy('ipam.graphql.types')] | None
start_address: str
end_address: str
vrf: Annotated["VRFType", strawberry.lazy('ipam.graphql.types')] | None
@@ -181,6 +183,8 @@ class IPRangeType(ContactsMixin, PrimaryObjectType):
pagination=True
)
class PrefixType(ContactsMixin, BaseIPAddressFamilyType, PrimaryObjectType):
aggregate: Annotated["AggregateType", strawberry.lazy('ipam.graphql.types')] | None
parent: Annotated["PrefixType", strawberry.lazy('ipam.graphql.types')] | None
prefix: str
vrf: Annotated["VRFType", strawberry.lazy('ipam.graphql.types')] | None
tenant: Annotated["TenantType", strawberry.lazy('tenancy.graphql.types')] | None

View File

@@ -0,0 +1,58 @@
# Generated by Django 5.0.9 on 2025-02-20 16:49
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('ipam', '0086_gfk_indexes'),
]
operations = [
migrations.AddField(
model_name='prefix',
name='parent',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.DO_NOTHING,
related_name='children',
to='ipam.prefix',
),
),
migrations.AddField(
model_name='ipaddress',
name='prefix',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='ip_addresses',
to='ipam.prefix',
),
),
migrations.AddField(
model_name='iprange',
name='prefix',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='ip_ranges',
to='ipam.prefix',
),
),
migrations.AddField(
model_name='prefix',
name='aggregate',
field=models.ForeignKey(
blank=True,
null=True,
on_delete=django.db.models.deletion.SET_NULL,
related_name='prefixes',
to='ipam.aggregate',
),
),
]

View File

@@ -0,0 +1,132 @@
# Generated by Django 5.0.9 on 2025-02-20 16:49
import sys
import time
from django.db import migrations, models
from ipam.choices import PrefixStatusChoices
def draw_progress(count, total, length=20):
if total == 0:
return
progress = count / total
percent = int(progress * 100)
bar = int(progress * length)
sys.stdout.write('\r')
sys.stdout.write(f"[{'=' * bar:{length}s}] {percent}%")
sys.stdout.flush()
def set_prefix(apps, schema_editor, model, attr='address', parent_attr='prefix', parent_model='Prefix'):
start = time.time()
ChildModel = apps.get_model('ipam', model)
ParentModel = apps.get_model('ipam', parent_model)
addresses = ChildModel.objects.all()
total = addresses.count()
if total == 0:
return
print('\r\n')
print(f'Migrating {parent_model}')
print('\r\n')
i = 0
draw_progress(i, total, 50)
for address in addresses:
i += 1
address_attr = getattr(address, attr)
prefixes = ParentModel.objects.filter(
prefix__net_contains_or_equals=str(address_attr.ip),
prefix__net_mask_length__lte=address_attr.prefixlen,
)
setattr(address, parent_attr, prefixes.last())
try:
address.save()
except Exception as e:
print(f'Error at {address}')
raise e
draw_progress(i, total, 50)
end = time.time()
print(f"\r\nElapsed Time: {end - start:.2f}s")
def set_ipaddress_prefix(apps, schema_editor):
set_prefix(apps, schema_editor, 'IPAddress')
def unset_ipaddress_prefix(apps, schema_editor):
IPAddress = apps.get_model('ipam', 'IPAddress')
IPAddress.objects.update(prefix=None)
def set_iprange_prefix(apps, schema_editor):
set_prefix(apps, schema_editor, 'IPRange', 'start_address')
def unset_iprange_prefix(apps, schema_editor):
IPRange = apps.get_model('ipam', 'IPRange')
IPRange.objects.update(prefix=None)
def set_prefix_aggregate(apps, schema_editor):
set_prefix(apps, schema_editor, 'Prefix', 'prefix', 'aggregate', 'Aggregate')
def unset_prefix_aggregate(apps, schema_editor):
Prefix = apps.get_model('ipam', 'Prefix')
Prefix.objects.update(aggregate=None)
def set_prefix_parent(apps, schema_editor):
Prefix = apps.get_model('ipam', 'Prefix')
start = time.time()
addresses = Prefix.objects.all()
i = 0
total = addresses.count()
if total == 0:
return
print('\r\n')
draw_progress(i, total, 50)
for address in addresses:
i += 1
prefixes = Prefix.objects.exclude(pk=address.pk).filter(
models.Q(vrf=address.vrf, prefix__net_contains=str(address.prefix.ip))
| models.Q(
vrf=None,
status=PrefixStatusChoices.STATUS_CONTAINER,
prefix__net_contains=str(address.prefix.ip),
)
)
if not prefixes.exists():
draw_progress(i, total, 50)
continue
address.parent = prefixes.last()
address.save()
draw_progress(i, total, 50)
end = time.time()
print(f"\r\nElapsed Time: {end - start:.2f}s")
def unset_prefix_parent(apps, schema_editor):
Prefix = apps.get_model('ipam', 'Prefix')
Prefix.objects.update(parent=None)
class Migration(migrations.Migration):
dependencies = [
('ipam', '0087_ipaddress_iprange_prefix_parent'),
]
operations = [
migrations.RunPython(set_ipaddress_prefix, unset_ipaddress_prefix),
migrations.RunPython(set_iprange_prefix, unset_iprange_prefix),
migrations.RunPython(set_prefix_aggregate, unset_prefix_aggregate),
migrations.RunPython(set_prefix_parent, unset_prefix_parent),
]

View File

@@ -0,0 +1,57 @@
# Generated by Django 5.2.5 on 2026-02-06 21:30
import pgtrigger.compiler
import pgtrigger.migrations
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('ipam', '0088_ipaddress_iprange_prefix_parent_data'),
]
operations = [
pgtrigger.migrations.AddTrigger(
model_name='prefix',
trigger=pgtrigger.compiler.Trigger(
name='ipam_prefix_delete',
sql=pgtrigger.compiler.UpsertTriggerSql(
func="\n-- Update Child Prefix's with Prefix's PARENT This is a safe assumption based on the fact that the parent would be the\n-- next direct parent for anything else that could contain this prefix\nUPDATE ipam_prefix SET parent_id=OLD.parent_id WHERE parent_id=OLD.id;\nRETURN OLD;\n",
hash='ee3f890009c05a3617428158e7b6f3d77317885d',
operation='DELETE',
pgid='pgtrigger_ipam_prefix_delete_e7810',
table='ipam_prefix',
when='BEFORE',
),
),
),
pgtrigger.migrations.AddTrigger(
model_name='prefix',
trigger=pgtrigger.compiler.Trigger(
name='ipam_prefix_insert',
sql=pgtrigger.compiler.UpsertTriggerSql(
func="\n-- Update the prefix with the new parent if the parent is the most appropriate prefix\nUPDATE ipam_prefix\nSET parent_id=NEW.id\nWHERE\n prefix << NEW.prefix\n AND\n (\n (vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))\n OR\n (\n NEW.vrf_id IS NULL\n AND\n NEW.status = 'container'\n AND\n NOT EXISTS(\n SELECT 1 FROM ipam_prefix p WHERE p.prefix >> ipam_prefix.prefix AND p.vrf_id = ipam_prefix.vrf_id\n )\n )\n )\n AND id != NEW.id\n AND NOT EXISTS (\n SELECT 1 FROM ipam_prefix p\n WHERE\n p.prefix >> ipam_prefix.prefix\n AND p.prefix << NEW.prefix\n AND (\n (p.vrf_id = ipam_prefix.vrf_id OR (p.vrf_id IS NULL AND ipam_prefix.vrf_id IS NULL))\n OR\n (p.vrf_id IS NULL AND p.status = 'container')\n )\n AND p.id != NEW.id\n )\n;\nRETURN NEW;\n",
hash='1d71498f09e767183d3b0d29c06c9ac9e2cc000a',
operation='INSERT',
pgid='pgtrigger_ipam_prefix_insert_46c72',
table='ipam_prefix',
when='AFTER',
),
),
),
pgtrigger.migrations.AddTrigger(
model_name='prefix',
trigger=pgtrigger.compiler.Trigger(
name='ipam_prefix_update',
sql=pgtrigger.compiler.UpsertTriggerSql(
func="\n-- When a prefix changes, reassign any child prefixes that no longer\n-- fall within the new prefix range to the parent prefix (or set null if no parent exists)\nUPDATE ipam_prefix\nSET parent_id = OLD.parent_id\nWHERE\n parent_id = NEW.id\n -- IP address no longer contained within the updated prefix\n AND NOT (prefix << NEW.prefix);\n\n-- When a prefix changes, reassign any ip addresses that no longer\n-- fall within the new prefix range to the parent prefix (or set null if no parent exists)\nUPDATE ipam_ipaddress\nSET prefix_id = OLD.parent_id\nWHERE\n prefix_id = NEW.id\n -- IP address no longer contained within the updated prefix\n AND\n NOT (address << NEW.prefix)\n;\n\n-- When a prefix changes, reassign any ip ranges that no longer\n-- fall within the new prefix range to the parent prefix (or set null if no parent exists)\nUPDATE ipam_iprange\nSET prefix_id = OLD.parent_id\nWHERE\n prefix_id = NEW.id\n -- IP address no longer contained within the updated prefix\n AND\n NOT (start_address << NEW.prefix)\n AND\n NOT (end_address << NEW.prefix)\n;\n\n-- When a prefix changes, reassign any ip addresses that are in-scope but\n-- no longer within the same VRF\nUPDATE ipam_ipaddress\n SET prefix_id = OLD.parent_id\n WHERE\n prefix_id = NEW.id\n AND\n address << OLD.prefix\n AND\n (\n NOT address << NEW.prefix\n OR\n (\n vrf_id is NULL\n AND\n NEW.vrf_id IS NOT NULL\n )\n OR\n (\n OLD.vrf_id IS NULL\n AND\n NEW.vrf_id IS NOT NULL\n AND\n NEW.vrf_id != vrf_id\n )\n )\n;\n\n-- When a prefix changes, reassign any ip ranges that are in-scope but\n-- no longer within the same VRF\nUPDATE ipam_iprange\n SET prefix_id = OLD.parent_id\n WHERE\n prefix_id = NEW.id\n AND\n start_address << OLD.prefix\n AND\n end_address << OLD.prefix\n AND\n (\n NOT start_address << NEW.prefix\n OR\n NOT end_address << NEW.prefix\n OR\n (\n vrf_id is NULL\n AND\n NEW.vrf_id IS NOT NULL\n )\n OR\n (\n OLD.vrf_id IS NULL\n AND\n NEW.vrf_id IS NOT NULL\n AND\n NEW.vrf_id != vrf_id\n )\n )\n;\n\n-- Update the prefix with the new parent if the parent is the most appropriate prefix\nUPDATE ipam_prefix\n SET parent_id=NEW.id\n WHERE\n prefix << NEW.prefix\n AND\n (\n (vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))\n OR\n (\n NEW.vrf_id IS NULL\n AND\n NEW.status = 'container'\n AND\n NOT EXISTS(\n SELECT 1 FROM ipam_prefix p WHERE p.prefix >> prefix AND p.vrf_id = vrf_id\n )\n )\n )\n AND id != NEW.id\n AND NOT EXISTS (\n SELECT 1 FROM ipam_prefix p\n WHERE\n p.prefix >> ipam_prefix.prefix\n AND p.prefix << NEW.prefix\n AND (\n (p.vrf_id = ipam_prefix.vrf_id OR (p.vrf_id IS NULL AND ipam_prefix.vrf_id IS NULL))\n OR\n (p.vrf_id IS NULL AND p.status = 'container')\n )\n AND p.id != NEW.id\n )\n;\nUPDATE ipam_ipaddress\n SET prefix_id = NEW.id\n WHERE\n prefix_id != NEW.id\n AND\n address << NEW.prefix\n AND (\n (vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))\n OR (\n NEW.vrf_id IS NULL\n AND\n NEW.status = 'container'\n AND\n NOT EXISTS(\n SELECT 1 FROM ipam_prefix p WHERE p.prefix >> address AND p.vrf_id = vrf_id\n )\n )\n )\n;\nUPDATE ipam_iprange\n SET prefix_id = NEW.id\n WHERE\n prefix_id != NEW.id\n AND\n start_address << NEW.prefix\n AND\n end_address << NEW.prefix\n AND (\n (vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))\n OR (\n NEW.vrf_id IS NULL\n AND\n NEW.status = 'container'\n AND\n NOT EXISTS(\n SELECT 1 FROM ipam_prefix p WHERE\n p.prefix >> start_address\n AND\n p.prefix >> end_address\n AND\n p.vrf_id = vrf_id\n )\n )\n )\n;\nRETURN NEW;\n",
hash='7dce524151c88aa9864aad70a24cb5982b05aa28',
operation='UPDATE',
pgid='pgtrigger_ipam_prefix_update_e5fca',
table='ipam_prefix',
when='AFTER',
),
),
),
]

View File

@@ -1,4 +1,5 @@
import netaddr
import pgtrigger
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.contrib.postgres.indexes import GistIndex
@@ -8,6 +9,7 @@ from django.db.models import F
from django.db.models.functions import Cast
from django.utils.functional import cached_property
from django.utils.translation import gettext_lazy as _
from netaddr.ip import IPNetwork
from dcim.models.mixins import CachedScopeMixin
from ipam.choices import *
@@ -16,6 +18,8 @@ from ipam.fields import IPNetworkField, IPAddressField
from ipam.lookups import Host
from ipam.managers import IPAddressManager
from ipam.querysets import PrefixQuerySet
from ipam.triggers import ipam_prefix_delete_adjust_prefix_parent, ipam_prefix_insert_adjust_prefix_parent, \
ipam_prefix_update_adjust_prefix_parent
from ipam.validators import DNSValidator
from netbox.config import get_config
from netbox.models import OrganizationalModel, PrimaryModel
@@ -185,31 +189,28 @@ class Aggregate(ContactsMixin, GetAvailablePrefixesMixin, PrimaryModel):
return min(utilization, 100)
class Role(OrganizationalModel):
"""
A Role represents the functional role of a Prefix or VLAN; for example, "Customer," "Infrastructure," or
"Management."
"""
weight = models.PositiveSmallIntegerField(
verbose_name=_('weight'),
default=1000
)
class Meta:
ordering = ('weight', 'name')
verbose_name = _('role')
verbose_name_plural = _('roles')
def __str__(self):
return self.name
class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, PrimaryModel):
"""
A Prefix represents an IPv4 or IPv6 network, including mask length. Prefixes can optionally be scoped to certain
areas and/or assigned to VRFs. A Prefix must be assigned a status and may optionally be assigned a user-defined Role.
A Prefix can also be assigned to a VLAN where appropriate.
"""
aggregate = models.ForeignKey(
to='ipam.Aggregate',
on_delete=models.SET_NULL, # This is handled by triggers
related_name='prefixes',
blank=True,
null=True,
verbose_name=_('aggregate')
)
parent = models.ForeignKey(
to='ipam.Prefix',
on_delete=models.DO_NOTHING,
related_name='children',
blank=True,
null=True,
verbose_name=_('parent')
)
prefix = IPNetworkField(
verbose_name=_('prefix'),
help_text=_('IPv4 or IPv6 network with mask')
@@ -284,8 +285,32 @@ class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, Primary
verbose_name_plural = _('prefixes')
indexes = (
models.Index(fields=('scope_type', 'scope_id')),
GistIndex(fields=['prefix'], name='ipam_prefix_gist_idx', opclasses=['inet_ops']),
)
GistIndex(
fields=['prefix'],
name='ipam_prefix_gist_idx',
opclasses=['inet_ops'],
),
)
triggers = (
pgtrigger.Trigger(
name='ipam_prefix_delete',
operation=pgtrigger.Delete,
when=pgtrigger.Before,
func=ipam_prefix_delete_adjust_prefix_parent,
),
pgtrigger.Trigger(
name='ipam_prefix_insert',
operation=pgtrigger.Insert,
when=pgtrigger.After,
func=ipam_prefix_insert_adjust_prefix_parent,
),
pgtrigger.Trigger(
name='ipam_prefix_update',
operation=pgtrigger.Update,
when=pgtrigger.After,
func=ipam_prefix_update_adjust_prefix_parent,
),
)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@@ -301,6 +326,8 @@ class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, Primary
super().clean()
if self.prefix:
if not isinstance(self.prefix, IPNetwork):
self.prefix = IPNetwork(self.prefix)
# /0 masks are not acceptable
if self.prefix.prefixlen == 0:
@@ -322,6 +349,10 @@ class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, Primary
def save(self, *args, **kwargs):
if not self.pk or not self.parent or (self.prefix != self._prefix) or (self.vrf_id != self._vrf_id):
parent = self.find_parent_prefix(networks=self.prefix, vrf=self.vrf, exclude=self.pk)
self.parent = parent
if isinstance(self.prefix, netaddr.IPNetwork):
# Clear host bits from prefix
@@ -346,11 +377,11 @@ class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, Primary
return netaddr.IPAddress(self.prefix).format(netaddr.ipv6_full)
@property
def depth(self):
def depth_count(self):
return self._depth
@property
def children(self):
def children_count(self):
return self._children
def _set_prefix_length(self, value):
@@ -490,11 +521,63 @@ class Prefix(ContactsMixin, GetAvailablePrefixesMixin, CachedScopeMixin, Primary
return min(utilization, 100)
@classmethod
def find_parent_prefix(cls, networks, vrf=None, exclude=None):
# TODO: Document
if isinstance(networks, (netaddr.IPAddress, netaddr.IPNetwork, str)):
networks = [networks]
network_filter = models.Q()
for network in networks:
network_filter &= models.Q(
prefix__net_contains_or_equals=network
)
prefixes = Prefix.objects.filter(
models.Q(
network_filter,
vrf=vrf
) | models.Q(
network_filter,
vrf=None,
status=PrefixStatusChoices.STATUS_CONTAINER,
)
)
if exclude:
prefixes = prefixes.exclude(pk=exclude)
return prefixes.last()
class Role(OrganizationalModel):
"""
A Role represents the functional role of a Prefix or VLAN; for example, "Customer," "Infrastructure," or
"Management."
"""
weight = models.PositiveSmallIntegerField(
verbose_name=_('weight'),
default=1000
)
class Meta:
ordering = ('weight', 'name')
verbose_name = _('role')
verbose_name_plural = _('roles')
def __str__(self):
return self.name
class IPRange(ContactsMixin, PrimaryModel):
"""
A range of IP addresses, defined by start and end addresses.
"""
prefix = models.ForeignKey(
to='ipam.Prefix',
on_delete=models.SET_NULL,
related_name='ip_ranges',
null=True,
blank=True,
verbose_name=_('prefix'),
)
start_address = IPAddressField(
verbose_name=_('start address'),
help_text=_('IPv4 or IPv6 address (with mask)')
@@ -564,6 +647,27 @@ class IPRange(ContactsMixin, PrimaryModel):
super().clean()
if self.start_address and self.end_address:
# If prefix is set, validate suitability
if self.prefix:
# Check that start address and end address are within the prefix range
if self.start_address not in self.prefix.prefix and self.end_address not in self.prefix.prefix:
raise ValidationError({
'start_address': _("Start address must be part of the selected prefix."),
'end_address': _("End address must be part of the selected prefix.")
})
elif self.start_address not in self.prefix.prefix:
raise ValidationError({
'start_address': _("Start address must be part of the selected prefix.")
})
elif self.end_address not in self.prefix.prefix:
raise ValidationError({
'end_address': _("End address must be part of the selected prefix.")
})
# Check that VRF matches prefix VRF
if self.vrf != self.prefix.vrf:
raise ValidationError({
'vrf': _("VRF must match the prefix VRF.")
})
# Check that start & end IP versions match
if self.start_address.version != self.end_address.version:
@@ -626,6 +730,12 @@ class IPRange(ContactsMixin, PrimaryModel):
# Record the range's size (number of IP addresses)
self.size = int(self.end_address.ip - self.start_address.ip) + 1
# Set the parent prefix
self.prefix = Prefix.find_parent_prefix(
networks=[self.start_address, self.end_address],
vrf=self.vrf
)
super().save(*args, **kwargs)
@property
@@ -732,6 +842,14 @@ class IPAddress(ContactsMixin, PrimaryModel):
for example, when mapping public addresses to private addresses. When an Interface has been assigned an IPAddress
which has a NAT outside IP, that Interface's Device can use either the inside or outside IP as its primary IP.
"""
prefix = models.ForeignKey(
to='ipam.Prefix',
on_delete=models.SET_NULL,
related_name='ip_addresses',
blank=True,
null=True,
verbose_name=_('prefix')
)
address = IPAddressField(
verbose_name=_('address'),
help_text=_('IPv4 or IPv6 address (with mask)')
@@ -819,6 +937,7 @@ class IPAddress(ContactsMixin, PrimaryModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._address = self.address
# Denote the original assigned object (if any) for validation in clean()
self._original_assigned_object_id = self.__dict__.get('assigned_object_id')
self._original_assigned_object_type_id = self.__dict__.get('assigned_object_type_id')
@@ -865,6 +984,16 @@ class IPAddress(ContactsMixin, PrimaryModel):
super().clean()
if self.address:
# If prefix is set, validate suitability
if self.prefix:
if self.address not in self.prefix.prefix:
raise ValidationError({
'prefix': _("IP address must be part of the selected prefix.")
})
if self.vrf != self.prefix.vrf:
raise ValidationError({
'vrf': _("IP address VRF must match the prefix VRF.")
})
# /0 masks are not acceptable
if self.address.prefixlen == 0:
@@ -958,6 +1087,9 @@ class IPAddress(ContactsMixin, PrimaryModel):
# Force dns_name to lowercase
self.dns_name = self.dns_name.lower()
# Set the parent prefix
self.prefix = Prefix.find_parent_prefix(networks=self.address, vrf=self.vrf)
super().save(*args, **kwargs)
def clone(self):
@@ -1012,3 +1144,8 @@ class IPAddress(ContactsMixin, PrimaryModel):
def get_role_color(self):
return IPAddressRoleChoices.colors.get(self.role)
@classmethod
def find_prefix(cls, address):
prefixes = Prefix.objects.filter(prefix__net_contains=address.address, vrf=address.vrf)
return prefixes.last()
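A quick sketch of the new parent-resolution behavior, assuming this branch's models and run from a Django shell; object names are illustrative only:

# Sketch only: demonstrates find_parent_prefix() and the save() hook above.
from netaddr import IPNetwork
from ipam.choices import PrefixStatusChoices
from ipam.models import IPAddress, Prefix

container = Prefix(prefix='10.0.0.0/8', status=PrefixStatusChoices.STATUS_CONTAINER)
container.clean()
container.save()

child = Prefix(prefix='10.1.0.0/16')
child.clean()
child.save()  # save() resolves the parent via find_parent_prefix()
assert child.parent == container

# IP addresses resolve their containing prefix the same way on save()
ip = IPAddress(address=IPNetwork('10.1.0.1/24'))
ip.clean()
ip.save()
assert ip.prefix == child  # the most specific containing prefix wins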

View File

@@ -87,7 +87,9 @@ class Service(ContactsMixin, ServiceBase, PrimaryModel):
help_text=_("The specific IP addresses (if any) to which this application service is bound")
)
clone_fields = ['protocol', 'ports', 'description', 'parent', 'ipaddresses', ]
clone_fields = (
'protocol', 'ports', 'description', 'parent_object_type', 'parent_object_id', 'ipaddresses',
)
class Meta:
indexes = (

View File

@@ -53,11 +53,12 @@ class IPAddressIndex(SearchIndex):
model = models.IPAddress
fields = (
('address', 100),
('prefix', 200),
('dns_name', 300),
('description', 500),
('comments', 5000),
)
display_attrs = ('vrf', 'tenant', 'status', 'role', 'description')
display_attrs = ('prefix', 'vrf', 'tenant', 'status', 'role', 'description')
@register_search
@@ -66,10 +67,11 @@ class IPRangeIndex(SearchIndex):
fields = (
('start_address', 100),
('end_address', 300),
('prefix', 400),
('description', 500),
('comments', 5000),
)
display_attrs = ('vrf', 'tenant', 'status', 'role', 'description')
display_attrs = ('prefix', 'vrf', 'tenant', 'status', 'role', 'description')
@register_search
@@ -77,10 +79,12 @@ class PrefixIndex(SearchIndex):
model = models.Prefix
fields = (
('prefix', 110),
('parent', 200),
('aggregate', 300),
('description', 500),
('comments', 5000),
)
display_attrs = ('scope', 'vrf', 'tenant', 'vlan', 'status', 'role', 'description')
display_attrs = ('scope', 'aggregate', 'parent', 'vrf', 'tenant', 'vlan', 'status', 'role', 'description')
@register_search

View File

@@ -152,6 +152,10 @@ class PrefixUtilizationColumn(columns.UtilizationColumn):
class PrefixTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
parent = tables.Column(
verbose_name=_('Parent'),
linkify=True
)
prefix = columns.TemplateColumn(
verbose_name=_('Prefix'),
template_code=PREFIX_LINK_WITH_DEPTH,
@@ -230,9 +234,9 @@ class PrefixTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
class Meta(PrimaryModelTable.Meta):
model = Prefix
fields = (
'pk', 'id', 'prefix', 'prefix_flat', 'status', 'children', 'vrf', 'utilization', 'tenant', 'tenant_group',
'scope', 'scope_type', 'vlan_group', 'vlan', 'role', 'is_pool', 'mark_utilized', 'description', 'contacts',
'comments', 'tags', 'created', 'last_updated',
'pk', 'id', 'prefix', 'status', 'parent', 'prefix_flat', 'children', 'vrf', 'utilization',
'tenant', 'tenant_group', 'scope', 'scope_type', 'vlan_group', 'vlan', 'role', 'is_pool', 'mark_utilized',
'contacts', 'description', 'comments', 'tags', 'created', 'last_updated',
)
default_columns = (
'pk', 'prefix', 'status', 'children', 'vrf', 'utilization', 'tenant', 'scope', 'vlan', 'role',
@@ -246,8 +250,11 @@ class PrefixTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
#
# IP ranges
#
class IPRangeTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
prefix = tables.Column(
verbose_name=_('Prefix'),
linkify=True
)
start_address = tables.Column(
verbose_name=_('Start address'),
linkify=True
@@ -284,9 +291,9 @@ class IPRangeTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
class Meta(PrimaryModelTable.Meta):
model = IPRange
fields = (
'pk', 'id', 'start_address', 'end_address', 'size', 'vrf', 'status', 'role', 'tenant', 'tenant_group',
'mark_populated', 'mark_utilized', 'utilization', 'description', 'contacts', 'comments', 'tags',
'created', 'last_updated',
'pk', 'id', 'start_address', 'end_address', 'prefix', 'size', 'vrf', 'status', 'role', 'tenant',
'tenant_group', 'mark_populated', 'mark_utilized', 'utilization', 'description', 'contacts',
'comments', 'tags', 'created', 'last_updated',
)
default_columns = (
'pk', 'start_address', 'end_address', 'size', 'vrf', 'status', 'role', 'tenant', 'description',
@@ -301,10 +308,18 @@ class IPRangeTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
#
class IPAddressTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable):
prefix = tables.Column(
verbose_name=_('Prefix'),
linkify=True
)
address = tables.TemplateColumn(
template_code=IPADDRESS_LINK,
verbose_name=_('IP Address')
)
vrf = tables.TemplateColumn(
template_code=VRF_LINK,
verbose_name=_('VRF')
@@ -353,8 +368,9 @@ class IPAddressTable(TenancyColumnsMixin, ContactsColumnMixin, PrimaryModelTable
class Meta(PrimaryModelTable.Meta):
model = IPAddress
fields = (
'pk', 'id', 'address', 'vrf', 'status', 'role', 'tenant', 'tenant_group', 'nat_inside', 'nat_outside',
'assigned', 'dns_name', 'description', 'comments', 'contacts', 'tags', 'created', 'last_updated',
'pk', 'id', 'address', 'vrf', 'prefix', 'status', 'role', 'tenant', 'tenant_group', 'nat_inside',
'nat_outside', 'assigned', 'dns_name', 'description', 'comments', 'contacts', 'tags', 'created',
'last_updated',
)
default_columns = (
'pk', 'address', 'vrf', 'status', 'role', 'tenant', 'assigned', 'dns_name', 'description',

View File

@@ -16,12 +16,20 @@ PREFIX_COPY_BUTTON = """
PREFIX_LINK_WITH_DEPTH = """
{% load helpers %}
{% if record.depth %}
<div class="record-depth">
{% for i in record.depth|as_range %}
<span>•</span>
{% endfor %}
</div>
{% if record.depth_count %}
{% if object %}
<div class="record-depth">
{% for i in record.depth_count|parent_depth:object|as_range %}
<span>•</span>
{% endfor %}
</div>
{% else %}
<div class="record-depth">
{% for i in record.depth_count|as_range %}
<span>•</span>
{% endfor %}
</div>
{% endif %}
{% endif %}
""" + PREFIX_LINK

View File

@@ -407,7 +407,8 @@ class RoleTest(APIViewTestCases.APIViewTestCase):
class PrefixTest(APIViewTestCases.APIViewTestCase):
model = Prefix
brief_fields = ['_depth', 'description', 'display', 'family', 'id', 'prefix', 'url']
# TODO: Alter for parent prefix
brief_fields = ['_depth', 'aggregate', 'description', 'display', 'family', 'id', 'parent', 'prefix', 'url']
create_data = [
{
'prefix': '192.168.4.0/24',
@@ -647,7 +648,8 @@ class PrefixTest(APIViewTestCases.APIViewTestCase):
class IPRangeTest(APIViewTestCases.APIViewTestCase):
model = IPRange
brief_fields = ['description', 'display', 'end_address', 'family', 'id', 'start_address', 'url']
# TODO: Alter for parent prefix
brief_fields = ['description', 'display', 'end_address', 'family', 'id', 'prefix', 'start_address', 'url']
create_data = [
{
'start_address': '192.168.4.10/24',
@@ -805,7 +807,8 @@ class IPRangeTest(APIViewTestCases.APIViewTestCase):
class IPAddressTest(APIViewTestCases.APIViewTestCase):
model = IPAddress
brief_fields = ['address', 'description', 'display', 'family', 'id', 'url']
# TODO: Alter for parent prefix
brief_fields = ['address', 'description', 'display', 'family', 'id', 'prefix', 'url']
create_data = [
{
'address': '192.168.0.4/24',

View File

@@ -901,6 +901,10 @@ class PrefixTestCase(TestCase, ChangeLoggedFilterSetTests):
params = {'description': ['foobar1', 'foobar2']}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
# TODO: Test for parent prefix
# TODO: Test for children?
# TODO: Test for aggregate
class IPRangeTestCase(TestCase, ChangeLoggedFilterSetTests):
queryset = IPRange.objects.all()
@@ -1079,6 +1083,7 @@ class IPRangeTestCase(TestCase, ChangeLoggedFilterSetTests):
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_parent(self):
# TODO: Alter for prefix
params = {'parent': ['10.0.1.0/24', '10.0.2.0/24']}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
params = {'parent': ['10.0.1.0/25']} # Range 10.0.1.100-199 is not fully contained by 10.0.1.0/25
@@ -1318,6 +1323,7 @@ class IPAddressTestCase(TestCase, ChangeLoggedFilterSetTests):
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 2)
def test_parent(self):
# TODO: Alter for prefix
params = {'parent': ['10.0.0.0/30', '2001:db8::/126']}
self.assertEqual(self.filterset(params, self.queryset).qs.count(), 8)

View File

@@ -39,6 +39,26 @@ class TestAggregate(TestCase):
class TestIPRange(TestCase):
@classmethod
def setUpTestData(cls):
cls.vrf = VRF.objects.create(name='VRF A', rd='1:1')
cls.prefixes = (
# IPv4
Prefix(prefix='192.0.0.0/16'),
Prefix(prefix='192.0.2.0/24'),
Prefix(prefix='192.0.0.0/16', vrf=cls.vrf),
# IPv6
Prefix(prefix='2001:db8::/32'),
Prefix(prefix='2001:db8::/64'),
)
for prefix in cls.prefixes:
prefix.clean()
prefix.save()
def test_overlapping_range(self):
iprange_192_168 = IPRange.objects.create(
@@ -87,6 +107,69 @@ class TestIPRange(TestCase):
)
iprange_4_198_201.clean()
def test_parent_prefix(self):
ranges = (
IPRange(
start_address=IPNetwork('192.0.0.1/24'),
end_address=IPNetwork('192.0.0.254/24'),
prefix=self.prefixes[0]
),
IPRange(
start_address=IPNetwork('192.0.2.1/24'),
end_address=IPNetwork('192.0.2.254/24'),
prefix=self.prefixes[1]
),
IPRange(
start_address=IPNetwork('192.0.2.1/24'),
end_address=IPNetwork('192.0.2.254/24'),
vrf=self.vrf,
prefix=self.prefixes[2]
),
IPRange(
start_address=IPNetwork('2001:db8::/64'),
end_address=IPNetwork('2001:db8::ffff/64'),
prefix=self.prefixes[4]
),
IPRange(
start_address=IPNetwork('2001:db8:2::/64'),
end_address=IPNetwork('2001:db8:2::ffff/64'),
prefix=self.prefixes[3]
),
)
for ip_range in ranges:
ip_range.clean()
ip_range.save()
self.assertEqual(ranges[0].prefix, self.prefixes[0])
self.assertEqual(ranges[1].prefix, self.prefixes[1])
self.assertEqual(ranges[2].prefix, self.prefixes[2])
self.assertEqual(ranges[3].prefix, self.prefixes[4])
def test_parent_prefix_change(self):
range = IPRange(
start_address=IPNetwork('192.0.1.1/24'),
end_address=IPNetwork('192.0.1.254/24'),
prefix=self.prefixes[0]
)
range.clean()
range.save()
prefix = Prefix(prefix='192.0.0.0/17')
prefix.clean()
prefix.save()
range.refresh_from_db()
self.assertEqual(range.prefix, prefix)
# TODO: Prefix Altered
# TODO: Prefix Deleted
# TODO: Prefix falls outside range
# TODO: Prefix VRF does not match range VRF
class TestPrefix(TestCase):
@@ -169,23 +252,16 @@ class TestPrefix(TestCase):
prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
)
ips = IPAddress.objects.bulk_create((
IPAddress(address=IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
))
child_ip_pks = {p.pk for p in parent_prefix.get_child_ips()}
child_ip_pks = {p.pk for p in parent_prefix.ip_addresses.all()}
# Global container should return all children
self.assertSetEqual(child_ip_pks, {ips[0].pk, ips[1].pk, ips[2].pk, ips[3].pk})
parent_prefix.vrf = vrfs[0]
parent_prefix.save()
child_ip_pks = {p.pk for p in parent_prefix.get_child_ips()}
# VRF container is limited to its own VRF
self.assertSetEqual(child_ip_pks, {ips[1].pk})
def test_get_available_prefixes(self):
prefixes = Prefix.objects.bulk_create((
@@ -332,6 +408,62 @@ class TestPrefix(TestCase):
duplicate_prefix = Prefix(vrf=vrf, prefix=IPNetwork('192.0.2.0/24'))
self.assertRaises(ValidationError, duplicate_prefix.clean)
def test_parent_container_prefix_change(self):
vrfs = VRF.objects.bulk_create((
VRF(name='VRF 1'),
VRF(name='VRF 2'),
VRF(name='VRF 3'),
))
parent_prefix = Prefix.objects.create(
prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
)
ips = IPAddress.objects.bulk_create((
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
))
child_ip_pks = {p.pk for p in parent_prefix.ip_addresses.all()}
# Global container should return all children
self.assertSetEqual(child_ip_pks, {ips[0].pk, ips[1].pk, ips[2].pk, ips[3].pk})
parent_prefix.vrf = vrfs[0]
parent_prefix.save()
parent_prefix.refresh_from_db()
child_ip_pks = {p.pk for p in parent_prefix.ip_addresses.all()}
# VRF container is limited to its own VRF
self.assertSetEqual(child_ip_pks, {ips[1].pk})
def test_parent_container_vrf_change(self):
vrfs = VRF.objects.bulk_create((
VRF(name='VRF 1'),
VRF(name='VRF 2'),
VRF(name='VRF 3'),
))
parent_prefix = Prefix.objects.create(
prefix=IPNetwork('10.0.0.0/16'), status=PrefixStatusChoices.STATUS_CONTAINER
)
ips = IPAddress.objects.bulk_create((
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.0.1/24'), vrf=None),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.1.1/24'), vrf=vrfs[0]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.2.1/24'), vrf=vrfs[1]),
IPAddress(prefix=parent_prefix, address=IPNetwork('10.0.3.1/24'), vrf=vrfs[2]),
))
child_ip_pks = {p.pk for p in parent_prefix.ip_addresses.all()}
# Global container should return all children
self.assertSetEqual(child_ip_pks, {ips[0].pk, ips[1].pk, ips[2].pk, ips[3].pk})
parent_prefix.prefix = '10.0.0.0/23'
parent_prefix.save()
parent_prefix.refresh_from_db()
child_ip_pks = {p.pk for p in parent_prefix.ip_addresses.all()}
self.assertSetEqual(child_ip_pks, {ips[0].pk, ips[1].pk})
class TestPrefixHierarchy(TestCase):
"""
@@ -344,17 +476,21 @@ class TestPrefixHierarchy(TestCase):
prefixes = (
# IPv4
Prefix(prefix='10.0.0.0/8', _depth=0, _children=2),
Prefix(prefix='10.0.0.0/16', _depth=1, _children=1),
Prefix(prefix='10.0.0.0/24', _depth=2, _children=0),
Prefix(prefix='10.0.0.0/8'),
Prefix(prefix='10.0.0.0/16'),
Prefix(prefix='10.0.0.0/24'),
Prefix(prefix='192.168.0.0/16'),
# IPv6
Prefix(prefix='2001:db8::/32', _depth=0, _children=2),
Prefix(prefix='2001:db8::/40', _depth=1, _children=1),
Prefix(prefix='2001:db8::/48', _depth=2, _children=0),
Prefix(prefix='2001:db8::/32'),
Prefix(prefix='2001:db8::/40'),
Prefix(prefix='2001:db8::/48'),
)
Prefix.objects.bulk_create(prefixes)
for prefix in prefixes:
prefix.clean()
prefix.save()
def test_create_prefix4(self):
# Create 10.0.0.0/12
@@ -362,15 +498,19 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[3].parent.prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)
@@ -380,15 +520,19 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 2)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3].parent.prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[3]._depth, 3)
self.assertEqual(prefixes[3]._children, 0)
@@ -400,12 +544,15 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('10.0.0.0/12'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)
@@ -417,12 +564,15 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 2)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('2001:db8::/36'))
self.assertEqual(prefixes[2]._depth, 2)
self.assertEqual(prefixes[2]._children, 0)
@@ -437,14 +587,17 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)
@@ -459,14 +612,17 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(vrf__isnull=True, prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
prefixes = Prefix.objects.filter(vrf=vrf)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 0)
@@ -476,9 +632,11 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/24'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
@@ -488,9 +646,11 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 1)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 0)
@@ -500,15 +660,20 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=4)
self.assertEqual(prefixes[0].prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('10.0.0.0/8'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('10.0.0.0/24'))
# TODO: How do we resolve the parent for duplicate prefixes?
self.assertEqual(prefixes[3].parent.prefix, IPNetwork('10.0.0.0/16'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)
@@ -518,20 +683,158 @@ class TestPrefixHierarchy(TestCase):
prefixes = Prefix.objects.filter(prefix__family=6)
self.assertEqual(prefixes[0].prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[0].parent, None)
self.assertEqual(prefixes[0]._depth, 0)
self.assertEqual(prefixes[0]._children, 3)
self.assertEqual(prefixes[1].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[1].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[1]._depth, 1)
self.assertEqual(prefixes[1]._children, 1)
self.assertEqual(prefixes[2].prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[2].parent.prefix, IPNetwork('2001:db8::/32'))
self.assertEqual(prefixes[2]._depth, 1)
self.assertEqual(prefixes[2]._children, 1)
self.assertEqual(prefixes[3].prefix, IPNetwork('2001:db8::/48'))
self.assertEqual(prefixes[3].parent.prefix, IPNetwork('2001:db8::/40'))
self.assertEqual(prefixes[3]._depth, 2)
self.assertEqual(prefixes[3]._children, 0)
class TestTriggers(TestCase):
"""
Test the automatic reassignment of parent prefixes in response to changes made within
the prefix hierarchy.
"""
@classmethod
def setUpTestData(cls):
vrfs = (
VRF(name='VRF A'),
VRF(name='VRF B'),
)
for vrf in vrfs:
vrf.clean()
vrf.save()
cls.prefixes = (
# IPv4
Prefix(prefix='10.0.0.0/8'),
Prefix(prefix='10.0.0.0/16'),
Prefix(prefix='10.0.0.0/22'),
Prefix(prefix='10.0.0.0/23'),
Prefix(prefix='10.0.2.0/23'),
Prefix(prefix='10.0.0.0/24'),
Prefix(prefix='10.0.1.0/24'),
Prefix(prefix='10.0.2.0/24'),
Prefix(prefix='10.0.3.0/24'),
Prefix(prefix='10.1.0.0/16', status='container'),
Prefix(prefix='10.1.0.0/22', vrf=vrfs[0]),
Prefix(prefix='10.1.0.0/23', vrf=vrfs[0]),
Prefix(prefix='10.1.2.0/23', vrf=vrfs[0]),
Prefix(prefix='10.1.0.0/24', vrf=vrfs[0]),
Prefix(prefix='10.1.1.0/24', vrf=vrfs[0]),
Prefix(prefix='10.1.2.0/24', vrf=vrfs[0]),
Prefix(prefix='10.1.3.0/24', vrf=vrfs[0]),
)
for prefix in cls.prefixes:
prefix.clean()
prefix.save()
def test_current_hierarchy(self):
self.assertIsNone(Prefix.objects.get(prefix='10.0.0.0/8').parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/16').parent, Prefix.objects.get(prefix='10.0.0.0/8'))
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/22').parent, Prefix.objects.get(prefix='10.0.0.0/16'))
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/23').parent, Prefix.objects.get(prefix='10.0.0.0/22'))
self.assertEqual(Prefix.objects.get(prefix='10.0.2.0/23').parent, Prefix.objects.get(prefix='10.0.0.0/22'))
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/24').parent, Prefix.objects.get(prefix='10.0.0.0/23'))
self.assertEqual(Prefix.objects.get(prefix='10.0.1.0/24').parent, Prefix.objects.get(prefix='10.0.0.0/23'))
self.assertEqual(Prefix.objects.get(prefix='10.0.2.0/24').parent, Prefix.objects.get(prefix='10.0.2.0/23'))
self.assertEqual(Prefix.objects.get(prefix='10.0.3.0/24').parent, Prefix.objects.get(prefix='10.0.2.0/23'))
def test_basic_insert(self):
pfx = Prefix.objects.create(prefix='10.0.0.0/21')
self.assertIsNotNone(Prefix.objects.get(prefix='10.0.0.0/22').parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/22').parent, pfx)
def test_vrf_insert(self):
vrf = VRF.objects.get(name='VRF A')
pfx = Prefix.objects.create(prefix='10.1.0.0/21', vrf=vrf)
parent = Prefix.objects.get(prefix='10.1.0.0/16')
self.assertIsNotNone(Prefix.objects.get(prefix='10.1.0.0/21').parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.0.0/21').parent, parent)
self.assertIsNotNone(Prefix.objects.get(prefix='10.1.0.0/22').parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.0.0/22').parent, pfx)
def test_basic_delete(self):
Prefix.objects.get(prefix='10.0.0.0/23').delete()
parent = Prefix.objects.get(prefix='10.0.0.0/22')
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.1.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.2.0/24').parent, Prefix.objects.get(prefix='10.0.2.0/23'))
def test_vrf_delete(self):
Prefix.objects.get(prefix='10.1.0.0/23').delete()
parent = Prefix.objects.get(prefix='10.1.0.0/22')
self.assertEqual(Prefix.objects.get(prefix='10.1.0.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.1.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.2.0/24').parent, Prefix.objects.get(prefix='10.1.2.0/23'))
def test_basic_update(self):
pfx = Prefix.objects.get(prefix='10.0.0.0/23')
parent = Prefix.objects.get(prefix='10.0.0.0/22')
pfx.prefix = '10.3.0.0/23'
pfx.parent = Prefix.objects.get(prefix='10.0.0.0/8')
pfx.clean()
pfx.save()
self.assertEqual(Prefix.objects.get(prefix='10.0.0.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.1.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.0.2.0/24').parent, Prefix.objects.get(prefix='10.0.2.0/23'))
def test_vrf_update(self):
pfx = Prefix.objects.get(prefix='10.1.0.0/23')
parent = Prefix.objects.get(prefix='10.1.0.0/22')
pfx.prefix = '10.3.0.0/23'
pfx.parent = None
pfx.clean()
pfx.save()
self.assertEqual(Prefix.objects.get(prefix='10.1.0.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.1.0/24').parent, parent)
self.assertEqual(Prefix.objects.get(prefix='10.1.2.0/24').parent, Prefix.objects.get(prefix='10.1.2.0/23'))
# TODO: Test VRF Changes
class TestIPAddress(TestCase):
"""
Test the automatic assignment of parent prefixes to IP addresses in response to changes
made within the prefix hierarchy.
"""
@classmethod
def setUpTestData(cls):
cls.vrf = VRF.objects.create(name='VRF A', rd='1:1')
cls.prefixes = (
# IPv4
Prefix(prefix='192.0.0.0/16'),
Prefix(prefix='192.0.2.0/24'),
Prefix(prefix='192.0.0.0/16', vrf=cls.vrf),
# IPv6
Prefix(prefix='2001:db8::/32'),
Prefix(prefix='2001:db8::/64'),
)
for prefix in cls.prefixes:
prefix.clean()
prefix.save()
def test_get_duplicates(self):
ips = IPAddress.objects.bulk_create((
@@ -543,6 +846,44 @@ class TestIPAddress(TestCase):
self.assertSetEqual(set(duplicate_ip_pks), {ips[1].pk, ips[2].pk})
def test_parent_prefix(self):
ips = (
IPAddress(address=IPNetwork('192.0.0.1/24'), prefix=self.prefixes[0]),
IPAddress(address=IPNetwork('192.0.2.1/24'), prefix=self.prefixes[1]),
IPAddress(address=IPNetwork('192.0.2.1/24'), vrf=self.vrf, prefix=self.prefixes[2]),
IPAddress(address=IPNetwork('2001:db8::/64'), prefix=self.prefixes[4]),
IPAddress(address=IPNetwork('2001:db8:2::/64')),
)
for ip in ips:
ip.clean()
ip.save()
self.assertEqual(ips[0].prefix, self.prefixes[0])
self.assertEqual(ips[1].prefix, self.prefixes[1])
self.assertEqual(ips[2].prefix, self.prefixes[2])
self.assertEqual(ips[3].prefix, self.prefixes[4])
self.assertEqual(ips[4].prefix, self.prefixes[3])
def test_parent_prefix_change(self):
ip = IPAddress(address=IPNetwork('192.0.1.1/24'), prefix=self.prefixes[0])
ip.clean()
ip.save()
prefix = Prefix(prefix='192.0.1.0/17')
prefix.clean()
prefix.save()
ip.refresh_from_db()
self.assertEqual(ip.prefix, prefix)
# TODO: Prefix Altered
# TODO: Prefix Deleted
# TODO: Prefix does not contain IP Address
# TODO: Prefix VRF does not match IP Address VRF
#
# Uniqueness enforcement tests
#
@@ -559,13 +900,20 @@ class TestIPAddress(TestCase):
self.assertRaises(ValidationError, duplicate_ip.clean)
def test_duplicate_vrf(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=False)
vrf = VRF.objects.get(rd='1:1')
vrf.enforce_unique = False
vrf.clean()
vrf.save()
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertIsNone(duplicate_ip.clean())
def test_duplicate_vrf_unique(self):
vrf = VRF.objects.create(name='Test', rd='1:1', enforce_unique=True)
vrf = VRF.objects.get(rd='1:1')
vrf.enforce_unique = True
vrf.clean()
vrf.save()
IPAddress.objects.create(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
duplicate_ip = IPAddress(vrf=vrf, address=IPNetwork('192.0.2.1/24'))
self.assertRaises(ValidationError, duplicate_ip.clean)

View File

@@ -421,6 +421,7 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
tags = create_tags('Alpha', 'Bravo', 'Charlie')
# TODO: Alter for prefix
cls.form_data = {
'prefix': IPNetwork('192.0.2.0/24'),
'scope_type': ContentType.objects.get_for_model(Site).pk,
@@ -436,6 +437,7 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
}
site = sites[0].pk
# TODO: Alter for prefix
cls.csv_data = (
"vrf,prefix,status,scope_type,scope_id",
f"VRF 1,10.4.0.0/16,active,dcim.site,{site}",
@@ -443,6 +445,7 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
f"VRF 1,10.6.0.0/16,active,dcim.site,{site}",
)
# TODO: Alter for prefix
cls.csv_update_data = (
"id,description,status",
f"{prefixes[0].pk},New description 7,{PrefixStatusChoices.STATUS_RESERVED}",
@@ -450,6 +453,7 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
f"{prefixes[2].pk},New description 9,{PrefixStatusChoices.STATUS_RESERVED}",
)
# TODO: Alter for prefix
cls.bulk_edit_data = {
'vrf': vrfs[1].pk,
'tenant': None,
@@ -477,9 +481,9 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
def test_prefix_ipranges(self):
prefix = Prefix.objects.create(prefix=IPNetwork('192.168.0.0/16'))
ip_ranges = (
IPRange(start_address='192.168.0.1/24', end_address='192.168.0.100/24', size=99),
IPRange(start_address='192.168.1.1/24', end_address='192.168.1.100/24', size=99),
IPRange(start_address='192.168.2.1/24', end_address='192.168.2.100/24', size=99),
IPRange(prefix=prefix, start_address='192.168.0.1/24', end_address='192.168.0.100/24', size=99),
IPRange(prefix=prefix, start_address='192.168.1.1/24', end_address='192.168.1.100/24', size=99),
IPRange(prefix=prefix, start_address='192.168.2.1/24', end_address='192.168.2.100/24', size=99),
)
IPRange.objects.bulk_create(ip_ranges)
self.assertEqual(prefix.get_child_ranges().count(), 3)
@@ -491,12 +495,12 @@ class PrefixTestCase(ViewTestCases.PrimaryObjectViewTestCase):
def test_prefix_ipaddresses(self):
prefix = Prefix.objects.create(prefix=IPNetwork('192.168.0.0/16'))
ip_addresses = (
IPAddress(address=IPNetwork('192.168.0.1/16')),
IPAddress(address=IPNetwork('192.168.0.2/16')),
IPAddress(address=IPNetwork('192.168.0.3/16')),
IPAddress(prefix=prefix, address=IPNetwork('192.168.0.1/16')),
IPAddress(prefix=prefix, address=IPNetwork('192.168.0.2/16')),
IPAddress(prefix=prefix, address=IPNetwork('192.168.0.3/16')),
)
IPAddress.objects.bulk_create(ip_addresses)
self.assertEqual(prefix.get_child_ips().count(), 3)
self.assertEqual(prefix.ip_addresses.all().count(), 3)
url = reverse('ipam:prefix_ipaddresses', kwargs={'pk': prefix.pk})
self.assertHttpStatus(self.client.get(url), 200)
@@ -670,6 +674,7 @@ class IPRangeTestCase(ViewTestCases.PrimaryObjectViewTestCase):
tags = create_tags('Alpha', 'Bravo', 'Charlie')
# TODO: Alter for prefix
cls.form_data = {
'start_address': IPNetwork('192.0.5.10/24'),
'end_address': IPNetwork('192.0.5.100/24'),
@@ -683,6 +688,7 @@ class IPRangeTestCase(ViewTestCases.PrimaryObjectViewTestCase):
'tags': [t.pk for t in tags],
}
# TODO: Alter for prefix
cls.csv_data = (
"vrf,start_address,end_address,status",
"VRF 1,10.1.0.1/16,10.1.9.254/16,active",
@@ -690,6 +696,7 @@ class IPRangeTestCase(ViewTestCases.PrimaryObjectViewTestCase):
"VRF 1,10.3.0.1/16,10.3.9.254/16,active",
)
# TODO: Alter for prefix
cls.csv_update_data = (
"id,description,status",
f"{ip_ranges[0].pk},New description 7,{IPRangeStatusChoices.STATUS_RESERVED}",
@@ -697,6 +704,7 @@ class IPRangeTestCase(ViewTestCases.PrimaryObjectViewTestCase):
f"{ip_ranges[2].pk},New description 9,{IPRangeStatusChoices.STATUS_RESERVED}",
)
# TODO: Alter for prefix
cls.bulk_edit_data = {
'vrf': vrfs[1].pk,
'tenant': None,
@@ -763,6 +771,7 @@ class IPAddressTestCase(ViewTestCases.PrimaryObjectViewTestCase):
),
)
FHRPGroup.objects.bulk_create(fhrp_groups)
# TODO: Alter for prefix
cls.form_data = {
'vrf': vrfs[1].pk,
'address': IPNetwork('192.0.2.99/24'),
@@ -775,6 +784,7 @@ class IPAddressTestCase(ViewTestCases.PrimaryObjectViewTestCase):
'tags': [t.pk for t in tags],
}
# TODO: Alter for prefix
cls.csv_data = (
"vrf,address,status,fhrp_group",
"VRF 1,192.0.2.4/24,active,FHRP Group 1",
@@ -782,6 +792,7 @@ class IPAddressTestCase(ViewTestCases.PrimaryObjectViewTestCase):
"VRF 1,192.0.2.6/24,active,FHRP Group 3",
)
# TODO: Alter for prefix
cls.csv_update_data = (
"id,description,status",
f"{ipaddresses[0].pk},New description 7,{IPAddressStatusChoices.STATUS_RESERVED}",
@@ -789,6 +800,7 @@ class IPAddressTestCase(ViewTestCases.PrimaryObjectViewTestCase):
f"{ipaddresses[2].pk},New description 9,{IPAddressStatusChoices.STATUS_RESERVED}",
)
# TODO: Alter for prefix
cls.bulk_edit_data = {
'vrf': vrfs[1].pk,
'tenant': None,

220
netbox/ipam/triggers.py Normal file
View File

@@ -0,0 +1,220 @@
ipam_prefix_delete_adjust_prefix_parent = """
-- Reassign child prefixes to the deleted prefix's parent. This is safe because that parent
-- is the next direct ancestor of anything the deleted prefix contained.
UPDATE ipam_prefix SET parent_id=OLD.parent_id WHERE parent_id=OLD.id;
RETURN OLD;
"""
ipam_prefix_insert_adjust_prefix_parent = """
-- Update the prefix with the new parent if the parent is the most appropriate prefix
UPDATE ipam_prefix
SET parent_id=NEW.id
WHERE
prefix << NEW.prefix
AND
(
(vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))
OR
(
NEW.vrf_id IS NULL
AND
NEW.status = 'container'
AND
NOT EXISTS(
SELECT 1 FROM ipam_prefix p WHERE p.prefix >> ipam_prefix.prefix AND p.vrf_id = ipam_prefix.vrf_id
)
)
)
AND id != NEW.id
AND NOT EXISTS (
SELECT 1 FROM ipam_prefix p
WHERE
p.prefix >> ipam_prefix.prefix
AND p.prefix << NEW.prefix
AND (
(p.vrf_id = ipam_prefix.vrf_id OR (p.vrf_id IS NULL AND ipam_prefix.vrf_id IS NULL))
OR
(p.vrf_id IS NULL AND p.status = 'container')
)
AND p.id != NEW.id
)
;
RETURN NEW;
"""
ipam_prefix_update_adjust_prefix_parent = """
-- When a prefix changes, reassign any child prefixes that no longer
-- fall within the new prefix range to the parent prefix (or set null if no parent exists)
UPDATE ipam_prefix
SET parent_id = OLD.parent_id
WHERE
parent_id = NEW.id
-- Prefix no longer contained within the updated prefix
AND NOT (prefix << NEW.prefix);
-- When a prefix changes, reassign any ip addresses that no longer
-- fall within the new prefix range to the parent prefix (or set null if no parent exists)
UPDATE ipam_ipaddress
SET prefix_id = OLD.parent_id
WHERE
prefix_id = NEW.id
-- IP address no longer contained within the updated prefix
AND
NOT (address << NEW.prefix)
;
-- When a prefix changes, reassign any ip ranges that no longer
-- fall within the new prefix range to the parent prefix (or set null if no parent exists)
UPDATE ipam_iprange
SET prefix_id = OLD.parent_id
WHERE
prefix_id = NEW.id
-- IP range no longer contained within the updated prefix
AND
NOT (start_address << NEW.prefix)
AND
NOT (end_address << NEW.prefix)
;
-- When a prefix changes, reassign any ip addresses that are in-scope but
-- no longer within the same VRF
UPDATE ipam_ipaddress
SET prefix_id = OLD.parent_id
WHERE
prefix_id = NEW.id
AND
address << OLD.prefix
AND
(
NOT address << NEW.prefix
OR
(
vrf_id IS NULL
AND
NEW.vrf_id IS NOT NULL
)
OR
(
OLD.vrf_id IS NULL
AND
NEW.vrf_id IS NOT NULL
AND
NEW.vrf_id != vrf_id
)
)
;
-- When a prefix changes, reassign any ip ranges that are in-scope but
-- no longer within the same VRF
UPDATE ipam_iprange
SET prefix_id = OLD.parent_id
WHERE
prefix_id = NEW.id
AND
start_address << OLD.prefix
AND
end_address << OLD.prefix
AND
(
NOT start_address << NEW.prefix
OR
NOT end_address << NEW.prefix
OR
(
vrf_id IS NULL
AND
NEW.vrf_id IS NOT NULL
)
OR
(
OLD.vrf_id IS NULL
AND
NEW.vrf_id IS NOT NULL
AND
NEW.vrf_id != vrf_id
)
)
;
-- Update the prefix with the new parent if the parent is the most appropriate prefix
UPDATE ipam_prefix
SET parent_id=NEW.id
WHERE
prefix << NEW.prefix
AND
(
(vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))
OR
(
NEW.vrf_id IS NULL
AND
NEW.status = 'container'
AND
NOT EXISTS(
SELECT 1 FROM ipam_prefix p WHERE p.prefix >> prefix AND p.vrf_id = vrf_id
)
)
)
AND id != NEW.id
AND NOT EXISTS (
SELECT 1 FROM ipam_prefix p
WHERE
p.prefix >> ipam_prefix.prefix
AND p.prefix << NEW.prefix
AND (
(p.vrf_id = ipam_prefix.vrf_id OR (p.vrf_id IS NULL AND ipam_prefix.vrf_id IS NULL))
OR
(p.vrf_id IS NULL AND p.status = 'container')
)
AND p.id != NEW.id
)
;
UPDATE ipam_ipaddress
SET prefix_id = NEW.id
WHERE
prefix_id != NEW.id
AND
address << NEW.prefix
AND (
(vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))
OR (
NEW.vrf_id IS NULL
AND
NEW.status = 'container'
AND
NOT EXISTS(
SELECT 1 FROM ipam_prefix p WHERE p.prefix >> address AND p.vrf_id = vrf_id
)
)
)
;
UPDATE ipam_iprange
SET prefix_id = NEW.id
WHERE
prefix_id != NEW.id
AND
start_address << NEW.prefix
AND
end_address << NEW.prefix
AND (
(vrf_id = NEW.vrf_id OR (vrf_id IS NULL AND NEW.vrf_id IS NULL))
OR (
NEW.vrf_id IS NULL
AND
NEW.status = 'container'
AND
NOT EXISTS(
SELECT 1 FROM ipam_prefix p WHERE
p.prefix >> start_address
AND
p.prefix >> end_address
AND
p.vrf_id = vrf_id
)
)
)
;
RETURN NEW;
"""

View File

@@ -687,13 +687,13 @@ class PrefixIPAddressesView(generic.ObjectChildrenView):
template_name = 'ipam/prefix/ip_addresses.html'
tab = ViewTab(
label=_('IP Addresses'),
badge=lambda x: x.get_child_ips().count(),
badge=lambda x: x.ip_addresses.count(),
permission='ipam.view_ipaddress',
weight=700
)
def get_children(self, request, parent):
return parent.get_child_ips().restrict(request.user, 'view').prefetch_related('vrf', 'tenant', 'tenant__group')
return parent.ip_addresses.restrict(request.user, 'view').prefetch_related('vrf', 'tenant', 'tenant__group')
def prep_table_data(self, request, queryset, parent):
if not request.GET.get('q') and not get_table_ordering(request, self.table):

View File

@@ -1,3 +1,4 @@
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import ObjectDoesNotExist
from django.db.backends.postgresql.psycopg_any import NumericRange
from django.utils.translation import gettext as _
@@ -109,7 +110,7 @@ class ContentTypeField(RelatedField):
def to_internal_value(self, data):
try:
app_label, model = data.split('.')
return self.queryset.get(app_label=app_label, model=model)
return ContentType.objects.get_by_natural_key(app_label=app_label, model=model)
except ObjectDoesNotExist:
self.fail('does_not_exist', content_type=data)
except (AttributeError, TypeError, ValueError):
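The switch to get_by_natural_key() matters because Django's ContentTypeManager caches these lookups in memory, whereas filtering an arbitrary queryset always hits the database. Standard Django behavior, shown for clarity (not code from this diff):

# Standard Django behavior: get_by_natural_key() populates and reuses the
# manager's in-memory cache, so repeat lookups issue no queries.
from django.contrib.contenttypes.models import ContentType

ct1 = ContentType.objects.get_by_natural_key('dcim', 'site')  # one query, then cached
ct2 = ContentType.objects.get_by_natural_key('dcim', 'site')  # served from the cache
assert ct1 is ct2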

View File

@@ -112,6 +112,7 @@ class ValidatedModelSerializer(BaseModelSerializer):
for k, v in attrs.items():
setattr(instance, k, v)
instance._m2m_values = m2m_values
instance.full_clean()
# Skip uniqueness validation of individual fields inside `full_clean()` (this is handled by the serializer)
instance.full_clean(validate_unique=False)
return data
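Passing validate_unique=False disables only the uniqueness pass inside full_clean(); field-level and model-level validation still run. A small helper illustrating the intent (standard Django behavior, not code from this diff):

# Standard Django behavior: full_clean(validate_unique=False) still runs
# clean_fields() and clean(), but skips validate_unique().
from django.db import models

def validate_for_api(instance: models.Model) -> None:
    # Uniqueness is intentionally left to the serializer's own validators.
    instance.full_clean(validate_unique=False)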

View File

@@ -170,6 +170,28 @@ class NetBoxModelViewSet(
# Creates
def create(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
bulk_create = getattr(serializer, 'many', False)
self.perform_create(serializer)
# After creating the instance(s), re-initialize the serializer with a queryset
# to ensure related objects are prefetched.
if bulk_create:
instance_pks = [obj.pk for obj in serializer.instance]
# Order by PK to ensure that the ordering of objects in the response
# matches the ordering of those in the request.
qs = self.get_queryset().filter(pk__in=instance_pks).order_by('pk')
else:
qs = self.get_queryset().get(pk=serializer.instance.pk)
# Re-serialize the instance(s) with prefetched data
serializer = self.get_serializer(qs, many=bulk_create)
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def perform_create(self, serializer):
model = self.queryset.model
logger = logging.getLogger(f'netbox.api.views.{self.__class__.__name__}')
@@ -186,9 +208,20 @@ class NetBoxModelViewSet(
# Updates
def update(self, request, *args, **kwargs):
# Hotwire get_object() to ensure we save a pre-change snapshot
self.get_object = self.get_object_with_snapshot
return super().update(request, *args, **kwargs)
partial = kwargs.pop('partial', False)
instance = self.get_object_with_snapshot()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
# After updating the instance, re-initialize the serializer with a queryset
# to ensure related objects are prefetched.
qs = self.get_queryset().get(pk=serializer.instance.pk)
# Re-serialize the instance(s) with prefetched data
serializer = self.get_serializer(qs)
return Response(serializer.data)
def perform_update(self, serializer):
model = self.queryset.model
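Re-fetching through get_queryset() matters because the viewset's queryset carries the prefetch_related() calls that a freshly saved instance lacks; without it, serializing nested objects after a write triggers per-row queries. A condensed sketch of the shared pattern (names illustrative):

# Sketch only: the post-write re-serialization pattern used above.
def refetch_for_response(view, serializer):
    if getattr(serializer, 'many', False):
        pks = [obj.pk for obj in serializer.instance]
        # Order by PK so the response ordering matches the request ordering
        return view.get_queryset().filter(pk__in=pks).order_by('pk')
    return view.get_queryset().get(pk=serializer.instance.pk)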

View File

@@ -108,13 +108,17 @@ class BulkUpdateModelMixin:
obj.pop('id'): obj for obj in request.data
}
data = self.perform_bulk_update(qs, update_data, partial=partial)
object_pks = self.perform_bulk_update(qs, update_data, partial=partial)
return Response(data, status=status.HTTP_200_OK)
# Prefetch related objects for all updated instances
qs = self.get_queryset().filter(pk__in=object_pks)
serializer = self.get_serializer(qs, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
def perform_bulk_update(self, objects, update_data, partial):
updated_pks = []
with transaction.atomic(using=router.db_for_write(self.queryset.model)):
data_list = []
for obj in objects:
data = update_data.get(obj.id)
if hasattr(obj, 'snapshot'):
@@ -122,9 +126,9 @@ class BulkUpdateModelMixin:
serializer = self.get_serializer(obj, data=data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
data_list.append(serializer.data)
updated_pks.append(obj.pk)
return data_list
return updated_pks
def bulk_partial_update(self, request, *args, **kwargs):
kwargs['partial'] = True

View File

@@ -305,18 +305,13 @@ class NetBoxModelFilterSet(ChangeLoggedModelFilterSet):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Dynamically add a Filter for each CustomField applicable to the parent model
custom_fields = CustomField.objects.filter(
object_types=ContentType.objects.get_for_model(self._meta.model)
).exclude(
filter_logic=CustomFieldFilterLogicChoices.FILTER_DISABLED
)
custom_field_filters = {}
for custom_field in custom_fields:
filter_name = f'cf_{custom_field.name}'
filter_instance = custom_field.to_filter()
if filter_instance:
for custom_field in CustomField.objects.get_for_model(self._meta.model):
if custom_field.filter_logic == CustomFieldFilterLogicChoices.FILTER_DISABLED:
# Skip disabled fields
continue
if filter_instance := custom_field.to_filter():
filter_name = f'cf_{custom_field.name}'
custom_field_filters[filter_name] = filter_instance
# Add relevant additional lookups
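CustomField.objects.get_for_model() is the request-scoped lookup cache introduced by #21300; its implementation is not shown in this diff, but a minimal sketch of such a manager method might look like the following (the module-level dict stands in for a per-request cache):

# Hypothetical sketch of a cached CustomField.objects.get_for_model().
from django.contrib.contenttypes.models import ContentType
from django.db import models

_cache: dict = {}  # stand-in for a request-scoped cache

class CustomFieldManager(models.Manager):
    def get_for_model(self, model):
        # Evaluate the queryset once per content type, then reuse the list so
        # callers can filter in Python without issuing further queries.
        ct = ContentType.objects.get_for_model(model)
        if ct.pk not in _cache:
            _cache[ct.pk] = list(self.get_queryset().filter(object_types=ct))
        return _cache[ct.pk]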

View File

@@ -31,10 +31,11 @@ class NetBoxModelImportForm(CSVModelForm, NetBoxModelForm):
)
def _get_custom_fields(self, content_type):
return CustomField.objects.filter(
object_types=content_type,
ui_editable=CustomFieldUIEditableChoices.YES
)
# Return only custom fields that are editable in the UI
return [
cf for cf in CustomField.objects.get_for_model(content_type.model_class())
if cf.ui_editable == CustomFieldUIEditableChoices.YES
]
def _get_form_field(self, customfield):
return customfield.to_form_field(for_csv_import=True)

View File

@@ -1,5 +1,4 @@
from django import forms
from django.db.models import Q
from django.utils.translation import gettext_lazy as _
from extras.choices import *
@@ -35,10 +34,13 @@ class NetBoxModelFilterSetForm(FilterModifierMixin, CustomFieldsMixin, SavedFilt
selector_fields = ('filter_id', 'q')
def _get_custom_fields(self, content_type):
return super()._get_custom_fields(content_type).exclude(
Q(filter_logic=CustomFieldFilterLogicChoices.FILTER_DISABLED) |
Q(type=CustomFieldTypeChoices.TYPE_JSON)
)
# Return only non-hidden custom fields for which filtering is enabled (excluding JSON fields)
return [
cf for cf in super()._get_custom_fields(content_type) if (
cf.filter_logic != CustomFieldFilterLogicChoices.FILTER_DISABLED and
cf.type != CustomFieldTypeChoices.TYPE_JSON
)
]
def _get_form_field(self, customfield):
return customfield.to_form_field(

View File

@@ -65,9 +65,11 @@ class CustomFieldsMixin:
return ObjectType.objects.get_for_model(self.model)
def _get_custom_fields(self, content_type):
return CustomField.objects.filter(object_types=content_type).exclude(
ui_editable=CustomFieldUIEditableChoices.HIDDEN
)
# Return only custom fields that are not hidden from the UI
return [
cf for cf in CustomField.objects.get_for_model(content_type.model_class())
if cf.ui_editable != CustomFieldUIEditableChoices.HIDDEN
]
def _get_form_field(self, customfield):
return customfield.to_form_field()

View File

@@ -0,0 +1,50 @@
import strawberry
from strawberry.types.unset import UNSET
from strawberry_django.pagination import _QS, apply
__all__ = (
'OffsetPaginationInfo',
'OffsetPaginationInput',
'apply_pagination',
)
@strawberry.type
class OffsetPaginationInfo:
offset: int = 0
limit: int | None = UNSET
start: int | None = UNSET
@strawberry.input
class OffsetPaginationInput(OffsetPaginationInfo):
"""
Customized implementation of OffsetPaginationInput to support cursor-based pagination.
"""
pass
def apply_pagination(
self,
queryset: _QS,
pagination: OffsetPaginationInput | None = None,
*,
related_field_id: str | None = None,
) -> _QS:
"""
Replacement for the `apply_pagination()` method on StrawberryDjangoField to support cursor-based pagination.
"""
if pagination is not None and pagination.start not in (None, UNSET):
if pagination.offset:
raise ValueError('Cannot specify both `start` and `offset` in pagination.')
if pagination.start < 0:
raise ValueError('`start` must be greater than or equal to zero.')
# Filter the queryset to include only records with a primary key greater than or equal to the start value,
# and force ordering by primary key to ensure consistent pagination across all records.
queryset = queryset.filter(pk__gte=pagination.start).order_by('pk')
# Ignore `offset` when `start` is set
pagination.offset = 0
return apply(pagination, queryset, related_field_id=related_field_id)
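With start in place, a client can page by primary key rather than offset, which stays stable while records are being inserted. A hedged usage sketch (the endpoint, token, and device_list field are illustrative assumptions):

# Sketch only: cursor-style paging via the new `start` parameter.
import requests

query = """
{
  device_list(pagination: {start: 1000, limit: 50}) {
    id
    name
  }
}
"""
resp = requests.post(
    'https://netbox.example.com/graphql/',
    json={'query': query},
    headers={'Authorization': 'Token 0123456789abcdef'},
)
devices = resp.json()['data']['device_list']
# Begin the next page just past the highest PK returned
next_start = int(devices[-1]['id']) + 1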

View File

@@ -2,7 +2,7 @@ import json
from collections import defaultdict
from functools import cached_property
from django.contrib.contenttypes.fields import GenericRelation
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRelation
from django.contrib.contenttypes.models import ContentType
from django.core.validators import ValidationError
from django.db import models
@@ -121,9 +121,11 @@ class ChangeLoggingMixin(DeleteMixin, models.Model):
if hasattr(self, '_prechange_snapshot'):
objectchange.prechange_data = self._prechange_snapshot
if action in (ObjectChangeActionChoices.ACTION_CREATE, ObjectChangeActionChoices.ACTION_UPDATE):
objectchange.postchange_data = self.serialize_object(exclude=exclude)
self._postchange_snapshot = self.serialize_object(exclude=exclude)
objectchange.postchange_data = self._postchange_snapshot
return objectchange
to_objectchange.alters_data = True
class CloningMixin(models.Model):
@@ -159,6 +161,13 @@ class CloningMixin(models.Model):
elif field_value not in (None, ''):
attrs[field_name] = field_value
# Handle GenericForeignKeys. If the CT and ID fields are being cloned, also
# include the name of the GFK attribute itself, as this is what forms expect.
for field in self._meta.private_fields:
if isinstance(field, GenericForeignKey):
if field.ct_field in attrs and field.fk_field in attrs:
attrs[field.name] = attrs[field.fk_field]
# Include tags (if applicable)
if is_taggable(self):
attrs['tags'] = [tag.pk for tag in self.tags.all()]
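For a model like Service, whose clone_fields now include the GFK component fields, the mixin also injects the GFK attribute itself. A sketch of the effect, assuming the Service changes earlier in this diff:

# Sketch only: clone() now carries the GFK attribute alongside its parts.
from ipam.models import Service

service = Service.objects.exclude(parent_object_id=None).first()
attrs = service.clone()

# parent_object_type and parent_object_id are cloned per clone_fields, and
# the 'parent' GFK attribute is injected for form initialization:
assert attrs['parent'] == attrs['parent_object_id']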
@@ -317,9 +326,11 @@ class CustomFieldsMixin(models.Model):
raise ValidationError(_("Missing required custom field '{name}'.").format(name=cf.name))
def save(self, *args, **kwargs):
# Populate default values if omitted
for cf in self.custom_fields.filter(default__isnull=False):
if cf.name not in self.custom_field_data:
from extras.models import CustomField
# Populate default values for custom fields not already present in the object data
for cf in CustomField.objects.get_for_model(self):
if cf.name not in self.custom_field_data and cf.default is not None:
self.custom_field_data[cf.name] = cf.default
super().save(*args, **kwargs)

View File

@@ -187,7 +187,6 @@ class CachedValueSearchBackend(SearchBackend):
return ret
def cache(self, instances, indexer=None, remove_existing=True):
object_type = None
custom_fields = None
# Convert a single instance to an iterable
@@ -208,15 +207,18 @@ class CachedValueSearchBackend(SearchBackend):
except KeyError:
break
# Prefetch any associated custom fields
object_type = ObjectType.objects.get_for_model(indexer.model)
custom_fields = CustomField.objects.filter(object_types=object_type).exclude(search_weight=0)
# Prefetch any associated custom fields (excluding those with a zero search weight)
custom_fields = [
cf for cf in CustomField.objects.get_for_model(indexer.model)
if cf.search_weight > 0
]
# Wipe out any previously cached values for the object
if remove_existing:
self.remove(instance)
# Generate cache data
object_type = ObjectType.objects.get_for_model(indexer.model)
for field in indexer.to_cache(instance, custom_fields=custom_fields):
buffer.append(
CachedValue(

View File

@@ -12,10 +12,13 @@ from django.core.validators import URLValidator
from django.utils.module_loading import import_string
from django.utils.translation import gettext_lazy as _
from rest_framework.utils import field_mapping
from strawberry_django import pagination
from strawberry_django.fields.field import StrawberryDjangoField
from core.exceptions import IncompatiblePluginError
from netbox.config import PARAMS as CONFIG_PARAMS
from netbox.constants import RQ_QUEUE_DEFAULT, RQ_QUEUE_HIGH, RQ_QUEUE_LOW
from netbox.graphql.pagination import OffsetPaginationInput, apply_pagination
from netbox.plugins import PluginConfig
from netbox.registry import registry
import storages.utils # type: ignore
@@ -33,6 +36,12 @@ from .monkey import get_unique_validators
# Override DRF's get_unique_validators() function with our own (see bug #19302)
field_mapping.get_unique_validators = get_unique_validators
# Override strawberry-django's OffsetPaginationInput class to add the `start` parameter
pagination.OffsetPaginationInput = OffsetPaginationInput
# Patch StrawberryDjangoField to use our custom `apply_pagination()` method with support for cursor-based pagination
StrawberryDjangoField.apply_pagination = apply_pagination
#
# Environment setup
@@ -454,6 +463,7 @@ INSTALLED_APPS = [
'sorl.thumbnail',
'taggit',
'timezone_field',
'pgtrigger',
'core',
'account',
'circuits',

View File

@@ -242,14 +242,17 @@ class NetBoxTable(BaseTable):
(name, deepcopy(column)) for name, column in registered_columns.items()
])
# Add custom field & custom link columns
object_type = ObjectType.objects.get_for_model(self._meta.model)
custom_fields = CustomField.objects.filter(
object_types=object_type
).exclude(ui_visible=CustomFieldUIVisibleChoices.HIDDEN)
# Add columns for custom fields
custom_fields = [
cf for cf in CustomField.objects.get_for_model(self._meta.model)
if cf.ui_visible != CustomFieldUIVisibleChoices.HIDDEN
]
extra_columns.extend([
(f'cf_{cf.name}', columns.CustomFieldColumn(cf)) for cf in custom_fields
])
# Add columns for custom links
object_type = ObjectType.objects.get_for_model(self._meta.model)
custom_links = CustomLink.objects.filter(object_types=object_type, enabled=True)
extra_columns.extend([
(f'cl_{cl.name}', columns.CustomLinkColumn(cl)) for cl in custom_links

View File

@@ -4,10 +4,8 @@ from django.test import override_settings
from django.urls import reverse
from rest_framework import status
from core.models import ObjectType
from dcim.choices import LocationStatusChoices
from dcim.models import Site, Location
from users.models import ObjectPermission
from utilities.testing import disable_warnings, APITestCase, TestCase
@@ -45,17 +43,28 @@ class GraphQLTestCase(TestCase):
class GraphQLAPITestCase(APITestCase):
@classmethod
def setUpTestData(cls):
sites = (
Site(name='Site 1', slug='site-1'),
Site(name='Site 2', slug='site-2'),
Site(name='Site 3', slug='site-3'),
Site(name='Site 4', slug='site-4'),
Site(name='Site 5', slug='site-5'),
Site(name='Site 6', slug='site-6'),
Site(name='Site 7', slug='site-7'),
)
Site.objects.bulk_create(sites)
@override_settings(LOGIN_REQUIRED=True)
def test_graphql_filter_objects(self):
"""
Test the operation of filters for GraphQL API requests.
"""
sites = (
Site(name='Site 1', slug='site-1'),
Site(name='Site 2', slug='site-2'),
Site(name='Site 3', slug='site-3'),
)
Site.objects.bulk_create(sites)
self.add_permissions('dcim.view_site', 'dcim.view_location')
url = reverse('graphql')
sites = Site.objects.all()[:3]
Location.objects.create(
site=sites[0],
name='Location 1',
@@ -75,18 +84,6 @@ class GraphQLAPITestCase(APITestCase):
status=LocationStatusChoices.STATUS_ACTIVE
),
# Add object-level permission
obj_perm = ObjectPermission(
name='Test permission',
actions=['view']
)
obj_perm.save()
obj_perm.users.add(self.user)
obj_perm.object_types.add(ObjectType.objects.get_for_model(Location))
obj_perm.object_types.add(ObjectType.objects.get_for_model(Site))
url = reverse('graphql')
# A valid request should return the filtered list
query = '{location_list(filters: {site_id: "' + str(sites[0].pk) + '"}) {id site {id}}}'
response = self.client.post(url, data={'query': query}, format="json", **self.header)
@@ -133,10 +130,136 @@ class GraphQLAPITestCase(APITestCase):
self.assertEqual(len(data['data']['location_list']), 0)
# Removing the permissions from location should result in an empty locations list
obj_perm.object_types.remove(ObjectType.objects.get_for_model(Location))
self.remove_permissions('dcim.view_location')
query = '{site(id: ' + str(sites[0].pk) + ') {id locations {id}}}'
response = self.client.post(url, data={'query': query}, format="json", **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site']['locations']), 0)
def test_offset_pagination(self):
self.add_permissions('dcim.view_site')
url = reverse('graphql')
# Test `limit` only
query = """
{
site_list(pagination: {limit: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 3)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 1')
self.assertEqual(data['data']['site_list'][1]['name'], 'Site 2')
self.assertEqual(data['data']['site_list'][2]['name'], 'Site 3')
# Test `offset` only
query = """
{
site_list(pagination: {offset: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 4)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 4')
self.assertEqual(data['data']['site_list'][1]['name'], 'Site 5')
self.assertEqual(data['data']['site_list'][2]['name'], 'Site 6')
self.assertEqual(data['data']['site_list'][3]['name'], 'Site 7')
# Test `offset` & `limit`
query = """
{
site_list(pagination: {offset: 3, limit: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 3)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 4')
self.assertEqual(data['data']['site_list'][1]['name'], 'Site 5')
self.assertEqual(data['data']['site_list'][2]['name'], 'Site 6')
def test_cursor_pagination(self):
self.add_permissions('dcim.view_site')
url = reverse('graphql')
# Page 1
query = """
{
site_list(pagination: {start: 0, limit: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 3)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 1')
self.assertEqual(data['data']['site_list'][1]['name'], 'Site 2')
self.assertEqual(data['data']['site_list'][2]['name'], 'Site 3')
# Page 2
start_id = int(data['data']['site_list'][-1]['id']) + 1
query = """
{
site_list(pagination: {start: """ + str(start_id) + """, limit: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 3)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 4')
self.assertEqual(data['data']['site_list'][1]['name'], 'Site 5')
self.assertEqual(data['data']['site_list'][2]['name'], 'Site 6')
# Page 3
start_id = int(data['data']['site_list'][-1]['id']) + 1
query = """
{
site_list(pagination: {start: """ + str(start_id) + """, limit: 3}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertNotIn('errors', data)
self.assertEqual(len(data['data']['site_list']), 1)
self.assertEqual(data['data']['site_list'][0]['name'], 'Site 7')
def test_pagination_conflict(self):
url = reverse('graphql')
query = """
{
site_list(pagination: {start: 1, offset: 1}) {
id name
}
}
"""
response = self.client.post(url, data={'query': query}, format='json', **self.header)
self.assertHttpStatus(response, status.HTTP_200_OK)
data = json.loads(response.content)
self.assertIn('errors', data)
self.assertEqual(data['errors'][0]['message'], 'Cannot specify both `start` and `offset` in pagination.')

View File

@@ -1,18 +1,28 @@
from unittest import skipIf
from django.conf import settings
from django.test import TestCase
from core.models import AutoSyncRecord, DataSource
from dcim.models import Site
from extras.models import CustomLink
from ipam.models import Prefix
from netbox.models.features import get_model_features, has_feature, model_is_public
from netbox.tests.dummy_plugin.models import DummyModel
from taggit.models import Tag
class ModelFeaturesTestCase(TestCase):
"""
A test case class for verifying model features and utility functions.
"""
@skipIf('netbox.tests.dummy_plugin' not in settings.PLUGINS, 'dummy_plugin not in settings.PLUGINS')
def test_model_is_public(self):
"""
Test that the model_is_public() utility function returns True for public models only.
"""
from netbox.tests.dummy_plugin.models import DummyModel
# Public model
self.assertFalse(hasattr(DataSource, '_netbox_private'))
self.assertTrue(model_is_public(DataSource))
@@ -51,3 +61,53 @@ class ModelFeaturesTestCase(TestCase):
features = get_model_features(CustomLink)
self.assertIn('cloning', features)
self.assertNotIn('bookmarks', features)
def test_cloningmixin_injects_gfk_attribute(self):
"""
Tests GFK attribute injection in the CloningMixin's `clone()` method.
This test validates that `clone()` correctly handles and retains the
Generic ForeignKey (GFK) attribute on an object when both the content
type and ID fields are included in the cloning fields.
"""
site = Site.objects.create(name='Test Site', slug='test-site')
prefix = Prefix.objects.create(prefix='10.0.0.0/24', scope=site)
original_clone_fields = getattr(Prefix, 'clone_fields', None)
try:
Prefix.clone_fields = ('scope_type', 'scope_id')
attrs = prefix.clone()
self.assertEqual(attrs['scope_type'], prefix.scope_type_id)
self.assertEqual(attrs['scope_id'], prefix.scope_id)
self.assertEqual(attrs['scope'], prefix.scope_id)
finally:
if original_clone_fields is None:
delattr(Prefix, 'clone_fields')
else:
Prefix.clone_fields = original_clone_fields
def test_cloningmixin_does_not_inject_gfk_attribute_if_incomplete(self):
"""
Tests that the cloning mixin does not inject the GFK attribute when the cloning fields are incomplete.
This test validates that the `clone` method correctly handles the case
where only one half of the GFK is listed in the cloning fields, ensuring
that the Generic ForeignKey (GFK) attribute is not injected during the
cloning process.
"""
site = Site.objects.create(name='Test Site', slug='test-site')
prefix = Prefix.objects.create(prefix='10.0.0.0/24', scope=site)
original_clone_fields = getattr(Prefix, 'clone_fields', None)
try:
Prefix.clone_fields = ('scope_type',)
attrs = prefix.clone()
self.assertIn('scope_type', attrs)
self.assertNotIn('scope', attrs)
finally:
if original_clone_fields is None:
delattr(Prefix, 'clone_fields')
else:
Prefix.clone_fields = original_clone_fields

View File

@@ -5,7 +5,6 @@ from copy import deepcopy
from django.contrib import messages
from django.contrib.contenttypes.fields import GenericForeignKey, GenericRel
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldDoesNotExist, ObjectDoesNotExist, ValidationError
from django.db import IntegrityError, router, transaction
from django.db.models import ManyToManyField, ProtectedError, RestrictedError
@@ -484,12 +483,11 @@ class BulkImportView(GetReturnURLMixin, BaseMultiObjectView):
else:
instance = self.queryset.model()
# For newly created objects, apply any default custom field values
custom_fields = CustomField.objects.filter(
object_types=ContentType.objects.get_for_model(self.queryset.model),
ui_editable=CustomFieldUIEditableChoices.YES
)
for cf in custom_fields:
# For newly created objects, apply any default values for custom fields
for cf in CustomField.objects.get_for_model(self.queryset.model):
if cf.ui_editable != CustomFieldUIEditableChoices.YES:
# Skip custom fields which are not editable via the UI
continue
field_name = f'cf_{cf.name}'
if field_name not in record:
record[field_name] = cf.default

View File

@@ -31,20 +31,20 @@
"gridstack": "12.4.2",
"htmx.org": "2.0.8",
"query-string": "9.3.1",
"sass": "1.97.2",
"sass": "1.97.3",
"tom-select": "2.4.3",
"typeface-inter": "3.18.1",
"typeface-roboto-mono": "1.1.13"
},
"devDependencies": {
"@eslint/compat": "^2.0.1",
"@eslint/compat": "^2.0.2",
"@eslint/eslintrc": "^3.3.3",
"@eslint/js": "^9.39.2",
"@types/bootstrap": "5.2.10",
"@types/cookie": "^1.0.0",
"@types/node": "^24.10.1",
"@typescript-eslint/eslint-plugin": "^8.53.1",
"@typescript-eslint/parser": "^8.53.1",
"@typescript-eslint/eslint-plugin": "^8.54.0",
"@typescript-eslint/parser": "^8.54.0",
"esbuild": "^0.27.2",
"esbuild-sass-plugin": "^3.6.0",
"eslint": "^9.39.2",
@@ -52,8 +52,8 @@
"eslint-import-resolver-typescript": "^4.4.4",
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-prettier": "^5.5.5",
"globals": "^17.0.0",
"prettier": "^3.8.0",
"globals": "^17.3.0",
"prettier": "^3.8.1",
"typescript": "^5.9.3"
},
"resolutions": {

View File

@@ -173,12 +173,12 @@
resolved "https://registry.yarnpkg.com/@eslint-community/regexpp/-/regexpp-4.12.2.tgz#bccdf615bcf7b6e8db830ec0b8d21c9a25de597b"
integrity sha512-EriSTlt5OC9/7SXkRSCAhfSxxoSUgBm33OH+IkwbdpgoqsSsUg7y3uh+IICI/Qg4BBWr3U2i39RpmycbxMq4ew==
"@eslint/compat@^2.0.1":
version "2.0.1"
resolved "https://registry.yarnpkg.com/@eslint/compat/-/compat-2.0.1.tgz#5894516f8ce9ba884f4d4ba5ecb6b6459b231144"
integrity sha512-yl/JsgplclzuvGFNqwNYV4XNPhP3l62ZOP9w/47atNAdmDtIFCx6X7CSk/SlWUuBGkT4Et/5+UD+WyvX2iiIWA==
"@eslint/compat@^2.0.2":
version "2.0.2"
resolved "https://registry.yarnpkg.com/@eslint/compat/-/compat-2.0.2.tgz#fc1495688664861870f5e7ee56999dc252b6dd52"
integrity sha512-pR1DoD0h3HfF675QZx0xsyrsU8q70Z/plx7880NOhS02NuWLgBCOMDL787nUeQ7EWLkxv3bPQJaarjcPQb2Dwg==
dependencies:
"@eslint/core" "^1.0.1"
"@eslint/core" "^1.1.0"
"@eslint/config-array@^0.21.1":
version "0.21.1"
@@ -203,10 +203,10 @@
dependencies:
"@types/json-schema" "^7.0.15"
"@eslint/core@^1.0.1":
version "1.0.1"
resolved "https://registry.yarnpkg.com/@eslint/core/-/core-1.0.1.tgz#701ff760cbd279f9490bef0ce54095f4088d4def"
integrity sha512-r18fEAj9uCk+VjzGt2thsbOmychS+4kxI14spVNibUO2vqKX7obOG+ymZljAwuPZl+S3clPGwCwTDtrdqTiY6Q==
"@eslint/core@^1.1.0":
version "1.1.0"
resolved "https://registry.yarnpkg.com/@eslint/core/-/core-1.1.0.tgz#51f5cd970e216fbdae6721ac84491f57f965836d"
integrity sha512-/nr9K9wkr3P1EzFTdFdMoLuo1PmIxjmwvPozwoSodjNBdefGujXQUF93u1DDZpEaTuDvMsIQddsd35BwtrW9Xw==
dependencies:
"@types/json-schema" "^7.0.15"
@@ -935,100 +935,100 @@
dependencies:
"@types/estree" "*"
"@typescript-eslint/eslint-plugin@^8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.53.1.tgz#f6640f6f8749b71d9ab457263939e8932a3c6b46"
integrity sha512-cFYYFZ+oQFi6hUnBTbLRXfTJiaQtYE3t4O692agbBl+2Zy+eqSKWtPjhPXJu1G7j4RLjKgeJPDdq3EqOwmX5Ag==
"@typescript-eslint/eslint-plugin@^8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/eslint-plugin/-/eslint-plugin-8.54.0.tgz#d8899e5c2eccf5c4a20d01c036a193753748454d"
integrity sha512-hAAP5io/7csFStuOmR782YmTthKBJ9ND3WVL60hcOjvtGFb+HJxH4O5huAcmcZ9v9G8P+JETiZ/G1B8MALnWZQ==
dependencies:
"@eslint-community/regexpp" "^4.12.2"
"@typescript-eslint/scope-manager" "8.53.1"
"@typescript-eslint/type-utils" "8.53.1"
"@typescript-eslint/utils" "8.53.1"
"@typescript-eslint/visitor-keys" "8.53.1"
"@typescript-eslint/scope-manager" "8.54.0"
"@typescript-eslint/type-utils" "8.54.0"
"@typescript-eslint/utils" "8.54.0"
"@typescript-eslint/visitor-keys" "8.54.0"
ignore "^7.0.5"
natural-compare "^1.4.0"
ts-api-utils "^2.4.0"
"@typescript-eslint/parser@^8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/parser/-/parser-8.53.1.tgz#58d4a70cc2daee2becf7d4521d65ea1782d6ec68"
integrity sha512-nm3cvFN9SqZGXjmw5bZ6cGmvJSyJPn0wU9gHAZZHDnZl2wF9PhHv78Xf06E0MaNk4zLVHL8hb2/c32XvyJOLQg==
"@typescript-eslint/parser@^8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/parser/-/parser-8.54.0.tgz#3d01a6f54ed247deb9982621f70e7abf1810bd97"
integrity sha512-BtE0k6cjwjLZoZixN0t5AKP0kSzlGu7FctRXYuPAm//aaiZhmfq1JwdYpYr1brzEspYyFeF+8XF5j2VK6oalrA==
dependencies:
"@typescript-eslint/scope-manager" "8.53.1"
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/typescript-estree" "8.53.1"
"@typescript-eslint/visitor-keys" "8.53.1"
"@typescript-eslint/scope-manager" "8.54.0"
"@typescript-eslint/types" "8.54.0"
"@typescript-eslint/typescript-estree" "8.54.0"
"@typescript-eslint/visitor-keys" "8.54.0"
debug "^4.4.3"
"@typescript-eslint/project-service@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/project-service/-/project-service-8.53.1.tgz#4e47856a0b14a1ceb28b0294b4badef3be1e9734"
integrity sha512-WYC4FB5Ra0xidsmlPb+1SsnaSKPmS3gsjIARwbEkHkoWloQmuzcfypljaJcR78uyLA1h8sHdWWPHSLDI+MtNog==
"@typescript-eslint/project-service@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/project-service/-/project-service-8.54.0.tgz#f582aceb3d752544c8e1b11fea8d95d00cf9adc6"
integrity sha512-YPf+rvJ1s7MyiWM4uTRhE4DvBXrEV+d8oC3P9Y2eT7S+HBS0clybdMIPnhiATi9vZOYDc7OQ1L/i6ga6NFYK/g==
dependencies:
"@typescript-eslint/tsconfig-utils" "^8.53.1"
"@typescript-eslint/types" "^8.53.1"
"@typescript-eslint/tsconfig-utils" "^8.54.0"
"@typescript-eslint/types" "^8.54.0"
debug "^4.4.3"
"@typescript-eslint/scope-manager@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/scope-manager/-/scope-manager-8.53.1.tgz#6c4b8c82cd45ae3b365afc2373636e166743a8fa"
integrity sha512-Lu23yw1uJMFY8cUeq7JlrizAgeQvWugNQzJp8C3x8Eo5Jw5Q2ykMdiiTB9vBVOOUBysMzmRRmUfwFrZuI2C4SQ==
"@typescript-eslint/scope-manager@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/scope-manager/-/scope-manager-8.54.0.tgz#307dc8cbd80157e2772c2d36216857415a71ab33"
integrity sha512-27rYVQku26j/PbHYcVfRPonmOlVI6gihHtXFbTdB5sb6qA0wdAQAbyXFVarQ5t4HRojIz64IV90YtsjQSSGlQg==
dependencies:
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/visitor-keys" "8.53.1"
"@typescript-eslint/types" "8.54.0"
"@typescript-eslint/visitor-keys" "8.54.0"
"@typescript-eslint/tsconfig-utils@8.53.1", "@typescript-eslint/tsconfig-utils@^8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.53.1.tgz#efe80b8d019cd49e5a1cf46c2eb0cd2733076424"
integrity sha512-qfvLXS6F6b1y43pnf0pPbXJ+YoXIC7HKg0UGZ27uMIemKMKA6XH2DTxsEDdpdN29D+vHV07x/pnlPNVLhdhWiA==
"@typescript-eslint/tsconfig-utils@8.54.0", "@typescript-eslint/tsconfig-utils@^8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/tsconfig-utils/-/tsconfig-utils-8.54.0.tgz#71dd7ba1674bd48b172fc4c85b2f734b0eae3dbc"
integrity sha512-dRgOyT2hPk/JwxNMZDsIXDgyl9axdJI3ogZ2XWhBPsnZUv+hPesa5iuhdYt2gzwA9t8RE5ytOJ6xB0moV0Ujvw==
"@typescript-eslint/type-utils@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/type-utils/-/type-utils-8.53.1.tgz#95de2651a96d580bf5c6c6089ddd694284d558ad"
integrity sha512-MOrdtNvyhy0rHyv0ENzub1d4wQYKb2NmIqG7qEqPWFW7Mpy2jzFC3pQ2yKDvirZB7jypm5uGjF2Qqs6OIqu47w==
"@typescript-eslint/type-utils@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/type-utils/-/type-utils-8.54.0.tgz#64965317dd4118346c2fa5ee94492892200e9fb9"
integrity sha512-hiLguxJWHjjwL6xMBwD903ciAwd7DmK30Y9Axs/etOkftC3ZNN9K44IuRD/EB08amu+Zw6W37x9RecLkOo3pMA==
dependencies:
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/typescript-estree" "8.53.1"
"@typescript-eslint/utils" "8.53.1"
"@typescript-eslint/types" "8.54.0"
"@typescript-eslint/typescript-estree" "8.54.0"
"@typescript-eslint/utils" "8.54.0"
debug "^4.4.3"
ts-api-utils "^2.4.0"
"@typescript-eslint/types@8.53.1", "@typescript-eslint/types@^8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/types/-/types-8.53.1.tgz#101f203f0807a63216cceceedb815fabe21d5793"
integrity sha512-jr/swrr2aRmUAUjW5/zQHbMaui//vQlsZcJKijZf3M26bnmLj8LyZUpj8/Rd6uzaek06OWsqdofN/Thenm5O8A==
"@typescript-eslint/types@8.54.0", "@typescript-eslint/types@^8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/types/-/types-8.54.0.tgz#c12d41f67a2e15a8a96fbc5f2d07b17331130889"
integrity sha512-PDUI9R1BVjqu7AUDsRBbKMtwmjWcn4J3le+5LpcFgWULN3LvHC5rkc9gCVxbrsrGmO1jfPybN5s6h4Jy+OnkAA==
"@typescript-eslint/typescript-estree@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/typescript-estree/-/typescript-estree-8.53.1.tgz#b6dce2303c9e27e95b8dcd8c325868fff53e488f"
integrity sha512-RGlVipGhQAG4GxV1s34O91cxQ/vWiHJTDHbXRr0li2q/BGg3RR/7NM8QDWgkEgrwQYCvmJV9ichIwyoKCQ+DTg==
"@typescript-eslint/typescript-estree@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/typescript-estree/-/typescript-estree-8.54.0.tgz#3c7716905b2b811fadbd2114804047d1bfc86527"
integrity sha512-BUwcskRaPvTk6fzVWgDPdUndLjB87KYDrN5EYGetnktoeAvPtO4ONHlAZDnj5VFnUANg0Sjm7j4usBlnoVMHwA==
dependencies:
"@typescript-eslint/project-service" "8.53.1"
"@typescript-eslint/tsconfig-utils" "8.53.1"
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/visitor-keys" "8.53.1"
"@typescript-eslint/project-service" "8.54.0"
"@typescript-eslint/tsconfig-utils" "8.54.0"
"@typescript-eslint/types" "8.54.0"
"@typescript-eslint/visitor-keys" "8.54.0"
debug "^4.4.3"
minimatch "^9.0.5"
semver "^7.7.3"
tinyglobby "^0.2.15"
ts-api-utils "^2.4.0"
"@typescript-eslint/utils@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/utils/-/utils-8.53.1.tgz#81fe6c343de288701b774f4d078382f567e6edaa"
integrity sha512-c4bMvGVWW4hv6JmDUEG7fSYlWOl3II2I4ylt0NM+seinYQlZMQIaKaXIIVJWt9Ofh6whrpM+EdDQXKXjNovvrg==
"@typescript-eslint/utils@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/utils/-/utils-8.54.0.tgz#c79a4bcbeebb4f571278c0183ed1cb601d84c6c8"
integrity sha512-9Cnda8GS57AQakvRyG0PTejJNlA2xhvyNtEVIMlDWOOeEyBkYWhGPnfrIAnqxLMTSTo6q8g12XVjjev5l1NvMA==
dependencies:
"@eslint-community/eslint-utils" "^4.9.1"
"@typescript-eslint/scope-manager" "8.53.1"
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/typescript-estree" "8.53.1"
"@typescript-eslint/scope-manager" "8.54.0"
"@typescript-eslint/types" "8.54.0"
"@typescript-eslint/typescript-estree" "8.54.0"
"@typescript-eslint/visitor-keys@8.53.1":
version "8.53.1"
resolved "https://registry.yarnpkg.com/@typescript-eslint/visitor-keys/-/visitor-keys-8.53.1.tgz#405f04959be22b9be364939af8ac19c3649b6eb7"
integrity sha512-oy+wV7xDKFPRyNggmXuZQSBzvoLnpmJs+GhzRhPjrxl2b/jIlyjVokzm47CZCDUdXKr2zd7ZLodPfOBpOPyPlg==
"@typescript-eslint/visitor-keys@8.54.0":
version "8.54.0"
resolved "https://registry.yarnpkg.com/@typescript-eslint/visitor-keys/-/visitor-keys-8.54.0.tgz#0e4b50124b210b8600b245dd66cbad52deb15590"
integrity sha512-VFlhGSl4opC0bprJiItPQ1RfUhGDIBokcPwaFH4yiBCaNPeld/9VeXbiPO1cLyorQi1G1vL+ecBk1x8o1axORA==
dependencies:
"@typescript-eslint/types" "8.53.1"
"@typescript-eslint/types" "8.54.0"
eslint-visitor-keys "^4.2.1"
"@unrs/resolver-binding-android-arm-eabi@1.11.1":
@@ -2184,10 +2184,10 @@ globals@^14.0.0:
resolved "https://registry.npmjs.org/globals/-/globals-14.0.0.tgz"
integrity sha512-oahGvuMGQlPw/ivIYBjVSrWAfWLBeku5tpPE2fOPLi+WHffIWbuh2tCjhyQhTBPMf5E9jDEH4FOmTYgYwbKwtQ==
globals@^17.0.0:
version "17.0.0"
resolved "https://registry.yarnpkg.com/globals/-/globals-17.0.0.tgz#a4196d9cfeb4d627ba165b4647b1f5853bf90a30"
integrity sha512-gv5BeD2EssA793rlFWVPMMCqefTlpusw6/2TbAVMy0FzcG8wKJn4O+NqJ4+XWmmwrayJgw5TzrmWjFgmz1XPqw==
globals@^17.3.0:
version "17.3.0"
resolved "https://registry.yarnpkg.com/globals/-/globals-17.3.0.tgz#8b96544c2fa91afada02747cc9731c002a96f3b9"
integrity sha512-yMqGUQVVCkD4tqjOJf3TnrvaaHDMYp4VlUSObbkIiuCPe/ofdMBFIAcBbCSRFWOnos6qRiTVStDwqPLUclaxIw==
globalthis@^1.0.3, globalthis@^1.0.4:
version "1.0.4"
@@ -2985,10 +2985,10 @@ prettier-linter-helpers@^1.0.1:
dependencies:
fast-diff "^1.1.2"
prettier@^3.8.0:
version "3.8.0"
resolved "https://registry.yarnpkg.com/prettier/-/prettier-3.8.0.tgz#f72cf71505133f40cfa2ef77a2668cdc558fcd69"
integrity sha512-yEPsovQfpxYfgWNhCfECjG5AQaO+K3dp6XERmOepyPDVqcJm+bjyCVO3pmU+nAPe0N5dDvekfGezt/EIiRe1TA==
prettier@^3.8.1:
version "3.8.1"
resolved "https://registry.yarnpkg.com/prettier/-/prettier-3.8.1.tgz#edf48977cf991558f4fcbd8a3ba6015ba2a3a173"
integrity sha512-UOnG6LftzbdaHZcKoPFtOcCKztrQ57WkHDeRD9t/PTQtmT0NHSeWWepj6pS0z/N7+08BHFDQVUrfmfMRcZwbMg==
punycode.js@^2.3.1:
version "2.3.1"
@@ -3172,7 +3172,18 @@ safe-regex-test@^1.1.0:
es-errors "^1.3.0"
is-regex "^1.2.1"
sass@1.97.2, sass@^1.97.2:
sass@1.97.3:
version "1.97.3"
resolved "https://registry.yarnpkg.com/sass/-/sass-1.97.3.tgz#9cb59339514fa7e2aec592b9700953ac6e331ab2"
integrity sha512-fDz1zJpd5GycprAbu4Q2PV/RprsRtKC/0z82z0JLgdytmcq0+ujJbJ/09bPGDxCLkKY3Np5cRAOcWiVkLXJURg==
dependencies:
chokidar "^4.0.0"
immutable "^5.0.2"
source-map-js ">=0.6.2 <2.0.0"
optionalDependencies:
"@parcel/watcher" "^2.4.1"
sass@^1.97.2:
version "1.97.2"
resolved "https://registry.yarnpkg.com/sass/-/sass-1.97.2.tgz#e515a319092fd2c3b015228e3094b40198bff0da"
integrity sha512-y5LWb0IlbO4e97Zr7c3mlpabcbBtS+ieiZ9iwDooShpFKWXf62zz5pEPdwrLYm+Bxn1fnbwFGzHuCLSA9tBmrw==
@@ -3441,7 +3452,7 @@ toggle-selection@^1.0.6:
tom-select@2.4.3:
version "2.4.3"
resolved "https://registry.npmjs.org/tom-select/-/tom-select-2.4.3.tgz"
resolved "https://registry.yarnpkg.com/tom-select/-/tom-select-2.4.3.tgz#1daa4131cd317de691f39eb5bf41148265986c1f"
integrity sha512-MFFrMxP1bpnAMPbdvPCZk0KwYxLqhYZso39torcdoefeV/NThNyDu8dV96/INJ5XQVTL3O55+GqQ78Pkj5oCfw==
dependencies:
"@orchidjs/sifter" "^1.1.0"

View File

@@ -1,3 +1,3 @@
version: "4.5.1"
version: "4.5.2"
edition: "Community"
published: "2026-01-20"
published: "2026-02-03"

View File

@@ -53,7 +53,7 @@ Blocks:
{% nav %}
{# Release info #}
<div class="text-muted text-center fs-5 my-3">
<div class="text-muted text-center fs-5 my-3 px-3">
{{ settings.RELEASE.name }}
{% if not settings.RELEASE.features.commercial and not settings.ISOLATED_DEPLOYMENT %}
<div>

View File

@@ -14,6 +14,10 @@
<th scope="row">{% trans "Family" %}</th>
<td>IPv{{ object.family }}</td>
</tr>
<tr>
<th scope="row">{% trans "Prefix" %}</th>
<td>{{ object.prefix|linkify|placeholder }}</td>
</tr>
<tr>
<th scope="row">{% trans "VRF" %}</th>
<td>

View File

@@ -14,6 +14,7 @@
<div class="row">
<h2 class="col-9 offset-3">{% trans "IP Addresses" %}</h2>
</div>
{% render_field model_form.prefix %}
{% render_field form.pattern %}
{% render_field model_form.status %}
{% render_field model_form.role %}

View File

@@ -13,6 +13,10 @@
<th scope="row">{% trans "Family" %}</th>
<td>IPv{{ object.family }}</td>
</tr>
<tr>
<th scope="row">{% trans "Prefix" %}</th>
<td>{{ object.prefix|linkify|placeholder }}</td>
</tr>
<tr>
<th scope="row">{% trans "Starting Address" %}</th>
<td>{{ object.start_address }}</td>

View File

@@ -109,7 +109,7 @@
{% endif %}
</td>
</tr>
{% with child_ip_count=object.get_child_ips.count %}
{% with child_ip_count=object.ip_addresses.count %}
<tr>
<th scope="row">{% trans "Child IPs" %}</th>
<td>

16 file diffs suppressed because they are too large.

View File

@@ -3,6 +3,7 @@ import enum
from django.conf import settings
from django.utils.translation import gettext_lazy as _
from utilities.data import get_config_value_ci
from utilities.string import enum_key
__all__ = (
@@ -24,13 +25,14 @@ class ChoiceSetMeta(type):
).format(name=name)
app = attrs['__module__'].split('.', 1)[0]
replace_key = f'{app}.{key}'
extend_key = f'{replace_key}+' if replace_key else None
if replace_key and replace_key in settings.FIELD_CHOICES:
# Replace the stock choices
attrs['CHOICES'] = settings.FIELD_CHOICES[replace_key]
elif extend_key and extend_key in settings.FIELD_CHOICES:
# Extend the stock choices
attrs['CHOICES'].extend(settings.FIELD_CHOICES[extend_key])
replace_choices = get_config_value_ci(settings.FIELD_CHOICES, replace_key)
if replace_choices is not None:
attrs['CHOICES'] = replace_choices
else:
extend_key = f'{replace_key}+'
extend_choices = get_config_value_ci(settings.FIELD_CHOICES, extend_key)
if extend_choices is not None:
attrs['CHOICES'].extend(extend_choices)
# Define choice tuples and color maps
attrs['_choices'] = []
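An illustrative configuration snippet (key names mirror the ChoiceSet test cases later in this diff): with case-insensitive lookup, a lowercase FIELD_CHOICES key now matches a PascalCase ChoiceSet key, for both replacement and "+"-suffixed extension.
# configuration.py (sketch):
FIELD_CHOICES = {
    'utilities.teststatus': [('new', 'New')],        # replaces CHOICES
    # 'utilities.teststatus+': [('extra', 'Extra')], # or extends them
}
# ...matches a ChoiceSet declaring key = 'TestStatus' in the utilities app.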

View File

@@ -10,6 +10,7 @@ __all__ = (
'deepmerge',
'drange',
'flatten_dict',
'get_config_value_ci',
'ranges_to_string',
'ranges_to_string_list',
'resolve_attr_path',
@@ -22,6 +23,19 @@ __all__ = (
# Dictionary utilities
#
def get_config_value_ci(config_dict, key, default=None):
"""
Retrieve a value from a dictionary using case-insensitive key matching.
"""
if key in config_dict:
return config_dict[key]
key_lower = key.lower()
for config_key, value in config_dict.items():
if config_key.lower() == key_lower:
return value
return default
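Usage example (mirroring the unit tests later in this diff): an exact key match is returned immediately; otherwise keys are compared case-insensitively, falling back to `default`.
config = {'dcim.Site': 'value1'}
get_config_value_ci(config, 'dcim.site')                 # -> 'value1'
get_config_value_ci(config, 'nonexistent', default=[])   # -> []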
def deepmerge(original, new):
"""
Deep merge two dictionaries (new into original) and return a new dict

View File

@@ -280,6 +280,26 @@ def as_range(n):
return range(n)
@register.filter()
def parent_depth(n, parent=None):
"""
Return the depth of a node relative to its parent's depth.
"""
parent_depth = 0
if parent and hasattr(parent, 'depth_count'):
parent_depth = parent.depth_count + 1
elif parent and hasattr(parent, 'depth'):
try:
parent_depth = int(parent.depth) + 1
except TypeError:
pass
try:
depth = int(n) - int(parent_depth)
except TypeError:
return n
return depth
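Hypothetical template usage (variable names are illustrative, not taken from this diff):
# {{ record.depth|parent_depth:parent }}
# With record.depth == 3 and parent.depth == 1, the filter returns 1,
# i.e. int(n) - (int(parent.depth) + 1); non-numeric input is returned as-is.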
@register.filter()
def meters_to_feet(n):
"""

View File

@@ -1,4 +1,4 @@
from django.test import TestCase
from django.test import TestCase, override_settings
from utilities.choices import ChoiceSet
@@ -30,3 +30,29 @@ class ChoiceSetTestCase(TestCase):
def test_values(self):
self.assertListEqual(ExampleChoices.values(), ['a', 'b', 'c', 1, 2, 3])
class FieldChoicesCaseInsensitiveTestCase(TestCase):
"""
Integration tests for FIELD_CHOICES case-insensitive key lookup.
"""
def test_replace_choices_with_different_casing(self):
"""Test that replacement works when config key casing differs."""
# Config uses lowercase, but code constructs PascalCase key
with override_settings(FIELD_CHOICES={'utilities.teststatus': [('new', 'New')]}):
class TestStatusChoices(ChoiceSet):
key = 'TestStatus' # Code will look up 'utilities.TestStatus'
CHOICES = [('old', 'Old')]
self.assertEqual(TestStatusChoices.CHOICES, [('new', 'New')])
def test_extend_choices_with_different_casing(self):
"""Test that extension works with the + suffix under casing differences."""
# Config uses lowercase with + suffix
with override_settings(FIELD_CHOICES={'utilities.teststatus+': [('extra', 'Extra')]}):
class TestStatusChoices(ChoiceSet):
key = 'TestStatus' # Code will look up 'utilities.TestStatus+'
CHOICES = [('base', 'Base')]
self.assertEqual(TestStatusChoices.CHOICES, [('base', 'Base'), ('extra', 'Extra')])

View File

@@ -2,6 +2,7 @@ from django.db.backends.postgresql.psycopg_any import NumericRange
from django.test import TestCase
from utilities.data import (
check_ranges_overlap,
get_config_value_ci,
ranges_to_string,
ranges_to_string_list,
string_to_ranges,
@@ -96,3 +97,25 @@ class RangeFunctionsTestCase(TestCase):
string_to_ranges('2-10, a-b'),
None # Fails to convert
)
class GetConfigValueCITestCase(TestCase):
def test_exact_match(self):
config = {'dcim.site': 'value1', 'dcim.Device': 'value2'}
self.assertEqual(get_config_value_ci(config, 'dcim.site'), 'value1')
self.assertEqual(get_config_value_ci(config, 'dcim.Device'), 'value2')
def test_case_insensitive_match(self):
config = {'dcim.Site': 'value1', 'ipam.IPAddress': 'value2'}
self.assertEqual(get_config_value_ci(config, 'dcim.site'), 'value1')
self.assertEqual(get_config_value_ci(config, 'ipam.ipaddress'), 'value2')
def test_default_value(self):
config = {'dcim.site': 'value1'}
self.assertIsNone(get_config_value_ci(config, 'nonexistent'))
self.assertEqual(get_config_value_ci(config, 'nonexistent', default=[]), [])
def test_empty_dict(self):
self.assertIsNone(get_config_value_ci({}, 'any.key'))
self.assertEqual(get_config_value_ci({}, 'any.key', default=[]), [])

View File

@@ -3,7 +3,7 @@
[project]
name = "netbox"
version = "4.5.1"
version = "4.5.2"
requires-python = ">=3.12"
description = "The premier source of truth powering network automation."
readme = "README.md"

Some files were not shown because too many files have changed in this diff.