netbox/netbox/extras/api/serializers_/scripts.py
Arthur Hanson f2d8ae29c2 21701 Allow scripts to be uploaded via POST to API (#21756)
* #21701 allow uploading scripts via API

* #21701 allow uploading scripts via API

* add extra test

* change to use Script api endpoint

* ruff fix

* review feedback

* review feedback

* review feedback

* Fix permission check, perform_create delegation, and test mock setup

- destroy() now checks extras.delete_script (queryset is Script.objects.all())
- create() delegates to self.perform_create() instead of calling serializer.save() directly
- Add comment explaining why update/partial_update intentionally return 405
- Fix test_upload_script_module: set mock_storage.save.return_value so file_path
  receives a real string after the _save_upload return-value fix; add DB existence check

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Return 400 instead of 500 on duplicate script module upload

Catch IntegrityError from the unique (file_root, file_path) constraint
and re-raise as a ValidationError so the API returns a 400 with a clear
message rather than a 500.
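
The resulting shape, roughly as it appears in create() in the file below:

    try:
        instance = super().create(validated_data)
    except IntegrityError as e:
        # Only the (file_root, file_path) uniqueness violation maps to a 400;
        # anything else still propagates as a server error.
        if 'file_path' in str(e):
            raise serializers.ValidationError(
                _("A script module with this file name already exists.")
            )
        raise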

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Validate upload_file + data_source conflict for multipart requests

DRF 3.16's Serializer.get_value() uses parse_html_dict(), falling back to
the empty sentinel, for all HTML/multipart input. A flat key like
data_source=2 produces an empty dict ({}), which is falsy, so get_value()
falls back to empty and the nested field is silently skipped.
data.get('data_source') is therefore always None in multipart requests,
bypassing the conflict check.

Fix: also check self.initial_data for data_source and data_file in all
three guards in validate(), so the raw submitted value is detected even
when DRF's HTML parser drops the deserialized object.

Add test_upload_with_data_source_fails to cover the multipart conflict
path explicitly.
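
A minimal sketch of the guard (the serializer this landed in predates the
final upload-only rewrite, so the field names here follow the commit
description rather than the file below):

    def validate(self, data):
        # Also check initial_data: for multipart input, DRF's HTML parser
        # drops the deserialized nested objects, so data.get() alone misses them.
        data_source = data.get('data_source') or self.initial_data.get('data_source')
        upload_file = data.get('upload_file') or self.initial_data.get('upload_file')
        if upload_file and data_source:
            raise serializers.ValidationError(
                _("An uploaded file cannot be used together with a data source.")
            )
        return super().validate(data)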

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Require data_file when data_source is specified

data_source alone is not a valid creation payload; a data_file must
also be provided to identify which file within the source to sync.
Add the corresponding validation error and a test to cover the case.
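
Sketched with the same assumed field names as above:

    data_file = data.get('data_file') or self.initial_data.get('data_file')
    if data_source and not data_file:
        raise serializers.ValidationError(
            _("A data file must be specified when a data source is used.")
        )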

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Align ManagedFileForm validation with API serializer rules

Add the missing checks to ManagedFileForm.clean():
- upload_file + data_source is rejected (matches API)
- data_source without data_file is rejected with a specific message
- Update the 'nothing provided' error to mention data source + data file

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Revert "Align ManagedFileForm validation with API serializer rules"

This reverts commit f0ac7c3bd2.

* Align API validation messages with UI; restore complete checks

- Match UI error messages for upload+data_file conflict and no-source case
- Keep API-only guards for upload+data_source and data_source-without-data_file
- Restore test_upload_with_data_source_fails

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Run source/file conflict checks before super().validate() / full_clean()

super().validate() calls full_clean() on the model instance, which raises
a unique-constraint error for (file_root, file_path) when file_path is
empty (e.g. data_source-only requests). Move the conflict guards above the
super() call so they produce clear, actionable error messages before
full_clean() has a chance to surface confusing database-level errors.
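
The reordering, sketched (the conflict-check helper here is hypothetical;
the actual guards live inline in validate()):

    def validate(self, data):
        # Specific, actionable checks first...
        self._validate_source_file_conflicts(data)
        # ...then super().validate(), whose full_clean() would otherwise raise
        # a confusing (file_root, file_path) unique-constraint error while
        # file_path is still empty.
        return super().validate(data)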

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* destroy() deletes ScriptModule, not Script

DELETE /api/extras/scripts/<pk>/ now deletes the entire ScriptModule
(matching the UI's delete view), including modules with no Script
children (e.g. sync hasn't run yet). Permission check updated to
delete_scriptmodule. The queryset restriction for destroy is removed
since the module is deleted via script.module, not super().destroy().
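
A sketch of the described behavior (viewset wiring assumed, not copied from
the final code):

    from rest_framework import status
    from rest_framework.exceptions import PermissionDenied
    from rest_framework.response import Response

    def destroy(self, request, *args, **kwargs):
        script = self.get_object()
        if not request.user.has_perm('extras.delete_scriptmodule'):
            raise PermissionDenied()
        # Delete the whole module (and any Script children with it),
        # matching the UI's delete view.
        script.module.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)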

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* review feedback

* cleanup

* cleanup

* cleanup

* cleanup

* change to ScriptModule

* change to ScriptModule

* change to ScriptModule

* update docs

* cleanup

* restore file

* cleanup

* cleanup

* cleanup

* cleanup

* cleanup

* keep only upload functionality

* cleanup

* cleanup

* cleanup

* change to scripts/upload api

* cleanup

* cleanup

* cleanup

* cleanup

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-02 08:42:14 -04:00

147 lines
5.3 KiB
Python

import logging

from django.core.files.storage import storages
from django.db import IntegrityError
from django.utils.translation import gettext_lazy as _
from drf_spectacular.utils import extend_schema_field
from rest_framework import serializers

from core.api.serializers_.jobs import JobSerializer
from core.choices import ManagedFileRootPathChoices
from extras.models import Script, ScriptModule
from netbox.api.serializers import ValidatedModelSerializer
from utilities.datetime import local_now

logger = logging.getLogger(__name__)

__all__ = (
    'ScriptDetailSerializer',
    'ScriptInputSerializer',
    'ScriptModuleSerializer',
    'ScriptSerializer',
)


class ScriptModuleSerializer(ValidatedModelSerializer):
    file = serializers.FileField(write_only=True)
    file_path = serializers.CharField(read_only=True)

    class Meta:
        model = ScriptModule
        fields = ['id', 'display', 'file_path', 'file', 'created', 'last_updated']
        brief_fields = ('id', 'display')

    def validate(self, data):
        # ScriptModule.save() sets file_root; inject it here so full_clean() succeeds.
        # Pop 'file' before model instantiation, as ScriptModule has no such field.
        file = data.pop('file', None)
        data['file_root'] = ManagedFileRootPathChoices.SCRIPTS
        data = super().validate(data)
        data.pop('file_root', None)
        if file is not None:
            data['file'] = file
        return data

    def create(self, validated_data):
        file = validated_data.pop('file')
        storage = storages.create_storage(storages.backends["scripts"])
        validated_data['file_path'] = storage.save(file.name, file)
        created = False
        try:
            instance = super().create(validated_data)
            created = True
            return instance
        except IntegrityError as e:
            # Surface the unique (file_root, file_path) violation as a 400 rather than a 500
            if 'file_path' in str(e):
                raise serializers.ValidationError(
                    _("A script module with this file name already exists.")
                )
            raise
        finally:
            # Creation failed: clean up the file already written to storage
            if not created and (file_path := validated_data.get('file_path')):
                try:
                    storage.delete(file_path)
                except Exception:
                    logger.warning(f"Failed to delete orphaned script file '{file_path}' from storage.")


class ScriptSerializer(ValidatedModelSerializer):
    description = serializers.SerializerMethodField(read_only=True)
    vars = serializers.SerializerMethodField(read_only=True)
    result = JobSerializer(nested=True, read_only=True)

    class Meta:
        model = Script
        fields = [
            'id', 'url', 'display_url', 'module', 'name', 'description', 'vars', 'result', 'display', 'is_executable',
        ]
        brief_fields = ('id', 'url', 'display', 'name', 'description')

    @extend_schema_field(serializers.JSONField(allow_null=True))
    def get_vars(self, obj):
        if obj.python_class:
            return {
                k: v.__class__.__name__ for k, v in obj.python_class()._get_vars().items()
            }
        return {}

    @extend_schema_field(serializers.CharField())
    def get_display(self, obj):
        return f'{obj.name} ({obj.module})'

    @extend_schema_field(serializers.CharField(allow_null=True))
    def get_description(self, obj):
        if obj.python_class:
            return obj.python_class().description
        return None


class ScriptDetailSerializer(ScriptSerializer):
    result = serializers.SerializerMethodField(read_only=True)

    @extend_schema_field(JobSerializer())
    def get_result(self, obj):
        # Serialize the most recent job for this script
        job = obj.jobs.all().order_by('-created').first()
        context = {
            'request': self.context['request']
        }
        data = JobSerializer(job, context=context).data
        return data


class ScriptInputSerializer(serializers.Serializer):
    data = serializers.JSONField()
    commit = serializers.BooleanField()
    schedule_at = serializers.DateTimeField(required=False, allow_null=True)
    interval = serializers.IntegerField(required=False, allow_null=True)

    def validate_schedule_at(self, value):
        """
        Validates the specified schedule time for a script execution.
        """
        if value:
            if not self.context['script'].python_class.scheduling_enabled:
                raise serializers.ValidationError(_('Scheduling is not enabled for this script.'))
            if value < local_now():
                raise serializers.ValidationError(_('Scheduled time must be in the future.'))
        return value

    def validate_interval(self, value):
        """
        Validates the provided interval based on the script's scheduling configuration.
        """
        if value and not self.context['script'].python_class.scheduling_enabled:
            raise serializers.ValidationError(_('Scheduling is not enabled for this script.'))
        return value

    def validate(self, data):
        """
        Validates the given data and ensures the necessary fields are populated.
        """
        # Set the schedule_at time to now if only an interval is provided
        # while handling the case where schedule_at is null.
        if data.get('interval') and not data.get('schedule_at'):
            data['schedule_at'] = local_now()
        return super().validate(data)
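

# Example payload accepted by ScriptInputSerializer when executing a script
# (a sketch with illustrative values; 'data' holds the script's input vars):
#
#     {
#         "data": {"site": 1},
#         "commit": true,
#         "schedule_at": null,
#         "interval": 60
#     }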