Introduce a dedicated logging mechanism for jobs #11344

Closed
opened 2025-12-29 21:43:58 +01:00 by adam · 3 comments

Originally created by @jeremystretch on GitHub (Jul 3, 2025).

Originally assigned to: @jeremystretch on GitHub.

NetBox version

v4.3.3

Feature type

New functionality

Proposed functionality

Add to the JobRunner class a log() mechanism which can be used to record arbitrary log messages during job execution. These messages will be stored separately from job data. Each log message should include:

  • Timestamp
  • Level (see below)
  • Message

Supported levels should include the following, which map directly to native Python logging levels:

  • Debug
  • Info (default)
  • Warning
  • Error

(Note that these levels are not to be confused with the levels supported for custom script logging, which differ in the nature of their use.)
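The proposed levels map directly onto the native Python logging levels, which could be sketched as follows (the dict and helper names here are illustrative, not part of the proposal):

```python
import logging

# Hypothetical mapping of the proposed job log levels onto the
# corresponding native Python logging constants.
JOB_LOG_LEVELS = {
    'debug': logging.DEBUG,      # 10
    'info': logging.INFO,        # 20 (the proposed default)
    'warning': logging.WARNING,  # 30
    'error': logging.ERROR,      # 40
}

def to_python_level(level: str) -> int:
    """Translate a job log level name to its stdlib numeric value."""
    return JOB_LOG_LEVELS[level]
```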

Use case

This would provide a convenient, persistent mechanism to record log messages during the execution of a job, without requiring the job's author to devise a custom implementation.

Database changes

Add a log field to the Job model. This could be a JSONField or an ArrayField, depending on how we structure the data.
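If the JSONField route were taken, each log entry could simply be a small dict appended to the field. A minimal sketch of that shape, with a plain class standing in for the Job model (the method and key names are assumptions for discussion, not the final API):

```python
from datetime import datetime, timezone

class Job:
    """Stand-in for core.Job; `log` would be models.JSONField(default=list)."""

    def __init__(self):
        self.log = []

    def log_message(self, message, level='info'):
        # Each entry carries the three proposed attributes:
        # timestamp, level, and message.
        self.log.append({
            'time': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
        })

job = Job()
job.log_message('Starting job')
job.log_message('Deleted 123 objects', level='warning')
```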

External dependencies

N/A

adam added the status: accepted, type: feature, complexity: medium labels 2025-12-29 21:43:58 +01:00
adam closed this issue 2025-12-29 21:43:58 +01:00

@alehaa commented on GitHub (Jul 5, 2025):

I believe this aligns with my proposal in #17281. I’m still interested in implementing logging for the JobRunner framework.


@jeremystretch commented on GitHub (Jul 8, 2025):

I think FR #17281 conflates the concerns of running a script (the logs of which are largely arbitrary) and logging for the execution of the job itself. This FR proposes a mechanism to provide for the latter.


@alehaa commented on GitHub (Jul 8, 2025):

My intention in #17281 was simply to log any information that originates from within a job. This way, users can be informed about things like "deleted objects a, b, and c." If I understand you correctly, you’d like to log general information about the job’s execution, such as "starting…" and "deleted 123 objects."

My proposal for implementation would be as follows (pseudocode, mostly borrowed from ObjectChange). Of course, one can omit the extra JobLogLine model, but I thought it might be useful for filtering logs.

class JobLogLine(models.Model):
    job = models.ForeignKey(
        to='core.Job',
        on_delete=models.CASCADE,
        related_name='log'
    )
    time = models.DateTimeField(
        verbose_name=_('time'),
        auto_now_add=True,  # TODO: Check if time = object created in buffer
        editable=False,
        db_index=True,
    )
    level = models.CharField(  # Using an integer might improve performance?
        verbose_name=_('level'),
        max_length=10,
        choices=JobLogLevelChoices,
        default=JobLogLevelChoices.INFO,
    )
    message = models.CharField(
        verbose_name=_('message'),
        max_length=200,  # we might omit this to allow longer messages
    )

    # Optional. Can be used to log messages like
    # 'Cannot run housekeeping for this object because of <reason>'
    related_object_type = models.ForeignKey(
        to='contenttypes.ContentType',
        on_delete=models.PROTECT,
        related_name='+',
        blank=True,
        null=True
    )
    related_object_id = models.PositiveBigIntegerField(
        blank=True,
        null=True
    )
    related_object = GenericForeignKey(
        ct_field='related_object_type',
        fk_field='related_object_id'
    )

class JobRunner:

    def __init__(self):
        # An instance attribute, so that concurrent jobs don't share one buffer
        self.log_buffer = []

    def log(self, message: str, level=JobLogLevelChoices.INFO, related_object=None) -> None:
        self.log_buffer.append(
            JobLogLine(level=level, message=message, related_object=related_object)
        )

    def handle(self, *args, **kwargs):
        ...

        # Finally create job log line objects. Not done during run(), as a
        # transaction might be active.
        JobLogLine.objects.bulk_create(self.log_buffer)
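The buffer-then-flush pattern above can be exercised without Django. A minimal, framework-free sketch (all names here are illustrative): log() appends to an in-memory list, and flush() hands the batch off in one step, standing in for bulk_create().

```python
class BufferedLogger:
    """Toy model of the buffered job-log pattern sketched above."""

    def __init__(self):
        self._buffer = []

    def log(self, message, level='info'):
        # Buffer entries in memory; nothing is persisted yet.
        self._buffer.append({'level': level, 'message': message})

    def flush(self):
        # Swap the buffer out atomically and return the batch;
        # a real implementation would bulk_create() these objects.
        batch, self._buffer = self._buffer, []
        return batch

logger = BufferedLogger()
logger.log('Starting cleanup')
logger.log('Deleted 123 objects', level='warning')
batch = logger.flush()
```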

Reference: starred/netbox#11344