Webhook content type update not correctly taking effect #1821

Closed
opened 2025-12-29 17:19:26 +01:00 by adam · 4 comments

Originally created by @lampwins on GitHub (Jun 27, 2018).

Issue type

[ ] Feature request
[ x ] Bug report
[ ] Documentation

Environment

  • Python version: 3.6
  • NetBox version: develop-2.4

Description

When a Webhook's content type (the models to which the webhook is attached) is updated to remove the current model and add a new one, the old model is not removed from the Webhook, and thus the Webhook still responds to events for the old model.
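
A minimal reproduction sketch with the Django ORM follows; the `Webhook` model path and the `obj_type` many-to-many field name are assumptions based on NetBox 2.4, not verified against this exact revision:

```python
# Hypothetical reproduction sketch; extras.models.Webhook and its
# obj_type M2M field are assumed names, not verified against this branch.
from django.contrib.contenttypes.models import ContentType
from extras.models import Webhook

site_ct = ContentType.objects.get(app_label="dcim", model="site")
device_ct = ContentType.objects.get(app_label="dcim", model="device")

webhook = Webhook.objects.get(name="example-hook")
webhook.obj_type.set([device_ct])  # replace Site with Device

# Expected: only Device events trigger the webhook from now on.
# Observed: Site events still trigger it, because the old registration
# is never removed.
```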

adam closed this issue 2025-12-29 17:19:26 +01:00

@lampwins commented on GitHub (Jun 27, 2018):

In fixing this I came across a rather glaring limitation of using LocMemCache in a multi-process environment:

> Note that each process will have its own private cache instance, which means no cross-process caching is possible.

https://docs.djangoproject.com/en/2.0/topics/cache/#local-memory-caching

That really defeats the purpose of our use case, which is to evict the cache only when a Webhook instance is modified.

To overcome this, I have switched the cache backend to use redis. This does introduce a new dependency, django-redis, but like django-rq, it only needs to be installed when using the webhooks backend feature. I have taken these two requirements and put them in a new `webhooks_requirements.txt` file and updated the docs to note that this file is provided for convenience when deploying the webhooks feature. django-redis is full-featured, actively maintained, and could offer caching benefits to the project down the road. Using redis as the cache backend incurs no additional installation overhead, as redis is already required for python-rq to function.
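
For reference, pointing Django's default cache at Redis with django-redis looks roughly like the following; the host, port, and database index here are illustrative values, not a proposed NetBox default:

```python
# Configuration sketch: use django-redis as the default cache backend.
# The Redis location can reuse the instance already required by django-rq.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://localhost:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}
```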


@jeremystretch commented on GitHub (Jun 27, 2018):

I don't like the idea of introducing another dependency just to cache a single value. Why does it need to be cached in the first place? It's just one additional database query on save/delete, correct?


@lampwins commented on GitHub (Jun 27, 2018):

Right, the idea was to cache the webhook instances so that extra query is not needed whenever a signal is fired. I suppose in hindsight it is a bit of a premature optimization. We can always rip the cache layer out for now, and add it back if the feature proves to be a performance bottleneck.
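
Without the cache layer, the signal receiver simply hits the database each time, roughly along these lines; the field names (`enabled`, `obj_type`, `type_create`, `type_update`) and the model location are assumptions about the Webhook model, not the actual NetBox code:

```python
# Sketch of the uncached lookup: one extra query per signal, filtering
# webhooks registered for the sender's ContentType. Field names assumed.
from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_save
from django.dispatch import receiver

from extras.models import Webhook  # model location assumed


@receiver(post_save)
def enqueue_webhooks(sender, instance, created, **kwargs):
    content_type = ContentType.objects.get_for_model(sender)
    event_filter = {"type_create": True} if created else {"type_update": True}
    webhooks = Webhook.objects.filter(
        enabled=True,
        obj_type=content_type,
        **event_filter,
    )
    for webhook in webhooks:
        ...  # hand the payload off to the RQ worker for delivery
```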


@lampwins commented on GitHub (Jun 28, 2018):

I fixed this issue in another commit and also ripped out the cache layer since LocMemCache will not work and another solution would be overly complex.

Reference: starred/netbox#1821