Implement webhook events system #54

Closed
opened 2025-12-29 15:30:43 +01:00 by adam · 23 comments
Owner

Originally created by @mdlayher on GitHub (Jun 28, 2016).

To allow NetBox to become more extensible while still keeping its code clean and focused, a webhooks system could be implemented, enabling hooks to fire when certain events take place.

This might be a ways out, but it's worth thinking about at least. Proposed as an idea to solve #63 and #77 in a more generic way.

adam added the status: accepted label 2025-12-29 15:30:43 +01:00
adam closed this issue 2025-12-29 15:30:43 +01:00

@bellwood commented on GitHub (Jun 28, 2016):

WHMCS has a pretty nice hook system that one could look to...

Basically every system action would have a pre and post action hook...

Hooks are defined for these actions, and a numerical ID is assigned to each so they can be processed in a specific order.

Reference: http://docs.whmcs.com/Hooks


@x-zeroflux-x commented on GitHub (Jun 29, 2016):

Perhaps webhooks or an API could be used to connect to PowerDNS to handle PTR records for IP allocations.


@rdujardin commented on GitHub (Aug 1, 2016):

A simple solution may be to use Django signals: https://docs.djangoproject.com/en/1.9/ref/signals/

We can make a system which listens to these signals on every model, and also to the signal emitted when an HTTP request happens (to detect every access to NetBox). This system could then call user-specified scripts, placed in a specific folder for instance.

What do you think?


@rdujardin commented on GitHub (Aug 4, 2016):

Hi,

I have made a hook system for NetBox; you can currently see it on my fork, in the branch "hooks".

It's very simple: there is a folder "userscripts", and every script in it is automatically loaded and can connect receivers to Django signals. No action is needed to load the scripts, and no action will be needed to add support for future models.

Django doesn't emit any signal on a bulk edit, so I created a signal for it, which can be listened to in the same way.
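
Such a custom signal would normally be a `django.dispatch.Signal` instance. The sketch below illustrates the dispatch pattern with a minimal stand-in class so it runs without Django; the names `pre_bulk_edit`, `pk_list`, and `Provider` mirror the fork's API, while everything else is an assumption for illustration.

```python
# Minimal stand-in for django.dispatch.Signal, illustrating the pattern.
# In the actual fork this would simply be: pre_bulk_edit = django.dispatch.Signal()
class Signal:
    def __init__(self):
        self._receivers = []

    def connect(self, receiver, sender=None):
        # sender=None means "receive this signal from any sender"
        self._receivers.append((receiver, sender))

    def send(self, sender, **kwargs):
        # Call every receiver registered for this sender (or for any sender)
        results = []
        for receiver, wanted in self._receivers:
            if wanted is None or wanted is sender:
                results.append((receiver, receiver(sender=sender, **kwargs)))
        return results


# The custom signal Django lacks: emitted once per bulk edit.
pre_bulk_edit = Signal()


class Provider:  # placeholder for the real NetBox model class
    pass


def log_bulk_edit(sender, **kwargs):
    return f"{sender.__name__} bulk edit of pks {kwargs['pk_list']}"


pre_bulk_edit.connect(log_bulk_edit, sender=Provider)
responses = pre_bulk_edit.send(Provider, pk_list=[1, 2, 3])
```

The single `send()` per bulk edit, carrying `pk_list`, matches the behaviour described above.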

Below is the doc I have written for this system and an example of a listening script (included in the commit); please tell me what you think.

User scripts

NetBox emits a signal whenever an event occurs in the database: when an object is created, edited, or deleted, and when a bulk import, edit, or delete happens. It also emits a signal whenever NetBox is accessed directly or through the API, i.e. whenever an HTTP request is received.

You can hook your own scripts on these events by putting them in the userscripts folder and following the right format. This hook system is based upon Django's signals system, so all you have to do is connect receiver functions to the signals you want. You can follow the example already present in userscripts, and you can also read the documentation about Django signals (https://docs.djangoproject.com/en/1.9/topics/signals/) to see what is possible.

If several receivers are connected to the same signal, possibly in several different scripts, they will all be called when the signal is emitted.

Some parameters are passed to the receiver functions: in particular, the affected class and instance. On a bulk import or bulk delete, individual signals are emitted: a save or delete signal, respectively, for each object. Bulk edit works differently: a single signal is emitted for the whole bulk edit, carrying a parameter pk_list containing a list of the affected primary keys.

Each signal comes in two variants: a pre and a post.

Your scripts will automatically be loaded if they are in the userscripts folder and are plain modules. Packages won't be loaded, so if you have heavy processing to apply to signals, you can organize it however you like inside a package and put next to it a plain module which will be loaded and which imports the package.
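
A loader of this kind can be written with the standard library's importlib. This is only a sketch of the described behaviour (load every top-level .py module, skip packages); the function name and structure are assumptions, not the fork's actual code.

```python
import importlib.util
from pathlib import Path


def load_userscripts(folder):
    """Import every top-level .py module in `folder`; package directories are skipped."""
    modules = {}
    for path in sorted(Path(folder).glob("*.py")):
        # glob("*.py") only matches plain module files, so packages are ignored
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        # Executing the module runs its top-level code, which is where a
        # userscript connects its receivers to signals.
        spec.loader.exec_module(module)
        modules[path.stem] = module
    return modules
```

Because importing the module executes it, no registration step beyond dropping the file in the folder is needed, which matches the "no action is needed to load the scripts" behaviour described above.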

This hook system can be used, for instance, to automate updating your configuration from NetBox.

from django.core.signals import request_finished
from django.db.models.signals import pre_save
from django.dispatch import receiver
from userscripts import pre_bulk_edit, post_bulk_edit

from circuits.models import Provider
from ipam.models import IPAddress

import logging

@receiver(request_finished)
def callback_request_finished(sender, **kwargs):
    logger = logging.getLogger(__name__)
    logger.info('! Request')

@receiver(pre_save, sender=IPAddress)
def callback_ipaddress_pre_save(sender, **kwargs):
    logger = logging.getLogger(__name__)
    logger.info('! IP Pre-save')

@receiver(pre_bulk_edit, sender=Provider)
def callback_provider_pre_bulk_edit(sender, **kwargs):
    # pk_list holds the primary keys affected by the bulk edit
    pks = ','.join(str(pk) for pk in kwargs['pk_list'])
    logger = logging.getLogger(__name__)
    logger.info('! Provider Pre-bulk-edit (%s)', pks)

@receiver(pre_bulk_edit)
def callback_pre_bulk_edit(sender, **kwargs):
    pks = ','.join(str(pk) for pk in kwargs['pk_list'])
    logger = logging.getLogger(__name__)
    logger.info('! %s Pre-bulk-edit (%s)', sender, pks)


@rdujardin commented on GitHub (Aug 19, 2016):

I added to userscripts a logging utility and the ability to be called through an HTTP request. This allows, for instance, calling a script daily from a cron job instead of each time an event occurs.

New doc:

User scripts

NetBox emits a signal whenever an event occurs in the database: when an object is created, edited, or deleted, and when a bulk import, edit, or delete happens. It also emits a signal whenever NetBox is accessed directly or through the API, i.e. whenever an HTTP request is received.

You can hook your own scripts on these events by putting them in the userscripts folder and following the right format. This hook system is based upon Django's signals system, so all you have to do is connect receiver functions to the signals you want. You can follow the example already present in userscripts, and you can also read the documentation about Django signals (https://docs.djangoproject.com/en/1.9/topics/signals/) to see what is possible.

If several receivers are connected to the same signal, possibly in several different scripts, they will all be called when the signal is emitted.

Some parameters are passed to the receiver functions: in particular, the affected class and instance. On a bulk import or bulk delete, individual signals are emitted: a save or delete signal, respectively, for each object. Bulk edit works differently: a single signal is emitted for the whole bulk edit, carrying a parameter pk_list containing a list of the affected primary keys.

Each signal comes in two variants: a pre and a post.

Your scripts will automatically be loaded if they are in the userscripts folder and are plain modules. Packages won't be loaded, so if you have heavy processing to apply to signals, you can organize it however you like inside a package and put next to it a plain module which will be loaded and which imports the package.

This hook system can be used, for instance, to automate updating your configuration from NetBox.

Userscripts can also be called at the URL /userscript/?script=my_script. If a function call(get) is found in the script my_script.py, it will be called each time the URL is reached, with the parameter get being a dictionary containing the parameters of the HTTP GET request (get is basically request.GET), including the parameter script, whose value is the name of the user script called. If the user script or its call function can't be found, or if call raises an uncaught exception, a blank response is sent; if the function is called successfully, its return value is converted to a Unicode string and sent as the response.
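
The dispatch logic just described can be sketched as a plain function. The names call and script come from the doc above; run_userscript and the modules mapping are assumptions standing in for the view and module registry.

```python
def run_userscript(modules, get):
    """Invoke call(get) on the script named by get['script'].

    Returns the result as a string, or '' (a blank response) if the
    script or its call() function is missing, or if call() raises.
    """
    name = get.get("script", "")
    module = modules.get(name)
    func = getattr(module, "call", None)
    if not callable(func):
        return ""  # unknown script or no call() defined: blank response
    try:
        # The whole GET dict is passed through, including 'script' itself
        return str(func(get))
    except Exception:
        return ""  # uncaught exception in the userscript: blank response
```

Swallowing the exception matches the "blank response" behaviour described, though a real deployment would likely also log it.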

When the server starts, it loads the user scripts and creates for each of them a logger named after the user script file; for instance, the user script my_script.py has a logger named my_script.py. The user script can use this logger to log messages to the central userscripts log file. Log messages are automatically formatted with the date, the name of the user script, and the log level. See the example for more information. The path to the log file and its maximum size can be set in configuration.py: respectively USERSCRIPTS_LOG_FILE and USERSCRIPTS_LOG_MAX_SIZE (in bytes).
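
A size-capped shared log of this kind maps naturally onto the standard library's RotatingFileHandler. This sketch only assumes the behaviour described above (per-script logger name, shared file, date/name/level in each line); the function itself is hypothetical.

```python
import logging
from logging.handlers import RotatingFileHandler


def make_userscript_logger(script_name, log_file, max_bytes):
    """Create a logger named after the script, writing to the shared
    userscripts log file with date, script name, and level prepended."""
    logger = logging.getLogger(script_name)
    logger.setLevel(logging.INFO)
    # maxBytes caps the file size, mirroring USERSCRIPTS_LOG_MAX_SIZE
    handler = RotatingFileHandler(log_file, maxBytes=max_bytes, backupCount=1)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s: %(message)s"))
    logger.addHandler(handler)
    return logger
```

Because the logger name is the script filename, every line in the shared file identifies which userscript produced it.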

New example script:

https://github.com/rdujardin/netbox/blob/hooks/netbox/userscripts/example.py


@Armadill0 commented on GitHub (Nov 8, 2016):

This would be such a great feature to connect many external systems. An implementation would be awesome! 😃


@xenuser commented on GitHub (Nov 8, 2016):

I think what @rdujardin proposed is exactly what most organizations are looking for. It is also exactly what I need. I like the idea of using Django signals, and hopefully this approach is something @jeremystretch could easily adopt.

This feature request is very important since it allows the integration of third-party tools without having to write a specific "module" for each third-party solution/vendor.

In my organization, we'd use such a feature to automatically hand over data from NetBox to our own APIs, which take care of LDAP, DNS, third-party support tools, etc.

In my eyes, this GitHub issue actually covers something I'd consider a "basic feature" for a decent DCIM solution. And who knows, maybe some people haven't switched to NetBox yet because they are waiting for exactly this feature.


@WilliamMarti commented on GitHub (Nov 14, 2016):

Another +1 for this feature. In my current custom IPAM solution, we have code that automatically talks to the Infoblox API to add DNS entries if desired.

The ability to extend the Netbox functionality to other systems would be very valuable.


@a1466d44-d3dc-4c0b-90c7-315b088731d7 commented on GitHub (Feb 13, 2017):

+1 from me, as I'd like to let NetBox talk to Microsoft DNS and DHCP Server via custom scripts/plugins/API calls/etc.


@lampwins commented on GitHub (Mar 2, 2017):

Also throwing in support for this. One very big use case I have for this ties in with #150 (VLAN port mapping): when a VLAN is changed on a port, a webhook is fired off to an automation platform to enact that change.


@jsenecal commented on GitHub (Mar 3, 2017):

This could be achieved through Celery tasks, but that would also increase installation complexity...



@bmsmithvb commented on GitHub (Mar 8, 2017):

I love seeing features like this become implemented into such powerful software.

Is there any chance anyone has an example of what an integration with WHMCS or phpIPAM would look like? Not a coder myself...


@jeremystretch commented on GitHub (Mar 23, 2017):

I'd like to implement this in a manner similar to how GitHub does it: select the models and events you're interested in, and tell NetBox what URL to hit when something happens. NetBox would then POST a JSON representation of that event to the specified URL.

As @rdujardin suggested, we can leverage Django signals to accomplish this. However, I'd like to store webhooks in the database rather than as user-created scripts. We should still be able to support customization by templatizing the request body. I'm curious what people would expect the body of the POST request to look like.
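
Templatizing the request body could be as simple as substituting event fields into a user-supplied template string. A sketch with the standard library's string.Template; the placeholder names and render_body function are assumptions, not a settled design.

```python
import json
from string import Template


def render_body(template, event, model, instance):
    """Fill a user-supplied body template with event data.

    $event, $model, and $instance are the placeholders assumed here;
    $instance is substituted as serialized JSON.
    """
    return Template(template).substitute(
        event=event,
        model=model,
        instance=json.dumps(instance),
    )


# A user-defined template producing a JSON body:
body = render_body(
    '{"event": "$event", "model": "$model", "instance": $instance}',
    event="created", model="Site", instance={"name": "DC1"},
)
```

string.Template is attractive here because its `$name` syntax does not collide with JSON braces, unlike str.format.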


@mdlayher commented on GitHub (Mar 23, 2017):

I would probably expect something like the contents of an object that was just added, updated, or deleted, as well as some metadata that talks about the type of event, the time it occurred, the user who performed the action, etc.

JSON POST body all the way.


@lampwins commented on GitHub (Oct 18, 2017):

I am going to start a WIP for this. Here is my initial thought process.

To make this worthwhile, it is going to require a persistent back-end worker, so these actions happen in the background and we can handle things like retries in a proper manner. My thought is to build this functionality out but not enable it by default, as this is really a power-user feature anyway.

I want to use python-rq for its relative simplicity. This means that, to enable the feature, the user will have to install Redis and somehow start the background worker process (most likely just as an option when starting the server).

I do like @jeremystretch's idea to model it after GitHub's implementation. Ideally the event triggers will be fired by Django signals, and the background worker will figure out what to do based on the webhook(s) the user defines.


@lampwins commented on GitHub (Oct 23, 2017):

As a follow-up on where I am with this: I have a fully functional implementation and am working out some of the details now. In an effort to adhere to the 80/20 rule, here is how I have chosen to implement it.

These models may have zero or more webhooks registered to them (similar to how custom fields work):

  • Site
  • Rack
  • RackGroup
  • Device
  • Interface
  • VRF
  • IPAddress
  • Prefix
  • Aggregate
  • VLAN
  • VLANGroup
  • Service
  • Tenant
  • TenantGroup
  • Circuit
  • Provider
  • Cluster
  • ClusterGroup
  • VirtualMachine

Each of these models may have webhooks registered to one or more of these signals:

  • Create (post_save signal with created=True)
  • Update (post_save signal with created=False)
  • Delete (post_delete signal)

The Webhook model resides in extras and is accessed through the admin site. It looks like this:

  • name
  • payload_url - URL to make POST request to
  • type_create - Webhook is registered to create signals
  • type_update - Webhook is registered to update signals
  • type_delete - Webhook is registered to delete signals
  • enabled
  • insecure_ssl - When true, the POST request will not verify SSL (use with obvious caution)
  • secret - When provided, the POST request will include an X-Hook-Signature header, which is an HMAC (SHA-512) hex digest of the request body using the secret as the key.
  • obj_types - ManyToMany relation on the models this webhook is registered to. Again this is the same way custom fields work.
  • content_type - ContentType for the POST request. Either application/json or application/x-www-form-urlencoded.
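
The X-Hook-Signature scheme described above (HMAC-SHA512 of the request body keyed with the secret) can be computed and verified entirely with the standard library; the function names here are illustrative, not NetBox's.

```python
import hashlib
import hmac


def sign_body(secret, body):
    """Return the hex digest the sender places in X-Hook-Signature."""
    return hmac.new(secret.encode(), body, hashlib.sha512).hexdigest()


def verify_signature(secret, body, signature):
    """Receiver-side check of an incoming webhook request."""
    # compare_digest avoids leaking information through timing
    return hmac.compare_digest(sign_body(secret, body), signature)
```

A receiving endpoint would recompute the digest over the raw request body and compare it to the header before trusting the payload.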

The actual POST request is formatted like this, using the model's API serializer (with the application/json content type):

{
  "event": "created",
  "signal_received_timestamp": 1508769597,
  "model": "Site",
  "instance": {
    ...
  }
}

I consider this an "advanced feature" which, like the NAPALM integration, takes a little extra effort from the user to enable. That is to say, the internal functionality is all there but ships disabled.

To enable it, the user will have to perform these steps:

  1. Install (or make available for connection) a Redis server. It is very lightweight and easy to install through standard package managers.
  2. Install django-rq: pip install django-rq. This also installs python-rq and all of its dependencies, such as the Python redis client.
  3. Provide Redis connection settings in the configuration file. By default, these options will allow connecting to a locally installed Redis server using DB 0 with no username or password.
  4. Enable the webhook backend option in the configuration file.
  5. Restart NetBox.

I am currently coming up with an elegant way to start the python-rq background worker process from within the same supervisor unit that NetBox uses in the install docs. In my current implementation, the user would have to start it separately.

If the feature is not enabled in the configuration, NetBox will never try to import django-rq or any of its dependencies, nor try to connect to Redis, so if a user does not wish to use this feature, nothing will have changed and the user will not have to take any action. It will also never register any signals, so there is zero performance hit when the feature is disabled.

When enabled, there are some safeguards to ensure everything is ready: namely, when NetBox is starting up, it will verify that django-rq is installed and that a Redis connection can be made.

On startup, each app registers applicable models to the two generic signal receiver functions in extras.webhooks inside of the app's ready() method. When the extras app is ready, it pulls all webhooks out of the database and stores them in a cache. This is the native Django local-memory cache, so there is no added installation/upgrade complexity here. This is important because we don't want to hit the database for webhook matching criteria each time a model signal is fired. The extras app also registers a special signal on the Webhook model so that any update will refresh the webhook cache.

When a model signal is fired, it is received by the appropriate receiver function (post_save vs. post_delete). If the webhook feature is enabled, we retrieve the webhook cache and look for any webhooks which meet the criteria for this signal. For each matching webhook found, a job is enqueued into django-rq (the 'default' job queue is the only one implemented). The background worker then processes anything in the queue: the POST request is built and made to the payload URL. If a good status code (per the Python requests definition) is returned, the job is successful; otherwise it has failed, and django-rq puts the job in the "failed" queue.
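
The cache-matching step could look like the sketch below. The field names (enabled, obj_types, type_create, etc.) come from the Webhook model description above; representing cached webhooks as dicts and the function itself are assumptions for illustration.

```python
def find_matching_webhooks(cache, model_name, event):
    """Return enabled webhooks registered for this model and event.

    `cache` is the in-memory list of webhook records (dicts here);
    `event` is one of 'create', 'update', 'delete', selecting the
    type_create / type_update / type_delete flag.
    """
    matches = []
    for hook in cache:
        if not hook["enabled"]:
            continue  # disabled webhooks never fire
        if model_name not in hook["obj_types"]:
            continue  # not registered for this model
        if hook.get("type_" + event):
            matches.append(hook)
    return matches
```

Each match would then be handed to the queue (e.g. one django_rq enqueue call per webhook), keeping the signal receiver itself fast since no database query is needed.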

Django-rq also implements an admin view which allows the user to view the status of the job queue(s). There the result of jobs can be seen and failed jobs can be manually retried.

This is my first iteration but it seems to be working quite well :)


@madkiss commented on GitHub (Nov 13, 2017):

Hi and thank you for your wonderful work! Am I right to assume that this code is supposed to allow the calling of external utilities (e.g. certain binaries on the Netbox host) in case somebody performs a certain change using the API or the web interface? Based on the code, I am not sure how I would create the "external command" redirection ...


@lampwins commented on GitHub (Nov 14, 2017):

@madkiss not exactly. You are correct in that it is used to interact with external systems; however, it is meant to interface with HTTP systems. Specifically, the way #1640 is implemented, when registered models are created/updated/deleted, an HTTP POST request is made to one or more user-configured URLs. The payload of the request includes the model data and the event type.
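The payload format quoted earlier in the thread pairs the event type with the serialized instance. A minimal sketch of assembling that POST body (the field names follow the format described above; the helper name and sample data are illustrative):

```python
import json
import time

def build_payload(event: str, model_name: str, instance_data: dict) -> str:
    """Assemble the JSON body to POST to a webhook's payload_url,
    mirroring the format described in the thread."""
    return json.dumps({
        "event": event,
        "signal_received_timestamp": int(time.time()),
        "model": model_name,
        "instance": instance_data,
    })

body = build_payload("created", "Site", {"id": 1, "name": "DC-1"})
payload = json.loads(body)
assert payload["event"] == "created" and payload["model"] == "Site"
```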


@lampwins commented on GitHub (Jan 30, 2018):

So my first iteration on this (https://github.com/digitalocean/netbox/pull/1640) was very enlightening. Several things came up that need to be addressed in a further implementation attempt.

Basically, I used a property on each model to link it to its respective API serializer and then dynamically imported those when needed. The Django REST framework requires that the request be passed in when constructing a serializer with a model instance (for `HyperLinked` entities). This is an issue because the built-in Django signals for `post_save` and `post_delete` do not include the request object. I worked around this by constructing an empty request object and passing it into the serializer. The result is that the `HyperLinked` entity URLs were relative, i.e. did not contain the hostname.

The main issue resides in the bulk update views. These views use the queryset `update` method to perform bulk updates. This generates a single SQL query and thus bypasses any Django signal that would otherwise be dispatched for each model instance.
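The distinction can be shown with a toy dispatcher in plain Python (all names here are illustrative, not NetBox's actual code): a per-instance save fires its receiver, while a single bulk operation that writes every row at once never does.

```python
# Minimal sketch of why a single-query bulk update bypasses
# per-instance post_save receivers.
fired = []

def post_save_receiver(instance):
    fired.append(instance["id"])

def save(instance):
    # A normal save fires the post_save signal for that one instance.
    post_save_receiver(instance)

def bulk_update(instances, **fields):
    # Analogue of QuerySet.update(): one operation touching every row,
    # with no per-instance save() and therefore no signals.
    for inst in instances:
        inst.update(fields)

rows = [{"id": 1}, {"id": 2}]
bulk_update(rows, status="active")
assert fired == []   # the bulk path fires no signals
save(rows[0])
assert fired == [1]  # the per-instance path does
```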

Another "annoying" implementation detail was the way in which I registered the models to the signal receivers. Ultimately this can be refactored but I am not sure of the related performance hit.

django-rq worked out nicely, providing just the base level of functionality needed to implement the job queue without the tremendous complexity and overhead that Celery imposes.

In all, the implementation in the PR was "ok" for a very generic approach and in fact I am using it in another project in which the above considerations are not of the same concern to me at this time.

I think for this to truly succeed, we should actually take a step back and create our own signals. Otherwise we would need a hack to use the built-in `post_save` signal in a bulk update operation. I feel we should actually model the types of events a network operator would actually care about. For instance, I might not care that an individual interface has been updated, but instead I want to receive a notification that the interface configuration for a device has changed. So when you do a bulk update on a set of interfaces, you do not get events for each interface, but instead one event for the device.
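The device-level event idea above amounts to coalescing: collect the distinct parent devices of the changed interfaces and emit one event per device rather than one per interface. A sketch under that assumption (function and field names are hypothetical):

```python
def coalesce_interface_events(changed_interfaces):
    """Emit one 'device updated' event per distinct parent device,
    instead of one event per changed interface."""
    devices = []
    for iface in changed_interfaces:
        if iface["device"] not in devices:
            devices.append(iface["device"])
    return [{"event": "updated", "model": "Device", "name": d} for d in devices]

changes = [
    {"name": "eth0", "device": "sw1"},
    {"name": "eth1", "device": "sw1"},
    {"name": "eth0", "device": "sw2"},
]
events = coalesce_interface_events(changes)
assert len(events) == 2  # one event per device, not per interface
```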

@jeremystretch I see you are considering this feature request. Do you have any thoughts? I would be more than happy to dig into this more with you.


@lampwins commented on GitHub (Feb 2, 2018):

After some thought, I came back to this and refactored my first iteration. I think I solved the bulk operation problem and have reopened #1640. If anyone has the time, please try that branch out and let me know what you think.


@jeremystretch commented on GitHub (May 30, 2018):

Merged @lampwins' PR #1641 into develop-2.4, so this can be officially closed out! 🎉

Please open new issues for any feature requests or bugs related to webhooks from this point forward.


@mdlayher commented on GitHub (May 30, 2018):

Yes! Thanks so much @lampwins for doing this! Can't wait to give this a shot in our production environment.


@ghost commented on GitHub (Sep 21, 2018):

> Another +1 for this feature. In my current custom IPAM solution, we have code that automatically talks to the Infoblox API to add DNS entries if desired.

@WilliamMarti we are also using Infoblox in our organization, and we are planning to adopt it with NetBox. Could you elaborate a bit more on your setup and on how compatible/easy it is to integrate Infoblox with NetBox? It would be really great to hear 👍

Thanks.

Reference: starred/netbox#54