'ipam.models.MultipleObjectsReturned' #561

Closed
opened 2025-12-29 16:23:11 +01:00 by adam · 8 comments

Originally created by @Fyzzle on GitHub (Dec 1, 2016).

I had to migrate the machine from a Hyper-V environment to an ESX one, and the V2V conversion didn't like the Ubuntu 16.04 server it was running on, so...

I fired up a new server, went through the setup process and restored the database using:

pg_dumpall -h (ip of server) > backup.bak
psql -f backup.bak postgres

Pretty much everything works except a few subnets under Prefixes. On those I get the error:

Server Error
There was a problem with your request. This error has been logged and administrative staff have been notified. Please return to the home page and try again.

If you are responsible for this installation, please consider filing a bug report. Additional information is provided below:

<class 'ipam.models.MultipleObjectsReturned'>

get() returned more than one Aggregate -- it returned 2!

I'm guessing I have a duplicate entry somewhere, but I'm not sure how to track down and delete the funky one, or if there's a better way to do it.

adam closed this issue 2025-12-29 16:23:12 +01:00

@jeremystretch commented on GitHub (Dec 1, 2016):

That's odd. Does the list of aggregates in the admin UI (IPAM > Aggregates) look correct? You can also check via the shell by running ./manage.py shell and pasting the following code:

from ipam.models import Aggregate
for a in Aggregate.objects.all():
    print('[{}] {}'.format(a.pk, a))

@Fyzzle commented on GitHub (Dec 1, 2016):

Yeah it looks great, both in the web interface and from the shell.


@Fyzzle commented on GitHub (Dec 1, 2016):

Wait, I figured it out; I'm a dummy. Posting the fix.

OK, so at first glance it looked fine, but the aggregates were:

[1] 10.0.0.0/8
[8] 10.32.0.0/11
[9] 10.64.0.0/11
[10] 10.96.0.0/11
[11] 10.224.0.0/11
[2] 172.16.0.0/12
[3] 192.168.0.0/16

1, 2, and 3 were all using their default settings. I had divided 10.0.0.0/8 into regional chunks as /11 aggregates, and I completely missed the /8 when looking things over. Once I changed it to 10.0.0.0/11, everything worked great again.
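
For anyone who wants to spot this programmatically, here's a rough sketch you can paste into ./manage.py shell; it assumes Aggregate.prefix supports netaddr-style containment checks ("network in network"), which may differ by version:

from ipam.models import Aggregate

# Rough sketch: pairwise-compare all aggregates and flag any that nest or overlap.
# Assumes Aggregate.prefix behaves like a netaddr IPNetwork (supports "in").
aggregates = list(Aggregate.objects.all())
for i, a in enumerate(aggregates):
    for b in aggregates[i + 1:]:
        if a.prefix in b.prefix or b.prefix in a.prefix:
            print('Overlap: [{}] {} <-> [{}] {}'.format(a.pk, a, b.pk, b))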


@Fyzzle commented on GitHub (Dec 1, 2016):

Sorry everyone, false alarm. Can I still blame a turkey hangover?


@jeremystretch commented on GitHub (Dec 2, 2016):

@Fyzzle I'm curious how you ended up with overlapping aggregates in the first place, though. There's validation in place that's supposed to prevent that from happening.
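
(Purely as an illustration of the kind of check meant here, and not NetBox's actual code: an overlap test like the sketch below, using Python's standard ipaddress module, would reject a /11 that sits inside an existing /8.)

import ipaddress

# Illustrative sketch only -- not NetBox's implementation.
# A candidate aggregate is rejected if it overlaps any existing aggregate.
existing = [ipaddress.ip_network(p) for p in ('10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16')]
candidate = ipaddress.ip_network('10.32.0.0/11')

for prefix in existing:
    if candidate.overlaps(prefix):
        print('Rejected: {} overlaps existing aggregate {}'.format(candidate, prefix))
        break
else:
    print('OK: {} does not overlap any existing aggregate'.format(candidate))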


@Fyzzle commented on GitHub (Dec 2, 2016):

Ah, gotcha. I'm a little overextended today, but everything is fresh in my head; I'll write up a step-by-step of what I did.


@Fyzzle commented on GitHub (Dec 6, 2016):

Okie dokie:

I went through the docs and installed on an Ubuntu 16.04 server using the Debian instructions.

I installed by cloning the git repo and followed the rest step by step, up to the "Run Database Migrations" step. Before that step, I restored the database to the new server. I think that's where the hang-up was; I should have done the restore later in the process.

The rest of the install was done as written; I did not skip any steps.


@jeremystretch commented on GitHub (Dec 6, 2016):

Thanks for the follow-up. Barring any recurring errors, it seems like we can attribute this to a fluke.

Reference: starred/netbox#561