Processing times

Hey!

Around 15 days ago I tried uploading a 360° video. The upload itself seems to have gone well, but the video appears to be stuck in map data processing.

I do see from https://help.mapillary.com/hc/en-us/articles/4408023385874-Troubleshooting-image-processing-delays-and-failed-sequences that up to 48 hours is normal. But I guess a 360° video involves more processing and could take longer?

I wouldn’t have thought much about it if it were just one video waiting, but all three videos I have uploaded are pending in the same state.

So I’m just wondering if I’m doing something wrong here, and whether someone else has seen this before.

Thanks

I’ll add this as well: the same type of video, taken on the same camera with the same settings in a single drive, was processed successfully. (Ref 7 Aug 2025 18:31)

But I guess it has a couple of hundred fewer images :thinking:

cc: @balys for thoughts

@balys Yeah, initially I assumed it was a matter of images per sequence too. But my latest uploads suggest that image count does not have much to do with it specifically: some of my latest 13k~15k-image sequences completed before some 7k-image sequences, while other sequences of roughly 6k images just won’t budge. Many of my latest sequences have now been stuck for over a week for no apparent reason. Is there anything contributors can do to unstick sequence processing, other than deleting an image?

Hi,

Thanks for reporting.

It does sound like an issue on our end - will investigate.

Kind regards,
Balys

I have the same problem. Uploaded more than a month ago and still in “Map Data Processing” state:

Sequence key: LrBRgjlIKUhW1zHYmA8x0P

@balys :thinking: Any update on this?

Hi,

It affects a small fraction of our uploads. We’re in the early stages of triaging this and, regrettably, it is not something that’s trivial to fix.

cc: @boris

Kind regards,
Balys

Is there perhaps some way for contributors to work around it?

I am going to try uploading my intersecting sequences one by one, that is, only uploading the next intersecting sequence after the previous one has finished processing.
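
If it helps anyone else, here is a minimal sketch of that wait-then-upload loop. It assumes the public v4 Graph API’s `image_ids` endpoint and an `MLY_TOKEN` client token as described in the API docs (verify both there); note that images becoming visible in the API is only a proxy, and map data may still be pending:

```python
# Minimal sketch of "wait before uploading the next intersecting
# sequence". Assumptions: MLY_TOKEN holds a valid client token, and the
# public v4 endpoint https://graph.mapillary.com/image_ids?sequence_id=...
# behaves as documented; verify against the current API docs.
import os
import time

import requests

TOKEN = os.environ["MLY_TOKEN"]  # illustrative env var for your token

def visible_image_count(sequence_id: str) -> int:
    """Return how many images of the sequence the API already exposes."""
    resp = requests.get(
        "https://graph.mapillary.com/image_ids",
        params={"sequence_id": sequence_id},
        headers={"Authorization": f"OAuth {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json().get("data", []))

def wait_until_visible(sequence_id: str, expected: int, poll_s: int = 600) -> None:
    """Poll every poll_s seconds until all expected images show up.

    Visibility in the API is only a proxy: reconstruction (map data)
    may still be pending even when all images are listed.
    """
    while visible_image_count(sequence_id) < expected:
        time.sleep(poll_s)

# Example: wait_until_visible("LrBRgjlIKUhW1zHYmA8x0P", expected=4000)
```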

Never mind the details; I’m waiting for someone in charge to come up with a solution similar to the one used for failed sequences.

That is, delete them after two weeks and the problem is solved.

This is sarcasm…

Hey!

Thanks for the update! Really appreciate what you and the team do.

Once you know more, I would love to hear whether there are workarounds, or things we should avoid doing so we don’t create these stuck sequences :grinning_face_with_smiling_eyes:

Now that I have learned how to find sequence IDs, I’ll share my stuck ones here as well if it helps with debugging :slight_smile: (a small check script follows the list)

  • u1naOz4Md8kRqKCIocBimF - 30 Jul 2025
  • 738947985862531 - 6 Aug 2025
  • ulvjGFqMRa9oUTdy70Vger - 7 Aug 2025
  • bFDgIohHyaTiS4WvVfqU9t - 17 Aug 2025
  • Zjr8DiSuQV1Afc9slLOFoE - 16 Aug 2025
  • mQqX1dcnSRM5DkKvuCT0Ey - 16 Aug 2025
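
In case anyone wants to script a check on sequences like these: a small sketch, assuming the v4 Graph API’s `image_ids` endpoint and the `sfm_cluster` image field (which, as far as I understand, is only populated once reconstruction output exists; treat both as assumptions and confirm in the docs):

```python
# Sketch: for each sequence, fetch one image and see whether the
# sfm_cluster field is populated, as a rough proxy for "map data done".
# Assumptions: MLY_TOKEN holds a client token, and the image_ids and
# image endpoints behave as in the public v4 Graph API docs.
import os

import requests

HEADERS = {"Authorization": f"OAuth {os.environ['MLY_TOKEN']}"}

def first_image_id(sequence_id: str) -> str | None:
    """Return one image ID of the sequence, or None if none are visible."""
    resp = requests.get(
        "https://graph.mapillary.com/image_ids",
        params={"sequence_id": sequence_id},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json().get("data", [])
    return data[0]["id"] if data else None

def has_sfm_output(image_id: str) -> bool:
    """True if the image response carries a populated sfm_cluster field."""
    resp = requests.get(
        f"https://graph.mapillary.com/{image_id}",
        params={"fields": "id,sfm_cluster"},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return "sfm_cluster" in resp.json()

for seq in ["u1naOz4Md8kRqKCIocBimF", "ulvjGFqMRa9oUTdy70Vger"]:
    img = first_image_id(seq)
    print(seq, "no images visible" if img is None else has_sfm_output(img))
```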

Thanks again!

:slightly_frowning_face: I am not sure this helped. :thinking: It looks like it may also be connected to overall processing load? I wish we had at least some hint as to how to work around this issue, @balys. Sequences with fewer images are not really a solution either, because they can get stuck too.

Hi,

Apologies for taking a while to respond. Thanks for taking the time to evaluate this, and for highlighting that shorter sequences get stuck too. I’ve created a ticket for this to be looked into. The upside is that, if there is some systematic issue, all of the stuck uploads can be reprocessed.

Many of the features (like viewing/sharing and often image segmentation) are still available even if the upload is stuck.

Thanks for your patience.

Kind regards,
Balys

cc: @boris @asturksever

I have not been able to determine a specific number, and I also do not think there is a fixed threshold above which sequence processing gets stuck. It seems to be highly dynamic.

That’s right! :slight_smile: However, I am particularly interested in the traffic sign and point feature map layers, and these require reconstruction to complete. I do fully understand that some things are not easy or trivial to solve and need more time to truly get resolved. From what I understand, sequence processing completion depends on some non-deterministic component. Thus, it would be nice to either have a workaround or some mechanism for contributors to nudge a stuck sequence, say no sooner than 72 hours after upload. @nikola Would this make sense? Would such functionality be something that could potentially be added to the feed easily?

Sadly, I have to admit that I am getting fed up with the current situation, where for no apparent reason some sequences complete processing while others do not. I have tried multiple things, like splitting sequences and uploading them again, or atomic uploads of intersecting sequences. It turns out that sequence image count definitely does not have much of an impact on processing success: sometimes 19k-image sequences finish processing nicely while 4k-image sequences get stuck, or vice versa. :person_shrugging: Additionally, sequences that are in fact absolutely fine, and that have at least been ingested before, are now rejected as Failed. :face_with_diagonal_mouth:

What does seem to work as a workaround for densely captured sequences is splitting them into low-count interleaved sequences: sparser sequences that together still cover the same route and are drawn from a single original sequence (as sketched below). However, this is only an ugly mitigation, because it breaks neat dataset grouping and creates more work for contributors (it is quite laborious to upload). :hot_face:
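
For reference, the interleaving itself is trivial. A hypothetical sketch of the idea (the file layout, naming, and every-k-th-image rule are all my own illustration, not anything mapillary_tools does):

```python
# Sketch of the interleaving workaround: split one dense capture into
# k sparser sub-sequences by assigning every k-th image (sorted by
# filename, assumed chronological) to the same subset. Directory layout
# and naming here are illustrative assumptions.
import shutil
from pathlib import Path

def interleave_split(src: Path, k: int = 3) -> None:
    """Copy images of src into k interleaved part_NN subdirectories."""
    images = sorted(src.glob("*.jpg"))
    for i, img in enumerate(images):
        dest = src / f"part_{i % k:02d}"
        dest.mkdir(exist_ok=True)
        shutil.copy2(img, dest / img.name)

# interleave_split(Path("capture_2025-08-17"), k=3)
# Each part_NN directory is then uploaded as its own sequence.
```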

@GITNE - we’re going to take another look to see if we can get to the bottom of this - thanks for your patience!

@boris @balys This error affects almost 50 percent of my sequences now.

@balys Splitting sequences into interleaved sub-sequences seems to work as a workaround for now. One side effect, however, is that the distance statistics for what was previously covered by one sequence are now multiplied by the interleaving factor. In any case, I do not think this is how it should work from a contributor’s perspective: there should be no limit on image count per sequence, neither a soft limit imposed by any processing step nor a hard limit enforced by the upload tool or the server.

Since it looks like reconstruction happens in sectors, why not build multiple safely sized sets from random images per sector, rather than processing things per sequence, where they may hit RAM or other limits? Reconstructing chronologically sorted sequences does not necessarily increase the chances of overlap.
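
To illustrate what I mean, a toy sketch of random, size-capped batching per sector; this is purely my own illustration of the proposal, not how Mapillary’s pipeline actually works:

```python
# Toy illustration of the proposal: instead of reconstructing whole
# sequences, draw random, size-capped image batches per sector so each
# reconstruction job stays within memory limits. Purely illustrative.
import random

def random_batches(image_ids: list[str], cap: int = 2000, seed: int = 0) -> list[list[str]]:
    """Shuffle a sector's image IDs and cut them into batches of <= cap."""
    rng = random.Random(seed)
    shuffled = image_ids[:]
    rng.shuffle(shuffled)
    return [shuffled[i:i + cap] for i in range(0, len(shuffled), cap)]

# batches = random_batches(sector_image_ids, cap=2000)
# Each batch would then be reconstructed independently.
```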

I already posted this in the old “Current processing delay” thread, but that one was (incorrectly) marked as “solved”, so maybe it should be mentioned here too:

There are problems again, this time only with API requests: I no longer see new data for 2025-08-28 and later.
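
For reproduction, this is roughly the kind of request that returns nothing for recent dates. A sketch assuming the v4 `images` search endpoint with `bbox` and `start_captured_at` parameters as in the public docs (verify the parameter names there; the bbox and token variable are illustrative):

```python
# Sketch: query recently captured images in a small bbox to check
# whether anything after 2025-08-28 is returned. Endpoint and parameter
# names follow the public v4 Graph API docs; treat them as assumptions.
import os

import requests

resp = requests.get(
    "https://graph.mapillary.com/images",
    params={
        "bbox": "13.35,52.50,13.45,52.55",  # example area (Berlin)
        "start_captured_at": "2025-08-28T00:00:00Z",
        "fields": "id,captured_at",
        "limit": 10,
    },
    headers={"Authorization": f"OAuth {os.environ['MLY_TOKEN']}"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("data", []))  # an empty list would match the report
```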