Duplicates shown on my profile when reloading - Mapillary Uploader 4.5.1

When checking my profile I can see duplicates after I reload. Is that a problem? I use Mapillary Uploader 4.5.1.

Example of duplicates shown in the web interface

Thanks for letting us know! We’ll look into our duplicate detection. Can you walk us through the steps you took to arrive at this situation (how many sessions, did you use editing, did you cancel an upload) so we can better understand how to reproduce the issue?

See my issue, where you have my log files.

I had about 50 GB of data, and I think it gave errors 2-3 times, and then I used the restart…

Thanks! Unfortunately the logs don’t go further than the last upload session due to the number of events logged. So, you tried uploading the same folder/files a few times by retrying from the history? Do you remember which errors you received on the unsuccessful attempts?

Sorry, no.

I will track that next time.

My feeling is that the problem is not the uploaded pictures; they don't have duplicates.

Instead, my guess is that you don't purge the list of completed uploads that is shown in the web interface.

Question: is there a parameter file where I can change the size of what is logged… or change the debug level so that fewer errors are logged?

I recently ran into similar problems: several duplicate sequences after uploading.
It looked like it was related to uploads running into an error and then using the retry function. The retry seemed not only to repeat the upload of the last interrupted sequence, but at some point started over again with the first sequence. The only difference from my previous uploads was that I added folders with hundreds of images one by one, with “Automatic upload” activated. The uploader is version 4.5.1 on Mac.


I use an Insta360, which creates folders with hundreds of photos that I drag into the uploader.

Then I used retry.

@Hol_Ger Can you please share your logs from the uploader so we can investigate? Thanks!

Thanks for the offer, but the logs have already rotated and start after the days when I encountered the effect. Since then I have avoided adding folders one by one while the uploader is already preparing, because it made a really big mess on the server.


I am having this issue. I dropped a directory into the Mapillary Uploader, it processed 116 files, and I started uploading. It errored with “Upload Failed”, and I clicked “Retry” (on more than one occasion) but still haven't had a successful run. I came to the forum to ask about switching to the command-line tools and whether they would recognize what had been uploaded by the desktop app, saw this thread, decided to check for duplicates, and found that there are duplicates.

My username is “danbjoseph” and I was uploading to “palangmerahindonesia”, but there were several other users uploading to the same organization who I know were restarting failed uploads in the Desktop Uploader.

One example from my uploads is this same photo appearing twice with different keys:
https://www.mapillary.com/app/?pKey=529781789844018
https://www.mapillary.com/app/?pKey=27417772391203015
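To double-check such a pair, one option is to compare their capture times via the Mapillary Graph API. The snippet below is only a sketch: it assumes the v4 image endpoint with the captured_at and sequence fields, and a client access token stored in the MLY_ACCESS_TOKEN environment variable.

import os
import requests

# Hedged sketch: compare two image keys via the Mapillary Graph API v4.
# Assumes the /{image_id} endpoint with the captured_at and sequence fields,
# and a valid client access token in MLY_ACCESS_TOKEN.
ACCESS_TOKEN = os.environ["MLY_ACCESS_TOKEN"]
DUPLICATE_KEYS = ["529781789844018", "27417772391203015"]

for key in DUPLICATE_KEYS:
    resp = requests.get(
        f"https://graph.mapillary.com/{key}",
        params={"access_token": ACCESS_TOKEN, "fields": "id,captured_at,sequence"},
        timeout=30,
    )
    resp.raise_for_status()
    info = resp.json()
    print(key, info.get("captured_at"), info.get("sequence"))

# If both keys report the same captured_at, they are almost certainly
# the same photo uploaded twice.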

@nikola I have logs I can share, let me know where to send them if you’d like to take a look (as I can’t attach them here - they’re not an authorized file extension).

Yes, logs would be very helpful. Please send them to support@mapillary.zendesk.com.


During my last upload I ran into this problem again. I uploaded overnight. This morning the Uploader had uploaded only 4 of 39 sequences due to some error. I selected “Retry upload”. At first it looked OK, then it suddenly started to upload sequence_idx 0, 'md5sum': 'd21d44509db082f86ed236f037904a4b' again. I dug into the logs; here you can see it:

Before I retried, the upload had stopped with “There appear to be 1 leaked semaphore objects to clean up at shutdown”. See here:

2024-11-08 22:17:42,606 - DEBUG   - The next offset will be: 989855744
2024-11-08 22:17:42,606 - DEBUG   - Sending upload_progress via IPC: {'file_type': 'image', 'sequence_idx': 4, 'total_sequence_count': 39, 'sequence_image_count': 34, 'sequence_uuid': '4', 'entity_size': 2128745744, 'md5sum': '96a16bb5a8899ca18dc5b0c4118f8236', 'upload_start_time': 1731100410.7024598, 'upload_total_time': 0, 'offset': 989855744, 'retries': 0, 'upload_last_restart_time': 1731100410.864608, 'upload_first_offset': 0, 'chunk_size': 16777216}
[2024-11-08 22:17:42.612] [debug] [mapillary:tools] 2024-11-08 22:17:42,612 - DEBUG   - POST https://rupload.facebook.com/mapillary_public_uploads/mly_tools_96a16bb5a8899ca18dc5b0c4118f8236.zip HEADERS {"Offset": "989855744", "X-Entity-Length": "2128745744", "X-Entity-Name": "mly_tools_96a16bb5a8899ca18dc5b0c4118f8236.zip", "X-Entity-Type": "application/zip"}
[2024-11-08 22:17:42.916] [debug] [mapillary:tools] multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
[2024-11-08 22:17:42.928] [debug] [mapillary:tools] process exit code null and signal SIGPIPE
[2024-11-08 22:17:42.933] [error] [store:modules:uploadSession] exit with code null and signal SIGPIPE
[2024-11-08 22:17:42.963] [info] [power-saver-blocker] stop: 0
[2024-11-08 22:17:42.963] [info] [services:observer-manager] Stopping 8ec6fc20-9e12-11ef-9886-85bca27b185a
[2024-11-08 22:17:42.978] [error] [vue] { code: null, message: 'exit with code null and signal SIGPIPE' }

I then started a fresh upload with “Skip uploaded files” switched on, but sequence_idx 0-3 were uploaded anyway:

35 sequences out of 39 later - Boom!
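One way to confirm from the logs that a sequence was actually re-sent, rather than just re-listed, is to watch the offsets in the “Sending upload_progress via IPC” lines quoted above: if the offset reported for the same md5sum ever drops, that sequence's upload was restarted from an earlier position. The following is only a sketch that assumes that exact line format; the log file path is passed as the first argument.

import ast
import re
import sys

# Hedged sketch: detect restarted sequence uploads from the
# "Sending upload_progress via IPC" lines shown above.
PROGRESS_RE = re.compile(r"Sending upload_progress via IPC: (\{.*\})")

last_offset = {}  # md5sum -> last offset seen
for line_no, line in enumerate(open(sys.argv[1], errors="replace"), 1):
    m = PROGRESS_RE.search(line)
    if not m:
        continue
    progress = ast.literal_eval(m.group(1))  # the payload is a Python dict literal
    md5, offset = progress["md5sum"], progress["offset"]
    if md5 in last_offset and offset < last_offset[md5]:
        # The offset went backwards: this sequence was re-uploaded.
        print(f"line {line_no}: sequence {progress['sequence_idx']} "
              f"({md5}) restarted at offset {offset}")
    last_offset[md5] = offset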

BTW: You should really not mix different log formats in a log file. It makes it hard to use log analyzers:

[2024-11-08 14:28:32.234] [debug] [mapillary:tools] 2024-11-08 14:28:32,232 - DEBUG   - HTTP response 206: b'{"debug_info":{"retriable":true,"type":"PartialRequestError","message":"Partial request (did not match length of file)"}}'
2024-11-08 14:28:32,232 - DEBUG   - The next offset will be: 1593835520
2024-11-08 14:28:32,232 - DEBUG   - Sending upload_progress via IPC: {'file_type': 'image', 'sequence_idx': 1, 'total_sequence_count': 8, 'sequence_image_count': 31, 'sequence_uuid': '1', 'entity_size': 2108923895, 'md5sum': '6b1a0ffdfd64fdf46251ae3c8921e205', 'upload_start_time': 1731072019.616509, 'upload_total_time': 0, 'offset': 1593835520, 'retries': 0, 'upload_last_restart_time': 1731072019.944736, 'upload_first_offset': 0, 'chunk_size': 16777216}
[2024-11-08 22:17:42.933] [error] [store:modules:uploadSession] exit with code null and signal SIGPIPE

DEBUG vs. [debug]
[2024-11-08 14:28:32.234] vs. 2024-11-08 14:28:32,232
…32.234 vs. …32,232
and so on.
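Until that changes, an analyzer has to accept both line shapes. Here is a minimal sketch that normalizes the two formats quoted above into a common (timestamp, level, message) tuple; it assumes only those two shapes and ignores anything else.

import re
from datetime import datetime

# Hedged sketch: parse both log line formats seen in the excerpts above:
#   Electron-style:      [YYYY-MM-DD HH:MM:SS.mmm] [level] [scope] message
#   Python-logging-style: YYYY-MM-DD HH:MM:SS,mmm - LEVEL - message
ELECTRON_RE = re.compile(
    r"^\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3})\] \[(\w+)\] \[([^\]]+)\] (.*)$")
PYLOG_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) - (\w+)\s*- (.*)$")

def parse(line: str):
    """Return (timestamp, level, message), or None for unrecognized lines."""
    m = ELECTRON_RE.match(line)
    if m:
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
        return ts, m.group(2).upper(), m.group(4)
    m = PYLOG_RE.match(line)
    if m:
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
        return ts, m.group(2).upper(), m.group(3)
    return None

Note that Electron-style lines often embed a whole Python-logging line as their message, so a real analyzer might want to run the parser a second time on the message part.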


Looks like the same problems I had. Yesterday the upload worked for me without a restart and with no problems…

Go to the last uploaded picture on Mapillary, open “Image details”, and click “Advanced”. Maybe the capture time can help you:

Duplicate detection is fixed in the new 4.6.0 update: Mapillary Desktop Uploader 4.6.0 is out now. Thanks for your help with debugging this issue!


It is working fine again. Thanks to you and the Mapillary team.
