Mapillary Tools 0.14 is released

Got a network-unrelated error on 0.14.2:

04:40:32.971 - DEBUG   - UPLOAD_PROGRESS: {"sequence_idx": 0, "total_sequence_count": 1, "sequence_image_count": 15584, "sequence_uuid": "0", "file_type": "image", "sequence_md5sum": "7239ab5e7db0f3443f5c4f65aa653a11", "entity_size": 386998, "upload_start_time": 1756158525.2105544, "upload_total_time": 0, "upload_last_restart_time": 1756158525.2105544, "upload_first_offset": 0, "import_path": "[REDACTED]/00031447.JPG", "chunk_size": 386834, "retries": 0, "begin_offset": 0, "offset": 386998}
04:40:32.988 - DEBUG   - UPLOAD_FAILED: {"sequence_idx": 0, "total_sequence_count": 1, "sequence_image_count": 15584, "sequence_uuid": "0", "file_type": "image", "sequence_md5sum": "7239ab5e7db0f3443f5c4f65aa653a11", "entity_size": 8826274719, "upload_start_time": 1756158525.2105544, "upload_total_time": 0, "upload_last_restart_time": 1756158525.2105544, "upload_first_offset": 0}
04:40:33.697 - INFO    - ==> Upload summary
04:40:33.697 - INFO    - Nothing uploaded. Bye.
Traceback (most recent call last):
  File "/usr/bin/mapillary_tools", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/commands/__main__.py", line 156, in main
    args.func(argvars)
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/commands/upload.py", line 82, in run
    upload(
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/upload.py", line 121, in upload
    raise ex
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/upload.py", line 112, in upload
    upload_error = _continue_or_fail(result.error)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/upload.py", line 631, in _continue_or_fail
    raise ex
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 560, in upload_images
    cluster_id = self._upload_sequence_and_finish(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 586, in _upload_sequence_and_finish
    raise ex
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 581, in _upload_sequence_and_finish
    image_file_handles = self._upload_images_parallel(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 657, in _upload_images_parallel
    indexed_image_file_handles.extend(future.result())
                                      ^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 691, in _upload_images_from_queue
    single_image_uploader = SingleImageUploader(
                            ^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 742, in __init__
    self.cache = self._maybe_create_persistent_cache_instance(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/uploader.py", line 830, in _maybe_create_persistent_cache_instance
    cache.clear_expired()
  File "/usr/lib/python3.12/dist-packages/mapillary_tools/history.py", line 146, in clear_expired
    with dbm.open(self._file, flag="c") as db:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/dbm/__init__.py", line 95, in open
    return mod.open(file, flag, mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
_gdbm.error: [Errno 11] Resource temporarily unavailable: '/tmp/mapillary_tools/upload_cache/py_3_12_3_0.14.2/MLY_5675152195860640_6b02c72e6e3c801e5603ab0495623282/110661107832227/cached_file_handles'

Looks like a race condition when accessing cached_file_handles.
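For context on what seems to go wrong here: GDBM takes an exclusive lock on the database file when it is opened for writing, so while one thread still holds it open, a second `dbm.open(..., flag="c")` on the same file fails with exactly this `[Errno 11]`. A minimal sketch of one way to avoid that, serializing all access behind a process-wide lock (the lock, function name, and arguments are illustrative, not mapillary_tools’ actual code):

```python
import dbm
import threading

# One lock shared by every thread that touches the cache database.
# GDBM's open() takes an exclusive file lock, so a second concurrent
# open fails with [Errno 11] Resource temporarily unavailable.
_db_lock = threading.Lock()

def clear_expired(path: str, expired_keys: list[bytes]) -> None:
    # Holding the lock guarantees only one thread has the file open
    # at a time, so the open cannot race another worker's open.
    with _db_lock:
        with dbm.open(path, flag="c") as db:
            for key in expired_keys:
                if key in db:
                    del db[key]
```

With every reader and writer funneled through `_db_lock`, a second opener simply waits for the first instead of racing it and failing.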

Over the last 2 weeks I have had multiple errors:

These are on 0.14.0 and 0.14.1; I will now try 0.14.2 (Windows 64-bit version).

  1. Too Many Requests: I only have these on my home connection (1 Gbit fiber); at the holiday home I didn’t have this error, but the connection there was only 10 Mbit. Before my holiday (2.5 weeks ago) I didn’t have any of these errors.

This is a fatal error; the process is stopped, which is very annoying.

  2. Precondition Failed: not that important, the process will try again.

  3. Database issues: only 1 Mapillary process is running, so I don’t know where the error is coming from. Also annoying because it stops the process.

I have had this 429 Too Many Requests error a couple of times too, especially on sequences with over 10k images. 0.14.2’s keep-alive connections must have solved it.

I have had this too. Probably caused by more concurrent upload workers than the server or network load-balancing software can keep up with. Must have been solved by the keep-alive connections in 0.14.2.

Yep, it is a race condition, usually happening when the last image upload worker is still holding the database while the main thread wants to open it to read data for the upload history.

The first 2 errors have not happened again on 0.14.2 (for now…). The database error has already occurred 3 times since I started using 0.14.2.

Hi tao,

I can’t really reproduce the errors, since they hit massively when I was using Wi-Fi at a vacation park 2 weeks ago. You can see in my profile that around 8th August there are many duplicate sequences. That was with an unstable connection and 0.14b1 with the default number of workers (very many).

I am home now, and here the connection is slower, but stable.

Generally, the problems here are few. I upgraded to 0.14.2 in the meantime and, as said, the connection is more reliable here.

But: the last two nights an error hit me again, the same error both times, once with 0.14.1 and now with 0.14.2.

Here is the output:

ted@ted-swiftsf314511:~$ export MAPILLARY_TOOLS_MAX_IMAGE_UPLOAD_WORKERS=2
ted@ted-swiftsf314511:~$ mapillary_tools/bin/mapillary_tools upload "Aayi/001_0189/"
00:39:12.779 - INFO - Verifying profile "teddy73"…
00:39:13.227 - INFO - Uploading to profile "teddy73": teddy73 (ID: 103080845264750)
00:39:13.228 - INFO - ==> Uploading…
Uploading IMAGE (1/13): 100%|██████████████████████████████| 2.90G/2.90G [39:58<00:00, 1.30MB/s]
Uploading IMAGE (2/13): 100%|██████████████████████████████| 3.21G/3.21G [44:54<00:00, 1.28MB/s]
Uploading IMAGE (3/13): 100%|██████████████████████████████| 3.07G/3.07G [42:58<00:00, 1.28MB/s]
Uploading IMAGE (4/13): 100%|█████████████████████████████▋| 2.71G/2.71G [38:49<00:04, 1.25MB/s]
03:26:23.590 - INFO - ==> Upload summary
03:26:23.591 - INFO - 3 sequences uploaded
03:26:23.591 - INFO - 9.8 GB read in total
03:26:23.591 - INFO - 9.8 GB uploaded
03:26:23.591 - INFO - 7672.397 seconds upload time
03:26:23.591 - ERROR - HTTPError: GET https://rupload.facebook.com/mapillary_public_uploads/mly_tools_694b35cda4bd10fb9789fc2c7e51952f.jpg => 429 Too Many Requests: {"backoff": 10000, "debug_info": {"retriable": false, "type": "RequestRateLimitedError", "message": "Request rate limit has been exceeded"}}
ted@ted-swiftsf314511:~$ mapillary_tools/bin/mapillary_tools upload "Aayi/001_0189/"
05:51:11.097 - INFO - Verifying profile "teddy73"…
05:51:11.599 - INFO - Uploading to profile "teddy73": teddy73 (ID: 103080845264750)
05:51:11.600 - INFO - ==> Uploading…
05:51:18.722 - INFO - UploadedAlready: Skipping sequence_0, already uploaded 4 hours ago (2025-08-27 01:19:17)
05:51:24.788 - INFO - UploadedAlready: Skipping sequence_1, already uploaded 3 hours ago (2025-08-27 02:04:20)
05:51:30.381 - INFO - UploadedAlready: Skipping sequence_2, already uploaded 3 hours ago (2025-08-27 02:47:26)
Uploading IMAGE (4/13): 100%|██████████████████████████████| 2.71G/2.71G [00:10<00:00, 278MB/s]
Uploading IMAGE (5/13): 100%|██████████████████████████████| 2.83G/2.83G [39:42<00:00, 1.28MB/s]
Uploading IMAGE (6/13): 100%|██████████████████████████████| 2.89G/2.89G [40:31<00:00, 1.27MB/s]
Uploading IMAGE (7/13):  51%|███████████████▍

As you can see, the upload stopped somewhere at sequence 4 and then nothing more happened. Just restarting the upload makes it continue from where it stopped. But why? The error message was the same both times: "Request rate limit has been exceeded"

@GITNE @Teddy73 @TheWizard Thanks everyone for all the feedback on 0.14.2. We are aware of the issues and will release a fix in 0.14.3 shortly.

Thanks Teddy73. Could you run the command with --verbose? Maybe we can get more info on why it quietly exited.

Also, sharing the exit status code (echo $?) would be very helpful.

Hey all, MT 0.14.3 is released. It should fix:

  • The "db is locked" issue. Thanks @GITNE for catching the :bug:; it’s indeed a race condition.
  • The "429 Too Many Requests" errors. It’s actually a retriable error, so the fix is just to retry automatically (not sure why the server says it’s non-retriable though).
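In rough pseudocode form, the retry described above can be sketched like this; the exception class, function names, and jitter are illustrative rather than the actual mapillary_tools code, and it assumes the backoff value the server returns (10000 in the log above) is in milliseconds:

```python
import random
import time

class RateLimitedError(Exception):
    """Illustrative stand-in for an HTTP 429 carrying a server backoff hint (ms)."""
    def __init__(self, backoff_ms: int):
        super().__init__(f"429 Too Many Requests (backoff={backoff_ms}ms)")
        self.backoff_ms = backoff_ms

def upload_with_retry(upload, max_retries: int = 5):
    """Call upload(); on a 429, sleep and try again up to max_retries times."""
    for attempt in range(max_retries):
        try:
            return upload()
        except RateLimitedError as ex:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            # Sleep for the server-suggested backoff plus a little jitter,
            # so parallel workers do not all retry in lockstep.
            time.sleep(ex.backoff_ms / 1000 + random.uniform(0, 0.1))
```

The key point is that the 429 is swallowed and retried instead of being raised as a fatal error that stops the whole upload.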

cc @Teddy73 @TheWizard @bob3bob3 @nikola @boris

Thanks @tao

Won’t have any uploads to do for 1-2 weeks, so I will test then.

Hi @tao, thanks for the quick fix! I still have a lot to upload (2 TB), perfect for testing. I’ll keep you informed.

Hi Tao, unfortunately the database-is-locked issue is still there…

Thanks TheWizard. I will take a look soon.

  • Which OS are you running?
  • Is it occasional or happening every time?
  • Meanwhile, can you upload with 1 worker (i.e. --num_upload_workers=1 or the envvar MAPILLARY_TOOLS_MAX_IMAGE_UPLOAD_WORKERS=1) to see if it works?
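As a side note, resolving the worker count from that environment variable can be sketched as follows. MAPILLARY_TOOLS_MAX_IMAGE_UPLOAD_WORKERS is the real variable name from this thread and 8 is the default mentioned later on, but the helper itself is illustrative, not mapillary_tools’ actual code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def max_upload_workers(default: int = 8) -> int:
    """Resolve the upload worker count from the environment, clamped to >= 1,
    falling back to the default on a missing or unparseable value."""
    try:
        n = int(os.environ.get("MAPILLARY_TOOLS_MAX_IMAGE_UPLOAD_WORKERS", default))
    except ValueError:
        return default
    return max(1, n)

# Illustrative use: upload images in parallel with the configured worker count.
def upload_all(images, upload_one):
    with ThreadPoolExecutor(max_workers=max_upload_workers()) as pool:
        return list(pool.map(upload_one, images))
```

Setting the variable to 1 effectively serializes the image uploads, which is why it is a useful experiment for isolating concurrency bugs like the locked database.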

Btw, when we were working on these issues we could not reproduce them, unfortunately. So what I could do was reason hard with my AI buddies and add lots of tests. Apparently we missed some cases on some platforms.

Hi Tao,

I’m running :

Windows 11 Home
Version 24H2
Build 26100.4946

This morning I had it on almost every try (at least 10 times); this afternoon I had no issues at all, and the uploads are now running fine.

I’m not very familiar with the database stuff: is there a performance impact when using 1 worker?

Is the server very busy? The upload runs between 6 and 12 MB/s; normally I get between 20 and 45 MB/s (200/450 Mbit/s, just to be sure we are talking the same language ;)).

Below is another error that occurred 30 minutes ago; it hasn’t had any impact because of the retry. But it’s so strange: 3 weeks ago and before that I never had any issues with uploading. Has anything changed? I was on holiday the last couple of weeks and uploaded a little bit (slow connection); there were also some timeout issues, but that could be the connection.

Edit: Speak of the devil, it had already been running for 57 minutes. I will now test with the one-worker setting.

@tao I didn’t have any issues after changing to the one-worker setting, it’s just so slow now… :face_savoring_food:. I started 3 simultaneous runs, all without issues (except Windows Defender, which removes mapillary_tools every time after it updates…)

So far, I have had little to no issues with 0.14.3 :+1:, except for sporadic occurrences of the 412 Client Error: Precondition Failed and 429 Client Error: Too Many Requests errors, which basically recover immediately.

Upload worker count efficiency depends heavily on the link type and the network infrastructure behind the link at the edge. My cable link works most efficiently with 2 upload workers; fiber works best with 4 to 8. These figures are not a recommendation for other links or defaults; they work best for me and my setup only. The average image file size also impacts total throughput on both link types. I have not tried cellular links yet because they are quite expensive for me to run, but I am going to conduct some limited tests over cellular links too.

Generally speaking, not having to create ZIP files is a huge relief. :relieved_face: mapillary_tools has definitely evolved toward greater stability lately and is becoming more and more of a pleasure to use. All of this is really great, but we are not completely across the finish line yet. Anyhow, I am sure you are well aware of your homework assignment. :wink:

@tao Unfortunately it happened again: 3 processes killed around the same time. Like @GITNE, I also had the 412 and 429 errors a couple of times, but the recovery is almost instant.

Thanks for sharing.

From the timestamps, it looks like this happens when you run multiple mapillary_tools instances at the same time? If so, can you run just one mapillary_tools instance to confirm whether the issue still occurs (with the default num_upload_workers=8)?

Also, can you share your use case for running multiple mapillary_tools instances?

Thanks @GITNE, I’m very happy to hear about your experience with the latest changes. It’s all thanks to your feedback and everyone’s patience that we can improve MT quickly in the right direction!

Last night, I had some new client errors with 2 upload workers:

23:35:08.179 - WARNING - Error uploading sequence_0/00002725.jpg at offset=None since begin_offset=None: HTTPError: 412 Client Error: Precondition Failed for url: https://rupload.facebook.com/mapillary_public_uploads/mly_tools_43702b56439387b95bf7d5708a7008a3.jpg
01:11:31.083 - WARNING - Error uploading sequence_0/00012958.jpg at offset=882160 since begin_offset=0: ReadTimeout: HTTPSConnectionPool(host='rupload.facebook.com', port=443): Read timed out. (read timeout=60)
01:28:25.392 - WARNING - Error uploading sequence_1/00000001.jpg at offset=None since begin_offset=None: ConnectionError: HTTPSConnectionPool(host='rupload.facebook.com', port=443): Max retries exceeded with url: /mapillary_public_uploads/mly_tools_6ea0247181b9078797b4394c0983ccba.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffffb392e900>: Failed to establish a new connection: [Errno 113] No route to host'))
01:28:25.393 - WARNING - Error uploading sequence_1/00000000.jpg at offset=None since begin_offset=None: ConnectionError: HTTPSConnectionPool(host='rupload.facebook.com', port=443): Max retries exceeded with url: /mapillary_public_uploads/mly_tools_f091c9298670434e839884fa6aedb2ce.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xffffb392ff80>: Failed to establish a new connection: [Errno 113] No route to host'))
02:47:27.706 - WARNING - Error uploading sequence_1/00007330.jpg at offset=None since begin_offset=None: ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))

Because these errors happen on the client only and recover immediately on the next retry, it looks like they may also actually be the result of a race condition right before they happen. This may also hint at why @TheWizard has to run multiple mapillary_tools instances with 1 upload worker each. In other words, when only one upload worker is running, the client errors basically cease to occur. Overall, sequence uploads complete fine.