"OSError: [Errno 28] No space left on device" when uploading

mapillary_tools 0.7.4.
Uploading with ~300 MB free on the root filesystem failed with the errors below.
The upload was done from an external disk with hundreds of GB free.
Could it be that the images are copied to the system disk during upload?
If so, could this be fixed so that only the source filesystem is used?

Uploading 992 images with valid mapillary tags (Skipping 7)
Traceback (most recent call last):
  File "/usr/local/bin/mapillary_tools", line 11, in <module>
    load_entry_point('mapillary-tools==0.7.4', 'console_scripts', 'mapillary_tools')()
  File "/Library/Python/3.7/site-packages/mapillary_tools/main.py", line 93, in main
    command.run(args)
  File "/Library/Python/3.7/site-packages/mapillary_tools/commands/process_and_upload.py", line 456, in run
    for k, v in vars_args.items()
  File "/Library/Python/3.7/site-packages/mapillary_tools/upload.py", line 151, in upload
    dry_run=dry_run,
  File "/Library/Python/3.7/site-packages/mapillary_tools/uploader.py", line 248, in upload_sequence_v4
    ziph.write(fullpath, relpath)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1744, in write
    shutil.copyfileobj(src, dest, 1024*8)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/shutil.py", line 82, in copyfileobj
    fdst.write(buf)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/zipfile.py", line 1098, in write
    self._fileobj.write(data)
  File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/tempfile.py", line 481, in func_wrapper
    return func(*args, **kwargs)
OSError: [Errno 28] No space left on device


Free inodes? The logging process would use a lot, but the FS creation would have had to be pretty old… I had the problem when I was processing and uploading from an older ext4(?) USB 500 GB drive.

df -i

Should be good, iused = 2638061, ifree = 4292329218.
At the same time mapillary_tools failed, the OS also alerted that the root filesystem was full.

Maybe you could try a loop that uploads each directory separately and deletes the folders the script created?

Hmm, uploader.py has a compress step that references "tempfile", so I guess that could be using /tmp. Going out on a limb here: the shell variable TMPDIR may be assignable on a per-process basis, e.g. before the mapillary_tools command:

TMPDIR=/media/richlv/photdisk/tmp mapillary_tools …

This is really a shot in the dark! Python itself has a tempfile module that I think handles all of this.
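For what it's worth, Python's tempfile module does consult the TMPDIR environment variable (then TEMP and TMP) when picking its directory. A quick hedged check of where temporary files would land, reusing the example path above:

# print the directory Python's tempfile module would use; TMPDIR is honoured if set
TMPDIR=/media/richlv/photdisk/tmp python3 -c 'import tempfile; print(tempfile.gettempdir())'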

Sorry, not really helpful…

After some troubleshooting, I think I figured it out. On Linux (or at least on Fedora; it may vary by distribution), the temp folder is by default given half the size of RAM. So if you have e.g. 16 GB of RAM, you'll probably have 8 GB for the temp folder. This is unrelated to how much free space you have on your hard drive or on the external storage you're uploading from.

For whatever reason, before finalising a sequence Mapillary Uploader needs to move all of it into the temp folder. So just before you get this error you should see RAM use increase and the free space in /tmp decrease quickly. You can watch this by repeatedly checking the free space of /tmp just before the error appears.

If the sequence you're trying to upload is bigger than the space currently given to /tmp, you'll get this error.
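A minimal way to watch this live, using standard tools (the one-second interval is arbitrary):

# refresh the free-space figures for /tmp every second while the upload runs
watch -n 1 df -h /tmp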

The easiest workaround is probably to temporarily give more space to /tmp, e.g.:

mount -o remount,size=12G,noatime /tmp

to give 12 GB to /tmp.

This should automatically revert to the default value after a restart. For reference, see e.g. this.


Good find!

On Debian 10.x (at least) there is no separate /tmp mountpoint; it's just a directory in the root filesystem. There are, however, some tmpfs filesystems of limited size that may be involved.
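To check which case applies on a given machine, a small sketch (findmnt ships with util-linux):

# show which filesystem actually backs /tmp (tmpfs vs. a plain directory on the root filesystem)
findmnt -T /tmp
# list all tmpfs mounts and their sizes
df -h -t tmpfs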


Thanks, that confirms my suspicion.
mapillary_tools really, really should not be doing this. The filesystem I'm uploading images from should be the only one used during the process.

I'm wondering whether it's the compression step, which could then also explain the problem in the other linked thread (the link now returns "Page not found").

Will solve in "Mapillary_tools does repeated compression" (mapillary/mapillary_tools issue #437 on GitHub).


I found something similar when trying to use a Raspberry Pi as a dedicated 24/7 Mapillary uploader (I take a lot of photos but have a slow upload link).

When the tools were updated and the compress-before-upload step was added, things ground to a crawl for me. The photos sit on a NAS (a bit slow), and the temporary zip file was being written to the local SD card (very slow). I fixed it by using an 8 GB Pi and putting /tmp on tmpfs - each sequence is around 2-3 GB, so it fits nicely in the default half-RAM limit.
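For anyone wanting to reproduce that setup, a hedged sketch of the /etc/fstab entry; the explicit size=4G is my assumption (omit it to keep the half-RAM default):

# mount /tmp as a RAM-backed tmpfs, sized to comfortably fit a 2-3 GB sequence
tmpfs  /tmp  tmpfs  defaults,noatime,size=4G  0  0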

I'd rather the compress stage were part of the prepare step rather than the upload step, but until that gets sorted I personally much prefer the temporary zip file being written to fast RAM-backed /tmp rather than going back and forth to the NAS it came from.

Interesting use case - looks like a customisable temporary directory path would be highly desirable.
Perhaps worth adding that to the GitHub issue.

Hey @Richlv @southglos, mapillary_tools v0.8.0 is pre-released: Release v0.8.0 · mapillary/mapillary_tools · GitHub

It exposes the zip command, and it should be easier to integrate with your workflow. Let me know your feedback.
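A rough sketch of the intended flow (paths are examples; see mapillary_tools zip --help and mapillary_tools upload --help for exact usage):

# zip the processed sequences into a directory on a disk of your choosing
mapillary_tools zip /path/to/images /path/to/zips
# then upload the pre-built archives
mapillary_tools upload /path/to/zips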

Have a great weekend!


Thank you, will try this out :)
What about the automatic process - are the archives preserved between runs (say, if an upload fails or is interrupted)?

Hi @tao, trying mapillary_tools 0.9.0 - TMPDIR is set to an external disk, but during the "Uploading import path" phase the root filesystem still sees a major increase in usage.
Before I look into this further - is TMPDIR supposed to work properly in 0.9.0?
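For anyone reproducing this, a simple way to watch where the temporary data actually goes during that phase (the external path is an example):

# compare free space on the root filesystem and the external TMPDIR target while uploading
watch -n 1 df -h / /media/external/tmp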