I have been looking at my daily (2-5 hour) processing of BlackVue videos from the standpoint of power consumption (I run everything in a motor vehicle).
I see that most of the Python video_process code runs at 100% on only one core (of an i7), while the ffmpeg subprocess uses all 8. The elapsed time for ffmpeg to convert 1 minute of video into 123 frames (2 FPS) is roughly half the length of the video, so my system runs at 100% CPU and full disk I/O for only about half of the total time.
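For reference, the sampling step is roughly equivalent to an ffmpeg invocation like the one below. This is only an illustration, not the exact command mapillary_tools runs; the file and directory names are placeholders.

```bash
# Extract frames at 2 fps from one clip into numbered JPEGs.
# input.mp4 and frames/ are placeholders.
mkdir -p frames
ffmpeg -i input.mp4 -vf fps=2 -qscale:v 2 frames/frame_%06d.jpg
```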
I am looking for a method to better utilise the available CPU when the ffmpeg part is not running. I have considered running two instances (i.e. dividing the mp4 input folder), but I note that ffmpeg consumes 1 GB or more of memory per video, so the machine may start swapping instead! Is there a tools command sequence that might serve me better, or a way for the Python code to be spread over all CPUs?
I would be interested to know if Windows has the same problem.
The Mapillary desktop Uploader is a 32-bit application on a 64-bit computer.
Could that be one of the reasons it is so slow?
I do not put my desktop to sleep any more.
In my case I doubt that's the issue. The problem is that the Python code is only utilizing 1 of the 8 CPUs; I had the same problem with 32-bit on an i5.
I am seriously thinking of running the ffmpeg step as a separate bash script and modifying the tools so they skip the ffmpeg step. If I then start the tools after ffmpeg has been running for a while, I'll get better utilization and draw fewer watt-hours from the battery. A rough sketch of the idea is below.
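Something like this is what I have in mind. The paths, the 10-minute head start, and the assumption that the tools can be told to skip the sampling step are all placeholders, not tested against the real code.

```bash
#!/bin/bash
# Pre-sample frames with ffmpeg in the background, then start the
# (modified) tools once a backlog of frames exists.
# SRC and FRAMES are placeholder paths.
SRC=/data/blackvue/mp4
FRAMES=/data/blackvue/frames

(
  for f in "$SRC"/*.mp4; do
    out="$FRAMES/$(basename "$f" .mp4)"
    mkdir -p "$out"
    nice -n 10 ffmpeg -i "$f" -vf fps=2 -qscale:v 2 "$out/frame_%06d.jpg"
  done
) &

sleep 600   # arbitrary head start so the single-core step never starves
# ...then run the tools here, with the ffmpeg/sampling step disabled...
```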
I ended up running two nice'd instances of mapillary_tools in the background, using separate input and output directories (sketch below). Overall CPU use jumped from about 35% to 100%, and the processing time better than halved. Memory wasn't an issue and no swapping took place. With two jobs running, the read rate of mp4s from a USB2 HDD was still below its maximum. I did, however, have to throttle the CPU to avoid overheating; I think there is some dirt caught in the heatsink fins.
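For anyone wanting to try the same, the setup was essentially the pattern below. OPTS_A and OPTS_B are placeholders for whatever video_process arguments you already use, each pointing at its own half of the split mp4 folder and its own output directory.

```bash
#!/bin/bash
# Two nice'd mapillary_tools instances over a split input folder.
# OPTS_A / OPTS_B are placeholders, not real flag names.
OPTS_A="..."   # options for input/output set A
OPTS_B="..."   # options for input/output set B

nice -n 10 mapillary_tools video_process $OPTS_A > proc_a.log 2>&1 &
nice -n 10 mapillary_tools video_process $OPTS_B > proc_b.log 2>&1 &
wait    # return only when both background jobs have finished
```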