Efficient use of memory for parallel I/O operations in Python

There are two classes of tasks where we might need parallel processing: I/O operations and tasks that actively use the CPU, such as image processing. Python offers several approaches to parallel data processing. Let's consider them in relation to I/O operations.
 
 
Prior to Python 3.5 there were two ways to implement parallel processing of I/O operations. The native method is multithreading; the other option is a library like Gevent, which parallelizes tasks in the form of micro-threads. Python 3.5 added built-in concurrency support with asyncio. I was curious to see how each of them would behave in terms of memory. The results are below.
 
I measured memory consumption with memory_profiler. The code is available on GitHub.
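As a rough illustration, here is a minimal sketch of how such a measurement might be set up with memory_profiler's @profile decorator; the function name and workload are hypothetical placeholders, not the actual benchmark code from the repository.

```python
import requests
from memory_profiler import profile

# Hypothetical workload: the real benchmark URLs live in the GitHub repo.
URLS = ["https://example.com"] * 100

@profile  # prints line-by-line memory usage while fetch_urls runs
def fetch_urls():
    for url in URLS:
        requests.get(url)

if __name__ == "__main__":
    fetch_urls()
```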
 
 

Go!


 

Synchronous processing


 
I implemented a single-threaded version of the script, which serves as the baseline for the other solutions. Memory usage was fairly stable throughout the run, and the obvious drawback was execution time: without any parallelism, the script took about 29 seconds.
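A minimal sketch of what the single-threaded baseline might look like (the URL list and function name are hypothetical; the actual script is on GitHub):

```python
import requests

URLS = ["https://example.com"] * 100  # hypothetical workload

def fetch_sync():
    # Each request blocks until its response arrives, so the total
    # run time is roughly the sum of all network delays.
    for url in URLS:
        requests.get(url)

fetch_sync()
```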
 
 
 
 

ThreadPoolExecutor


 
Multithreading support is implemented in the standard library, and the most convenient API is ThreadPoolExecutor. However, using threads comes with drawbacks, one of which is significant memory consumption. On the other hand, a significant speed-up is the reason we want multithreading in the first place. The test run took about 17 seconds, considerably less than the ~29 seconds of synchronous execution. The difference comes from the speed of the I/O operations, in our case network latency.
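A minimal sketch of the same task with ThreadPoolExecutor (the worker count and URL list are assumptions for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

import requests

URLS = ["https://example.com"] * 100  # hypothetical workload

def fetch(url):
    return requests.get(url)

# Worker threads overlap the waiting on network I/O, which cuts run
# time, but each OS thread carries its own stack, which costs memory.
with ThreadPoolExecutor(max_workers=10) as executor:
    results = list(executor.map(fetch, URLS))
```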
 
 

 
 

Gevent


 
Gevent is an alternative approach to parallelism; it brought coroutines to Python code before version 3.5. Under the hood are lightweight pseudo-threads called greenlets, plus a few threads for internal needs. Total memory consumption is similar to the multithreading approach.
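A minimal sketch of the Gevent variant; note that monkey.patch_all() must run before the other imports so that the blocking calls inside requests become cooperative (the workload is again a hypothetical placeholder):

```python
from gevent import monkey
monkey.patch_all()  # make blocking stdlib I/O yield to other greenlets

import gevent
import requests

URLS = ["https://example.com"] * 100  # hypothetical workload

def fetch(url):
    return requests.get(url)

# spawn() creates a lightweight greenlet instead of an OS thread.
jobs = [gevent.spawn(fetch, url) for url in URLS]
gevent.joinall(jobs)
results = [job.value for job in jobs]
```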
 
 

 
 

Asyncio


 
Since Python 3.5, coroutines are available in the asyncio module, which became part of the standard library. To take advantage of asyncio, I used aiohttp instead of requests. aiohttp is the asynchronous equivalent of requests, with similar functionality and API.
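A minimal sketch of the asyncio/aiohttp variant, written against the Python 3.5-era event-loop API (the URL list is a hypothetical placeholder):

```python
import asyncio

import aiohttp

URLS = ["https://example.com"] * 100  # hypothetical workload

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.read()

async def main():
    # One event loop schedules all coroutines on a single thread,
    # so there is no per-thread stack overhead.
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in URLS]
        return await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```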
 
 
The availability of suitable libraries is the main question to settle before starting development with asyncio, although the most popular I/O libraries (requests, redis, psycopg2) have asynchronous analogs.
 
 

 
 
With asyncio, memory consumption is much lower: it is similar to that of the single-threaded version of the script without parallelism.
 
 

Is it time to start using asyncio?


 
Parallelism is a very effective way to speed up applications with a large number of I/O operations. In my case it gives a ~40% performance gain over sequential processing (about 17 seconds instead of about 29). The differences in speed between the considered approaches to parallelism are insignificant.
 
 
ThreadPoolExecutor and Gevent are powerful tools that can speed up existing applications. Their main advantage is that in most cases they require only minor changes to the code base. If we talk about overall performance, the best tool is asyncio: its memory consumption is significantly lower than that of the other approaches to parallelism, without affecting overall speed. The price for these advantages is the need for specialized libraries tailored to asyncio.