aiodownload: Asynchronous Requests and Downloads Without Thinking About It

Basic Usage

>>> import aiodownload
>>> urls = ['https://example.com/links/{}'.format(i) for i in range(0, 5)]
>>> bundles = aiodownload.swarm(urls)
>>> import pprint
>>> pprint.pprint(dict((b.url, b.file_path) for b in bundles))
{'https://example.com/links/0': 'C:\\links\\0',
 'https://example.com/links/1': 'C:\\links\\1',
 'https://example.com/links/2': 'C:\\links\\2',
 'https://example.com/links/3': 'C:\\links\\3',
 'https://example.com/links/4': 'C:\\links\\4'}

(example.com is used here as a placeholder URL.)
Default Request Strategy (Lenient)
  • two concurrent requests with 0.25 s delay between requests
  • automatically retry unsuccessful requests up to 4 more times with 60 s between attempts
  • response statuses greater than 400 are considered unsuccessful requests
  • 404s are not retried (if they tell us it’s not found, let’s believe them)
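The retry rules above can be sketched as a small standalone policy function. This is an illustration of the described behavior, not the library's internal code; the constant names are assumptions:

```python
# Illustrative sketch of the default (lenient) retry rules described above.
# Constant names are assumptions for illustration, not aiodownload internals.
MAX_RETRIES = 4        # retry unsuccessful requests up to 4 more times
RETRY_DELAY = 60.0     # seconds to wait between retry attempts
CONCURRENCY = 2        # at most two requests in flight
REQUEST_DELAY = 0.25   # seconds between request starts

def should_retry(status: int, attempt: int) -> bool:
    """Return True if a request with this response status should be retried."""
    if status <= 400:
        return False   # per the rules above, only statuses above 400 count as failures
    if status == 404:
        return False   # 404s are believed, never retried
    return attempt <= MAX_RETRIES  # other failures: up to 4 more attempts
```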
Default Download Strategy
  • reads and writes in 65536-byte chunks
  • uses the current working directory as the home path for writing files
  • the relative path and filename are derived from the URL's path segments (the last segment becomes the filename)
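The URL-to-file-path mapping described in the last bullet might look like the following sketch, built on `urllib.parse`. It is an assumed reconstruction of the behavior, not aiodownload's exact transformation:

```python
import os
from urllib.parse import urlsplit

def url_to_file_path(url, home=None):
    """Map a URL to a local file path: the URL's path segments become
    directories, the last segment becomes the filename, all rooted at the
    home path (the current working directory by default).

    Illustrative sketch only, not aiodownload's exact code.
    """
    home = home or os.getcwd()
    segments = [s for s in urlsplit(url).path.split('/') if s]
    return os.path.join(home, *segments)
```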
Customizable Strategies
  • Want aiodownload to behave differently? Configure the underlying classes to create your own strategies.
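One way to picture a "strategy" is as a plain object whose attributes the downloader consults. The classes below are a hypothetical sketch of that pattern, with made-up names and defaults taken from the bullets above; consult the library's source for its actual classes and parameters:

```python
from dataclasses import dataclass

@dataclass
class RequestPolicy:
    """Hypothetical stand-in for a configurable request strategy."""
    concurrency: int = 2       # defaults mirror the lenient strategy above
    request_delay: float = 0.25
    max_retries: int = 4
    retry_delay: float = 60.0

@dataclass
class DownloadPolicy:
    """Hypothetical stand-in for a configurable download strategy."""
    chunk_size: int = 65536
    home: str = '.'

# A stricter custom strategy: more parallelism, fewer retries.
aggressive = RequestPolicy(concurrency=8, request_delay=0.0, max_retries=1)
```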


Installation

$ pip install aiodownload


See the example package for more basic usage examples and different ways to configure the base objects.


This library leverages aiohttp.ClientSession to make requests and manage the HTTP session. aiodownload is a lean wrapper designed to abstract away the asynchronous programming details and cut down on repetitive request-handling code.
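For comparison, downloading even a single file with aiohttp directly means writing the session, streaming, and file-writing plumbing yourself. A minimal sketch of that loop (the URL is a placeholder):

```python
import asyncio
import aiohttp

async def fetch_to_file(url: str, path: str, chunk_size: int = 65536) -> None:
    """Stream one URL to a local file using a raw aiohttp.ClientSession."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            response.raise_for_status()
            with open(path, 'wb') as f:
                # Read the body in chunks rather than buffering it all in memory.
                async for chunk in response.content.iter_chunked(chunk_size):
                    f.write(chunk)

# asyncio.run(fetch_to_file('https://example.com/file.bin', 'file.bin'))
```

Multiply this by concurrency limits, inter-request delays, and retry handling, and the appeal of a wrapper that handles it for you becomes clear.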

The public function API for this project was adapted from simple-requests, which utilizes gevent and requests. The motivation for reimplementation was to use the native event loop introduced in Python 3; no monkey patching required.