Question
Requests is a really nice library. I'd like to use it for downloading big files (>1 GB). The problem is that it's not possible to keep the whole file in memory, so I need to read it in chunks. And that is the problem with the following code:
```python
import requests

def DownloadFile(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url)
    f = open(local_filename, 'wb')
    for chunk in r.iter_content(chunk_size=512 * 1024):
        if chunk:  # filter out keep-alive new chunks
            f.write(chunk)
    f.close()
    return
```
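To see why this runs out of memory: without stream=True, requests downloads the entire response body as soon as get() returns, so iter_content() only re-chunks bytes that are already in RAM. A minimal sketch illustrating this, with a placeholder URL:

```python
import requests

r = requests.get('https://example.com/big-file.iso')  # placeholder URL
# By the time get() returns, the ENTIRE body has been downloaded and buffered.
print(len(r.content))  # the full file is already held in memory
# iter_content() on this response would merely slice that in-memory buffer into chunks.
```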
How-To
It's much easier if you use Response.raw and shutil.copyfileobj():
```python
import requests
import shutil

def download_file(url):
    local_filename = url.split('/')[-1]
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        shutil.copyfileobj(r.raw, f)
    return local_filename
```
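One caveat worth noting as an addition here: Response.raw is the raw byte stream from the socket, so a response served with Content-Encoding such as gzip or deflate gets written to disk still compressed. A minimal sketch of a common workaround, wrapping r.raw.read so it decodes on the fly (the function name is my own, not part of the original answer):

```python
import functools
import shutil
import requests

def download_file_decoded(url):
    # Hypothetical variant of download_file above: decodes the transfer
    # encoding (gzip/deflate) while streaming, assuming the server compresses.
    local_filename = url.split('/')[-1]
    r = requests.get(url, stream=True)
    # Make every raw.read() call decode the Content-Encoding
    r.raw.read = functools.partial(r.raw.read, decode_content=True)
    with open(local_filename, 'wb') as f:
        shutil.copyfileobj(r.raw, f)
    return local_filename
```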
Alternatively, if you want explicit control over the chunking, iterate over the response with Response.iter_content():

```python
def download_file(url):
    local_filename = url.split('/')[-1]
    # NOTE the stream=True parameter
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                # f.flush() commented out by recommendation from J.F. Sebastian
    return local_filename
```
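For completeness, a sketch of how I'd call it in practice, plus a variant that uses requests.get as a context manager and raise_for_status() so HTTP errors and connection cleanup are handled; the function name, chunk size, and URL are placeholders of mine, not part of the original answer:

```python
import requests

def download_file_checked(url, chunk_size=8192):
    # Hypothetical variant: the with-block releases the connection, and
    # raise_for_status() turns HTTP errors (404, 500, ...) into exceptions.
    local_filename = url.split('/')[-1]
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=chunk_size):
                f.write(chunk)
    return local_filename

if __name__ == '__main__':
    # Placeholder URL; substitute the large file you actually want to fetch
    print(download_file_checked('https://example.com/big-file.iso'))
```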