I'm still new to Python and am trying to practice by building some small scripts. This one is supposed to find the hot submissions in an image subreddit and then download those images into a redditpics directory, using the basename of each submission URL as the filename. I'm using Python 3.7. First I tried this:
import praw, requests, os, bs4

reddit = praw.Reddit(client_id='xxxx',
                     client_secret='xxxx',
                     user_agent='picture downloader',
                     username='xxxx',
                     password='xxxx')
print(reddit.read_only)

os.makedirs('redditpics', exist_ok=True)
for submission in reddit.subreddit('earthporn').hot(limit=50):
    url = submission.url
    print(url)
    imageFile = open(os.path.join('redditpics', os.path.basename(url)), 'wb')
print('Done')
The downloaded image files end up as zero bytes. Then I added the following, based on Automate the Boring Stuff:
imageFile = open(os.path.join('redditpics', os.path.basename(url)), 'wb')
for chunk in url.iter_content(100000):
    print("saving " + imageFile)
    imageFile.write(chunk)
imageFile.close()
print('Done.')
But that gives me the following error: AttributeError: 'str' object has no attribute 'iter_content'
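Is the problem simply that submission.url is just a string, while iter_content is a method of the requests.Response object, so I need to fetch the URL with requests.get() first and iterate over that response? Below is my untested guess at how the loop should look (reusing the names from the snippets above; I also print the filename instead of the file object, since concatenating a file object to a string would raise a TypeError):

import os
import praw
import requests

reddit = praw.Reddit(client_id='xxxx', client_secret='xxxx',
                     user_agent='picture downloader',
                     username='xxxx', password='xxxx')

os.makedirs('redditpics', exist_ok=True)
for submission in reddit.subreddit('earthporn').hot(limit=50):
    url = submission.url
    res = requests.get(url)      # fetch the image; iter_content is a method of this Response
    res.raise_for_status()       # stop if the request failed
    imageFile = open(os.path.join('redditpics', os.path.basename(url)), 'wb')
    for chunk in res.iter_content(100000):
        imageFile.write(chunk)   # write the image to disk in 100 KB chunks
    imageFile.close()
    print('saving ' + os.path.basename(url))
print('Done.')

Is this the right way to do it, or am I misunderstanding something about how requests works?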