I am trying to scrape Twitter for specific keywords, which I have put into a list:
keywords = ["art", "railway", "neck"]
I want to search for these keywords at a specific location, which I have written as:
PLACE_LAT = 29.7604
PLACE_LON = -95.3698
PLACE_RAD = 200
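For reference, the Search API expects `q` as one string and `geocode` as a single "lat,lon,radius" string, so these values have to be combined before they go into the request. A minimal sketch of that step (joining with `' OR '` matches tweets containing any of the keywords):

```python
keywords = ["art", "railway", "neck"]
PLACE_LAT = 29.7604
PLACE_LON = -95.3698
PLACE_RAD = 200

# 'q' must be a single string; ' OR ' matches any of the keywords
query = ' OR '.join(keywords)

# 'geocode' is "latitude,longitude,radius" with a unit suffix (mi or km)
geocode = '{},{},{}mi'.format(PLACE_LAT, PLACE_LON, PLACE_RAD)

print(query)    # art OR railway OR neck
print(geocode)  # 29.7604,-95.3698,200mi
```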
I am trying to use a function to collect at least 200 tweets, although I know each query can return at most 100. My current code is below; however, it does not work.
def retrieve_tweets(api, keyword, batch_count, total_count, latitude, longitude, radius):
    """
    collects tweets using the Twitter search API

    api:         Twitter API instance
    keyword:     search keyword
    batch_count: maximum number of tweets to collect per each request
    total_count: maximum number of tweets in total
    """
    # the collection of tweets to be returned
    tweets_unfiltered = []
    tweets = []

    # the number of tweets within a single query
    batch_count = str(batch_count)

    '''
    You are required to insert your own code where instructed to perform the first query to Twitter API.
    Hint: revise the practical session on Twitter API on how to perform query to Twitter API.
    '''
    # per the first query, to obtain max_id_str which will be used later to query sub
    resp = api.request('search/tweets', {'q': keywords,
                                         'count': '100',
                                         'lang': 'en',
                                         'result_type': 'recent',
                                         'geocode': '{PLACE_LAT},{PLACE_LONG},{PLACE_RAD}mi'.format(latitude, longitude, radius)})

    # store the tweets in a list
    # check first if there was an error
    if ('errors' in resp.json()):
        errors = resp.json()['errors']
        if (errors[0]['code'] == 88):
            print('Too many attempts to load tweets.')
            print('You need to wait for a few minutes before accessing Twitter API again.')

    if ('statuses' in resp.json()):
        tweets_unfiltered += resp.json()['statuses']
        tweets = [tweet for tweet in tweets_unfiltered if ((tweet['retweeted'] != True) and ('RT @' not in tweet['text']))]

        # find the max_id_str for the next batch
        ids = [tweet['id'] for tweet in tweets_unfiltered]
        max_id_str = str(min(ids))

    # loop until as many tweets as total_count is collected
    number_of_tweets = len(tweets)
    while number_of_tweets < total_count:
        resp = api.request('search/tweets', {'q': keywords,
                                             'count': '50',
                                             'lang': 'en',
                                             'result_type': 'recent',
                                             'max_id': max_id_str,
                                             'geocode': '{PLACE_LAT},{PLACE_LONG},{PLACE_RAD}mi'.format(latitude, longitude, radius)})

        if ('statuses' in resp.json()):
            tweets_unfiltered += resp.json()['statuses']
            tweets = [tweet for tweet in tweets_unfiltered if ((tweet['retweeted'] != True) and ('RT @' not in tweet['text']))]
            ids = [tweet['id'] for tweet in tweets_unfiltered]
            max_id_str = str(min(ids))
            number_of_tweets = len(tweets)

    print("{} tweets are collected for keyword {}. Last tweet created at {}".format(number_of_tweets,
                                                                                    keyword,
                                                                                    tweets[number_of_tweets - 1]['created_at']))
    return tweets
I am only supposed to write code where it says #INSERT YOUR CODE. What changes do I need to make to get this working?
def retrieve_tweets(api, keyword, batch_count, total_count, latitude, longitude, radius):
    """
    collects tweets using the Twitter search API

    api:         Twitter API instance
    keyword:     search keyword
    batch_count: maximum number of tweets to collect per each request
    total_count: maximum number of tweets in total
    """
    # the collection of tweets to be returned
    tweets_unfiltered = []
    tweets = []

    # the number of tweets within a single query
    batch_count = str(batch_count)

    '''
    You are required to insert your own code where instructed to perform the first query to Twitter API.
    Hint: revise the practical session on Twitter API on how to perform query to Twitter API.
    '''
    # per the first query, to obtain max_id_str which will be used later to query sub
    resp = api.request('search/tweets', {'q': #INSERT YOUR CODE
                                         'count': #INSERT YOUR CODE
                                         'lang': 'en',
                                         'result_type': 'recent',
                                         'geocode': '{},{},{}mi'.format(latitude, longitude, radius)})

    # store the tweets in a list
    # check first if there was an error
    if ('errors' in resp.json()):
        errors = resp.json()['errors']
        if (errors[0]['code'] == 88):
            print('Too many attempts to load tweets.')
            print('You need to wait for a few minutes before accessing Twitter API again.')

    if ('statuses' in resp.json()):
        tweets_unfiltered += resp.json()['statuses']
        tweets = [tweet for tweet in tweets_unfiltered if ((tweet['retweeted'] != True) and ('RT @' not in tweet['text']))]

        # find the max_id_str for the next batch
        ids = [tweet['id'] for tweet in tweets_unfiltered]
        max_id_str = str(min(ids))

    # loop until as many tweets as total_count is collected
    number_of_tweets = len(tweets)
    while number_of_tweets < total_count:
        resp = api.request('search/tweets', {'q': #INSERT YOUR CODE
                                             'count': #INSERT YOUR CODE
                                             'lang': 'en',
                                             'result_type': #INSERT YOUR CODE
                                             'max_id': max_id_str,
                                             'geocode': #INSERT YOUR CODE
                                             })

        if ('statuses' in resp.json()):
            tweets_unfiltered += resp.json()['statuses']
            tweets = [tweet for tweet in tweets_unfiltered if ((tweet['retweeted'] != True) and ('RT @' not in tweet['text']))]
            ids = [tweet['id'] for tweet in tweets_unfiltered]
            max_id_str = str(min(ids))
            number_of_tweets = len(tweets)

    print("{} tweets are collected for keyword {}. Last tweet created at {}".format(number_of_tweets,
                                                                                    keyword,
                                                                                    tweets[number_of_tweets - 1]['created_at']))
    return tweets
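One possible way to fill in those markers is sketched below as a hypothetical helper (`build_params` is not part of the assignment; it only shows the values). The key points: `q` must be a single string, so a keyword list is joined with `' OR '`; `count` is the `batch_count` string; and `geocode` is built from the three positional arguments, not from module-level constants:

```python
# Hypothetical helper showing what the #INSERT markers should evaluate to.
def build_params(keyword, batch_count, latitude, longitude, radius, max_id_str=None):
    params = {'q': ' OR '.join(keyword) if isinstance(keyword, list) else keyword,
              'count': batch_count,           # already converted to str above
              'lang': 'en',
              'result_type': 'recent',
              'geocode': '{},{},{}mi'.format(latitude, longitude, radius)}
    if max_id_str is not None:
        params['max_id'] = max_id_str        # only the follow-up queries page backwards
    return params

print(build_params(["art", "railway", "neck"], '100', 29.7604, -95.3698, 200))
```

Note that in the original code the geocode format string `'{PLACE_LAT},{PLACE_LONG},{PLACE_RAD}mi'.format(latitude, longitude, radius)` never substitutes anything, because `str.format` with positional arguments needs empty `{}` placeholders, and `'q': keywords` passes a Python list where a string is required.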
What is your question, exactly? I don't see one anywhere in your post.

A few suggestions... Remove the lang and result_type parameters from the request. And since you are using geocode, you should not expect many results, because almost no one has location turned on when they tweet.

Also, instead of using the max_id parameter, you may want to look at the TwitterPager class, which takes care of this for you. Here is an example: https://github.com/geduldig/TwitterAPI/blob/master/examples/page_tweets.py