Why am I hitting Twitter's rate limit with Tweepy?

I am trying to get all the tweets from the previous day, and to stay under Twitter's rate limit I added two pieces of code.

if counter == 4000:
    time.sleep(60*20) # wait for 20 min every time 4,000 tweets are extracted
    counter = 0       # reset (I originally wrote `counter == 0`, a comparison that resets nothing)


I have looked at the output file, and I usually get a rate-limit message once I have about 5,500-6,500 tweets. So, to be conservative, I set the script to pause for 20 minutes every time 4,000 tweets (and their associated extracted fields) had been collected (to cover Twitter's 15-minute rate-limit windows).
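The throttle I have in mind can be sketched as a small generator (a hypothetical helper name; the `sleep` argument is injectable only so the pause can be stubbed out when testing):

```python
import time

def throttled(tweets, batch_size=4000, pause_seconds=60 * 20, sleep=time.sleep):
    """Yield tweets one by one, pausing after every `batch_size` of them."""
    counter = 0
    for tweet in tweets:
        yield tweet
        counter += 1
        if counter == batch_size:
            sleep(pause_seconds)  # wait out Twitter's 15-minute window, with margin
            counter = 0           # reset, so the pause repeats for every batch
```

The crucial line is `counter = 0`: with `counter == 0` (a comparison, not an assignment) the counter never resets and the script sleeps at most once per run.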

I also found someone else trying to solve the same problem with the following code:

except tweepy.TweepError:
    time.sleep(60 * 15)  # pause when Tweepy raises a rate-limit error


It is supposed to pause the script when a TweepError occurs. I tested it and it didn't seem to work, but I included it anyway.
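For the exception route to work, the `except` needs a matching `try` around the fetch and a body that actually sleeps. A sketch of the pattern, with the exception type passed in as a parameter (a hypothetical helper; with Tweepy you would pass the cursor's `.items()` iterator and `tweepy.TweepError`):

```python
import time

def resume_after_errors(source, retry_on, max_retries=3,
                        wait_seconds=60 * 15, sleep=time.sleep):
    """Pull items from the iterator `source`, sleeping out one rate-limit
    window whenever `retry_on` is raised, and giving up after `max_retries`
    consecutive failures."""
    failures = 0
    while True:
        try:
            yield next(source)
            failures = 0  # a successful fetch resets the failure count
        except retry_on:
            failures += 1
            if failures > max_retries:
                raise  # repeated failures: probably not just the rate limit
            sleep(wait_seconds)
        except StopIteration:
            return
```

Usage would look like `for tweet in resume_after_errors(cursor.items(), tweepy.TweepError): ...`.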

The error I received (after retrieving 10,700 tweets) looks like this:

Traceback (most recent call last):
  File "C:\Users\User\Dropbox\Python exercises\_Scraping\Social media\TweepyModule\TweepyTut1.18.py", line 32, in <module>
    since='2014-09-15', until='2014-09-16').items(999999999): # changeable here
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\cursor.py", line 181, in next
    self.current_page = self.page_iterator.next()
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\cursor.py", line 99, in next
    data = self.method(max_id=self.max_id, parser=RawParser(), *self.args, **self.kargs)
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\binder.py", line 230, in _call
    return method.execute()
  File "C:\Program Files Extra\Python27\lib\site-packages\tweepy\binder.py", line 203, in execute
    raise TweepError(error_msg, resp)
tweepy.error.TweepError: {"errors":[{"message":"Rate limit exceeded","code":88}]}
[Finished in 1937.2s with exit code 1]
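Code 88 in the last line is Twitter's "Rate limit exceeded" error code. If you want to distinguish it from other TweepErrors, the JSON body can be inspected (a sketch based only on the payload shown in the traceback above):

```python
import json

def is_rate_limit_error(body):
    """Return True if a Twitter error payload reports error code 88."""
    try:
        payload = json.loads(body)
    except ValueError:  # not JSON at all
        return False
    errors = payload.get("errors", []) if isinstance(payload, dict) else []
    return any(err.get("code") == 88 for err in errors)
```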


Here is my code:

import tweepy
import time
import csv

ckey = ""
csecret = ""
atoken = ""
asecret = ""

OAUTH_KEYS = {'consumer_key':ckey, 'consumer_secret':csecret,
    'access_token_key':atoken, 'access_token_secret':asecret}
auth = tweepy.OAuthHandler(OAUTH_KEYS['consumer_key'], OAUTH_KEYS['consumer_secret'])
auth.set_access_token(OAUTH_KEYS['access_token_key'], OAUTH_KEYS['access_token_secret'])
api = tweepy.API(auth)

searchTerms = '"good book"'

counter = 0
try:
    for tweet in tweepy.Cursor(api.search, q=searchTerms,
                               since='2014-09-15', until='2014-09-16').items(999999999): # changeable here

        '''print "Name:", tweet.author.name.encode('utf8')
        print "Screen-name:", tweet.author.screen_name.encode('utf8')
        print "Tweet created:", tweet.created_at'''

        placeHolder = []  # the extracted fields for this tweet would go here

        with open("TweetData_goodBook_15SEP2014_all.csv", "ab") as f: # changeable here
            writeFile = csv.writer(f)
            writeFile.writerow(placeHolder)

        counter += 1

        if counter == 4000:
            time.sleep(60*20) # wait for 20 min every time 4,000 tweets are extracted
            counter = 0       # reset (this was `counter == 0`, a comparison that never resets)

except tweepy.TweepError:
    time.sleep(60*15) # wait out one full rate-limit window before the script ends
except IOError:
    pass # e.g. the CSV file could not be opened
except StopIteration:
    pass # the cursor ran out of results


