Link Checker (Spider Crawler)
2 answers
I recently solved a similar problem:

# Python 2: urllib2 + cookielib handle cookies across requests
import urllib
import urllib2
import cookielib

login = 'user@host.com'
password = 'secret'

# An opener wired to a CookieJar keeps the session cookie after login
cookiejar = cookielib.CookieJar()
urlOpener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))

# adjust these keys to match the login form's field names
values = {'username': login, 'password': password}
data = urllib.urlencode(values)

# passing data makes this a POST request
request = urllib2.Request('http://target.of.POST-method', data)
response = urlOpener.open(request)

# from now on, we're authenticated and we can access the rest of the site
response = urlOpener.open('http://rest.of.user.area')
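For Python 3, the same flow works after the stdlib reshuffle: urllib2 became urllib.request and cookielib became http.cookiejar. A minimal sketch of the port, assuming the same hypothetical form field names (`username`, `password`) as above:

```python
import http.cookiejar
import urllib.parse
import urllib.request

def make_authenticated_opener(login_url, username, password):
    """Build an opener that stores session cookies, then log in via POST."""
    cookiejar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(cookiejar))
    # In Python 3 the POST body must be bytes, not str
    data = urllib.parse.urlencode(
        {'username': username, 'password': password}).encode('ascii')
    # Cookies set by the login response stay in the jar automatically
    opener.open(login_url, data)
    return opener
```

Every later `opener.open(...)` call then sends the stored session cookies along.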
You want to take a look at the cookielib module: http://docs.python.org/library/cookielib.html . It provides full cookie handling, which lets you persist login details across requests. Once you have a CookieJar set up, you just need to get the login details from the user (say, from the console) and send the correct POST request.
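The "get the user data from the console" step can be sketched like this in Python 3, using getpass so the password is not echoed. The form field names are assumptions and must match the target site; the prompt functions are parameters only to keep the sketch testable:

```python
import getpass
import urllib.parse

def read_credentials(input_fn=input, getpass_fn=getpass.getpass):
    """Prompt for login details on the console; getpass hides the password."""
    username = input_fn('Username: ')
    password = getpass_fn('Password: ')
    # Encode as application/x-www-form-urlencoded bytes for the POST body
    return urllib.parse.urlencode(
        {'username': username, 'password': password}).encode('ascii')
```

The returned bytes can be passed directly as the `data` argument of the POST request shown in the first answer.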