Simple Python program stuck

Yesterday I wrote a simple Python program (really simple, as shown below) to check HTTP status responses for about 5000 URLs. The thing is, the program seems to get stuck every 400-500 URLs. Since I am really new to programming, I have no idea how to track down the problem.

I added the "a = a + 1" snippet to keep track of how many URLs had been processed when it got stuck.

How can I find the problem? Thank you very much!

I am using Ubuntu 11.10 and Python 2.7.

#!/usr/bin/env python 
# -*- coding: utf-8 -*-

import httplib

raw_url_list = open('url.txt', 'r')
url_list = raw_url_list.readlines()
result_file = open('result.txt', 'w')
a = 0  # counter to see how far the loop gets before it hangs

for url in url_list:
    url = url.strip()[23:]  # drop the 23-character 'http://www.123456789.cn' prefix
    conn = httplib.HTTPConnection('www.123456789.cn')
    conn.request('HEAD', url)
    res = conn.getresponse()
    result_file.write('http://www.123456789.cn%s, %s, %s \n' % (url, res.status, res.reason))
    a = a + 1
    print a

raw_url_list.close()
result_file.close()





1 answer


You need to close each connection when you are finished with it. Every loop iteration opens a new `HTTPConnection`, and the underlying sockets are never released, so they pile up until the process hits its file descriptor limit and hangs. Just add this to the end of your for loop:



     conn.close()
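To make the point concrete, here is a small self-contained sketch of the leak pattern and its fix. `MockConnection` is a hypothetical stand-in for `httplib.HTTPConnection` (not part of any library) that just counts how many connections are left open; the `try`/`finally` guarantees the socket is released even if `request` raises.

```python
# Illustrative sketch: why closing each connection matters.
# MockConnection is a hypothetical stand-in for httplib.HTTPConnection
# that tallies connections which were opened but never closed.

class MockConnection(object):
    open_count = 0  # class-wide count of connections not yet closed

    def __init__(self, host):
        self.host = host
        MockConnection.open_count += 1

    def request(self, method, url):
        pass  # a real connection would send the HTTP request here

    def close(self):
        MockConnection.open_count -= 1

urls = ['/page%d' % i for i in range(500)]

for url in urls:
    conn = MockConnection('www.123456789.cn')
    try:
        conn.request('HEAD', url)
    finally:
        conn.close()  # always release the connection, even on error

print(MockConnection.open_count)  # 0: nothing left open after the loop
```

Without the `close()` call, `open_count` would end at 500; the real script leaks one socket per URL the same way, which matches it stalling after a few hundred requests.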

      









