What factors to consider when using urllib vs urllib2 vs requests vs http.client
I am creating a service that consumes data from a REST-based web service. I understand that there are several modules to choose from.
There is an existing question that compares urllib2 to requests, but that post only answers how to use requests, not what to consider when choosing.
From an application architecture perspective, what factors should be considered before choosing between the following modules:
- urllib
- urllib2
- requests
- http.client
My application will accept both JSON and XML data.
requests wins, hands down, for sheer simplicity. There's an informative Gist on GitHub that compares logging into an authenticated resource with urllib2 versus requests.
If you are working with JSON responses, you can easily translate the response directly into a Python dict:

    r = requests.get("http://example.com/api/query?param=value&param2=value2",
                     auth=("user", "passwd"))
    results_dict = r.json()

Just like that - no extra imports to handle the JSON, no json.loads()-ing, etc. Just get the data and convert it to Python in one call.
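The question also mentions XML; requests pairs just as naturally with the standard library's ElementTree for that case. A sketch, using an inline sample payload standing in for what r.text would return (the element names are made up for illustration):

```python
# Sketch: parsing an XML response body with the stdlib ElementTree.
# sample_xml stands in for r.text from a requests call; the structure
# shown here is a placeholder, not a real API's schema.
import xml.etree.ElementTree as ET

sample_xml = "<results><item id='1'>value</item></results>"
root = ET.fromstring(sample_xml)  # with requests: ET.fromstring(r.text)
first_id = root.find("item").get("id")
print(first_id)  # -> '1'
```
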
urllib2 is just not as comfortable. You have to build Request objects and handlers, set up authentication managers, and take care of a lot of plumbing you shouldn't need to.
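For contrast, here is a sketch of the same authenticated GET through the standard library's handler machinery (urllib2 in Python 2; its Python 3 descendant urllib.request is shown so the snippet runs today). URL and credentials are placeholders, and the actual network call is left commented out:

```python
# Sketch: the urllib2-style handler/opener dance (Python 3: urllib.request).
import json
import urllib.request

url = "http://example.com/api/query?param=value&param2=value2"

# Build a password manager and attach it to a basic-auth handler.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, "user", "passwd")
auth_handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

# Install the handler in an opener; only then can you make the request.
opener = urllib.request.build_opener(auth_handler)

# The live call, and the manual JSON decoding requests does for you:
# results_dict = json.loads(opener.open(url).read().decode("utf-8"))
```

Four steps and two helper objects before the first byte moves, versus one line with requests.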
http.client (httplib in Python 2) is even lower level - urllib2 uses it under the hood to do its work, and it is rarely accessed directly. requests is becoming more functional every day, all with the general principle of keeping things as simple as possible while still allowing the same customization if your requirements are unusual. It has a very active developer and user community, so if you need to do something, chances are others need it too, and with its short release cycles you likely won't wait long for a fix.
So, if you're mainly going to consume web services, requests is the easy choice. And if it can't do something, the others are in the standard library to back you up just in case.