I will probably never use urllib2 again because of `requests`. I usually used urllib2 to make simple HTTP requests and parse useful content out of plain HTML with BeautifulSoup. I have done that many times and have pieces of code here and there for dealing with user agents, URL encoding, cookies, etc. Today I took a look at the requests library (`sudo pip install requests`) and it is amazingly awesome!
import requests

myheaders = {'User-Agent': 'Mozilla/5.0'}
myparams = {'key': 'value', 'key1': 'value1'}
baseurl = 'http://example.com/search'
r = requests.get(baseurl, params=myparams, headers=myheaders)
You can see the encoded URL by calling r.url and the response HTML via r.text.
It encodes the query string properly and makes the HTTP call, all in that one line!
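As a quick way to see the encoding at work (a minimal sketch; the example.com URL and the query keys are just placeholders), you can build the same request with requests.Request and inspect the prepared URL without actually sending anything over the network:

```python
import requests

# Hypothetical search parameters; note the space in "hello world",
# which needs to be encoded in the query string.
req = requests.Request(
    "GET",
    "http://example.com/search",
    params={"q": "hello world", "page": "2"},
    headers={"User-Agent": "Mozilla/5.0"},
)
prepared = req.prepare()

# requests encodes the query string for you.
print(prepared.url)  # http://example.com/search?q=hello+world&page=2
```

This is the same encoding step that requests.get performs internally before it fires off the request, so it is a handy way to sanity-check your params dict.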