Search Results

Search found 166 results on 7 pages for 'urllib'.

Page 2/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • How can I get the last-modified time with python3 urllib?

    - by Daenyth
    I'm porting over a program of mine from python2 to python3, and I'm hitting the following error: AttributeError: 'HTTPMessage' object has no attribute 'getdate' Here's the code: conn = urllib.request.urlopen(fileslist, timeout=30) last_modified = conn.info().getdate('last-modified') This section worked under python 2.7, and so far I haven't been able to find out the correct method to get this information in python 3.1. The full context is an update method. It pulls new files from a server down to its local database, but only if the file on the server is newer than the local file. If there's a smarter way to achieve this functionality than just comparing local and remote file timestamps, then I'm open to that as well.
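
    A minimal sketch of the Python 3 equivalent (assuming Python 3.3+ for parsedate_to_datetime, and reusing the fileslist variable from the question): in Python 3 the headers object is an email.message.Message, so the raw header is read with get() and parsed separately.

        import urllib.request
        from email.utils import parsedate_to_datetime  # available from Python 3.3

        conn = urllib.request.urlopen(fileslist, timeout=30)
        raw = conn.headers.get('Last-Modified')   # replaces info().getdate('last-modified')
        last_modified = parsedate_to_datetime(raw) if raw else None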

    Read the article

  • With regards to urllib AttributeError: 'module' object has no attribute 'urlopen'

    - by Matt
    import re import string import shutil import os import os.path import time import datetime import math import urllib from array import array import random filehandle = urllib.urlopen('http://www.google.com/') #open webpage s = filehandle.read() #read print s #display #what i plan to do with it once i get the first part working #results = re.findall('[<td style="font-weight:bold;" nowrap>$][0-9][0-9][0-9][.][0-9][0-9][</td></tr></tfoot></table>]',s) #earnings = '$ ' #for money in results: #earnings = earnings + money[1]+money[2]+money[3]+'.'+money[5]+money[6] #print earnings #raw_input() this is the code that i have so far. now i have looked at all the other forums that give solutions such as the name of the script, which is parse_Money.py, and i have tried doing it with urllib.request.urlopen AND i have tried running it on python 2.5, 2.6, and 2.7. If anybody has any suggestions it would be really welcome, thanks everyone!! --Matt ---EDIT--- I also tried this code and it worked, so im thinking its some kind of syntax error, so if anybody with a sharp eye can point it out, i would be very appreciative. import shutil import os import os.path import time import datetime import math import urllib from array import array import random b = 3 #find URL URL = raw_input('Type the URL you would like to read from[Example: http://www.google.com/] :') while b == 3: #get file name file1 = raw_input('Enter a file name for the downloaded code:') filepath = file1 + '.txt' if os.path.isfile(filepath): print 'File already exists' b = 3 else: print 'Filename accepted' b = 4 file_path = filepath #open file FileWrite = open(file_path, 'a') #acces URL filehandle = urllib.urlopen(URL) #display souce code for lines in filehandle.readlines(): FileWrite.write(lines) print lines print 'The above has been saved in both a text and html file' #close files filehandle.close() FileWrite.close()
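
    One likely cause (an assumption, not verified from the post) is that the interpreter actually running the script is Python 3, or a local file is shadowing the urllib module; in Python 3 urlopen lives in urllib.request. A small sketch that imports it on either major version:

        try:
            from urllib.request import urlopen   # Python 3
        except ImportError:
            from urllib import urlopen           # Python 2

        s = urlopen('http://www.google.com/').read()
        print(s)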

    Read the article

  • Application closes on Nokia E71 when using urllib.urlopen

    - by sammr
    Hello, I'm running the following code on my Nokia E71, but after the text input the program closes abruptly. I have a GPRS connection on my phone, but I still seem to be having some problem with urllib.urlopen. The code is as follows: import appuifw,urllib amountInDollars = appuifw.query(u"Enter amount in Dollars","text") data=urllib.urlopen("http://www.google.com").read() appuifw.note(u"Hey","info") Any way to fix this problem? Thank you.

    Read the article

  • Can't download YouTube video

    - by dsaccount1
    I'm having trouble retrieving the youtube video automatically, heres the code. The problem is the last part. download = urllib.request.urlopen(download_url).read() # Youtube video download script # 10n1z3d[at]w[dot]cn import urllib.request import sys print("\n--------------------------") print (" Youtube Video Downloader") print ("--------------------------\n") try: video_url = sys.argv[1] except: video_url = input('[+] Enter video URL: ') print("[+] Connecting...") try: if(video_url.endswith('&feature=related')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=related')[0] elif(video_url.endswith('&feature=dir')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=dir')[0] elif(video_url.endswith('&feature=fvst')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=fvst')[0] elif(video_url.endswith('&feature=channel_page')): video_id = video_url.split('www.youtube.com/watch?v=')[1].split('&feature=channel_page')[0] else: video_id = video_url.split('www.youtube.com/watch?v=')[1] except: print("[-] Invalid URL.") exit(1) print("[+] Parsing token...") try: url = str(urllib.request.urlopen('http://www.youtube.com/get_video_info?&video_id=' + video_id).read()) token_value = url.split('video_id='+video_id+'&token=')[1].split('&thumbnail_url')[0] download_url = "http://www.youtube.com/get_video?video_id=" + video_id + "&t=" + token_value + "&fmt=18" except: url = str(urllib.request.urlopen('www.youtube.com/watch?v=' + video_id)) exit(1) v_url=str(urllib.request.urlopen('http://'+video_url).read()) video_title = v_url.split('"rv.2.title": "')[1].split('", "rv.4.rating"')[0] if '&quot;' in video_title: video_title = video_title.replace('&quot;','"') elif '&amp;' in video_title: video_title = video_title.replace('&amp;','&') print("[+] Downloading " + '"' + video_title + '"...') try: print(download_url) file = open(video_title + '.mp4', 'wb') download = urllib.request.urlopen(download_url).read() print(download) for line in download: file.write(line) file.close() except: print("[-] Error downloading. Quitting.") exit(1) print("\n[+] Done. The video is saved to the current working directory(cwd).\n")
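
    One suspect in the final part is the write loop: iterating over a bytes object in Python 3 yields integers, so writing the download "line by line" does not do what it did in Python 2. A sketch of the safer pattern, reusing download_url and video_title from the script above:

        import urllib.request

        data = urllib.request.urlopen(download_url).read()
        with open(video_title + '.mp4', 'wb') as f:
            f.write(data)   # write the whole byte string in one call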

    Read the article

  • POST request from Python to PHP

    - by RainbowHat
    Python params = urllib.parse.urlencode({'spam': '1', 'eggs': '2', 'bacon': '3'}) binary_data = params.encode('utf-8') reg = urllib.request.Request("http://www.abc.com/abc/smart/ap/request/",binary_data) reg.add_header('Content-Type','application/x-www-form-urlencoded') f = urllib.request.urlopen(reg) print(f.read()) PHP if($_SERVER['REQUEST_METHOD'] == 'POST') { //parse_str($_SERVER['QUERY_STRING']); var_dump($_SERVER['QUERY_STRING']); } When I print binary_data, it does show the parameters, but by the time they reach the PHP side I see nothing. Any ideas?
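
    Note that a POST body never appears in $_SERVER['QUERY_STRING']; on the PHP side it lands in $_POST (or php://input). The Python side looks essentially correct as posted; a minimal sketch of it for reference:

        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({'spam': '1', 'eggs': '2', 'bacon': '3'}).encode('utf-8')
        req = urllib.request.Request('http://www.abc.com/abc/smart/ap/request/', data=params,
                                     headers={'Content-Type': 'application/x-www-form-urlencoded'})
        print(urllib.request.urlopen(req).read())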

    Read the article

  • I/O error(socket error): [Errno 111] Connection refused

    - by Schitti
    I have a program that uses urllib to periodically fetch a URL, and I see intermittent errors like: I/O error(socket error): [Errno 111] Connection refused. It works 90% of the time, but the other 10% it fails. If I retry the fetch immediately after it fails, it succeeds. I'm unable to figure out why this is so. I tried to see if any ports are available, and they are. Any debugging ideas? For additional info, the stack trace is: File "/usr/lib/python2.6/urllib.py", line 235, in retrieve fp = self.open(url, data) File "/usr/lib/python2.6/urllib.py", line 203, in open return getattr(self, name)(url) File "/usr/lib/python2.6/urllib.py", line 342, in open_http h.endheaders() File "/usr/lib/python2.6/httplib.py", line 868, in endheaders self._send_output() File "/usr/lib/python2.6/httplib.py", line 740, in _send_output self.send(msg) File "/usr/lib/python2.6/httplib.py", line 699, in send self.connect() File "/usr/lib/python2.6/httplib.py", line 683, in connect self.timeout) File "/usr/lib/python2.6/socket.py", line 512, in create_connection raise error, msg Edit - A Google search isn't very helpful; what I got out of it is that the server I'm fetching from sometimes refuses connections. How can I verify that it's not a bug in my code and that this is indeed the case?
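
    Since an immediate retry succeeds, a small retry wrapper is a pragmatic workaround while debugging (a sketch, assuming the failure really is transient on the server side):

        import time
        import urllib

        def fetch_with_retry(url, attempts=3, delay=2):
            for attempt in range(attempts):
                try:
                    return urllib.urlopen(url).read()
                except IOError:
                    if attempt == attempts - 1:
                        raise            # give up after the last attempt
                    time.sleep(delay)    # back off briefly before retrying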

    Read the article

  • Making HTTP POST request

    - by infrared
    I'm trying to make a POST request to retrieve information about a book. Here is the code that returns HTTP code: 302, Moved import httplib, urllib params = urllib.urlencode({ 'isbn' : '9780131185838', 'catalogId' : '10001', 'schoolStoreId' : '15828', 'search' : 'Search' }) headers = {"Content-type": "application/x-www-form-urlencoded", "Accept": "text/plain"} conn = httplib.HTTPConnection("bkstr.com:80") conn.request("POST", "/webapp/wcs/stores/servlet/BuybackSearch", params, headers) response = conn.getresponse() print response.status, response.reason data = response.read() conn.close() When I try from a browser, from this page: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828 , it works. What am I missing in my code? Thanks EDIT: Here's what I get when I call print response.msg 302 Moved Date: Tue, 07 Sep 2010 16:54:29 GMT Vary: Host,Accept-Encoding,User-Agent Location: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch X-UA-Compatible: IE=EmulateIE7 Content-Length: 0 Content-Type: text/plain; charset=utf-8 Seems that the location points to the same url I'm trying to access in the first place? EDIT2: I've tried using urllib2 as suggested here. Here is the code: import urllib, urllib2 url = 'http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch' values = {'isbn' : '9780131185838', 'catalogId' : '10001', 'schoolStoreId' : '15828', 'search' : 'Search' } data = urllib.urlencode(values) req = urllib2.Request(url, data) response = urllib2.urlopen(req) print response.geturl() print response.info() the_page = response.read() print the_page And here is the output: http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch Date: Tue, 07 Sep 2010 16:58:35 GMT Pragma: No-cache Cache-Control: no-cache Expires: Thu, 01 Jan 1970 00:00:00 GMT Set-Cookie: JSESSIONID=0001REjqgX2axkzlR6SvIJlgJkt:1311s25dm; Path=/ Vary: Accept-Encoding,User-Agent X-UA-Compatible: IE=EmulateIE7 Content-Length: 0 Connection: close Content-Type: text/html; charset=utf-8 Content-Language: en-US Set-Cookie: TSde3575=225ec58bcb0fdddfad7332c2816f1f152224db2f71e1b0474c866f3b; Path=/
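
    The 302 back to the same URL is typical of a server that insists on a session cookie before accepting the form. A hedged sketch (an assumption about the cause, not verified against bkstr.com): visit the search page first with a cookie-aware opener, then POST through the same opener.

        import cookielib
        import urllib
        import urllib2

        cj = cookielib.CookieJar()
        opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
        # Let the server set its session cookies first.
        opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackMaterialsView'
                    '?langId=-1&catalogId=10001&storeId=10051&schoolStoreId=15828')
        data = urllib.urlencode({'isbn': '9780131185838', 'catalogId': '10001',
                                 'schoolStoreId': '15828', 'search': 'Search'})
        response = opener.open('http://www.bkstr.com/webapp/wcs/stores/servlet/BuybackSearch', data)
        print response.read()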

    Read the article

  • Python3: ssl cert information

    - by MadSc13ntist
    I have been trying to get information about expired SSL certificates using Python 3, and it would be nice to get as verbose a workup as possible. Any takers? So far I have been trying to use urllib.request to get this info (to no avail); does this strike anyone as foolish? I have seen some examples of similar work using older versions of Python, but nothing using v3. http://objectmix.com/python/737581-re-urllib-getting-ssl-certificate-info.html http://www.mail-archive.com/[email protected]/msg208150.html
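
    A minimal sketch using the ssl module directly rather than urllib (assuming Python 3.4+ for create_default_context); getpeercert() exposes fields such as notAfter, subject and issuer, though an already-expired certificate will fail the handshake unless verification is relaxed. The hostname below is a placeholder.

        import socket
        import ssl

        hostname = 'www.python.org'   # hypothetical target
        context = ssl.create_default_context()
        with socket.create_connection((hostname, 443)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as ssock:
                cert = ssock.getpeercert()
                print(cert.get('notAfter'), cert.get('subject'))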

    Read the article

  • Python 2.6 -> Python 3 (ProxyHandler)

    - by blah
    Hello, I wrote a script that works with a proxy (Python 2.6.x): proxy_support = urllib2.ProxyHandler({'http' : 'http://127.0.0.1:80'}) But in Python 3.1.x there is no urllib2, just urllib, and that doesn't seem to support ProxyHandler. How can I use a proxy with urllib? Isn't Python 3 newer than Python 2? Why did they remove urllib2 in a newer version?
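
    ProxyHandler did not go away; urllib2 was folded into urllib.request in Python 3, so the same handler is available there. A sketch:

        import urllib.request

        proxy_support = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:80'})
        opener = urllib.request.build_opener(proxy_support)
        urllib.request.install_opener(opener)
        data = urllib.request.urlopen('http://example.org/').read()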

    Read the article

  • Counting HTML images with Python

    - by user2537246
    I need some feedback on how to count HTML images with Python 3.0.1 after extracting them; I'm not sure my regular expression is being used properly. Here is my code: import re, os import urllib.request def get_image(url): url = 'http://www.google.com' total = 0 try: f = urllib.request.urlopen(url) for line in f.readline(): line = re.compile('<img.*?src="(.*?)">') if total > 0: x = line.count(total) total += x print('Images total:', total) except: pass
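
    A sketch of how this is usually written: read the whole page once and let re.findall collect the matches, rather than compiling a pattern per line and counting against an integer.

        import re
        import urllib.request

        def count_images(url):
            html = urllib.request.urlopen(url).read().decode('utf-8', errors='replace')
            images = re.findall(r'<img[^>]+src="([^"]*)"', html, re.IGNORECASE)
            print('Images total:', len(images))
            return images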

    Read the article

  • Looping through a directory on the web and displaying its contents (files and other directories) via

    - by al jaffe
    In the same vein as http://stackoverflow.com/questions/2593399/process-a-set-of-files-from-a-source-directory-to-a-destination-directory-in-pyth I'm wondering if it is possible to create a function that, when given a web directory, will list out the files in that directory. Something like... files[] for file in urllib.listdir(dir): if file.isdir: # handle this as directory else: # handle as file I assume I would need to use the urllib library, but there doesn't seem to be an easy way of doing this, at least not one I've seen.
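
    HTTP has no real listdir; the usual workaround (assuming the server publishes an auto-generated index page) is to scrape that page and treat hrefs ending in a slash as directories. A rough sketch:

        import re
        import urllib.request

        def list_remote(dir_url):
            html = urllib.request.urlopen(dir_url).read().decode('utf-8', errors='replace')
            for href in re.findall(r'href="([^"]+)"', html):
                if href.endswith('/'):
                    print('directory:', href)   # handle this as a directory
                else:
                    print('file:', href)        # handle this as a file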

    Read the article

  • urllib alternative for iPhone

    - by Pr301
    Hi, I am trying to create an iPhone application which at some point connects to the internet, fills in an online form, fetches the resulting website, parses it and returns a string to the user. I want all of this to happen in the background. I know how to do this kind of thing with Python and urllib, but in Objective-C I can't find an alternative. From online searches I found either sites that explain how to use WebKit to retrieve webpages (I suppose this is for displaying them to the user) or how to parse an existing HTML file or string. Since I want the file to be retrieved from the internet and the whole process should run in the background, neither of these solutions covers my needs.

    Read the article

  • Saving animated GIFs using urllib.urlopen (image saved does not animate)

    - by wenbert
    I have Apache2 + Django + X-sendfile. My problem is that when I upload an animated GIF, it won't "animate" when I output through the browser. Here is my code to display the image located outside the public accessible directory. def raw(request,uuid): target = str(uuid).split('.')[:-1][0] image = Uploads.objects.get(uuid=target) path = image.path filepath = os.path.join(path,"%s.%s" % (image.uuid,image.ext)) response = HttpResponse(mimetype=mimetypes.guess_type(filepath)) response['Content-Disposition']='filename="%s"'\ %smart_str(image.filename) response["X-Sendfile"] = filepath response['Content-length'] = os.stat(filepath).st_size return response UPDATE It turns out that it works. My problem is when I try to upload an image via URL. It probably doesn't save the entire GIF? def handle_url_file(request): """ Open a file from a URL. Split the file to get the filename and extension. Generate a random uuid using rand1() Then save the file. Return the UUID when successful. """ try: file = urllib.urlopen(request.POST['url']) randname = rand1(settings.RANDOM_ID_LENGTH) newfilename = request.POST['url'].split('/')[-1] ext = str(newfilename.split('.')[-1]).lower() im = cStringIO.StringIO(file.read()) # constructs a StringIO holding the image img = Image.open(im) filehash = checkhash(im) image = Uploads.objects.get(filehash=filehash) uuid = image.uuid return "%s" % (uuid) except Uploads.DoesNotExist: img.save(os.path.join(settings.UPLOAD_DIRECTORY,(("%s.%s")%(randname,ext)))) del img filesize = os.stat(os.path.join(settings.UPLOAD_DIRECTORY,(("%s.%s")%(randname,ext)))).st_size upload = Uploads( ip = request.META['REMOTE_ADDR'], filename = newfilename, uuid = randname, ext = ext, path = settings.UPLOAD_DIRECTORY, views = 1, bandwidth = filesize, source = request.POST['url'], size = filesize, filehash = filehash, ) upload.save() #return uuid return "%s" % (upload.uuid) except IOError, e: raise e Any ideas? Thanks! Wenbert
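
    A likely culprit is the round trip through PIL: Image.save() writes only the current frame, so an animated GIF comes out static. A sketch that keeps the original bytes intact (source_url and destination_path are placeholders, not names from the code above):

        import urllib

        raw = urllib.urlopen(source_url).read()     # source_url: the submitted URL (placeholder)
        with open(destination_path, 'wb') as f:     # destination_path is hypothetical
            f.write(raw)                            # save the bytes untouched, all frames included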

    Read the article

  • gevent urllib is slow

    - by djay
    I've created a set of demos of a TCP server, but my gevent examples are noticeably slower. I'm sure it must be how I compiled gevent, but I can't work out the problem. I'm using OS X Leopard with fink-compiled Python 2.6 and 2.7. I've tried both the stable gevent and gevent 1.0b1 and it acts the same. The echo takes 5 seconds to respond, where the other examples take under 1 second. If I remove the urllib call then the problem goes away. I put all the code in https://github.com/djay/geventechodemo To run the examples I'm using zc.buildout, so to build: $ python2.7 bootstrap.py $ bin/buildout To run the gevent example: $ bin/py geventecho3.py & [1] 80790 waiting for connection... $ telnet localhost 8080 Trying 127.0.0.1... ...connected from: ('127.0.0.1', 56588) Connected to localhost. Escape character is '^]'. hello echo: avast This takes 3-4 seconds to respond on my system. However, the threaded example $ bin/py threadecho2.py or the twisted example $ bin/py twistedecho2.py is less than 1s. Any idea what I'm doing wrong?
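
    One hedged guess: unless gevent monkey-patches the standard library first, urllib's DNS lookup and socket I/O block the whole event loop, which would explain multi-second stalls. A sketch:

        from gevent import monkey
        monkey.patch_all()   # must run before urllib/socket are imported elsewhere

        import urllib
        data = urllib.urlopen('http://www.example.org/').read()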

    Read the article

  • Errno socket error in python

    - by Emma
    I wrote this code: import random import sys import urllib openfile = open(sys.argv[1]).readlines() c = random.choice(openfile) i = 0 while i < 5: i=i+1 c = random.choice(openfile) proxies = {'http': c} opener = urllib.FancyURLopener(proxies).open("http://whatismyip.com.au/").read() I put 3 proxies in a txt file: http://211.161.159.74:8080 http://119.70.40.101:8080 http://124.42.10.119:8080 but when I execute it I get this error: IOError: [Errno socket error] (10054, 'Connection reset by peer') What should I do? Please help me.
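
    Errno 10054 here most likely means the free proxy itself dropped the connection. A sketch that simply skips proxies that fail instead of crashing (an assumption that at least one listed proxy is alive):

        import random
        import urllib

        def fetch_via_random_proxy(url, proxy_list, tries=5):
            for _ in range(tries):
                proxy = random.choice(proxy_list).strip()
                try:
                    opener = urllib.FancyURLopener({'http': proxy})
                    return opener.open(url).read()
                except IOError:
                    continue    # dead or unreachable proxy; try another one
            raise IOError('no working proxy found')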

    Read the article

  • Check if the internet cannot be accessed in Python

    - by Sridhar Ratnakumar
    I have an app that makes a HTTP GET request to a particular URL on the internet. But when the network is down (say, no public wifi - or my ISP is down, or some such thing), I get the following traceback at urllib.urlopen: 70, in get u = urllib2.urlopen(req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 126, in urlopen return _opener.open(url, data, timeout) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 391, in open response = self._open(req, data) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 409, in _open '_open', req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 369, in _call_chain result = func(*args) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1161, in http_open return self.do_open(httplib.HTTPConnection, req) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.py", line 1136, in do_open raise URLError(err) URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known> I want to print a friendly error to the user telling him that his network maybe down instead of this unfriendly "nodename nor servname provided" error message. Sure I can catch URLError, but that would catch every url error, not just the one related to network downtime. I am not a purist, so even an error message like "The server example.com cannot be reached; either the server is indeed having problems or your network connection is down" would be nice. How do I go about selectively catching such errors? (For a start, if DNS resolution fails at urllib.urlopen, that can be reasonably assumed as network inaccessibility? If so, how do I "catch" it in the except block?)
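
    One way to catch only the name-resolution failure: URLError wraps the underlying socket error in its reason attribute, so a socket.gaierror there is a reasonable (if imperfect) signal that the network itself is down. A sketch, reusing req from the snippet above:

        import socket
        import urllib2

        try:
            u = urllib2.urlopen(req)
        except urllib2.URLError as e:
            if isinstance(e.reason, socket.gaierror):
                print("The server example.com cannot be reached; either it is down "
                      "or your network connection is not working.")
            else:
                raise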

    Read the article

  • Inexpensive ways to add seek to a filetype object

    - by becomingGuru
    PdfFileReader reads the content of a PDF file to create an object. I am fetching the PDF from a CDN via urllib.urlopen(); this gives me a file-like object which has no seek. PdfFileReader, however, uses seek. What is the simplest way to create a PdfFileReader object from a PDF downloaded via URL? What can I do to avoid writing it to disk and reading it back in via file()? Thanks in advance.
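
    A minimal sketch, assuming pyPdf and Python 2: read the response into an in-memory buffer, which does support seek(), and hand that to PdfFileReader. pdf_url is a placeholder for the CDN URL.

        import urllib
        from cStringIO import StringIO      # io.BytesIO in Python 3
        from pyPdf import PdfFileReader     # assumes the pyPdf package

        remote = urllib.urlopen(pdf_url)    # pdf_url is hypothetical
        buf = StringIO(remote.read())       # whole file held in memory, never written to disk
        reader = PdfFileReader(buf)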

    Read the article

  • Use Twisted's getPage as urlopen?

    - by RadiantHex
    Hi folks, I would like to use Twisted's non-blocking getPage method within a webapp, but it feels quite complicated to use such a function compared to urlopen. This is an example of what I'm trying to achieve: def web_request(request): response = urllib.urlopen('http://www.example.org') return HttpResponse(len(response.read())) Is it so hard to have something similar with getPage?
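
    A sketch of the getPage style: it returns a Deferred, so the length has to be computed in a callback, and this only helps when a Twisted reactor is actually driving the request; it cannot simply be dropped into a blocking view as written.

        from django.http import HttpResponse        # as in the example above
        from twisted.web.client import getPage

        def web_request(request):
            d = getPage('http://www.example.org')               # returns a Deferred
            d.addCallback(lambda body: HttpResponse(len(body)))
            return d                                            # not a finished response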

    Read the article

  • Loading url with cyrillic symbols

    - by Ockonal
    Hi guys, I have to load a URL with Cyrillic symbols. My script should work with this: http://wincode.org/%D0%BF%D1%80%D0%BE%D0%B3%D1%80%D0%B0%D0%BC%D0%BC%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5/ If I use this in a browser it is displayed with the normal symbols, but the urllib code fails with a 404 error. How do I decode this URL correctly? When I use that URL directly in the code, like address = 'that address', it works perfectly. But I got this URL by parsing a page, and I have a list of URLs which contain Cyrillic. Maybe they have incorrect encoding? Here is more code: requestData = urllib2.Request( %SOME_ADDRESS%, None, {"User-Agent": user_agent}) requestHandler = pageHandler.open(requestData) pageData = requestHandler.read().decode('utf-8') soupHandler = BeautifulSoup(pageData) topicLinks = [] for postBlock in soupHandler.findAll('a', href=re.compile('%SOME_REGEXP%')): topicLinks.append(postBlock['href']) postAddress = choice(topicLinks) postRequestData = urllib2.Request(postAddress, None, {"User-Agent": user_agent}) postHandler = pageHandler.open(postRequestData) postData = postHandler.read() File "/usr/lib/python2.6/urllib2.py", line 518, in http_error_default raise HTTPError(req.get_full_url(), code, msg, hdrs, fp) urllib2.HTTPError: HTTP Error 404: Not Found
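
    A sketch of one likely fix: hrefs scraped from a UTF-8 page come back as unicode, so percent-encode the path before handing it to urllib2 (reusing choice(topicLinks) from the excerpt above):

        import urllib

        def encode_url(url):
            if isinstance(url, unicode):
                url = url.encode('utf-8')        # bytes first, then percent-encode
            return urllib.quote(url, safe=':/?&=%')

        postAddress = encode_url(choice(topicLinks))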

    Read the article

  • Python's urllib2 doesn't work on some sites

    - by Binny V A
    I found that you can't read from some sites using Python's urllib2 (or urllib). An example... urllib2.urlopen("http://www.dafont.com/").read() # Returns '' These sites work when you visit them in a browser. I can even scrape them using PHP (I didn't try other languages). I have seen other sites with the same issue, but I can't remember the URLs at the moment. My questions are... What is the cause of this issue? Is there any workaround?
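
    A common cause (an assumption, not verified against dafont.com) is that the site returns an empty body to the default Python-urllib User-Agent; sending a browser-like one often works around it:

        import urllib2

        req = urllib2.Request('http://www.dafont.com/',
                              headers={'User-Agent': 'Mozilla/5.0'})
        print len(urllib2.urlopen(req).read())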

    Read the article

  • FancyURLopener failing since moving to Python 3.1.2

    - by Andrew Shepherd
    I had an application that was downloading a .CSV file from a password-protected website then processing it futher. I was using FancyURLOpener, and simply hardcoding the username and password. (Obviously, security is not a high priority in this particular instance). Since downloading Python 3.1.2, this code has stopped working. Does anyone know of the changes that have happened to the implementation? Here is a cut down version of the code: import urllib.request; class TracOpener (urllib.request.FancyURLopener) : def prompt_user_passwd(self, host, realm) : return ('andrew_ee', '_my_unenctryped_password') csvUrl='http://mysite/report/19?format=csv@USER=fred_nukre' opener = TracOpener(); f = opener.open(csvUrl); s = f.read(); f.close(); s; For the sake of completeness, here's the entire call stack: Traceback (most recent call last): File "C:\reporting\download_csv_file.py", line 12, in <module> f = opener.open(csvUrl); File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open return getattr(self, name)(url) File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http return self._open_generic_http(http.client.HTTPConnection, url, data) File "C:\Program Files\Python31\lib\urllib\request.py", line 1624, in _open_generic_http response.status, response.reason, response.msg, data) File "C:\Program Files\Python31\lib\urllib\request.py", line 1640, in http_error result = method(url, fp, errcode, errmsg, headers) File "C:\Program Files\Python31\lib\urllib\request.py", line 1878, in http_error_401 return getattr(self,name)(url, realm) File "C:\Program Files\Python31\lib\urllib\request.py", line 1950, in retry_http_basic_auth return self.open(newurl) File "C:\Program Files\Python31\lib\urllib\request.py", line 1454, in open return getattr(self, name)(url) File "C:\Program Files\Python31\lib\urllib\request.py", line 1628, in open_http return self._open_generic_http(http.client.HTTPConnection, url, data) File "C:\Program Files\Python31\lib\urllib\request.py", line 1590, in _open_generic_http auth = base64.b64encode(user_passwd).strip() File "C:\Program Files\Python31\lib\base64.py", line 56, in b64encode raise TypeError("expected bytes, not %s" % s.__class__.__name__) TypeError: expected bytes, not str
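
    A hedged workaround sketch: side-step FancyURLopener (whose basic-auth path trips over the str/bytes distinction in early 3.1.x, as the traceback shows) and use urllib.request's auth handlers instead.

        import urllib.request

        password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        password_mgr.add_password(None, 'http://mysite/', 'andrew_ee', '_my_unenctryped_password')
        opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(password_mgr))
        csvUrl = 'http://mysite/report/19?format=csv@USER=fred_nukre'
        s = opener.open(csvUrl).read()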

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >