Search Results

Search found 144 results on 6 pages for 'urlopen'. This is page 3 of 6.


  • Extracting a .app from a zip file in Python, using ZipFile

    - by Yakattak
    I'm trying to extract new revisions of Chromium.app from their snapshots. I can download the file fine, but when it comes to extracting it, ZipFile either extracts the chrome-mac folder within as a file, says that directories don't exist, and so on. I am very new to Python, so these errors make little sense to me. Here is what I have so far:

        import urllib2

        response = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/LATEST')
        latestRev = response.read()
        print latestRev

        # we have the revision, now we need to download the zip and extract it
        latestZip = urllib2.urlopen('http://build.chromium.org/buildbot/snapshots/chromium-rel-mac/%i/chrome-mac.zip' % (int(latestRev)), '~/Desktop/ChromiumUpdate/%i-update' % (int(latestRev)))

        # declare some vars that hold paths
        workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
        chromiumZipPath = '%s%i-update.zip' % (workingDir, (int(latestRev)))
        chromiumAppPath = 'chrome-mac/'  # the path of the chromium executable within the zip file
        chromiumAppExtracted = '%s/Chromium.app' % (workingDir)  # path of the extracted executable

        output = open(chromiumZipPath, 'w')  # delete any current file there
        output.write(latestZip.read())
        output.close()

        # we have the .zip, now we need to extract the Chromium.app file; it's in ziproot/chrome-mac/Chromium.app
        import zipfile, os
        zippedFile = open(chromiumZipPath)
        zippedChromium = zipfile.ZipFile(zippedFile, 'r')
        zippedChromium.extract(chromiumAppPath, workingDir)
        #print zippedChromium.namelist()
        zippedChromium.close()

    Any ideas?
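
    An aside on the code as posted: the second argument to urllib2.urlopen() is POST data, not an output filename, so the '~/Desktop/...' argument above does not do what the comment implies. For the extraction itself, ZipFile.extract() takes one member name, while a .app bundle is a whole directory tree containing symlinks and executable bits, which Python 2's zipfile does not restore. A minimal sketch of the usual workaround (the local zip name is hypothetical):

        import subprocess
        import zipfile

        workingDir = '/Users/slehan/Desktop/ChromiumUpdate/'
        chromiumZipPath = workingDir + 'chrome-mac.zip'  # hypothetical local name

        # extractall() walks every member, so directories come out as directories:
        archive = zipfile.ZipFile(chromiumZipPath, 'r')
        archive.extractall(workingDir)
        archive.close()

        # zipfile skips symlinks and permissions, both of which a .app needs,
        # so shelling out to the system unzip is the common fix on a Mac:
        subprocess.check_call(['unzip', '-q', '-o', chromiumZipPath, '-d', workingDir])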

    Read the article

  • Unable to retrieve search results from server side: Facebook Graph API using Python

    - by DjangoRocks
    Hi all, I'm doing some simple Python + FB Graph training on my own, and I've run into a weird problem:

        import time
        import sys
        import urllib2
        import urllib
        from json import loads

        base_url = "https://graph.facebook.com/search?q="
        post_id = None
        post_type = None
        user_id = None
        message = None
        created_time = None

        def doit(hour):
            page = 1
            search_term = "\"Plastic Planet\""
            encoded_search_term = urllib.quote(search_term)
            print encoded_search_term
            type = "&type=post"
            url = "%s%s%s" % (base_url, encoded_search_term, type)
            print url
            while(1):
                try:
                    response = urllib2.urlopen(url)
                except urllib2.HTTPError, e:
                    print e
                finally:
                    pass
                content = response.read()
                content = loads(content)
                print "=================================="
                for c in content["data"]:
                    print c
                print "****************************************"
                try:
                    content["paging"]
                    print "current URL"
                    print url
                    print "next page!------------"
                    url = content["paging"]["next"]
                    print url
                except:
                    pass
                finally:
                    pass
                """
                print "new URL is ======================="
                print url
                print "=================================="
                """
                print url

    What I'm trying to do here is page through the search results automatically by following content["paging"]["next"]. The weird thing is that no data is returned: I receive {"data":[]} even on the very first loop. But when I copy the URL into a browser, a lot of results are returned. I've also tried a version with my access token, and the same thing happens. Can anyone enlighten me?

    EDITED and SIMPLIFIED (thanks to TryPyPy): why does

        import urllib2
        url = "https://graph.facebook.com/searchq=%22Plastic+Planet%22&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000"
        response = urllib2.urlopen(url)
        print response.read()

    result in {"data":[]}, while the same URL produces a lot of data in a browser? Best regards.
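
    An editor's note on the simplified version: the URL there reads "searchq=" with no "?" between the path and the query string, so Graph sees a request with no query at all, which alone would explain the empty result. A minimal sketch with the "?" restored; the browser-like User-Agent is a hedge, since some endpoints serve different output to urllib2's default agent:

        import urllib2

        url = ("https://graph.facebook.com/search?q=%22Plastic+Planet%22"
               "&type=post&limit=25&until=2010-12-29T19%3A54%3A56%2B0000")

        req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
        print urllib2.urlopen(req).read()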

    Read the article

  • Manually extracting portions of strings contained in a list (parsing)

    - by user1652011
    I'm aware that there are modules that fully simplify this task, but given that I'm running a base install of Python (standard modules only), how would I extract the following?

    I have a list holding the contents, line by line, of a web page. Here is a mock-up (unformatted) for illustration:

        <script> link = "/scripts/playlists/1/" + a.id + "/0-5417069212.asx"; <script>
        <a href="/apps/audio/?feedId=11065"><span class="px13">Eastern Metro Area Fire</span>

    From the above strings I need to extract the feedId (11065, which is incidentally a.id in the code above), "/scripts/playlists/1/", and "/0-5417069212.asx". Remembering that each of these lines is just an item in a list, how would I go about extracting that data?

    Here is how the list is fetched:

        contents = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586")

    Pseudocode for what I'm after:

        from urllib2 import urlopen as getpage
        page_contents = getpage("http://www.radioreference.com/apps/audio/?ctid=5586")
        feedID = % in (page_contents.search() for "/apps/audio/?feedId=%")
        titleID = % in (page_contents.search() for "<span class="px13">%</span>")
        playlistID = % in (page_contents.search() for "link = "%" + a.id + "*.asx";")
        asxID = * in (page_contents.search() for "link = "*" + a.id + "%.asx";")
        streamURL = "http://www.radioreference.com/" + playlistID + feedID + asxID + ".asx"

    I plan to format it such that streamURL ends up as:

        http://www.radioreference.com/scripts/playlists/1/11065/0-5417067072.asx
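
    A rough sketch of the extraction with nothing but the stdlib re module; the patterns assume the page really contains lines shaped like the two mock-up lines above:

        import re
        import urllib2

        html = urllib2.urlopen("http://www.radioreference.com/apps/audio/?ctid=5586").read()

        feed_id = re.search(r'/apps/audio/\?feedId=(\d+)', html).group(1)
        title = re.search(r'<span class="px13">([^<]+)</span>', html).group(1)

        # one search pulls out both the playlist prefix and the .asx suffix:
        m = re.search(r'link = "([^"]+)" \+ a\.id \+ "([^"]+)\.asx"', html)
        playlist_path, asx_suffix = m.group(1), m.group(2)

        stream_url = ("http://www.radioreference.com" + playlist_path
                      + feed_id + asx_suffix + ".asx")
        print stream_url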

    Read the article

  • Python response parse [migrated]

    - by Pavel Shevelyov
    When I send some data to a host:

        r = urllib2.Request(url, data=data, headers=headers)
        page = urllib2.urlopen(r)
        print page.read()

    I get something like this:

        [{"command":"settings","settings":{"basePath":"\/","ajaxPageState":{"theme":"spsr","theme_token":"kRHUhchUVpxAMYL8Y8IoyYIcX0cPrUstziAi8gSmMYk","css":[]},"ajax":{"edit-submit":{"callback":"spsr_calculator_form_ajax","wrapper":"calculator_form","method":"replaceWith","event":"mousedown","keypress":true,"url":"\/ru\/system\/ajax","submit":{"_triggering_element_name":"submit"}}}},"merge":true},{"command":"insert","method":null,"selector":null,"data":"\u003cdiv id=\"calculator_form\"\u003e\u003cform action=\"\/ru\/service\/calculator\" method=\"post\" id=\"spsr-calculator-form\" accept-charset=\"UTF-8\"\u003e\u003cdiv\u003e\u003cinput id=\"edit-from-ship-region-id\" type=\"hidden\" name=\"from_ship_region_id\" value=\"\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_build_id\" value=\"form-0RK_WFli4b2kUDTxpoqsGPp14B_0yf6Fz9x7UK-T3w8\" \/\u003e\n\u003cinput type=\"hidden\" name=\"form_id\" value=\"spsr_calculator_form\" \/\u003e\n\u003c\/div\u003e\n\u003cdiv class=\"bg_p\"\u003e \n\u0421\u0435\u0439\u0447\u0430\u0441 \u0412\u044b... bla bla bla

    but I want something like this:

        <html><h1>bla bla bla</h1></html>

    How can I do it?
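
    The reply is JSON, not HTML; judging by the ajaxPageState fields it has the shape of a Drupal AJAX command list (an assumption), and the markup travels in the "data" field of the "insert" command, with json.loads() already decoding the \uXXXX escapes. A minimal sketch; the URL and form payload are hypothetical stand-ins for the question's url/data/headers:

        import json
        import urllib2

        url = 'http://www.spsr.ru/ru/system/ajax'     # hypothetical
        data = 'form_id=spsr_calculator_form'         # hypothetical payload
        headers = {'Content-Type': 'application/x-www-form-urlencoded'}

        r = urllib2.Request(url, data=data, headers=headers)
        commands = json.loads(urllib2.urlopen(r).read())

        for command in commands:
            if command.get('command') == 'insert':
                print command['data']   # the embedded HTML, escapes decoded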

    Read the article

  • Python regex group clarification

    - by nkr1pt
    I have zero experience with Python, very little with regular expressions, and I'm trying to figure out what this small snippet of Python regex would give back from the Set-Cookie entry of an HTTP response header:

        REGEX_COOKIE = '([A-Z]+=[^;]+;)'
        resp = urllib2.urlopen(req)
        re.search(REGEX_COOKIE, resp.info()['Set-Cookie']).group(1)

    Can someone give a simple example of a Set-Cookie value and explain what this would match on and return? Regards
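
    A worked example with a made-up but typical Set-Cookie value: the pattern wants a run of capital letters, an "=", then everything up to and including the first ";", so it returns the first such name=value pair (a cookie whose name contains lowercase letters would not match):

        import re

        REGEX_COOKIE = '([A-Z]+=[^;]+;)'
        set_cookie = 'SESSIONID=a81b3c2f99; path=/; HttpOnly'   # hypothetical value

        print re.search(REGEX_COOKIE, set_cookie).group(1)      # -> 'SESSIONID=a81b3c2f99;'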

    Read the article

  • Opening SSL URLs with Python

    - by RadiantHex
    Hi folks, I'm using mechanize to navigate pages, and it works pretty well. Unfortunately I have an error that comes up occasionally, at random:

        URLError at /test/
        urlopen error [Errno 1] _ssl.c:1325: error:140943FC:SSL routines:SSL3_READ_BYTES:sslv3 alert bad record mac

    I really need help on this one :) any ideas?
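
    A "bad record mac" alert that only shows up intermittently usually points to a transient transport problem rather than a bug in the calling code, so the pragmatic workaround is to retry the request. A minimal urllib2 sketch (the URL and retry budget are made up); the same pattern can wrap a mechanize browser.open() call:

        import time
        import urllib2

        def open_with_retry(url, attempts=3, delay=1.0):
            # retry transient failures; re-raise if the last attempt also fails
            for attempt in range(attempts):
                try:
                    return urllib2.urlopen(url)
                except urllib2.URLError:
                    if attempt == attempts - 1:
                        raise
                    time.sleep(delay)

        response = open_with_retry('https://example.com/test/')  # hypothetical URL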

    Read the article

  • Uploading file from file object with PyCurl

    - by Tom
    I'm attempting to upload a file like this:

        import pycurl

        c = pycurl.Curl()
        values = [
            ("name", "tom"),
            ("image", (pycurl.FORM_FILE, "tom.png"))
        ]
        c.setopt(c.URL, "http://upload.com/submit")
        c.setopt(c.HTTPPOST, values)
        c.perform()
        c.close()

    This works fine. However, this only works if the file is local. If I was to fetch the image such that:

        import urllib2
        resp = urllib2.urlopen("http://upload.com/people/tom.png")

    How would I pass resp.fp as a file object instead of writing it to a file and passing the filename? Is this possible?
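
    One approach, assuming your pycurl build exposes the FORM_BUFFER options (they map to libcurl's CURLFORM_BUFFER/CURLFORM_BUFFERPTR): read the remote image into memory and hand pycurl the bytes directly, so no temporary file is involved:

        import pycurl
        import urllib2

        image_data = urllib2.urlopen("http://upload.com/people/tom.png").read()

        c = pycurl.Curl()
        values = [
            ("name", "tom"),
            # FORM_BUFFER supplies the filename for the form part,
            # FORM_BUFFERPTR the actual contents:
            ("image", (pycurl.FORM_BUFFER, "tom.png",
                       pycurl.FORM_BUFFERPTR, image_data)),
        ]
        c.setopt(c.URL, "http://upload.com/submit")
        c.setopt(c.HTTPPOST, values)
        c.perform()
        c.close()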

    Read the article

  • Please help turn a simple Python 2 script into PHP

    - by user296516
    Hi guys, sorry to bother you again, but I really need help transforming this Python 2 code into PHP:

        net, cid, lac = 25002, 9164, 4000

        import urllib

        a = '000E00000000000000000000000000001B0000000000000000000000030000'
        b = hex(cid)[2:].zfill(8) + hex(lac)[2:].zfill(8)
        c = hex(divmod(net,100)[1])[2:].zfill(8) + hex(divmod(net,100)[0])[2:].zfill(8)
        string = (a + b + c + 'FFFFFFFF00000000').decode('hex')

        data = urllib.urlopen('http://www.google.com/glm/mmap', string)
        r = data.read().encode('hex')
        print float(int(r[14:22],16))/1000000, float(int(r[22:30],16))/1000000

    It would be great if someone could help, thanks in advance!

    Read the article

  • Can't parse XML effectively using Python

    - by Harshit Sharma
        import urllib
        import xml.etree.ElementTree as ET

        def getWeather(city):
            # create google weather api url
            url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
            try:
                # open google weather api url
                f = urllib.urlopen(url)
            except:
                # if there was an error opening the url, return
                return "Error opening url"
            # read contents to a string
            s = f.read()
            tree = ET.parse(s)
            current = tree.find("current_condition/condition")
            condition_data = current.get("data")
            weather = condition_data
            if weather == "<?xml version=":
                return "Invalid city"
            #return the weather condition
            #return weather

        def main():
            while True:
                city = raw_input("Give me a city: ")
                weather = getWeather(city)
                print(weather)

        if __name__ == "__main__":
            main()

    This gives an error; I actually wanted to find values from the tags in the Google Weather XML.
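
    Two likely fixes, hedged: ET.parse() expects a filename or a file object, so handing it the XML string s raises an error (pass the response object straight in, or use ET.fromstring(s)); and in this feed the condition element sits under weather/current_conditions, not current_condition. A minimal sketch (note that Google has since shut this API down):

        import urllib
        import xml.etree.ElementTree as ET

        def get_weather(city):
            url = "http://www.google.com/ig/api?weather=" + urllib.quote(city)
            tree = ET.parse(urllib.urlopen(url))   # parse() accepts a file object
            current = tree.find("weather/current_conditions/condition")
            if current is None:
                return "Invalid city"
            return current.get("data")

        print(get_weather("London"))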

    Read the article

  • python unittest howto

    - by zubin71
    I'd like to know how I could unit-test the following module:

        def download_distribution(url, tempdir):
            """ Method which downloads the distribution from PyPI """
            print "Attempting to download from %s" % (url,)
            try:
                url_handler = urllib2.urlopen(url)
                distribution_contents = url_handler.read()
                url_handler.close()
                filename = get_file_name(url)
                file_handler = open(os.path.join(tempdir, filename), "w")
                file_handler.write(distribution_contents)
                file_handler.close()
                return True
            except ValueError, IOError:
                return False
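
    A sketch of one way to test this without touching the network: stub out urllib2.urlopen inside the module under test (called mymodule here, a hypothetical name) with a function returning an in-memory file-like object. It assumes mymodule imports urllib2 and defines get_file_name(). Incidentally, "except ValueError, IOError:" in Python 2 binds the exception to the name IOError rather than catching both types; "except (ValueError, IOError):" is what's meant.

        import os
        import StringIO
        import tempfile
        import unittest

        import mymodule  # hypothetical module under test

        class DownloadDistributionTest(unittest.TestCase):
            def test_writes_downloaded_contents(self):
                original = mymodule.urllib2.urlopen
                # stub: no network, just a file-like object with known bytes
                mymodule.urllib2.urlopen = lambda url: StringIO.StringIO("fake bytes")
                try:
                    tempdir = tempfile.mkdtemp()
                    ok = mymodule.download_distribution(
                        "http://pypi.example/dist-1.0.tar.gz", tempdir)
                    self.assertTrue(ok)
                    self.assertEqual(1, len(os.listdir(tempdir)))
                finally:
                    mymodule.urllib2.urlopen = original

        if __name__ == "__main__":
            unittest.main()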

    Read the article

  • Reading HTTP server push streams with Python

    - by Sam
    I'm playing around trying to write a client for a site which provides data as an HTTP stream (aka HTTP server push). However, urllib2.urlopen() grabs the stream in its current state and then closes the connection. I tried skipping urllib2 and using httplib directly, but this seems to have the same behaviour. Is there a way to get the stream to stay open, so it can be checked each program loop for new contents, rather than waiting for the whole thing to be redownloaded every few seconds, introducing lag?
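
    For what it's worth, the object urlopen() returns is file-like and is not drained eagerly: read(n) blocks until data arrives, so the connection can be polled for new content without reconnecting. A minimal sketch against a hypothetical endpoint:

        import urllib2

        stream = urllib2.urlopen('http://example.com/push-stream')  # hypothetical

        while True:
            chunk = stream.read(1024)   # blocks until up to 1024 bytes arrive
            if not chunk:               # empty string: server closed the stream
                break
            print repr(chunk)           # handle each new piece as it comes in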

    Read the article

  • Inexpensive ways to add seek to a filetype object

    - by becomingGuru
    PdfFileReader reads the content of a PDF file to create an object. I am fetching the PDF from a CDN via urllib.urlopen(); this provides me a file-like object which has no seek. PdfFileReader, however, uses seek. What is the simple way to create a PdfFileReader object from a PDF downloaded via URL, avoiding writing it to disk and reading it back via file()? Thanks in advance.
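
    The stock answer, as a sketch: read the response fully and wrap the bytes in StringIO, which is file-like and seekable, so PdfFileReader can rewind at will (the URL is hypothetical, and the import assumes the pyPdf package):

        import urllib
        from StringIO import StringIO

        from pyPdf import PdfFileReader

        url = 'http://cdn.example.com/some.pdf'   # hypothetical
        pdf = PdfFileReader(StringIO(urllib.urlopen(url).read()))
        print pdf.getNumPages()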

    Read the article

  • Downloading file with Python results in only 4.1kB

    - by Vlad Ogay
    I'm using this simple code:

        import urllib2

        response = urllib2.urlopen("http://www.mysite.com/getfile/4355")
        output = open('myfile.zip', 'wb')
        output.write(response.read())
        output.close()

    The web server is IIS + ASP.NET MVC 4. It returns a FileResult wrapping a zip file with the "application/octet-stream" content type. The problem is that the downloaded zip file is broken: only 4.1 kB, where it should be 24 kB. When I type the URL into a web browser directly, it downloads and opens fine. Could you please suggest what's wrong with my Python code?
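
    When a browser succeeds and urllib2 gets a short file, the usual culprit is the server branching on request headers, and a 4.1 kB result is often an HTML error page rather than a truncated zip. A diagnostic sketch, hedged (the browser-like User-Agent is an assumption about this particular server):

        import urllib2

        req = urllib2.Request("http://www.mysite.com/getfile/4355",
                              headers={'User-Agent': 'Mozilla/5.0'})
        response = urllib2.urlopen(req)
        print response.info()               # check Content-Type / Content-Length
        data = response.read()
        print len(data), repr(data[:100])   # a real zip file starts with 'PK'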

    Read the article

  • Python: HTTP Post a large file with streaming

    - by Daniel Von Fange
    I'm uploading potentially large files to a web server. Currently I'm doing this:

        import urllib2

        f = open('somelargefile.zip', 'rb')
        request = urllib2.Request(url, f.read())
        request.add_header("Content-Type", "application/zip")
        response = urllib2.urlopen(request)

    However, this reads the entire file's contents into memory before posting it. How can I have it stream the file to the server?
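
    One workaround, offered as a sketch rather than a guarantee: pass a read-only mmap of the file as the request body. len() works on the mapping, so urllib2 can still set Content-Length, but the operating system pages the contents in lazily instead of the program holding the whole file in RAM (the upload URL is hypothetical):

        import mmap
        import urllib2

        f = open('somelargefile.zip', 'rb')
        mapped = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

        request = urllib2.Request('http://example.com/upload', mapped)
        request.add_header("Content-Type", "application/zip")
        response = urllib2.urlopen(request)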

    Read the article

  • Counting HTML images with Python

    - by user2537246
    I need some feedback on how to count HTML images with Python 3.0.1 after extracting them; maybe my regular expressions aren't used properly. Here is my code:

        import re, os
        import urllib.request

        def get_image(url):
            url = 'http://www.google.com'
            total = 0
            try:
                f = urllib.request.urlopen(url)
                for line in f.readline():
                    line = re.compile('<img.*?src="(.*?)">')
                    if total > 0:
                        x = line.count(total)
                        total += x
                print('Images total:', total)
            except:
                pass
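
    For reference, a working sketch: read the page once, decode the bytes, and let re.findall collect every img tag, so the count is just len(). (In the original, "for line in f.readline()" walks the characters of a single line, and line is immediately rebound to a compiled pattern, which is why no total ever accumulates.)

        import re
        import urllib.request

        def count_images(url):
            with urllib.request.urlopen(url) as f:
                html = f.read().decode('utf-8', errors='replace')
            return len(re.findall(r'<img[^>]*\bsrc=', html, re.IGNORECASE))

        print('Images total:', count_images('http://www.google.com'))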

    Read the article

  • Can't use appcfg.py to update GAE

    - by user353998
    Hello, recently I wanted to upload GAppProxy to GAE, but when I use appcfg.py to update the files, this error comes up:

        urllib2.URLError: urlopen error [Errno 8] _ssl.c:480: EOF occurred in violation of protocol

    I don't know why. PS: I live in China, so it may be because of the GFW. Also, when I browse to appengine.google.com and enter my password, I can't get redirected to the index page; there is an error there too, which says: ssl error

    Read the article

  • Twitter API with urllib2 in Python

    - by Dirk Nachbar
    I want to use the Twitter API in Python to look up user ids from names using the lookup method. I have done similar requests simply using

        response = urllib2.urlopen('http://search.twitter.com...')

    but for this one I need authentication. I don't think I can do it through the Google Code python-twitter API because it doesn't have the lookup method. Any ideas how I can auth with urllib2?
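
    A sketch using urllib2's HTTP Basic auth machinery, which the v1 REST API still accepted at the time; the endpoint is the era's users/lookup URL and the credentials are placeholders:

        import urllib2

        password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
        password_mgr.add_password(None, 'http://api.twitter.com/',
                                  'myusername', 'mypassword')   # placeholders
        opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

        url = 'http://api.twitter.com/1/users/lookup.json?screen_name=someuser'
        print opener.open(url).read()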

    Read the article

  • POST request from Python to PHP

    - by RainbowHat
    Python:

        params = urllib.parse.urlencode({'spam': '1', 'eggs': '2', 'bacon': '3'})
        binary_data = params.encode('utf-8')
        reg = urllib.request.Request("http://www.abc.com/abc/smart/ap/request/", binary_data)
        reg.add_header('Content-Type', 'application/x-www-form-urlencoded')
        f = urllib.request.urlopen(reg)
        print(f.read())

    PHP:

        if($_SERVER['REQUEST_METHOD'] == 'POST') {
            //parse_str($_SERVER['QUERY_STRING']);
            var_dump($_SERVER['QUERY_STRING']);
        }

    When I print binary_data it does show the parameters, but by the time the request reaches the PHP I see nothing. Any idea?

    Read the article

  • Basic Google search using a shell script

    - by Lri
    Something like this, but using just basic shell scripting:

        #!/usr/bin/env python
        import urllib
        import json

        base = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&'
        query = urllib.urlencode({'q': "something"})
        response = urllib.urlopen(base + query).read()
        data = json.loads(response)
        print data['responseData']['results'][0]['url']

    Are there any more convenient alternatives to ajax.googleapis.com? If not, how should I encode the URL and parse the JSON?

    Read the article

  • Problem with re.findall (duplicates)

    - by user559385
    Hello, I tried to fetch the source of a 4chan board and get links to its threads, and I have a problem with my regexp (it isn't working). Source:

        import urllib2, re

        req = urllib2.Request('http://boards.4chan.org/wg/')
        resp = urllib2.urlopen(req)
        html = resp.read()

        print re.findall("res/[0-9]+", html)
        #print re.findall("^res/[0-9]+$", html)

    The problem is that re.findall("res/[0-9]+", html) gives duplicates, and I can't use re.findall("^res/[0-9]+$", html). I have read the Python docs, but they didn't help.
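
    A note and a sketch: each thread link occurs several times in the board markup, so findall reporting duplicates is expected, and anchoring with ^...$ cannot help because those anchors match whole lines (with re.M) or the whole string, never a link embedded mid-HTML. Deduplicate the matches instead:

        import re
        import urllib2

        html = urllib2.urlopen('http://boards.4chan.org/wg/').read()
        threads = sorted(set(re.findall(r'res/[0-9]+', html)))  # unique links
        print threads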

    Read the article

  • Why does 'url' not work as a variable here?

    - by kryptobs2000
    I originally had the variable cpanel named url and the code would not return anything. Any idea why? It doesn't seem to be used by anything else, but there's gotta be something I'm overlooking.

        import urllib2

        cpanel = 'http://www.tas-tech.com/cpanel'
        req = urllib2.Request(cpanel)
        try:
            handle = urllib2.urlopen(req)
        except IOError, e:
            if hasattr(e, 'code'):
                if e.code != 401:
                    print 'We got another error'
                    print e.code
                else:
                    print e.headers
                    print e.headers['www-authenticate']

    Read the article

  • How do I parse YouTube XML for a specific entry?

    - by sharataka
    I am trying to return the duration of a video but am having trouble:

        # YOUTUBE FEED
        # download the file:
        file = urllib2.urlopen('http://gdata.youtube.com/feeds/api/videos/2s0vk2wEMtA')
        # convert to string:
        data = file.read()
        # close file because we don't need it anymore:
        file.close()
        # entire feed
        root = etree.fromstring(data)
        for entry in root:
            for item in entry:
                print item

    When I print item, I see this as the last element:

        <Element '{http://gdata.youtube.com/schemas/2007}duration' at 0x10c4fb7d0>

    But I don't know how to get the value from it. Any advice?
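
    A sketch of the direct lookup, hedged on the feed layout: in GData video entries the yt:duration element sits inside media:group, and it carries its value as a "seconds" attribute rather than as element text, so find() with namespace-qualified names gets it in one step:

        import urllib2
        import xml.etree.ElementTree as etree

        YT = '{http://gdata.youtube.com/schemas/2007}'
        MEDIA = '{http://search.yahoo.com/mrss/}'

        data = urllib2.urlopen(
            'http://gdata.youtube.com/feeds/api/videos/2s0vk2wEMtA').read()
        root = etree.fromstring(data)

        duration = root.find(MEDIA + 'group/' + YT + 'duration')
        print duration.get('seconds')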

    Read the article
