Search Results

Search found 144 results on 6 pages for 'urlopen'.

Page 6 of 6

  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, where the server and all clients are being run on the local machine. For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect. Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7. Below is my very simple server example, which simply returns 'Hello world' for any GET request. from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer class MyHandler(BaseHTTPRequestHandler): def do_GET(self): print("Just received a GET request") self.send_response(200) self.send_header("Content-type", "text/html") self.end_headers() self.wfile.write('Hello world') return def log_request(self, code=None, size=None): print('Request') def log_message(self, format, *args): print('Message') if __name__ == "__main__": try: server = HTTPServer(('localhost', 80), MyHandler) print('Started http server') server.serve_forever() except KeyboardInterrupt: print('^C received, shutting down server') server.socket.close() Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem. Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
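    One way to narrow this down (a diagnostic sketch, not a fix): time the same GET against the hostname and against the loopback IP while the example server above is running on port 80. If 'localhost' is consistently about a second slower than '127.0.0.1', the delay is happening during name resolution / connection setup rather than inside do_GET().

        import time
        import urllib2

        # Compare response times for the same server reached by name and by IP.
        for url in ('http://localhost/', 'http://127.0.0.1/'):
            start = time.time()
            urllib2.urlopen(url).read()
            print '%-22s %.3f seconds' % (url, time.time() - start)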

    Read the article

  • Grabbing a substring while scraping with Python 2.6

    - by Diego
    Hey, can someone help with the following? I'm trying to scrape a site that has the following information. I need to pull just the number after the </strong> tag. [<li><strong>ISBN-13:</strong> 9780375853401</li>, <li><strong>Pub. Date: </strong> 05/11/2010</li>] [<li><strong>UPC:</strong> 490355000372</li>, <li><strong>Catalog No:</strong> 15024/25</li>, <li><strong>Label:</strong> CAMERATA</li>] Here's a piece of the code I've been using to grab the above data using mechanize and BeautifulSoup. I'm stuck here, as it won't let me use the find() function on a list: br_results = mechanize.urlopen(br_results) html = br_results.read() soup = BeautifulSoup(html) local_links = soup.findAll("a", {"class" : "down-arrow csa"}) upc_code = soup.findAll("ul", {"class" : "bc-meta3"}) for upc in upc_code: upc_text = upc.contents.contents print upc_text
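    For what it's worth, a minimal sketch of pulling the text node that follows each </strong>, assuming BeautifulSoup 3 (where the attribute is nextSibling; in bs4 it is next_sibling) and the markup shown above:

        from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3

        html = '''<ul class="bc-meta3">
        <li><strong>ISBN-13:</strong> 9780375853401</li>
        <li><strong>Pub. Date: </strong> 05/11/2010</li>
        </ul>'''

        soup = BeautifulSoup(html)
        for ul in soup.findAll("ul", {"class": "bc-meta3"}):
            for li in ul.findAll("li"):
                label = li.find("strong")
                if label and label.nextSibling:
                    # the text node right after </strong> holds the value
                    print label.string, label.nextSibling.strip()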

    Read the article

  • Prettify working in Python IDLE but not in a Python script

    - by Loclip
    So when I use print soup.prettify() in IDLE it prettifies my HTML, but when I call the script from the server, prettify() is not working: #!/usr/bin/python import cgi, cgitb, urllib2, sys from bs4 import BeautifulSoup styles = [] i = 1 site = "www.example.com" page = urllib2.urlopen(site) soup = BeautifulSoup(page) table = soup.find('table', {'class' :'style5'}) alltd = table.findAll('td') allspans = table.findAll('span') while i<55: styles.append("style" + str(i)) i+=1 for td in alltd: if any(x in td["class"] for x in styles): td["class"] = "align" for span in allspans: span.replaceWithChildren() print "Content-type: text/html" print print "<!DOCTYPE html>" print "<html>" print "<head>" print '<meta http-equiv="content-type" content="text/html; charset=utf-8">' print '<link rel="stylesheet" href="lightbox/css/lightbox.css" type="text/css" media="screen">' print '<style type="text/css">' print "body {background-color:#b0c4de;}" print '.align {text-align: center;}' print "</style>" print "</head>" print "<body>" print table.pretiffy() print "</body>" print "</html>"
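    For reference, a minimal sketch of the call as bs4 documents it; note the spelling prettify() (the last print in the script above calls pretiffy()), and that prettify() returns unicode, so it is usually encoded before being written from a CGI script:

        from bs4 import BeautifulSoup

        soup = BeautifulSoup("<table class='style5'><tr><td>x</td></tr></table>")
        table = soup.find('table', {'class': 'style5'})
        # prettify() returns a unicode string; encode it before printing
        print table.prettify().encode('utf-8')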

    Read the article

  • BeautifulSoup HTMLParseError. What's wrong with this?

    - by user1915496
    This is my code: from bs4 import BeautifulSoup as BS import urllib2 url = "http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp" res = urllib2.urlopen(url) soup = BS(res.read()) other_content = soup.find_all('div',{'class':'Content'})[0] print other_content Yet an error comes up: /Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py:149: RuntimeWarning: Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help. "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help.")) Traceback (most recent call last): File "web.py", line 5, in <module> soup = BS(res.read()) File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 172, in __init__ self._feed() File "/Library/Python/2.7/site-packages/bs4/__init__.py", line 185, in _feed self.builder.feed(self.markup) File "/Library/Python/2.7/site-packages/bs4/builder/_htmlparser.py", line 150, in feed raise e I've let two other people use this code, and it works for them perfectly fine. Why is it not working for me? I have bs4 installed...
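    The warning quoted above itself suggests the usual workaround: install an external parser and name it explicitly. A sketch of the same code under that assumption, with html5lib (lxml works the same way) installed via pip:

        from bs4 import BeautifulSoup as BS
        import urllib2

        url = "http://services.runescape.com/m=news/recruit-a-friend-for-free-membership-and-xp"
        res = urllib2.urlopen(url)
        # Naming the parser avoids falling back to the built-in HTMLParser,
        # whose behaviour differs between Python point releases.
        soup = BS(res.read(), "html5lib")   # or "lxml"
        print soup.find_all('div', {'class': 'Content'})[0]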

    Read the article

  • urllib2.Request() with data returns empty url

    - by Mr. Polywhirl
    My main concern is the function: getUrlAndHtml() If I manually build and append the query to the end of the uri, I can get the response.url(), but if I pass a dictionary as the request data, the url does not come back. Is there any way to guarantee the redirected url? In my example below, if thisWorks is True I get back a url, but the returned url is the request url as opposed to a redirect link. On a side note, the encoding for .E2.80.93 does not translate to - for some reason? #!/usr/bin/python import pprint import urllib import urllib2 from bs4 import BeautifulSoup from sys import argv URL = 'http://en.wikipedia.org/w/index.php?' def yesOrNo(boolVal): return 'yes' if boolVal else 'no' def getTitleFromRaw(page): return page.strip().replace(' ', '_') def getUrlAndHtml(title, printable=False): thisWorks = False if thisWorks: query = 'title={:s}&printable={:s}'.format(title, yesOrNo(printable)) opener = urllib2.build_opener() opener.addheaders = [('User-agent', 'Mozilla/5.0')] response = opener.open(URL + query) else: params = {'title':title,'printable':yesOrNo(printable)} data = urllib.urlencode(params) headers = {'User-agent':'Mozilla/5.0'}; request = urllib2.Request(URL, data, headers) response = urllib2.urlopen(request) return response.geturl(), response.read() def getSoup(html, name=None, attrs=None): soup = BeautifulSoup(html) if name is None: return None return soup.find(name, attrs) def setTitle(soup, newTitle): title = soup.find('div', {'id':'toctitle'}) h2 = title.find('h2') h2.contents[0].replaceWith('{:s} for {:s}'.format(h2.getText(), newTitle)) def updateLinks(soup, url): fragment = '#' for a in soup.findAll('a', href=True): a['href'] = a['href'].replace(fragment, url + fragment) def writeToFile(soup, filename='out.html', indentLevel=2): with open(filename, 'wt') as out: pp = pprint.PrettyPrinter(indent=indentLevel, stream=out) pp.pprint(soup) print('Wrote {:s} successfully.'.format(filename)) if __name__ == '__main__': def exitPgrm(): print('usage: {:s} "<PAGE>" <FILE>'.format(argv[0])) exit(0) if len(argv) == 2: help = argv[1] if help == '-h' or help == '--help': exitPgrm() if False:''' if not len(argv) == 3: exitPgrm() ''' page = 'Led Zeppelin' # argv[1] filename = 'test.html' # argv[2] title = getTitleFromRaw(page) url, html = getUrlAndHtml(title) soup = getSoup(html, 'div', {'id':'toc'}) setTitle(soup, page) updateLinks(soup, url) writeToFile(soup, filename)
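    One detail worth keeping in mind: in urllib2, passing a data argument turns the request into a POST, while encoding the same dictionary into the URL keeps it a GET, and response.geturl() then reflects whatever redirect the server performs. A sketch of the GET variant built from the same params dict (whether the Wikipedia endpoint redirects this particular GET is an assumption here):

        import urllib
        import urllib2

        URL = 'http://en.wikipedia.org/w/index.php?'
        params = {'title': 'Led_Zeppelin', 'printable': 'no'}

        # urlencode() the dict into the query string and omit the data argument,
        # so urllib2 issues a GET instead of a POST.
        request = urllib2.Request(URL + urllib.urlencode(params),
                                  headers={'User-agent': 'Mozilla/5.0'})
        response = urllib2.urlopen(request)
        print response.geturl()   # final URL after any redirect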

    Read the article

  • Python 3, urllib ... Reset Connection Possible?

    - by Rhys
    In the larger scale of my program, the goal of the code below is to filter out all dynamic HTML in a web page's source code. Code snippet: try: deepreq3 = urllib.request.Request(deepurl3) deepreq3.add_header("User-Agent","etc......") deepdata3 = urllib.request.urlopen(deepurl3).read().decode("utf8", 'ignore') The following code is looped 3 times in order to identify whether the target web-page is Dynamic (source code is changed at intervals) or not. If the page IS dynamic, the above code loops another 15 times and attempts to filter out the dynamic content. QUESTION: While this filtering method works 80% of the time, some pages will reload ALL 15 times and STILL contain dynamic code. HOWEVER. If I manually close down the Python Shell and re-execute my program, the dynamic html that my 'refresh-page method' could not shake off is no longer there ... it's been replaced with new dynamic html that my 'refresh-page method' cannot shake off. So I need to know, what is going on here? How is re-running my program causing the dynamic content of a page to change? AND, is there any way, any 'reset connection' command I can use to recreate this ... without manually restarting my app? Thanks for your response.
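    There is no documented "reset connection" call in urllib.request, but building a brand-new opener for each attempt (rather than relying on the module-level urlopen) at least guarantees that no handler, cookie or auth state is reused between requests. A sketch under that assumption, with the header value kept as the placeholder from the snippet above:

        import urllib.request

        def fetch_fresh(url):
            # A new opener per request: nothing cached survives between attempts.
            opener = urllib.request.build_opener()
            request = urllib.request.Request(url, headers={"User-Agent": "etc......"})
            with opener.open(request) as resp:
                return resp.read().decode("utf8", "ignore")

        # deepdata3 = fetch_fresh(deepurl3)   # deepurl3 as in the snippet above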

    Read the article

  • My Python program always brings down my internet connection after several hours of running; how do I debug and fix this problem?

    - by Shane
    I'm writing a Python script that checks/monitors the status of several servers/websites (response time and similar stuff). It's a GUI program, and I use a separate thread to check each server/website; the basic structure of each thread is an infinite while loop that requests the site every random time period (15 to 30 seconds). Once there is a change in a website/server, each thread starts a new thread to do a thorough check (requesting more pages and similar stuff). The problem is, my internet connection always gets blocked/jammed/messed up after several hours of running this script. From my script's side I get "urlopen error timed out" each time it requests a page, and from my Firefox browser's side I cannot open any site. But the weird thing is, the moment I close my script my internet connection comes back on immediately, which means I can now surf any site through my browser, so it must be the script causing the whole problem. I've checked the program carefully and even use del to delete each connection once it's used, but I still get the same problem. I only use urllib2, urllib and mechanize to make network requests. Does anybody know why such a thing happens? How do I debug this problem? Is there a tool or something to check my network status once such a situation occurs? It's been bugging me for a while... By the way, I'm behind a VPN; does that have something to do with this problem? I don't think so, because my network always comes back once the script is closed, and the VPN connection never drops (as far as I can tell) during the whole process.
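    One thing worth ruling out is socket/file-descriptor exhaustion from responses that are never fully read or closed. A hedged sketch of the per-request pattern, bounding each request with a timeout (urllib2.urlopen accepts one on Python 2.6+) and closing the response on every path:

        import urllib2

        def check_site(url):
            response = None
            try:
                # timeout bounds a hung request instead of holding the socket open
                response = urllib2.urlopen(url, timeout=30)
                return response.getcode(), len(response.read())
            finally:
                if response is not None:
                    response.close()   # release the socket even on error paths

        # print check_site('http://example.com/')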

    Read the article

  • Get URL Parameters In Django

    - by picomon
    I want to get the current transaction id in the url. It should be like this: www.example.com/final_result/53432e1dd34b3. I wrote the code below, but after a successful payment I'm redirected to a 404 page (www.example.com/final_result//). Views.py @csrf_exempt def pay_notif(request, v_transaction_id): if request.method=='POST': v_transaction_id=request.POST.get('transaction_id') endpoint='https://testpay.com/?v_transaction_id={0}&type=json' req=endpoint.format(v_transaction_id) last_result=urlopen(req).read() if 'Approved' in last_result: session=Pay.objects.filter(session=request.session.session_key).latest('id') else: return HttpResponse(status=204) return render_to_response('final.html',{'session':session},context_instance=RequestContext(request)) Urls.py url(r'^final_result/(?P<v_transaction_id>[-A-Za-z0-9_]+)/$', 'digiapp.views.pay_notif', name="pay_notif"), Template: <input type='hidden' name='v_merchant_id' value='{{newpayy.v_merchant_id}}' /> <input type='hidden' name='item_1' value='{{ newpayy.esell.up_name }}' /> <input type='hidden' name='description_1' value='{{ newpayy.esell.up_description }}' /> <input type='hidden' name='price_1' value='{{ newpayy.esell.up_price }}' /> #page to be redirected to after successful payment <input type='hidden' name='success_url' value='http://127.0.0.1:8000/final_result/{{newpayy.v_transaction_id}}/' /> How can I go about this?
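    If {{newpayy.v_transaction_id}} renders empty, the success_url ends in //, which no longer matches the URL pattern and produces the 404. A hedged sketch (hypothetical helper, Django 1.x-era import) that builds the success URL with reverse() so an empty id fails loudly in the view instead:

        from django.core.urlresolvers import reverse  # Django 1.x

        def build_success_url(request, transaction_id):
            # reverse() raises NoReverseMatch if transaction_id is empty or contains
            # characters outside [-A-Za-z0-9_], instead of silently rendering
            # http://.../final_result// into the template.
            path = reverse('pay_notif', kwargs={'v_transaction_id': transaction_id})
            return request.build_absolute_uri(path)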

    Read the article

  • Python script web service timeout

    - by Robert
    We have had a Python script running for many months now that simply scans through a directory of files, and posts each file to our web site via a web service call. The web site is also written in Python. For no apparent reason, this morning this script started throwing the following error: urllib2.URLError: <urlopen error (10060, 'Operation timed out')> The site itself is up and running just fine. There are no indications of any errors. The developer who was working on this site is no longer with us, and we do not have a strong Python developer on staff as we are moving away from that. Before I do an all-nighter and rewrite this thing in C#, I wanted to see if anyone had any experience dealing with this issue. I do know that the script is connecting to a secure site (HTTPS), so I am not sure if something has come up with that, and I honestly don't know where to look to determine that. As I said before, the site itself isn't showing any signs of error, including SSL. Any thoughts?
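    A small diagnostic that separates "the HTTPS endpoint is reachable from this box" from "the script's call is timing out" might help before any rewrite; a sketch, with the URL as a placeholder for the real endpoint:

        import urllib2

        url = 'https://example.com/webservice'   # placeholder for the real endpoint

        try:
            response = urllib2.urlopen(url, timeout=20)
            print 'Reached server, HTTP status:', response.getcode()
        except urllib2.HTTPError, e:
            print 'Server answered with an error status:', e.code   # reachable, but unhappy
        except urllib2.URLError, e:
            print 'Could not reach server:', e.reason                # DNS/routing/firewall/SSL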

    Read the article

  • Which credentials should I put in for the Google App Engine BulkLoader on the development server?

    - by Hoang Pham
    Hello everyone, I would like to ask which kind of credentials do I need to put on for importing data using the Google App Engine BulkLoader class appcfg.py upload_data --config_file=models.py --filename=listcountries.csv --kind=CMSCountry --url=http://localhost:8178/remote_api vit/ And then it asks me for credentials: Please enter login credentials for localhost Here is an extraction of the content of the models.py, I use this listcountries.csv file class CMSCountry(db.Model): sortorder = db.StringProperty() name = db.StringProperty(required=True) formalname = db.StringProperty() type = db.StringProperty() subtype = db.StringProperty() sovereignt = db.StringProperty() capital = db.StringProperty() currencycode = db.StringProperty() currencyname = db.StringProperty() telephonecode = db.StringProperty() lettercode = db.StringProperty() lettercode2 = db.StringProperty() number = db.StringProperty() countrycode = db.StringProperty() class CMSCountryLoader(bulkloader.Loader): def __init__(self): bulkloader.Loader.__init__(self, 'CMSCountry', [('sortorder', str), ('name', str), ('formalname', str), ('type', str), ('subtype', str), ('sovereignt', str), ('capital', str), ('currencycode', str), ('currencyname', str), ('telephonecode', str), ('lettercode', str), ('lettercode2', str), ('number', str), ('countrycode', str) ]) loaders = [CMSCountryLoader] Every tries to enter the email and password result in "Authentication Failed", so I could not import the data to the development server. I don't think that I have any problem with my files neither my models because I have successfully uploaded the data to the appspot.com application. So what should I put in for localhost credentials? I also tried to use Eclipse with Pydev but I still got the same message :( Here is the output: Uploading data records. [INFO ] Logging to bulkloader-log-20090820.121659 [INFO ] Opening database: bulkloader-progress-20090820.121659.sql3 [INFO ] [Thread-1] WorkerThread: started [INFO ] [Thread-2] WorkerThread: started [INFO ] [Thread-3] WorkerThread: started [INFO ] [Thread-4] WorkerThread: started [INFO ] [Thread-5] WorkerThread: started [INFO ] [Thread-6] WorkerThread: started [INFO ] [Thread-7] WorkerThread: started [INFO ] [Thread-8] WorkerThread: started [INFO ] [Thread-9] WorkerThread: started [INFO ] [Thread-10] WorkerThread: started Password for [email protected]: [DEBUG ] Configuring remote_api. 
url_path = /remote_api, servername = localhost:8178 [DEBUG ] Bulkloader using app_id: abc [INFO ] Connecting to /remote_api [ERROR ] Exception during authentication Traceback (most recent call last): File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 2802, in Run request_manager.Authenticate() File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\bulkloader.py", line 1126, in Authenticate remote_api_stub.MaybeInvokeAuthentication() File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\ext\remote_api\remote_api_stub.py", line 488, in MaybeInvokeAuthentication datastore_stub._server.Send(datastore_stub._path, payload=None) File "D:\Projects\GoogleAppEngine\google_appengine\google\appengine\tools\appengine_rpc.py", line 344, in Send f = self.opener.open(req) File "C:\Python25\lib\urllib2.py", line 381, in open response = self._open(req, data) File "C:\Python25\lib\urllib2.py", line 399, in _open '_open', req) File "C:\Python25\lib\urllib2.py", line 360, in _call_chain result = func(*args) File "C:\Python25\lib\urllib2.py", line 1107, in http_open return self.do_open(httplib.HTTPConnection, req) File "C:\Python25\lib\urllib2.py", line 1082, in do_open raise URLError(err) URLError: <urlopen error (10061, 'Connection refused')> [INFO ] Authentication Failed Thank you!

    Read the article

  • A Python Wrapper for Shutterfly. Uploading an Image

    - by iJames
    I'm working on a Django app in which I want to order prints through Shutterfly's Open API: http://www.shutterfly.com/documentation/start.sfly So far I've been able to build the appropriate POSTs and GETs using the modules and suggested code snippets including httplib, httplib2, urllib, urllib2, mimetype, etc. But I'm stuck on the image uploading when placing an order (the ordering process is not the same process as uploading images to albums which I haven't tried.) From what I can tell, I'm supposed to basically create the multipart form data by concatenating the HTTP request body together with the binary data of the image. I take the strings: --myuniqueboundary1273149960.175.1 Content-Disposition: form-data; name="AuthenticationID" auniqueauthenticationid --myuniqueboundary1273149960.175.1 Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg" Content-Type: image/jpeg and I put this data into it and end with the final boundary: ...\xb5|\xf88\x1dj\t@\xd9\'\x1f\xc6j\x88{\x8a\xc0\x18\x8eGaJG\x03\xe9J-\xd8\x96[\x91T\xc3\x0eTu\xf4\xaa\xa5Ty\x80\x01\x8c\x9f\xe9Z\xad\x8cg\xba# g\x18\xe2\xaa:\x829\x02\xb4["\x17Q\xe7\x801\xea?\xad7j\xfd\xa2\xdf\x81\xd2\x84D\xb6)\xa8\xcb\xc8O\\\x9a\xaf(\x1cqM\x98\x8d*\xb8\'h\xc8+\x8e:u\xaa\xf3*\x9b\x95\x05F8\xedN%\xcb\xe1B2\xa9~Tw\xedF\xc4\xfe\xe8\xfc\xa9\x983\xff\xd9... That ends up making it look like this (when I use print to debug): ... --myuniqueboundary1273149960.175.1 Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg" Content-Type: image/jpeg ????q?ExifMM* ? ??(1?2?<??i?b?NIKON CORPORATIONNIKON D40HHQuickTime 7.62009:02:17 13:05:25Mac OS X 10.5.6%??????"?'??0220?????? ???? ? ?|_???,b???50??5 ... --myuniqueboundary1273149960.175.1-- My code for grabbing the binary data is pretty much this: filedata = open('myjpegfile.jpeg','rb').read() Which I then add to the rest of the body. I've see something like this code everywhere. I'm then using this to post the full request (with the headers too): response = urllib2.urlopen(request).read() This seems to me to be the standard way that form POSTS with files happens. Am I missing something here? At some point I might be able to make this into a library worth posting up on github, but this problem has stopped me cold in my tracks. Thanks for any insight!
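    For comparison, a generic sketch of assembling a multipart body around binary file data; the boundary and field names are taken from the post, the endpoint URL is a placeholder, and the usual gotchas are CRLF line endings and an accurate Content-Length (this is not Shutterfly-specific):

        import urllib2

        boundary = 'myuniqueboundary1273149960.175.1'
        filedata = open('myjpegfile.jpeg', 'rb').read()

        crlf = '\r\n'
        parts = [
            '--' + boundary,
            'Content-Disposition: form-data; name="AuthenticationID"',
            '',
            'auniqueauthenticationid',
            '--' + boundary,
            'Content-Disposition: file; name="Image.Data"; filename="1_41_orig.jpg"',
            'Content-Type: image/jpeg',
            '',
            filedata,                       # raw bytes go in unmodified
            '--' + boundary + '--',
            '',
        ]
        body = crlf.join(parts)             # multipart bodies use CRLF, not bare \n

        request = urllib2.Request('https://example.com/upload', body)   # placeholder URL
        request.add_header('Content-Type', 'multipart/form-data; boundary=%s' % boundary)
        request.add_header('Content-Length', str(len(body)))
        # response = urllib2.urlopen(request).read()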

    Read the article

  • Creating a spam list with a web crawler in Python

    - by user313623
    Hey guys, I'm not trying to do anything malicious here, I just need to do some homework. I'm a fairly new programmer, I'm using python 3.0, and I'm having difficulty using recursion for problem-solving. I've been stuck on this question for quite a while. Here's the assignment: Write a recursive method spam(url, n) that takes a url of a web page as input and a non-negative integer n, collects all the email addresses contained in the web page and adds them to a global dictionary variable spam_dict, and then recursively calls itself on every http hyperlink contained in the web page. You will use a dictionary so only one copy of every email address is saved; your dictionary will store (key,value) pairs (email, email). The recursive call should use the parameter n-1 instead of n. If n = 0, you should collect the email addresses but no recursive calls should be made. The parameter n is used to limit the recursion to at most depth n. You will need to use the solutions of the two above problems; your method spam() will call the methods links2() and emails() and possibly other functions as well. Notes: 1. running spam() directly will produce no output on the screen; to find your spam_dict, you will need to read the value of spam_dict, and you will also need to reset it to the empty dictionary before every run of spam. 2. Recall how global variables are used. Usage: spam_dict = {} spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html',0) spam_dict.keys() dict_keys([]) spam_dict = {} spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html',1) spam_dict.keys() dict_keys(['[email protected]', '[email protected]']) So far, I've written a function that traverses web pages and puts all the links in a nice little list, and what I wanted to do was call that function. And why would I use recursion on a dictionary? And how? I don't understand how n ties into all of this. def links2(url): content = str(urlopen(url).read()) myparser = MyHTMLParser() myparser.feed(content) lst = myparser.get() mergelst = [] for link in lst: mergelst.append(urljoin(lst[0],link)) print(mergelst) Any input (except why spam is bad) would be greatly appreciated. Also, I realize that the above function could probably look better; if you have a way to do it, I'm all ears. However, all I need is for the program to produce the proper output.
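    A sketch of the recursive structure the assignment describes, assuming links2(url) returns the page's absolute http links and emails(url) returns the addresses found on the page (both are the helpers from the earlier parts of the assignment and are not shown here):

        # Assumed helpers from earlier parts of the assignment:
        #   links2(url)  -> list of absolute http links on the page
        #   emails(url)  -> list of email addresses found on the page

        spam_dict = {}

        def spam(url, n):
            for address in emails(url):
                spam_dict[address] = address   # dict keys keep one copy per address
            if n == 0:
                return                         # depth limit reached: collect, but do not recurse
            for link in links2(url):
                spam(link, n - 1)              # recurse one level shallower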

    Read the article

  • How to get HTML elements with Python lxml

    - by Damiano
    Hello! I have this html code: <table> <tr> <td class="test"><b><a href="">aaa</a></b></td> <td class="test">bbb</td> <td class="test">ccc</td> <td class="test"><small>ddd</small></td> </tr> <tr> <td class="test"><b><a href="">eee</a></b></td> <td class="test">fff</td> <td class="test">ggg</td> <td class="test"><small>hhh</small></td> </tr> </table> I use this Python code to extract all <td class="test"> with the lxml module. import urllib2 import lxml.html code = urllib2.urlopen("http://www.example.com/page.html").read() html = lxml.html.fromstring(code) result = html.xpath('//td[@class="test"][position() = 1 or position() = 4]') It works well! The result is: <td class="test"><b><a href="">aaa</a></b></td> <td class="test"><small>ddd</small></td> <td class="test"><b><a href="">eee</a></b></td> <td class="test"><small>hhh</small></td> (so the first and the fourth column of each <tr>) Now, I have to extract: aaa (the title of the link) ddd (the text inside the <small> tag) eee (the title of the link) hhh (the text inside the <small> tag) How could I extract these values? (the problem is that I have to remove the <b> tag and get the text of the anchor in the first column, and remove the <small> tag in the fourth column) Thank you!
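    One hedged approach: lxml elements have a text_content() method that returns an element's text with child tags such as <b>, <a> and <small> stripped, which is what the first and fourth columns need here. A sketch against the same page:

        import urllib2
        import lxml.html

        code = urllib2.urlopen("http://www.example.com/page.html").read()
        html = lxml.html.fromstring(code)
        for td in html.xpath('//td[@class="test"][position() = 1 or position() = 4]'):
            # text_content() flattens the cell: "aaa", "ddd", "eee", "hhh"
            print td.text_content().strip()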

    Read the article

  • How to solve a Python memory leak when using urllib2?

    - by b_m
    Hi, I'm trying to write a simple Python script for my mobile phone to periodically load a web page using urllib2. In fact I don't really care about the server response, I'd only like to pass some values in the URL to the PHP. The problem is that Python for S60 uses the old 2.5.4 Python core, which seems to have a memory leak in the urllib2 module. From what I read, there seem to be such problems in every type of network communication as well. This bug was reported here a couple of years ago, while some workarounds were posted as well. I've tried everything I could find on that page, and with the help of Google, but my phone still runs out of memory after ~70 page loads. Strangely the Garbage Collector does not seem to make any difference either, except making my script much slower. It is said that the newer (3.1) core solves this issue, but unfortunately I can't wait a year (or more) for the S60 port to come. Here's how my script looks after adding every little trick I've found: import urllib2, httplib, gc while True: url = "http://something.com/foo.php?parameter=" + value f = urllib2.urlopen(url) f.read(1) f.fp._sock.recv=None # hacky avoidance f.close() del f gc.collect() Any suggestions, how to make it work forever without getting the "cannot allocate memory" error? Thanks in advance, cheers, b_m update: I've managed to connect 92 times before it ran out of memory, but it's still not good enough. update2: Tried the socket method as suggested earlier, this is the second best (wrong) solution so far: class UpdateSocketThread(threading.Thread): def run(self): global data while 1: url = "/foo.php?parameter=%d"%data s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(('something.com', 80)) s.send('GET '+url+' HTTP/1.0\r\n\r\n') s.close() sleep(1) I tried the little tricks, from above too. The thread closes after ~50 uploads (the phone has 50MB of memory left, obviously the Python shell has not.) UPDATE: I think I'm getting closer to the solution! I tried sending multiple data without closing and reopening the socket. This may be the key since this method will only leave one open file descriptor. The problem is: import socket s=socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect(("something.com", 80)) s.send("test") #returns 4 (sent bytes, which is cool) s.send("test") #4 s.send("test") #4 s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n") #returns the number of sent bytes, ok s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n") #returns 0 on the phone, error on Windows7* s.send("GET /foo.php?parameter=bar HTTP/1.0\r\n\r\n") #returns 0 on the phone, error on Windows7* s.send("test") #returns 0, strange... *: error message: 10053, software caused connection abort Why can't I send multiple messages??
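    On the single-socket idea at the end: httplib can keep one connection open across requests as long as each response is read before the next one is sent. A sketch (not tested on S60) that opens exactly one file descriptor for many uploads:

        import httplib
        import time

        conn = httplib.HTTPConnection('something.com')   # one socket, reused
        try:
            for value in range(100):
                conn.request('GET', '/foo.php?parameter=%d' % value)
                response = conn.getresponse()
                response.read()      # drain the response before issuing the next request
                time.sleep(1)
        finally:
            conn.close()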

    Read the article

  • How to use Python and BeautifulSoup to print a timestamp/last-updated time (from the HTML) for each row?

    - by cesalo
    How can I use Python and BeautifulSoup to print a timestamp/last-updated time (taken from the HTML) for each row? Thanks a lot! Can I add to the printout (a) the date/time and (b) the last-updated time after each row? (a) date/time: the time at which the Python code is executed. (b) last updated time: taken from the HTML. HTML structure: one td containing two tables; each table has a few "tr", and within each "tr" a few "td" with data inside. HTML: <td> <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center"> <tbody> <tr> <td colspan="2" class="verd_black11">Last Updated: 18/08/2014 10:19</td> </tr> <tr> <td colspan="3" class="verd_black11">All data delayed at least 15 minutes</td> </tr> </tbody> </table> <table width="100%" border="4" cellspacing="0" bordercolor="white" align="center"> <tbody id="tbody"> <tr id="tr0" class="tableHdrB1" align="center"> <td align="centre">C Aug-14 - 15000</td> <td align="right"> - </td> <td align="right">5</td> <td align="right">9,904</td> </tr> </tbody> </table> </td> Code: import urllib2 from bs4 import BeautifulSoup contenturl = "HTML:" soup = BeautifulSoup(urllib2.urlopen(contenturl).read()) table = soup.find('tbody', attrs={'id': 'tbody'}) rows = table.findAll('tr') for tr in rows: cols = tr.findAll('td') for td in cols: t = td.find(text=True) if t: text = t + ';' print text, print Output from the above code: C Aug-14 - 15000 ; - ; 5 ; 9,904 Expected output: C Aug-14 - 15000 ; - ; 5 ; 9,904 ; 18/08/2014 ; 13:48:00 ; 18/08/2014 ; 10:19 (time the Python code is executed) (last updated time)
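    A sketch of one way to collect both times, assuming (as in the HTML above) that the first td with class verd_black11 holds the "Last Updated" text and that contenturl points at the real page:

        import urllib2
        from datetime import datetime
        from bs4 import BeautifulSoup

        contenturl = "http://example.com/page"   # placeholder, as in the snippet above
        soup = BeautifulSoup(urllib2.urlopen(contenturl).read())

        # (b) last-updated time scraped from the page
        updated_td = soup.find('td', {'class': 'verd_black11'})
        last_updated = updated_td.get_text().replace('Last Updated:', '').strip()

        # (a) time at which this script is executed
        run_time = datetime.now().strftime('%d/%m/%Y ; %H:%M:%S')

        for tr in soup.find('tbody', attrs={'id': 'tbody'}).findAll('tr'):
            cells = [td.get_text(strip=True) for td in tr.findAll('td')]
            print ' ; '.join(cells + [run_time, last_updated])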

    Read the article

  • Python/pip error on OS X

    - by Ibrahim Chawa
    I've recently purchased a new hard drive and installed a clean copy of OS X Mavericks. I installed python using homebrew and i need to create a python virtual environment. But when ever i try to run any command using pip, I get this error. I haven't been able to find a solution online for this problem. Any reference would be appreciated. Here is the error I'm getting. ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha256 ERROR:root:code for hash sha384 was not found. Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha384 ERROR:root:code for hash sha512 was not found. 
Traceback (most recent call last): File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha512 Traceback (most recent call last): File "/usr/local/bin/pip", line 9, in <module> load_entry_point('pip==1.5.6', 'console_scripts', 'pip')() File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 356, in load_entry_point File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2439, in load_entry_point File "build/bdist.macosx-10.9-x86_64/egg/pkg_resources.py", line 2155, in load File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/__init__.py", line 10, in <module> from pip.util import get_installed_distributions, get_prog File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/util.py", line 18, in <module> from pip._vendor.distlib import version File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/version.py", line 14, in <module> from .compat import string_types File "/usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pip-1.5.6-py2.7.egg/pip/_vendor/distlib/compat.py", line 31, in <module> from urllib2 import (Request, urlopen, URLError, HTTPError, ImportError: cannot import name HTTPSHandler If you need any extra information from me let me know, this is my first time posting a question here. Thanks.
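    The hashlib errors mean Python's _hashlib C extension (the OpenSSL wrapper behind md5/sha*) failed to import, and the final ImportError about HTTPSHandler points the same way, at a missing SSL linkage rather than at pip itself. A hedged two-line check, run with the same interpreter pip uses:

        # Both imports should succeed; if either fails, this Python build cannot
        # see a usable OpenSSL and typically needs to be reinstalled against one.
        import _hashlib   # backs hashlib's md5/sha* constructors
        import ssl        # needed for urllib2's HTTPSHandler, which pip imports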

    Read the article

  • Python and mechanize login script

    - by Perun
    Hi fellow programmers! I am trying to write a script to login into my universities "food balance" page using python and the mechanize module... This is the page I am trying to log into: http://www.wcu.edu/11407.asp The website has the following form to login: <FORM method=post action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx><INPUT value=https://cf.wcu.edu/busafrs/catcard/idsearch.cfm type=hidden name=wcuirs_uri> <P><B>WCU ID Number<BR></B><INPUT maxLength=12 size=12 type=password name=id> </P> <P><B>PIN<BR></B><INPUT maxLength=20 type=password name=PIN> </P> <P></P> <P><INPUT value="Request Access" type=submit name=submit> </P></FORM> From this we know that I need to fill in the following fields: 1. name=id 2. name=PIN With the action: action=https://itapp.wcu.edu/BanAuthRedirector/Default.aspx This is the script I have written thus far: #!/usr/bin/python2 -W ignore import mechanize, cookielib from time import sleep url = 'http://www.wcu.edu/11407.asp' myId = '11111111111' myPin = '22222222222' # Browser #br = mechanize.Browser() #br = mechanize.Browser(factory=mechanize.DefaultFactory(i_want_broken_xhtml_support=True)) br = mechanize.Browser(factory=mechanize.RobustFactory()) # Use this because of bad html tags in the html... # Cookie Jar cj = cookielib.LWPCookieJar() br.set_cookiejar(cj) # Browser options br.set_handle_equiv(True) br.set_handle_gzip(True) br.set_handle_redirect(True) br.set_handle_referer(True) br.set_handle_robots(False) # Follows refresh 0 but not hangs on refresh > 0 br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1) # User-Agent (fake agent to google-chrome linux x86_64) br.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] # The site we will navigate into br.open(url) # Go though all the forms (for debugging only) for f in br.forms(): print f # Select the first (index two) form br.select_form(nr=2) # User credentials br.form['id'] = myId br.form['PIN'] = myPin br.form.action = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' # Login br.submit() # Wait 10 seconds sleep(10) # Save to a file f = file('mycatpage.html', 'w') f.write(br.response().read()) f.close() Now the problem... For some odd reason the page I get back (in mycatpage.html) is the login page and not the expected page that displays my "cat cash balance" and "number of block meals" left... Does anyone have any idea why? Keep in mind that everything is correct with the header files and while the id and pass are not really 111111111 and 222222222, the correct values do work with the website (using a browser...) 
Thanks in advance EDIT Another script I tried: from urllib import urlopen, urlencode import urllib2 import httplib url = 'https://itapp.wcu.edu/BanAuthRedirector/Default.aspx' myId = 'xxxxxxxx' myPin = 'xxxxxxxx' data = { 'id':myId, 'PIN':myPin, 'submit':'Request Access', 'wcuirs_uri':'https://cf.wcu.edu/busafrs/catcard/idsearch.cfm' } opener = urllib2.build_opener() opener.addheaders = [('User-agent','Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11'), ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'), ('Accept-Encoding', 'gzip,deflate,sdch'), ('Accept-Language', 'en-US,en;q=0.8'), ('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.3')] request = urllib2.Request(url, urlencode(data)) open("mycatpage.html", 'w').write(opener.open(request)) This has the same behavior...
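    A cheap check after br.submit() is to look at where mechanize actually landed and what came back, before writing the file; a sketch using calls that exist on mechanize.Browser:

        # ... immediately after br.submit() in the first script above
        print 'Landed on:', br.geturl()    # the catcard page, or still the login page?
        print br.title()                   # quick sanity check of which page came back
        print br.response().info()         # response headers: look for Set-Cookie / Location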

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some Code headers = {} headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0' headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' headers['Accept-Language'] = 'en-gb,en;q=0.5' #headers['Accept-Encoding'] = 'gzip, deflate' request = urllib.request.Request(sURL, headers = headers) try: response = urllib.request.urlopen(request) except error.HTTPError as e: print('The server couldn\'t fulfill the request.') print('Error code: {0}'.format(e.code)) except error.URLError as e: print('We failed to reach a server.') print('Reason: {0}'.format(e.reason)) else: f = open('output/{0}.html'.format(sFileName),'w') f.write(response.read().decode('utf-8')) A url http://groupon.cl/descuentos/santiago-centro The situation Here's what I did: enable javascript in browser open url above and keep an eye on the console disable javascript repeat step 2 use urllib2 to grab the webpage and save it to a file enable javascript open the file with browser and observe console repeat 7 with javascript off results In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising Since the page should be seo friendly it should work fine without js. in step 4 nothing happens in the console and the skeleton page loads pre-populated rendering the ajax unnecessary. This is also completely not confusing in step 7 the ajax calls are made but fail. this is also ok since the urls they are using are not local, the calls are thus broken. The page looks like the skeleton. This is also great and expected. in step 8: no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like in step 4 question What I want to do is use urllib2 to grab the html from step 4 but I cant figure out how. What am I missing and how could I pull this off? To paraphrase If I was writing a spider I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I dont want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The seo friendly site wants me to get what I want because that's what seo is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off js, navigate to the page and copy the html. I want to automate this. stuff I've tried I used wireshark to look at packet headers and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO stuff makes me think that this is pretty normal otherwise techniques such as hijax wouldn't be used. Here are the headers my browser sends: Host: groupon.cl User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-gb,en;q=0.5 Accept-Encoding: gzip, deflate Connection: keep-alive Here are the headers my script sends: Accept-Encoding: identity Host: groupon.cl Accept-Language: en-gb,en;q=0.5 Connection: close Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0 The differences are: my script has Connection = close instead of keep-alive. 
I can't see how this would cause a problem my script has Accept-encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user-agent though. If I change encoding to match the browser request headers then I have trouble decoding it. I'm working on this now... watch this space, I'll update the question as new info comes up
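    If matching the browser's Accept-Encoding header is the next experiment, the gzipped body can be decompressed by hand after reading it; a sketch for the Python 3 urllib.request code above (note the User-Agent value here is just the value, without a leading "User-Agent: "):

        import gzip
        import io
        import urllib.request

        headers = {
            'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-gb,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate',   # now matches the browser request
        }
        request = urllib.request.Request('http://groupon.cl/descuentos/santiago-centro',
                                         headers=headers)
        response = urllib.request.urlopen(request)
        body = response.read()
        if response.headers.get('Content-Encoding') == 'gzip':
            body = gzip.GzipFile(fileobj=io.BytesIO(body)).read()   # undo the compression
        html = body.decode('utf-8')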

    Read the article

  • Why do I get a connection error / timeout when using Python suds to connect to Microsoft CRM?

    - by Chris R
    When I try to connect to an MS CRM web service using suds/python-ntlm, I am getting a timeout on requests. However, the code that I'm trying to replace -- which calls out to the cURL command line app to do the same call -- succeeds. Clearly something is different in the way that cURL is sending the command data, but I'll be damned if I know what the difference is. Below are the full details of the various calls. Anyone got any tips? Here's the code that is making the request, followed by the output. The cURL command code is below that, and its response follows. Hosts, users, and passwords have been changed to protect the innocent, of course. wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL' username = r'domain\user.name' password = 'userpass' from suds.transport.https import WindowsHttpAuthenticated from suds.client import Client import logging logging.basicConfig(level=logging.INFO) logging.getLogger('suds.client').setLevel(logging.DEBUG) logging.getLogger('suds.transport').setLevel(logging.DEBUG) ntlmTransport = WindowsHttpAuthenticated(username=username, password=password) metadata_client = Client(wsdl_url, transport=ntlmTransport) request = metadata_client.factory.create('RetrieveAttributeRequest') request.MetadataId = '00000000-0000-0000-0000-000000000000' request.EntityLogicalName = 'opportunity' request.LogicalName = 'new_typeofcontact' request.RetrieveAsIfPublished = 'false' attr = metadata_client.service.Execute(request) print attr Here's the output: DEBUG:suds.client:sending to (http://client.service.host/MSCrmServices/2007/MetadataService.asmx) message: <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <ns0:Body> <ns1:Execute> <ns1:Request xsi:type="ns1:RetrieveAttributeRequest"> <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId> <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName> <ns1:LogicalName>new_typeofcontact</ns1:LogicalName> <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished> </ns1:Request> </ns1:Execute> </ns0:Body> </SOAP-ENV:Envelope> DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml'} DEBUG:suds.transport.http:sending: URL:http://client.service.host/MSCrmServices/2007/MetadataService.asmx HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml', 'Content-type': 'text/xml', 'Soapaction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"'} MESSAGE: <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Header/> <ns0:Body> <ns1:Execute> <ns1:Request xsi:type="ns1:RetrieveAttributeRequest"> <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId> <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName> <ns1:LogicalName>new_typeofcontact</ns1:LogicalName> <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished> </ns1:Request> </ns1:Execute> </ns0:Body> </SOAP-ENV:Envelope> ERROR: An unexpected error occurred while tokenizing input The following traceback may be corrupted or invalid The error message is: ('EOF in 
multi-line statement', (16, 0)) --------------------------------------------------------------------------- URLError Traceback (most recent call last) /Users/crose/projects/2366/crm/<ipython console> in <module>() /var/folders/nb/nbJAzxR1HbOppPcs6xO+dE+++TY/-Tmp-/python-67186icm.py in <module>() 19 request.LogicalName = 'new_typeofcontact' 20 request.RetrieveAsIfPublished = 'false' 21 ---> 22 attr = metadata_client.service.Execute(request) 23 print attr /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in __call__(self, *args, **kwargs) 537 return (500, e) 538 else: --> 539 return client.invoke(args, kwargs) 540 541 def faults(self): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in invoke(self, args, kwargs) 596 self.method.name, timer) 597 timer.start() --> 598 result = self.send(msg) 599 timer.stop() 600 metrics.log.debug( /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in send(self, msg) 621 request = Request(location, str(msg)) 622 request.headers = self.headers() --> 623 reply = transport.send(request) 624 if retxml: 625 result = reply.message /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/https.pyc in send(self, request) 62 def send(self, request): 63 self.addcredentials(request) ---> 64 return HttpTransport.send(self, request) 65 66 def addcredentials(self, request): /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in send(self, request) 75 request.headers.update(u2request.headers) 76 log.debug('sending:\n%s', request) ---> 77 fp = self.u2open(u2request) 78 self.getcookies(fp, u2request) 79 result = Reply(200, fp.headers.dict, fp.read()) /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in u2open(self, u2request) 116 return url.open(u2request) 117 else: --> 118 return url.open(u2request, timeout=tm) 119 120 def u2opener(self): /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in open(self, fullurl, data, timeout) 381 req = meth(req) 382 --> 383 response = self._open(req, data) 384 385 # post-process response /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _open(self, req, data) 399 protocol = req.get_type() 400 result = self._call_chain(self.handle_open, protocol, protocol + --> 401 '_open', req) 402 if result: 403 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _call_chain(self, chain, kind, meth_name, *args) 359 func = getattr(handler, meth_name) 360 --> 361 result = func(*args) 362 if result is not None: 363 return result /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in http_open(self, req) 1128 1129 def http_open(self, req): -> 1130 return self.do_open(httplib.HTTPConnection, req) 1131 1132 http_request = AbstractHTTPHandler.do_request_ /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in do_open(self, http_class, req) 1103 r = h.getresponse() 1104 except socket.error, err: # XXX what error? 
-> 1105 raise URLError(err) 1106 1107 # Pick apart the HTTPResponse object to get the addinfourl URLError: <urlopen error [Errno 60] Operation timed out> The cURL command is: /opt/local/bin/curl --ntlm -u "domain\user.name:userpass" -k -d @- -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.1)" -H "Connection: Keep-Alive" -H "Content-Type: text/xml; charset=utf-8" -H "SOAPAction: http://schemas.microsoft.com/crm/2007/WebServices/Execute" https://client.service.host/MSCrmServices/2007/MetadataService.asmx The data that is piped to that cURL command: <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Header> <CrmAuthenticationToken xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <AuthenticationType xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">0</AuthenticationType> <CrmTicket xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes"></CrmTicket> <OrganizationName xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">CMIFS</OrganizationName> <CallerId xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">00000000-0000-0000-0000-000000000000</CallerId> </CrmAuthenticationToken> </soap:Header> <soap:Body> <Execute xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <Request xsi:type="RetrieveAttributeRequest"> <MetadataId>00000000-0000-0000-0000-000000000000</MetadataId> <EntityLogicalName>opportunity</EntityLogicalName> <LogicalName>new_typeofcontact</LogicalName> <RetrieveAsIfPublished>false</RetrieveAsIfPublished> </Request> </Execute> </soap:Body> </soap:Envelope> Here's the response: <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <soap:Body> <ExecuteResponse xmlns="http://schemas.microsoft.com/crm/2007/WebServices"> <Response xsi:type="RetrieveAttributeResponse"> <AttributeMetadata xsi:type="PicklistAttributeMetadata"> <MetadataId>101346cf-a6af-4eb4-a4bf-9c3c6bbd6582</MetadataId> <SchemaName>New_TypeofContact</SchemaName> <LogicalName>new_typeofcontact</LogicalName> <EntityLogicalName>opportunity</EntityLogicalName> <AttributeType> <Value>Picklist</Value> </AttributeType> <!-- stuff here --> </AttributeMetadata> </Response> </ExecuteResponse> </soap:Body> </soap:Envelope>
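    One detail visible in the suds log above: the Execute call is being sent to http://client.service.host/... (port 80) even though the WSDL was fetched over https, while the working cURL command posts to the https endpoint. suds accepts a location= option that overrides the endpoint address taken from the WSDL; a hedged sketch of forcing the same https URL cURL uses:

        from suds.client import Client
        from suds.transport.https import WindowsHttpAuthenticated

        wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL'
        ntlm = WindowsHttpAuthenticated(username=r'domain\user.name', password='userpass')

        # location= overrides the service address found in the WSDL, so the request
        # goes over https like the cURL call instead of to port 80.
        metadata_client = Client(
            wsdl_url,
            transport=ntlm,
            location='https://client.service.host/MSCrmServices/2007/MetadataService.asmx')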

    Read the article
