Search Results

Search found 1194 results on 48 pages for 'curl'.

  • Extract a specific string from a curl'd result

    - by allentown
    Given this curl command:

        curl --user-agent "fogent" --silent -o page.html "http://www.google.com/search?q=insansiate"

    (The spelling is intentionally incorrect.) I want to grab the suggestion as my result. I want to either grep into the page.html file, perhaps with grep -oE, or pipe it straight from curl and never store a file. The result should be: 'instantiate'. I need only the word 'instantiate', or the phrase, whatever Google is auto-correcting to. Here is the basic HTML that is returned:

        <span class=spell style="color:#cc0000">Did you mean: </span><a href="/search?hl=en&amp;ie=UTF-8&amp;&amp;sa=X&amp;ei=VEMUTMDqGoOINraK3NwL&amp;ved=0CB0QBSgA&amp;q=instantiate&amp;spell=1"class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;<span class=std>Top 2 results shown</span>

    So perhaps I can extract from/to the string below, which I hope is unique enough to cover all my bases:

        class=spell><b><i>instantiate</i></b></a>&nbsp;&nbsp;

    I keep running into issues with greedy grep; perhaps I should run the page through an HTML prettify tool first to get a line break or fifty in there. I don't know of any simple way to do that in bash, which is what I would ideally like this to be in. I really don't want to deal with firing up Perl and making sure I have the correct module. Any suggestions? Thank you.
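
    A minimal sketch of one approach, assuming GNU grep and that the class=spell markup stays stable: grep -o isolates the anchor text up to the first closing tag, and sed strips the fixed prefix, so no temporary file or Perl module is needed.

        curl --user-agent "fogent" --silent "http://www.google.com/search?q=insansiate" \
          | grep -o 'class=spell><b><i>[^<]*' \
          | sed 's/.*<i>//'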

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C#, but I learned cURL via C++. My question is: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it. I believe that is what is causing me these problems. When I run this code it will quite literally time out and never call my header function or writer function. If I comment out the first two curlopt lines (the two proxy lines), my code runs with no problems. In Firefox I set the HTTP proxy and SOCKS host separately; they are different IPs and ports. How do I set the SOCKS part? The code below has the dummy proxy set, but I can't figure out the SOCKS part.

        static void Main(string[] args)
        {
            SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL);
            var curl = new Easy();
            {
                curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234");
                curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);
                curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup");
                curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1);
                curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5");
                curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf);
                curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data);
                curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf);
                curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw);
                curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0);
                curl.Perform();
                var sz = sw.ToString();
                var myrealip = sz.IndexOf("12.34.56.78") != -1;
            }
            //Console.WriteLine(sz);
            SeasideResearch.LibCurlNet.Curl.GlobalCleanup();
        }
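
    For comparison, a sketch of the same idea from the curl command line (the address is a placeholder): a SOCKS proxy is given either with --socks5 or as a socks5:// scheme on the proxy URL, which corresponds to setting CURLOPT_PROXY together with CURLPROXY_SOCKS5 in libcurl. It may also be worth trying the proxy string without the http:// scheme, since the proxy type is already given separately.

        # SOCKS5 proxy at 127.0.0.1:1080 (placeholder address)
        curl --socks5 127.0.0.1:1080 http://whatismyipaddress.com/ip-lookup

        # equivalent: put the scheme on the proxy URL instead of a separate type option
        curl --proxy socks5://127.0.0.1:1080 http://whatismyipaddress.com/ip-lookup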

  • Right way to access the Google Cloud Storage bucket via Public API

    - by SyBer
    I'm trying the following request to access the bucket by using curl, via the public API:

        curl -X POST -H 'Content-Type: image/jpeg' -d @xxx.jpeg \
          'https://www.googleapis.com/upload/storage/v1/b/clips.eyecam.com/o?uploadType=media&name=x.jpeg&key=XXX'

    with XXX being the key generated in the Public API. However, I'm getting an authorization failure:

        {
          "error": {
            "errors": [
              {
                "domain": "global",
                "reason": "required",
                "message": "Login Required",
                "locationType": "header",
                "location": "Authorization"
              }
            ],
            "code": 401,
            "message": "Login Required"
          }
        }

    It seems the request is incorrect and does not pass the authorization key. Any idea what the right form of the request would be?
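
    A hedged sketch of the authenticated form: in Google's APIs an API key only identifies the project, while the Authorization header the error asks for carries an OAuth 2.0 access token. Also, -d is meant for form data and mangles binary input, so --data-binary is the safer choice for a JPEG. The token below is a placeholder.

        # TOKEN must be a valid OAuth 2.0 access token for a suitable scope
        TOKEN="ya29.placeholder"
        curl -X POST \
          -H "Authorization: Bearer $TOKEN" \
          -H "Content-Type: image/jpeg" \
          --data-binary @xxx.jpeg \
          'https://www.googleapis.com/upload/storage/v1/b/clips.eyecam.com/o?uploadType=media&name=x.jpeg'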

  • Cannot connect to website - SSL handshaking fails

    - by ravenspoint
    So I cannot connect to certain websites. Just a few; most are OK. The one I really care about is paypal.com. I have done the usual things:

        - checked my /etc/hosts
        - flushed the DNS cache
        - checked the firewall
        - switched virus protection off and on
        - switched ad blocking off and on
        - pinged the sites

    Eventually, I decided to look at what curl is saying in detail:

        == Info: About to connect() to www.paypal.com port 443 (#0)
        == Info:   Trying 66.211.169.2...
        == Info: connected
        == Info: SSLv3, TLS handshake, Client hello (1):
        => Send SSL data, 110 bytes (0x6e)
        0000: 01 00 00 6a 03 01 4f 6c aa 8c 57 2b 3d 1e 74 64 ...j..Ol..W+=.td
        0010: c1 27 25 a5 3a 12 7f 3f 41 0a 17 15 2e c9 67 7c .'%.:.?A.....g|
        0020: b3 e1 f6 9a db a9 00 00 2a 00 39 00 38 00 35 00 ........*.9.8.5.
        0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
        0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
        0050: 03 00 ff 01 00 00 17 00 00 00 13 00 11 00 00 0e ................
        0060: 77 77 77 2e 70 61 79 70 61 6c 2e 63 6f 6d       www.paypal.com

    (It hangs here forever.) This looks to me like PayPal is refusing to reply to the first SSL handshake. I don't know much about SSL, but comparing this to the output from a site that works for me seems to make it obvious:

        == Info: About to connect() to www.cibc.com port 443 (#0)
        == Info:   Trying 159.231.80.200...
        == Info: connected
        == Info: SSLv3, TLS handshake, Client hello (1):
        => Send SSL data, 108 bytes (0x6c)
        0000: 01 00 00 68 03 01 4f 6c ad 6a 1f 67 d5 84 c4 4b ...h..Ol.j.g...K
        0010: 0d 49 ae d6 b9 5b c3 63 f9 48 aa 18 da 43 d1 32 .I...[.c.H...C.2
        0020: 47 ae 17 e5 cd e9 00 00 2a 00 39 00 38 00 35 00 G.......*.9.8.5.
        0030: 16 00 13 00 0a 00 33 00 32 00 2f 00 07 00 05 00 ......3.2./.....
        0040: 04 00 15 00 12 00 09 00 14 00 11 00 08 00 06 00 ................
        0050: 03 00 ff 01 00 00 15 00 00 00 11 00 0f 00 00 0c ................
        0060: 77 77 77 2e 63 69 62 63 2e 63 6f 6d             www.cibc.com
        == Info: SSLv3, TLS handshake, Server hello (2):
        <= Recv SSL data, 74 bytes (0x4a)
        0000: 02 00 00 46 03 01 00 00 58 cf 26 e2 e1 65 db 11 ...F....X.&..e..
        0010: bc 6f 26 7b 3b 6d eb 14 5f ad 47 dd 86 ea 4d a3 .o&{;m.._.G...M.
        0020: fb 9f b7 2a 54 3e 20 5f 6b 04 5a 12 38 64 5d 18 ...*T> _k.Z.8d].
        0030: 65 9e e9 cd 61 eb 91 c1 16 25 61 30 bb 08 2a 78 e...a....%a0..*x
        0040: b8 ee b8 7e f2 65 6a 00 04 00                   ...~.ej...
        == Info: SSLv3, TLS handshake, CERT (11):
        ... and so on, working nicely; eventually I get some nice HTML

    Now I am really stuck. This has been going on for five days, so I am pretty sure that the problem is not with PayPal. But what on my system could be interfering with the SSL handshaking done by curl with this particular site? I suppose I could be failing to offer any certificates that PayPal accepts, but wouldn't I get a reply telling me so, or at least giving an error?
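
    A couple of shell probes that can help localize this (a sketch, not a fix): if a raw TLS client gets a ServerHello where curl does not, the problem is in curl's SSL stack or its options; if neither gets a reply, something between this machine and that one site is eating the handshake.

        # does a bare TLS handshake get an answer at all?
        openssl s_client -connect www.paypal.com:443 -servername www.paypal.com

        # does forcing a specific protocol version change curl's behaviour?
        curl -v --tlsv1 https://www.paypal.com/
        curl -v --sslv3 https://www.paypal.com/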

  • libcurl - unable to download a file

    - by marmistrz
    I'm working on a program which will download lyrics from sites like AZLyrics. I'm using libcurl. Here is my code.

    lyricsDownloader.cpp:

        #include "lyricsDownloader.h"
        #include <curl/curl.h>
        #include <cstring>
        #include <iostream>

        #define DEBUG 1

        /////////////////////////////////////////////////////////////////////////////

        // this function is a static member function
        size_t lyricsDownloader::write_data_to_var(char *ptr, size_t size, size_t nmemb, void *userdata)
        {
            ostringstream * stream = (ostringstream*) userdata;
            size_t count = size * nmemb;
            stream->write(ptr, count);
            return count;
        }

        string AZLyricsDownloader::toProviderCode() const
        {
            /* this creates an url */
        }

        CURLcode AZLyricsDownloader::download()
        {
            CURL * handle;
            CURLcode err;
            ostringstream buff;

            handle = curl_easy_init();
            if (! handle) return static_cast<CURLcode>(-1);

            // set verbose if debug on
            curl_easy_setopt( handle, CURLOPT_VERBOSE, DEBUG );
            // set the download url to the generated one
            curl_easy_setopt( handle, CURLOPT_URL, toProviderCode().c_str() );
            curl_easy_setopt(handle, CURLOPT_WRITEDATA, &buff);
            curl_easy_setopt(handle, CURLOPT_WRITEFUNCTION, &AZLyricsDownloader::write_data_to_var);

            err = curl_easy_perform(handle);
            // The segfault should be somewhere here - after calling the function but before it ends

            cerr << "cleanup\n";
            curl_easy_cleanup(handle);
            // copy the contents to the text variable
            lyrics = buff.str();
            return err;
        }

    main.cpp:

        #include <QString>
        #include <QTextEdit>
        #include <iostream>
        #include "lyricsDownloader.h"

        int main(int argc, char *argv[])
        {
            AZLyricsDownloader dl(argv[1], argv[2]);
            dl.perform();
            QTextEdit qtexted(QString::fromStdString(dl.lyrics));
            cout << qPrintable(qtexted.toPlainText());
            return 0;
        }

    When running ./maelyrica Anthrax Madhouse I'm getting this logged from curl:

        * About to connect() to azlyrics.com port 80 (#0)
        *   Trying 174.142.163.250...
        * connected
        * Connected to azlyrics.com (174.142.163.250) port 80 (#0)
        > GET /lyrics/anthrax/madhouse.html HTTP/1.1
        Host: azlyrics.com
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.0.12
        < Date: Thu, 05 Jul 2012 16:59:21 GMT
        < Content-Type: text/html
        < Content-Length: 185
        < Connection: keep-alive
        < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html
        <
        Segmentation fault

    Strangely, the file is there. The same error is displayed when there's no such page (a redirect to the azlyrics.com main page). What am I doing wrong? Thanks in advance.

    EDIT: I made the data-writing function static, but this changes nothing. Even wget seems to have problems:

        $ wget http://www.azlyrics.com/lyrics/anthrax/madhouse.html
        --2012-07-06 10:36:05-- http://www.azlyrics.com/lyrics/anthrax/madhouse.html
        Resolving www.azlyrics.com... 174.142.163.250
        Connecting to www.azlyrics.com|174.142.163.250|:80... connected.
        HTTP request sent, awaiting response... No data received.
        Retrying.

    Why does opening the page in a browser work while wget/curl do not?

    EDIT2: After adding this:

        curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1);

    the log is:

        * About to connect() to azlyrics.com port 80 (#0)
        *   Trying 174.142.163.250...
        * connected
        * Connected to azlyrics.com (174.142.163.250) port 80 (#0)
        > GET /lyrics/anthrax/madhouse.html HTTP/1.1
        Host: azlyrics.com
        Accept: */*

        < HTTP/1.1 301 Moved Permanently
        < Server: nginx/1.0.12
        < Date: Fri, 06 Jul 2012 09:09:47 GMT
        < Content-Type: text/html
        < Content-Length: 185
        < Connection: keep-alive
        < Location: http://www.azlyrics.com/lyrics/anthrax/madhouse.html
        <
        * Ignoring the response-body
        * Connection #0 to host azlyrics.com left intact
        * Issue another request to this URL: 'http://www.azlyrics.com/lyrics/anthrax/madhouse.html'
        * About to connect() to www.azlyrics.com port 80 (#1)
        *   Trying 174.142.163.250...
        * connected
        * Connected to www.azlyrics.com (174.142.163.250) port 80 (#1)
        > GET /lyrics/anthrax/madhouse.html HTTP/1.1
        Host: www.azlyrics.com
        Accept: */*

        < HTTP/1.1 200 OK
        < Server: nginx/1.0.12
        < Date: Fri, 06 Jul 2012 09:09:47 GMT
        < Content-Type: text/html
        < Transfer-Encoding: chunked
        < Connection: keep-alive
        <
        Segmentation fault
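
    A quick shell-side check worth trying (a sketch): reproduce the exact request with the curl CLI, following redirects and sending a browser-like User-Agent. If this succeeds, the server side is fine and the crash is in the program's own write-callback path rather than in the transfer itself.

        curl -v -L -A "Mozilla/5.0 (X11; Linux x86_64)" \
          -o madhouse.html http://www.azlyrics.com/lyrics/anthrax/madhouse.html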

  • Test of ICMP block

    - by Marcos
    In my bash scripts I have been using something like this to test for online connectivity:

        until fping -u google.com; do
            echo "$0[$$] Network/DNS down?? $(date)" 1>&2 && sleep $(($RANDOM%(1 + ++trynum * 1) +1)).222
        done

    It halts in place, sleeping for growing random intervals, until it can ping google.com again. Problem: at some sites ICMP pings are blocked altogether, yet web pages are still reachable. What's a short way to test for this general case? Based on that test I would switch over to an HTTP-based check, such as the exit status of curl -s google.com >/dev/null, if that is a good one.
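
    A minimal sketch of the HTTP fallback idea: with -f, curl exits non-zero on both transport failures and HTTP errors, and --head keeps the probe cheap; the URL and timeout are just examples.

        until curl -sf --head --max-time 10 http://www.google.com/ >/dev/null; do
            echo "$0[$$] Network down? $(date)" 1>&2
            sleep $(( RANDOM % 30 + 1 ))
        done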

  • Where to store short strings (with my key) on the internet?

    - by Vi
    Is there a simple service for storing strings under a key of my choosing, usable by bots? Requirements:

        - simple command-line access; automatic posting allowed
        - no need to keep a session with the service alive
        - I choose the key (so pastebins fail)
        - no registration/authentication requirement (for simplicity)
        - the string should be kept for about a month

    I want something like:

        # store
        $ echo some_data_0x1299C0FF | store_my_string testtest2011

        # retrieve
        $ retrieve_my_string testtest2011
        some_data_0x1299C0FF

    Do you have ideas about what I could use for this? I can only think of using IRC somehow (channel topics, /whowas, ...), but that is too complex for this simple task. No security is needed: anyone can update my string. The task looks very simple, so I expect the solution to be similarly simple, something like a single curl call.
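
    To pin down the shape being asked for, a hedged sketch against a purely hypothetical key-value HTTP service (example-store.invalid is a placeholder, not a real endpoint):

        # store: PUT the string under a chosen key (hypothetical service)
        echo some_data_0x1299C0FF | curl -s -X PUT --data-binary @- \
            https://example-store.invalid/testtest2011

        # retrieve: GET it back by key
        curl -s https://example-store.invalid/testtest2011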

  • Install PHP 5.1.2, Requires: libcurl.so.3()(64bit) error

    - by Scott Rowley
    I'm trying to install PHP 5.1.2 on a CentOS 6 server (for grandfathering in old websites). I downloaded an RPM file (php-5.1.2-5.x86_64.rpm), but when I run:

        yum install php-5.1.2-5.x86_64.rpm

    I get the following error:

        Error: Package: php-5.1.2-5.x86_64 (/php-5.1.2-5.x86_64)
               Requires: libcurl.so.3()(64bit)

    I have tried several things, including the following:

        ln -s /usr/lib64/libcurl.so.4 /usr/lib64/libcurl.so.3

    (to symlink it to the newer version), and downloading curl-7.15.5-2.1.el5_3.5.x86_64.rpm, taking the libcurl.so.3 out of that RPM, and placing it in /usr/lib64/libcurl.so.3 with the same permissions as libcurl.so.4. Nothing has worked. Any ideas?
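
    Two shell checks that may narrow this down (a sketch; package availability depends on the configured repos). Note that neither a symlink nor a copied file satisfies yum by itself: the dependency is resolved against RPM "Provides" metadata, not against what happens to be on disk.

        # list exactly what the PHP package requires
        rpm -qpR php-5.1.2-5.x86_64.rpm | grep -i curl

        # ask yum whether any available package provides the old soname
        yum provides 'libcurl.so.3()(64bit)'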

  • How did Google get on my Mac?

    - by SamGoody
    I'm running a MacBook Pro, and have never installed Chrome, Google Earth, or anything blatantly Google. I just installed Little Snitch (are there no good free firewalls for Mac?) and see that curl is sending to Google every few minutes, as is a request to Google update and more. Little Snitch doesn't say what program set up these requests. So: how do I find out how Google got on my machine, why is it sending so many requests (every minute or so), and how do I remove it? (And is it there for any reason other than to help Google spy on me?)
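
    A few shell checks that usually reveal the source (a sketch; these are the customary Google Software Update locations on a Mac, and may not all exist on a given machine):

        # per-user and system-wide updater directories
        ls -la ~/Library/Google /Library/Google 2>/dev/null

        # launchd jobs that keep an updater alive
        launchctl list | grep -i google
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons 2>/dev/null | grep -i google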

  • Install Composer on Ubuntu

    - by Milos
    I am trying to install Composer with the command:

        sudo curl -s https://getcomposer.org/installer | php

    and I am getting this error:

        All settings correct for using Composer
        Downloading...

        Download failed: failed to open stream: Permission denied
        Downloading...

        Download failed: failed to open stream: Permission denied
        Downloading...

        Download failed: failed to open stream: Permission denied
        The download failed repeatedly, aborting.

    I don't know why. Do you have an idea? I tried to Google it but found nothing.
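
    One likely culprit worth testing (a guess, not a confirmed diagnosis): in that pipeline sudo applies only to curl, while php runs as the normal user and tries to write composer.phar into the current directory. If that directory isn't writable, a "Download failed ... Permission denied" error is exactly what appears. Running from a writable directory sidesteps it:

        # run the installer somewhere the current user can write
        cd /tmp
        curl -sS https://getcomposer.org/installer | php

        # optionally make it globally available
        sudo mv composer.phar /usr/local/bin/composer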

  • iPhone: UIWebView curl effect

    - by eshalev
    Hello, I would like to make a standard view container which will give me the page-curl animation effect on multiple views. Something like UIScrollView with paging, only with a different animation (curl). I will be using UIWebViews as my separate pages. The problem: I do not know how to trap swipes in a UIWebView, but I see that UIScrollView implements this (swiping a UIWebView inside a UIScrollView will bring me to the next view). I am therefore assuming that the implementation of UIScrollView is trapping the UIWebView's swipes. How can I achieve the same functionality? I also need the UIWebView to keep functioning (as it does when embedded in a UIScrollView).

  • CURL alternative - Design ideas

    - by Vincent
    All, I am looking for some web application design ideas here. I have a server X that hosts an SDK, which has the capacity to talk to a piece of hardware. When I make an HTTPS request from an external PHP web application (hosted on server Y) to server X through curl, server X returns JSON data as a response. I use this data to render the UI for the web app on server Y. The above method seems to be slow and has a tendency to fail in production when there are too many concurrent requests. Can anybody tell me if there is an alternative to cURL, or any other design people are using to pull data like this from servers? Thanks

  • Logging in to a website with cURL!

    - by uknowho_freeman
    I am using cURL for the first time. I need to log in to a site. My problem is with setting the cookie file and retrieving it, so that I can access the page not just once but several times. I found code on the web for logging in to a site and scraping a page for some detailed info (because fetching that page normally takes too much time). I just want to know if the code below is OK. (It only does the login; the code for scraping is not ready yet.)

        <?php
        curl_login('http://mywantedsite.com/login.php','user=******&pass=******','','off');
        echo curl_grab_page('http://mywantedsite.com/somepage.php','','off');

        function curl_login($url,$data,$proxy,$proxystatus){
            $fp = fopen("cookie.txt", "w");
            fclose($fp);
            $login = curl_init();
            curl_setopt($login, CURLOPT_COOKIEJAR, "cookie.txt");
            curl_setopt($login, CURLOPT_COOKIEFILE, "cookie.txt");
            curl_setopt($login, CURLOPT_USERAGENT, "Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)");
            curl_setopt($login, CURLOPT_TIMEOUT, 40);
            curl_setopt($login, CURLOPT_RETURNTRANSFER, TRUE);
            if ($proxystatus == 'on') {
                curl_setopt($login, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($login, CURLOPT_HTTPPROXYTUNNEL, TRUE);
                curl_setopt($login, CURLOPT_PROXY, $proxy);
            }
            curl_setopt($login, CURLOPT_URL, $url);
            curl_setopt($login, CURLOPT_HEADER, TRUE);
            curl_setopt($login, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
            curl_setopt($login, CURLOPT_FOLLOWLOCATION, TRUE);
            curl_setopt($login, CURLOPT_POST, TRUE);
            curl_setopt($login, CURLOPT_POSTFIELDS, $data);
            ob_start();                 // prevent any output
            return curl_exec ($login);  // execute the curl command
            ob_end_clean();             // stop preventing output
            curl_close ($login);
            unset($login);
        }

        function curl_grab_page($site,$proxy,$proxystatus){
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
            if ($proxystatus == 'on') {
                curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
                curl_setopt($ch, CURLOPT_HTTPPROXYTUNNEL, TRUE);
                curl_setopt($ch, CURLOPT_PROXY, $proxy);
            }
            curl_setopt($ch, CURLOPT_COOKIEFILE, "cookie.txt");
            curl_setopt($ch, CURLOPT_URL, $site);
            ob_start();              // prevent any output
            return curl_exec ($ch);  // execute the curl command
            ob_end_clean();          // stop preventing output
            curl_close ($ch);
        }
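
    For reference, the same cookie flow expressed with the curl CLI (a sketch; URLs and field names are taken from the question): -c writes the cookie jar at login time, -b sends it back on later requests. Note, incidentally, that in the PHP above the lines after each return statement (ob_end_clean, curl_close) are unreachable.

        # log in and save the session cookies
        curl -c cookie.txt -d 'user=******&pass=******' 'http://mywantedsite.com/login.php'

        # reuse the saved cookies on subsequent requests
        curl -b cookie.txt 'http://mywantedsite.com/somepage.php'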

  • Why do I get a connection error / timeout when using python suds to connect to Microsoft CRM?

    - by Chris R
    When I try to connect to an MS CRM web service using suds/python-ntlm, I am getting a timeout on requests. However, the code that I'm trying to replace (which calls out to the cURL command-line app to make the same call) succeeds. Clearly something is different in the way that cURL is sending the command data, but I'll be damned if I know what the difference is. Below are the full details of the various calls. Anyone got any tips? Here's the code that is making the request, followed by its output. The cURL command is below that, and its response follows. Hosts, users, and passwords have been changed to protect the innocent, of course.

        wsdl_url = 'https://client.service.host/MSCrmServices/2007/MetadataService.asmx?WSDL'
        username = r'domain\user.name'
        password = 'userpass'

        from suds.transport.https import WindowsHttpAuthenticated
        from suds.client import Client
        import logging

        logging.basicConfig(level=logging.INFO)
        logging.getLogger('suds.client').setLevel(logging.DEBUG)
        logging.getLogger('suds.transport').setLevel(logging.DEBUG)

        ntlmTransport = WindowsHttpAuthenticated(username=username, password=password)
        metadata_client = Client(wsdl_url, transport=ntlmTransport)

        request = metadata_client.factory.create('RetrieveAttributeRequest')
        request.MetadataId = '00000000-0000-0000-0000-000000000000'
        request.EntityLogicalName = 'opportunity'
        request.LogicalName = 'new_typeofcontact'
        request.RetrieveAsIfPublished = 'false'

        attr = metadata_client.service.Execute(request)
        print attr

    Here's the output:

        DEBUG:suds.client:sending to (http://client.service.host/MSCrmServices/2007/MetadataService.asmx)
        message:
        <SOAP-ENV:Envelope xmlns:ns0="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://schemas.microsoft.com/crm/2007/WebServices" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
           <SOAP-ENV:Header/>
           <ns0:Body>
              <ns1:Execute>
                 <ns1:Request xsi:type="ns1:RetrieveAttributeRequest">
                    <ns1:MetadataId>00000000-0000-0000-0000-000000000000</ns1:MetadataId>
                    <ns1:EntityLogicalName>opportunity</ns1:EntityLogicalName>
                    <ns1:LogicalName>new_typeofcontact</ns1:LogicalName>
                    <ns1:RetrieveAsIfPublished>false</ns1:RetrieveAsIfPublished>
                 </ns1:Request>
              </ns1:Execute>
           </ns0:Body>
        </SOAP-ENV:Envelope>
        DEBUG:suds.client:headers = {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml'}
        DEBUG:suds.transport.http:sending:
        URL:http://client.service.host/MSCrmServices/2007/MetadataService.asmx
        HEADERS: {'SOAPAction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"', 'Content-Type': 'text/xml', 'Content-type': 'text/xml', 'Soapaction': u'"http://schemas.microsoft.com/crm/2007/WebServices/Execute"'}
        MESSAGE: (the same SOAP envelope as above)

        ERROR: An unexpected error occurred while tokenizing input
        The following traceback may be corrupted or invalid
        The error message is: ('EOF in multi-line statement', (16, 0))

        ---------------------------------------------------------------------------
        URLError                                  Traceback (most recent call last)
        /Users/crose/projects/2366/crm/<ipython console> in <module>()

        /var/folders/nb/nbJAzxR1HbOppPcs6xO+dE+++TY/-Tmp-/python-67186icm.py in <module>()
             19 request.LogicalName = 'new_typeofcontact'
             20 request.RetrieveAsIfPublished = 'false'
             21
        ---> 22 attr = metadata_client.service.Execute(request)
             23 print attr

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in __call__(self, *args, **kwargs)
            537             return (500, e)
            538         else:
        --> 539             return client.invoke(args, kwargs)
            540
            541     def faults(self):

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in invoke(self, args, kwargs)
            596             self.method.name, timer)
            597         timer.start()
        --> 598         result = self.send(msg)
            599         timer.stop()
            600         metrics.log.debug(

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/client.pyc in send(self, msg)
            621         request = Request(location, str(msg))
            622         request.headers = self.headers()
        --> 623         reply = transport.send(request)
            624         if retxml:
            625             result = reply.message

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/https.pyc in send(self, request)
             62     def send(self, request):
             63         self.addcredentials(request)
        ---> 64         return HttpTransport.send(self, request)
             65
             66     def addcredentials(self, request):

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in send(self, request)
             75             request.headers.update(u2request.headers)
             76             log.debug('sending:\n%s', request)
        ---> 77             fp = self.u2open(u2request)
             78             self.getcookies(fp, u2request)
             79             result = Reply(200, fp.headers.dict, fp.read())

        /Users/crose/virtualenv/advanis/lib/python2.6/site-packages/suds/transport/http.pyc in u2open(self, u2request)
            116             return url.open(u2request)
            117         else:
        --> 118             return url.open(u2request, timeout=tm)
            119
            120     def u2opener(self):

        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in open(self, fullurl, data, timeout)
            381             req = meth(req)
            382
        --> 383         response = self._open(req, data)
            384
            385         # post-process response

        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _open(self, req, data)
            399         protocol = req.get_type()
            400         result = self._call_chain(self.handle_open, protocol, protocol +
        --> 401                                   '_open', req)
            402         if result:
            403             return result

        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in _call_chain(self, chain, kind, meth_name, *args)
            359             func = getattr(handler, meth_name)
            360
        --> 361             result = func(*args)
            362             if result is not None:
            363                 return result

        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in http_open(self, req)
           1128
           1129     def http_open(self, req):
        -> 1130         return self.do_open(httplib.HTTPConnection, req)
           1131
           1132     http_request = AbstractHTTPHandler.do_request_

        /System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib2.pyc in do_open(self, http_class, req)
           1103                 r = h.getresponse()
           1104             except socket.error, err: # XXX what error?
        -> 1105                 raise URLError(err)
           1106
           1107         # Pick apart the HTTPResponse object to get the addinfourl

        URLError: <urlopen error [Errno 60] Operation timed out>

    The cURL command is:

        /opt/local/bin/curl --ntlm -u "domain\user.name:userpass" -k -d @- \
          -A "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; InfoPath.1)" \
          -H "Connection: Keep-Alive" \
          -H "Content-Type: text/xml; charset=utf-8" \
          -H "SOAPAction: http://schemas.microsoft.com/crm/2007/WebServices/Execute" \
          https://client.service.host/MSCrmServices/2007/MetadataService.asmx

    The data that is piped to that cURL command:

        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Header>
            <CrmAuthenticationToken xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <AuthenticationType xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">0</AuthenticationType>
              <CrmTicket xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes"></CrmTicket>
              <OrganizationName xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">CMIFS</OrganizationName>
              <CallerId xmlns="http://schemas.microsoft.com/crm/2007/CoreTypes">00000000-0000-0000-0000-000000000000</CallerId>
            </CrmAuthenticationToken>
          </soap:Header>
          <soap:Body>
            <Execute xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Request xsi:type="RetrieveAttributeRequest">
                <MetadataId>00000000-0000-0000-0000-000000000000</MetadataId>
                <EntityLogicalName>opportunity</EntityLogicalName>
                <LogicalName>new_typeofcontact</LogicalName>
                <RetrieveAsIfPublished>false</RetrieveAsIfPublished>
              </Request>
            </Execute>
          </soap:Body>
        </soap:Envelope>

    Here's the response:

        <?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <soap:Body>
            <ExecuteResponse xmlns="http://schemas.microsoft.com/crm/2007/WebServices">
              <Response xsi:type="RetrieveAttributeResponse">
                <AttributeMetadata xsi:type="PicklistAttributeMetadata">
                  <MetadataId>101346cf-a6af-4eb4-a4bf-9c3c6bbd6582</MetadataId>
                  <SchemaName>New_TypeofContact</SchemaName>
                  <LogicalName>new_typeofcontact</LogicalName>
                  <EntityLogicalName>opportunity</EntityLogicalName>
                  <AttributeType>
                    <Value>Picklist</Value>
                  </AttributeType>
                  <!-- stuff here -->
                </AttributeMetadata>
              </Response>
            </ExecuteResponse>
          </soap:Body>
        </soap:Envelope>

  • Login to website using PHP and get text from page

    - by Anthony Garand
    I am trying to log in to a website and grab content from a page you must be authenticated to see. I have done some research and have seen examples using both cURL and stream_context_create, but I cannot get either way to work. I have the URL of the page to log in to, and of the page that contains the data I need to get. Your help is much appreciated! Here's what I'm working with:

        <?php
        $pages = array('home'  => 'https://www.53.com/wps/portal/personal',
                       'login' => 'https://www.53.com/wps/portal/personal',
                       'data'  => 'https://www.53.com/servlet/efsonline/index.html?Messages.SortedBy=DATE,REVERSE');

        $ch = curl_init();

        // Set options for curl session
        $options = array(CURLOPT_USERAGENT      => 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)',
                         CURLOPT_SSL_VERIFYPEER => FALSE,
                         CURLOPT_SSL_VERIFYHOST => 2,
                         CURLOPT_HEADER         => TRUE,
                         //CURLOPT_RETURNTRANSFER => TRUE,
                         CURLOPT_COOKIEFILE     => 'cookie.txt',
                         CURLOPT_COOKIEJAR      => 'cookies.txt');

        // Hit home page for session cookie
        $options[CURLOPT_URL] = $pages['home'];
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        // Login
        $options[CURLOPT_URL] = $pages['login'];
        $options[CURLOPT_POST] = TRUE;
        $options[CURLOPT_POSTFIELDS] = 'uid-input=xxx&pw=xxx';
        $options[CURLOPT_FOLLOWLOCATION] = FALSE;
        curl_setopt_array($ch, $options);
        curl_exec($ch);

        // Hit data page
        $options[CURLOPT_URL] = $pages['data'];
        curl_setopt_array($ch, $options);
        $data = curl_exec($ch);

        // Output data
        echo $data;

        // Close curl session
        curl_close($ch);
        ?>

    Cheers, Anthony

  • How to get the contents of a site that uses HTTPS

    - by cashmoney
    An example of a site using SSL (HTTPS) is https://www.eb2a.com.

    1 - I tried to get its content using file_get_contents, but it does not work and gives an error:

        <?php
        $contents = file_get_contents("https://www.eb2a.com/");
        echo $contents;
        ?>

    2 - I tried to use fopen, but it does not work and gives an error:

        <?php
        $url = 'https://www.eb2a.com/';
        $contents = fopen($url, 'r');
        echo "$contents";
        ?>

    3 - I tried to use cURL, but it does not work and gives a BLANK PAGE:

        function cURL($url, $ref, $header, $cookie, $p){
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
            curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
            curl_setopt($ch, CURLOPT_REFERER, $ref);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
            if ($p) {
                curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
                curl_setopt($ch, CURLOPT_POST, 1);
                curl_setopt($ch, CURLOPT_POSTFIELDS, $p);
            }
            $result = curl_exec($ch);
            curl_close($ch);
            if ($result){
                return $result;
            }else{
                return '';
            }
        }

        $file = cURL('https://www.eb2a.com/','https://www.eb2a.com/',0,0,null);
        echo $file;

    Does anyone have any idea?
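
    Before debugging the code itself, it's worth confirming from the shell that this PHP build can speak HTTPS at all (a sketch): file_get_contents and fopen over https:// require the openssl extension, and the registered stream wrappers show whether it is enabled.

        # is the openssl extension loaded?
        php -m | grep -i openssl

        # is the https stream wrapper registered?
        php -r 'print_r(stream_get_wrappers());' | grep -i https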

  • Strange stream of HTTP GET requests in Apache logs, from Amazon EC2 instances

    - by Alexandre Boeglin
    I just had a look at my Apache logs, and I see a lot of very similar requests:

        GET / HTTP/1.1
        User-Agent: curl/7.24.0 (i386-redhat-linux-gnu) libcurl/7.24.0 \
            NSS/3.13.5.0 zlib/1.2.5 libidn/1.18 libssh2/1.2.2
        Host: [my_domain].org
        Accept: */*

    There's a steady stream of those, about 2 or 3 per minute; they all request the same domain and resource (there are slight variations in user-agent version numbers), and they come from a lot of different IPv4 and IPv6 addresses, in blocks that belong to Amazon EC2 (in Singapore, Japan, Ireland and the USA). I tried to look for an explanation online, or even just similar stories, but couldn't find any. Has anyone got a clue as to what this is? It doesn't look malicious per se, but it's annoying me, and I couldn't find any more information about it. I first suspected it could be a bot checking if my server is still up, but:

        - I don't remember subscribing to such a service;
        - why would it need to check my site twice every minute;
        - why doesn't it use a clearly identifying FQDN?

    Or should I send this question to Amazon, via their abuse contact? Thanks!
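
    A quick way to quantify the traffic and its sources, sketched for a combined-format log (the log path varies by distribution):

        # requests per client IP for this user agent, busiest first
        grep 'curl/7.24.0' /var/log/apache2/access.log \
            | awk '{print $1}' | sort | uniq -c | sort -rn | head

        # reverse-resolve the top sources to confirm they are EC2
        grep 'curl/7.24.0' /var/log/apache2/access.log \
            | awk '{print $1}' | sort -u | head | xargs -n1 host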

  • Using PHP and cURL to login to indyarocks.com

    - by Divya
    I am new to cURL and don't know much about it. I basically want to log in to my account on www.indyarocks.com through libcurl for PHP. I don't know what type of authentication it uses (or how to find that out). When I go to http://www.indyarocks.com, I get a login form which asks for my username and password. I put in my username and password, click login, and everything is good. I tried to automate this using cURL. This is a snippet of my code:

        curl_setopt($curl_connection, CURLOPT_URL, "http://www.indyarocks.com/loginchk.php");
        curl_setopt($curl_connection, CURLOPT_POST, 1);
        curl_setopt($curl_connection, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        curl_setopt($curl_connection, CURLOPT_USERPWD, $username.':'.$password);

    I looked at the source of the login page and found the address the username and password are sent to (the action attribute of the form), which is http://www.indyarocks.com/loginchk.php, and set it as the target URL. When I run this, I get a "username or password is wrong" error and the login fails. My username and password are correct. I don't know what the problem is. Could the password be encrypted? Could that be responsible for this failure? Please help me get around this problem. I'll be really thankful. Thanks in advance.
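
    One detail worth checking (a guess): CURLOPT_USERPWD sends HTTP Basic/Digest credentials in an Authorization header, whereas a login form like this normally expects the credentials as POST fields named after the form's inputs. The difference, sketched with the curl CLI (the field names here are hypothetical; the real ones are in the form's HTML):

        # HTTP auth (what CURLOPT_USERPWD does; wrong for a form login)
        curl --user 'username:password' http://www.indyarocks.com/loginchk.php

        # form POST (field names must match the form's input names)
        curl -c cookie.txt -d 'username=xxx&password=xxx' http://www.indyarocks.com/loginchk.php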

  • How do I unescape HTML entities in a string in Python 3.1?

    - by Sho Minamimoto
    I have looked all around and only found solutions for Python 2.6 and earlier, and nothing on how to do this in Python 3.x. (I only have access to a Win7 box.) I have to be able to do this in 3.1, preferably without external libraries. Currently, I have httplib2 installed and access to command-prompt curl (that's how I'm getting the source code for pages). Unfortunately, curl does not decode HTML entities; as far as I can tell, there is no command to decode them in its documentation. Yes, I've tried to get Beautiful Soup to work, many times, without success in 3.x. If you could provide explicit instructions on how to get this working with Python 3 in an MS Windows environment, I would be very grateful. So, to be clear, I need to turn a string like this: "Suzy &amp; John" into a string like this: "Suzy & John".
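
    A hedged sketch that stays inside the existing curl pipeline (the URL is a placeholder): the standard library's html.parser module ships an unescape method on HTMLParser (present, though undocumented, in early 3.x releases) that handles named and numeric entities, so no external library is needed.

        curl --silent "http://www.example.com/page.html" | python -c "import sys; from html.parser import HTMLParser; sys.stdout.write(HTMLParser().unescape(sys.stdin.read()))"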

  • "Microsoft DNS Client" vs. getaddrinfo?

    - by Josh K
    Right now, my application is using the c-ares asynchronous DNS resolver library on Windows, underneath cURL, and I have users complaining that it behaves differently from other Windows apps. One particular user said that "other applications are using the Microsoft DNS client" and experiences no problems. cURL itself has an asynchronous DNS implementation that calls getaddrinfo() in a thread. My guess is that this would be equivalent in behavior to using the "DNS Client" service and its host of functions (e.g. DnsQuery?). So, dear Lazyweb, I ask: is there a tangible difference between the behavior of getaddrinfo() and using the actual Dns* APIs from the Win32 API?

  • "Use of undefined constant CURLOPT_PROTOCOLS and CURLPROTO_HTTP" but it works?

    - by Dave
    Hi - in our dev environment we show all errors, warnings and notices, and I'm getting these:

        Notice: Use of undefined constant CURLOPT_PROTOCOLS - assumed 'CURLOPT_PROTOCOLS' in C:\notion\implementation\development\asterix\library\ExternalLibs\panda.php on line 69

        Notice: Use of undefined constant CURLPROTO_HTTP - assumed 'CURLPROTO_HTTP' in C:\notion\implementation\development\asterix\library\ExternalLibs\panda.php on line 69

    The code on line 69 is:

        curl_setopt($curl, CURLOPT_PROTOCOLS, CURLPROTO_HTTP);

    But the cURL code works; it goes off to the other server and retrieves what's necessary. What do these notices mean? Thanks very much.
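
    A plausible reading to verify (a sketch): CURLOPT_PROTOCOLS and CURLPROTO_HTTP only exist when PHP's curl extension is new enough (the PHP docs list them as available from PHP 5.2.10, against libcurl 7.19.4+). On an older build the names are undefined, PHP substitutes the bare strings, and the curl_setopt call is effectively a no-op. The transfer "works" because the protocol restriction was never applied, not because the option took effect.

        # check the PHP version and whether the constant really exists
        php -v
        php -r '$v = curl_version(); var_dump(defined("CURLOPT_PROTOCOLS"), $v["version"]);'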

  • Translate a webpage in PHP

    - by Rob
    I'm looking to translate a webpage in PHP 5 so I can save the translation and make it easily accessible via mydomain.com/lang/fr/category/article.html, rather than users having to go through Google Translate. I've found various easy ways to translate text via cURL; however, what I'd really like is to translate an entire webpage while ignoring the tags. The problem is that Google Translate messes up all the HTML tags, class names, etc. Does anyone know of a PHP class that can translate an entire webpage whilst ignoring the tags? I'm guessing it may be possible via advanced regular expressions or something like that, but I'm not sure. I can't just curl Google's response, as I'd get all the extra JS that they put in. Any ideas?

  • Can you use PHP libcurl to pull files from server A to server B?

    - by Majid
    Hi. cURL and libcurl let you do things you normally do with a browser, right? Like having a genie sit inside a server, fire up a browser, and do stuff. OK then: I need a script that sits on server B, clicks download links on server A, and downloads files (from server A to server B). I am new to curl, and not sure if all I need is to issue a simple GET or something else. I know that a Content-Disposition header forces a browser to save the document. Does it have the same effect on libcurl too? If it does, I'll make my serving script on server A send that header and then serve the file. Any advice is appreciated. Thanks
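
    A minimal sketch of the idea, run on server B (host and paths are placeholders): curl doesn't need a Content-Disposition header to save anything. With -o it writes the response body straight to the path you give it, and with -O it reuses the remote file name.

        # run on server B: fetch a file served by server A
        curl -sS -o /var/files/report.pdf https://server-a.example.com/downloads/report.pdf

        # or keep the remote filename in the current directory
        curl -sS -O https://server-a.example.com/downloads/report.pdf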
