Search Results

Search found 481 results on 20 pages for 'ceiling gecko'.


  • How to capture a live stream from Windows Media Server 2008 using C#.NET

    - by Hummad Hassan
    I want to capture a live stream from Windows Media Server to the filesystem on my PC. I have tried it against my own media server with the code below, but when I check the output file, it contains only this ASX-style reference list instead of video data:

        [Reference]
        Ref1=http://mywindowsmediaserver/test?MSWMExt=.asf
        Ref2=http://mywindowsmediaserver/test?MSWMExt=.asf

    Please help me with this. Thanks. The code:

        FileStream fs = null;
        try
        {
            HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://mywmsserver/test");
            CookieContainer ci = new CookieContainer(1000);
            req.Timeout = 60000;
            req.Method = "Get";
            req.KeepAlive = true;
            req.MaximumAutomaticRedirections = 99;
            req.UseDefaultCredentials = true;
            req.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3";
            req.ReadWriteTimeout = 90000000;
            req.CookieContainer = ci;
            //req.MediaType = "video/x-ms-asf";
            req.AllowWriteStreamBuffering = true;
            HttpWebResponse resp = (HttpWebResponse)req.GetResponse();
            Stream resps = resp.GetResponseStream();
            fs = new FileStream("d:\\dump.wmv", FileMode.Create, FileAccess.ReadWrite);
            byte[] buffer = new byte[1024];
            int bytesRead = 0;
            while ((bytesRead = resps.Read(buffer, 0, buffer.Length)) > 0)
            {
                fs.Write(buffer, 0, bytesRead);
            }
        }
        catch (Exception ex) { }
        finally
        {
            if (fs != null) fs.Close();
        }

  • [Javascript] Linux Ajax (mootools Request.JSON) Header error

    - by VDVLeon
    Hi all, I use the following code to fetch some JSON data:

        var request = new Request.JSON({
            'url': sourceURI,
            'onSuccess': onPageData
        });
        request.get();

    Request.JSON is a class from MooTools (a JavaScript library). On Linux (Ubuntu, with Firefox 3.5 and Chrome) the request always fails, so I used netcat to display the HTTP request Ajax is sending:

        OPTIONS /the+url HTTP/1.1
        Host: example.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3
        Referer: http://example.com/ref...
        Access-Control-Request-Method: GET
        Origin: http://example.com
        Access-Control-Request-Headers: X-Request, X-Requested-With, Accept
        Accept: */*
        Accept-Encoding: gzip,deflate
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The request line (first line) is not what it should be: instead of OPTIONS /the+url HTTP/1.1 it should be GET /the+url HTTP/1.1. Does anybody know why this happens and how to fix it?
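
    The OPTIONS request is a CORS preflight: the custom headers MooTools adds (X-Request, X-Requested-With) make this a non-simple cross-origin request, so the browser first asks the target server for permission before sending the real GET. One way out is to answer the preflight on the server; a minimal sketch, assuming a PHP endpoint you control (origin and header names are placeholders):

        <?php
        // Answer the CORS preflight so the browser follows up with the real GET.
        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            header('Access-Control-Allow-Origin: http://example.com');
            header('Access-Control-Allow-Methods: GET, OPTIONS');
            header('Access-Control-Allow-Headers: X-Request, X-Requested-With, Accept');
            exit;
        }
        header('Access-Control-Allow-Origin: http://example.com');
        header('Content-Type: application/json');
        echo json_encode(array('status' => 'ok'));

    The other route is to make the request "simple" again (same origin, or no custom headers), in which case no preflight is sent at all.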

  • How to store multiple cookies through PHP Curl

    - by Ahmad
    soup.io does not provide an API, so I am trying to use PHP cURL to log in and submit data. I can log in to the website successfully (through cURL), but when I try to submit data, it gives me an 'invalid user' error. When I analysed the code and the website, I found that cURL receives the values of only one or two cookies, whereas when I open the same page in Firefox, it shows six or seven cookies related to soup.io. Can someone guide me on how to get all of these cookie values?

    Cookie obtained by cURL:

        soup_session_id

    Cookies shown in Firefox (not through cURL):

        __qca, __utma, __utmb, __utmc, __utmz

    My cURL code:

        $cookie_file_path = getcwd()."/cookie/cookie.txt";
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, 'http://www.soup.io');
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
        curl_setopt($ch, CURLOPT_HEADER, TRUE);
        curl_setopt($ch, CURLOPT_ENCODING, 'gzip,deflate');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file_path);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file_path);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729) FirePHP/0.4');
        curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
        $result = curl_exec($ch);
        curl_close($ch);
        print_r($result);

    Thanks in advance.
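
    The missing cookies are very likely not the problem: __utma, __utmb, __utmc, __utmz (Google Analytics) and __qca (Quantcast) are set by JavaScript in the browser, never by the soup.io server, so cURL will never receive them and the login check is unlikely to depend on them. If the site really does expect extra cookies, they can be injected by hand; a sketch (the values below are made up):

        <?php
        // JS-set analytics cookies never reach the cookie jar; add them manually
        // only if the site demonstrably checks for them.
        curl_setopt($ch, CURLOPT_COOKIE, '__utma=1.2.3.4; __utmb=1; __utmc=1; __utmz=1; __qca=x');

    The 'invalid user' error more plausibly comes from a hidden form field or CSRF token missing from the submit request, which is worth comparing against what Firefox actually posts.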

  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109)?

    - by Artyom
    Hello. According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token cannot include non-ASCII characters. Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3; HTTP RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16. However, I have found that Firefox (3.0.6) sends cookies containing a UTF-8 string as-is, and the three web servers I tested (Apache 2, lighttpd, nginx) pass that string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And Apache, nginx and lighttpd all hand the application the raw HTTP_COOKIE CGI variable unchanged:

        wikipp=1234; wikipp_username=??????

    What am I missing? Can somebody explain this to me?
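
    In practice, browsers treat the cookie value as opaque bytes and echo it back verbatim; neither Firefox nor the servers enforce the RFC 2109 token grammar. An application that wants to stay inside ASCII can percent-encode the value itself. A minimal sketch in PHP (the username value is hypothetical):

        <?php
        // Send: percent-encode so the Set-Cookie header stays within ASCII tokens.
        setrawcookie('wikipp_username', rawurlencode('иван'));
        // Read: PHP percent-decodes incoming cookie values into $_COOKIE itself.
        $name = isset($_COOKIE['wikipp_username']) ? $_COOKIE['wikipp_username'] : null;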

  • accessing a value of a nested hash

    - by st
    Hello! I am new to Perl and I have a problem that is probably very simple, but I cannot find the answer in my Perl book. When printing the result of Dumper($request); I get the following:

        $VAR1 = bless( {
            '_protocol' => 'HTTP/1.1',
            '_content' => '',
            '_uri' => bless( do{\(my $o = 'http://myawesomeserver.org:8081/counter/')}, 'URI::http' ),
            '_headers' => bless( {
                'user-agent' => 'Mozilla/5.0 (X11; U; Linux i686; en; rv:1.9.0.4) Gecko/20080528 Epiphany/2.22 Firefox/3.0',
                'connection' => 'keep-alive',
                'cache-control' => 'max-age=0',
                'keep-alive' => '300',
                'accept' => 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
                'accept-language' => 'en-us,en;q=0.5',
                'accept-encoding' => 'gzip,deflate',
                'host' => 'localhost:8081',
                'accept-charset' => 'ISO-8859-1,utf-8;q=0.7,*;q=0.7'
            }, 'HTTP::Headers' ),
            '_method' => 'GET',
            '_handle' => bless( \*Symbol::GEN0, 'FileHandle' )
        }, 'HTTP::Server::Simple::Dispatched::Request' );

    How can I access the values of '_method' ('GET') or of 'host' ('localhost:8081')? I know it's an easy question, but Perl is somewhat cryptic at the beginning. Thank you, St.

  • What do I have to change in my PHP/CURL code to retrieve data from a https:// URL?

    - by Edward Tanguay
    I have a PHP file using cURL that accepts a Google Doc URL as a parameter, then returns the plain text of the Google Doc. It worked well until recently, when apparently a redirect was added so that the http:// address redirects to the equivalent https:// address, as in this example: http://docs.google.com/View?id=dc7gj86r_20dn2csqg3. So I changed my code to access the https:// address, but it just returns blank. What do I have to change in my cURL code so that I can get the HTML text from the https:// address?

        $url = filter_input(INPUT_GET, 'url', FILTER_SANITIZE_STRING);
        $validUrlPrefixes[] = "https://docs.google.com";
        if (beginsWithOneOfThese($url, $validUrlPrefixes)) {
            $user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 (.NET CLR 3.5.30729)';
            $ch = curl_init();
            curl_setopt($ch, CURLOPT_COOKIEJAR, "/tmp/cookie");
            curl_setopt($ch, CURLOPT_COOKIEFILE, "/tmp/cookie");
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_FAILONERROR, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_TIMEOUT, 15);
            curl_setopt($ch, CURLOPT_USERAGENT, $user_agent);
            curl_setopt($ch, CURLOPT_VERBOSE, 0);
            $rawData = curl_exec($ch);
            $rawData = cleanText($rawData);
            if (beginsWith($url, "https://docs.google.com")) {
                echo qstr::convertGoogleDocContentToText($rawData);
                die;
            }
            echo $rawData;
            die;
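
    With an https:// URL, cURL has to verify Google's certificate, and a failed verification combined with CURLOPT_FAILONERROR silently yields an empty string. A minimal sketch of the options to adjust, assuming the same $ch handle (the CA bundle path is a placeholder):

        <?php
        // Preferred: point cURL at a CA bundle so verification can succeed.
        curl_setopt($ch, CURLOPT_CAINFO, '/path/to/cacert.pem');
        // Quick test only (insecure): skip peer verification instead.
        // curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
        // The http -> https redirect also needs following:
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        // And surface the error instead of echoing a blank page:
        $rawData = curl_exec($ch);
        if ($rawData === false) {
            echo 'cURL error: ' . curl_error($ch);
        }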

  • How to retrieve a captcha and save the session with PHP cURL?

    - by user302974
    Hi all, I am writing a script to submit content via PHP cURL. It first fetches the session and the captcha, and the user must submit the captcha text for the final submit. The problem is that I cannot get the captcha. I have tried the code below, with a preg_match to extract the image tag and return it:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2) Gecko/20070219 Firefox/2.0.0.2');
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_COOKIE, 1);
        curl_setopt($ch, CURLOPT_COOKIEJAR, "1");
        curl_setopt($ch, CURLOPT_COOKIEFILE, "1");
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $result = curl_exec($ch);
        curl_close($ch);

    But no luck. The page I am trying to submit to is http://abadijayaiklan.co.cc/pasang-iklan/. I hope someone can help me out :) Thanks and regards.
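
    The captcha is a separate image URL bound to the session, so it has to be fetched with a second request that reuses the same cookie jar. Also note that CURLOPT_COOKIE expects a "name=value" header string rather than a flag, and a bare "1" as the jar filename is fragile. A minimal sketch (the image pattern and filenames are hypothetical):

        <?php
        $cookieJar = '/tmp/captcha_cookies.txt';          // use an absolute path
        $ch = curl_init('http://abadijayaiklan.co.cc/pasang-iklan/');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $page = curl_exec($ch);

        // Pull the captcha <img> src out of the page (pattern hypothetical).
        if (preg_match('/<img[^>]+src="([^"]*captcha[^"]*)"/i', $page, $m)) {
            curl_setopt($ch, CURLOPT_URL, $m[1]);          // same handle -> same cookies
            file_put_contents('captcha.png', curl_exec($ch));
        }
        curl_close($ch);

    The saved image can then be shown to the user, and the final POST must go out with the same cookie jar so the captcha answer matches the session.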

  • X-Forwarded-For causing Undefined index in PHP

    - by bateman_ap
    Hi, I am trying to integrate some third-party tracking code into one of my sites, but it is throwing up some errors, and their support isn't being much use, so I want to try to fix their code myself. Most of it I have fixed, but this function is giving me problems:

        private function getXForwardedFor()
        {
            $s =& $this;
            $xff_ips = array();
            $headers = $s->getHTTPHeaders();
            if ($headers['X-Forwarded-For']) {
                $xff_ips[] = $headers['X-Forwarded-For'];
            }
            if ($_SERVER['REMOTE_ADDR']) {
                $xff_ips[] = $_SERVER['REMOTE_ADDR'];
            }
            return implode(', ', $xff_ips); // will return blank if not on a web server
        }

    In my dev environment, where I am showing all errors, I get:

        Notice: Undefined index: X-Forwarded-For in /sites/webs/includes/OmnitureMeasurement.class.php on line 1129

    Line 1129 is:

        if ($headers['X-Forwarded-For']) {

    If I print out $headers I get:

        Array
        (
            [Host] => www.domain.com
            [User-Agent] => Mozilla/5.0 (Windows; U; Windows NT 6.1; en-GB; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
            [Accept] => text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
            [Accept-Language] => en-gb,en;q=0.5
            [Accept-Encoding] => gzip,deflate
            [Accept-Charset] => ISO-8859-1,utf-8;q=0.7,*;q=0.7
            [Keep-Alive] => 115
            [Connection] => keep-alive
            [Referer] => http://www10.toptable.com/
            [Cookie] => PHPSESSID=nh9jd1ianmr4jon2rr7lo0g553; __utmb=134653559.30.10.1275901644; __utmc=134653559
            [Cache-Control] => max-age=0
        )

    I can't see X-Forwarded-For in there, which I think is causing the problem. Is there something I should add to the function to take this into account? I am using PHP 5.3 and Apache 2 on Fedora.
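
    X-Forwarded-For is only present when the request arrives through a proxy or load balancer, so the function has to test for the key before reading it. A minimal fix, keeping the original behaviour otherwise:

        <?php
        // Only read the header when a proxy actually supplied it.
        if (!empty($headers['X-Forwarded-For'])) {
            $xff_ips[] = $headers['X-Forwarded-For'];
        }
        if (!empty($_SERVER['REMOTE_ADDR'])) {
            $xff_ips[] = $_SERVER['REMOTE_ADDR'];
        }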

  • HTML::TreeBuilder has a mojibake problem, it shows weird chars in the output

    - by varun_vijay_r
        use strict;
        use WWW::Curl::Easy;
        use HTML::TreeBuilder;

        my $cookie_file = '/tmp/pcook';
        my $curl = new WWW::Curl::Easy;
        my $response_body;
        my $charset = 'utf-8';
        $DocOffline::charset = undef;

        $curl->setopt(CURLOPT_URL, 'http://www.breitbart.com/article.php?id=D9G7CR5O0&show_article=1');
        $curl->setopt(CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.9 (KHTML, like Gecko) Chrome/6.0.400.0 Safari/533.9');
        $curl->setopt(CURLOPT_HEADER, 0);
        $curl->setopt(CURLOPT_FOLLOWLOCATION, 1);
        $curl->setopt(CURLOPT_AUTOREFERER, 1);
        $curl->setopt(CURLOPT_SSL_VERIFYPEER, 0);
        $curl->setopt(CURLOPT_COOKIEFILE, $cookie_file);
        $curl->setopt(CURLOPT_COOKIEJAR, $cookie_file);
        $curl->setopt(CURLOPT_REFERER, 'http://www.iavian.com/docOff/');
        $curl->setopt(CURLOPT_HEADERFUNCTION, \&headerCallback);

        open(my $fileb, '>', \$response_body);
        $curl->setopt(CURLOPT_WRITEDATA, $fileb);
        my $retcode = $curl->perform;

        if ($retcode == 0) {
            my $dom_tree = HTML::TreeBuilder->new();
            $dom_tree->ignore_elements(qw(script style));
            $dom_tree->utf8_mode(1);
            $dom_tree->parse($response_body);
            $dom_tree->eof();
            print $dom_tree->as_HTML('<&', ' ', {});
        } else {
            print("An error happened: " . $curl->strerror($retcode) . " ($retcode)\n");
        }

        sub headerCallback {
            my ($data, $pointer) = @_;
            $data =~ m/Content-Type:\s*.*;\s*charset=(.*)/;
            if ($1) {
                $charset = $1;
                $charset =~ s/[^a-zA-Z0-9_-]*//g;
            }
            return length($data);
        }

  • Why is Firefox prompting to download a file that is POST'd to?

    - by alex
    This is most peculiar, and it comes from an old in-house CMS. When I attempt to submit my changes, the browser prompts me to save the file linked in the action attribute of the form.

    Request:

        POST /~site/edit/articles/article_save.php?id=54 HTTP/1.1
        Host: example.com
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://example.com
        Content-Type: multipart/form-data; boundary=---------------------------10102754414578508781458777923
        Content-Length: 940

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="title"

        Home Content
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="catid"

        18
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="activecheck"

        1
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="image"

        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgToolbarSelectBlock"

        <p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="content"

        <p>Edit your article in this text box.</p>
        -----------------------------10102754414578508781458777923
        Content-Disposition: form-data; name="contentWidgEditor"

        true
        -----------------------------10102754414578508781458777923--

    Response:

        HTTP/0.9 200 OK

    And then Firefox shows the save dialog. I can't determine from the response headers why this prompts to open/save. It has always worked, and all other PHP files on the site work fine. Anyone have a clue? Thanks. Update: apparently, it just crashes Safari.
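
    The response line is the giveaway: Firefox labels the reply HTTP/0.9, which means the server sent no valid HTTP/1.x status line or headers at all. With no Content-Type to go on, the browser falls back to an open/save prompt. A hedged first thing to check inside article_save.php (the CMS source isn't shown, so this is a guess at a common cause):

        <?php
        // article_save.php (sketch): bytes emitted before the headers (a UTF-8 BOM,
        // whitespace before the opening <?php tag, an early print in an include)
        // can corrupt the response so the browser sees no header block at all.
        if (headers_sent($file, $line)) {
            error_log("output already started at $file:$line");
        }
        header('Content-Type: text/html; charset=utf-8');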

  • How to spoof the referrer using cURL

    - by golu molu
    I am using the cURL code below to spoof the referrer. It works fine, but every page also prints an error line reading "Curl error: ".

        $url = "somesite.com";

        function doMagic($url)
        {
            $curl = curl_init();

            $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,";
            $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5";
            $header[] = "Cache-Control: max-age=0";
            $header[] = "Connection: keep-alive";
            $header[] = "Keep-Alive: 300";
            $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7";
            $header[] = "Accept-Language: en-us,en;q=0.5";
            $header[] = "Pragma: ";

            curl_setopt($curl, CURLOPT_URL, $url);
            curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.12011-10-16 20:23:00");
            curl_setopt($curl, CURLOPT_HTTPHEADER, $header);
            curl_setopt($curl, CURLOPT_REFERER, "http://www.facebook.com");
            curl_setopt($curl, CURLOPT_ENCODING, "gzip,deflate");
            curl_setopt($curl, CURLOPT_AUTOREFERER, true);
            curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($curl, CURLOPT_TIMEOUT, 30);
            curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);

            $html = curl_exec($curl);
            echo 'Curl error: ' . curl_error($curl);
            curl_close($curl);
            return $html;
        }

        $text = doMagic($url);
        print("$text");

    What am I doing wrong?
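
    The message appears on every page because the script echoes the 'Curl error: ' prefix unconditionally; curl_error() returns an empty string on success, so only the label prints. A minimal fix, checking curl_errno() first:

        <?php
        $html = curl_exec($curl);
        if (curl_errno($curl)) {                  // non-zero only on a real error
            echo 'Curl error: ' . curl_error($curl);
        }
        curl_close($curl);
        return $html;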

  • PHP cURL login not working

    - by Massimo Zampieri
    Hi, I have a problem with cURL. I looked at an older post (Remote Login not Working With Curl) and followed baba's advice there, but it did not work: the code still enters the if statement. Sorry for my bad English. Can anyone help me? This is the code:

        $url = "http://hipfile.com/";
        $urllog = "http://hipfile.com/login.html";
        $postdata = "login=bnnoor&password=########&op=login";

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE);
        curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6");
        curl_setopt($ch, CURLOPT_TIMEOUT, 60);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_REFERER, $urllog);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $postdata);
        curl_setopt($ch, CURLOPT_POST, 1);

        $result = curl_exec($ch);
        if (!$result) {
            $http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            curl_close($ch); // make sure we close any current curl sessions
            die($http_code . ' Unable to connect to server. Please come back later.');
        }
        echo $result;
        curl_close($ch);
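
    Two things stand out, sketched below: the POST goes to the site root ($url) instead of the login handler ($urllog), and there is no cookie jar, so the session cookie the login returns is discarded before it can prove the login worked. A hedged sketch (the form's real action URL and field names should be confirmed against the page source):

        <?php
        $cookieJar = '/tmp/hipfile_cookies.txt';
        curl_setopt($ch, CURLOPT_URL, $urllog);            // post to the login form's action
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookieJar);   // keep the session cookie...
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookieJar);  // ...and send it on follow-ups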

  • Javascript/jQuery: programmatically follow a link

    - by Dan
    In JavaScript code, I would like to programmatically cause the browser to follow a link that is on my page. A simple case:

        <a id="foo" href="mailto:[email protected]">something</a>

        function goToBar() {
            $('#foo').trigger('follow');
        }

    This is hypothetical, as it doesn't actually work; and no, triggering click doesn't do it either. I am aware of window.location and window.open, but these differ from native link-following in ways that matter to me: (a) in the presence of a <base /> element, and (b) in the case of mailto URLs. The latter in particular is significant: in Firefox at least, calling window.location.href = "mailto:[email protected]" causes the window's unload handlers to fire, whereas simply clicking a mailto link does not, as far as I can tell. I am looking for a way to trigger the browser's default handling of links from JavaScript code. Does such a mechanism exist? Toolkit-specific answers are also welcome (especially for Gecko).

  • Javascript + PHP $_POST array empty

    - by Peterim
    While sending a POST request via xmlhttp.open("POST", "url", true) in JavaScript, I get an empty $_POST array on the server. Firebug shows that the data is being sent; here is the data string from Firebug:

        a=1&q=151a45a150....

    But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does contain my data (the string above), but PHP somehow doesn't recognize it. I tried both $_POST and $_REQUEST; neither works. The headers being sent:

        POST /test.php HTTP/1.1
        Host: website.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://website.com/
        Content-Length: 156
        Content-Type: text/plain; charset=UTF-8
        Pragma: no-cache
        Cache-Control: no-cache

    Thank you for any suggestions.
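
    The Content-Type header is the culprit: the body goes out as text/plain, and PHP only populates $_POST for application/x-www-form-urlencoded (or multipart/form-data) requests. The clean fix is to call setRequestHeader("Content-Type", "application/x-www-form-urlencoded") on the XHR before sending; failing that, the raw body can be parsed server-side. A minimal sketch of the fallback:

        <?php
        // The data reached PHP (php://input proves it); it just was not parsed
        // because of the text/plain content type. Parse it by hand:
        parse_str(file_get_contents('php://input'), $post);
        $q = isset($post['q']) ? $post['q'] : null;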

  • Why does Firebug claim that my stylesheet is calling my xmlrpc.php?

    - by Rebol Tutorial
    Firebug shows a request that causes a huge delay, to http://reboltutorial.com/wp-content/themes/minaflow/none. Details below; what I don't understand is why it appears to involve xmlrpc and the stylesheet.

    Response headers:

        Date: Sun, 04 Apr 2010 16:10:02 GMT
        Server: Apache
        X-Powered-By: PHP/5.2.13
        X-Pingback: http://reboltutorial.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: wordpress_test_cookie=WP+Cookie+check; path=/; domain=.reboltutorial.com
        Last-Modified: Sun, 04 Apr 2010 16:10:03 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=2, max=94
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=UTF-8

    Request:

        GET /wp-content/themes/minaflow/none HTTP/1.1
        Host: reboltutorial.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; fr; rv:1.9.2) Gecko/20100115 Firefox/3.6
        Accept: image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language: fr,fr-fr;q=0.8,en-us;q=0.5,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://reboltutorial.com/wp-content/themes/minaflow/style.css
        Cookie: _csoot=1267966575980; _csuid=4b6f27395991a2ff; wp-settings-1=editor%3Dhtml%26align%3Dleft%26m0%3Do%26m1%3Do%26m2%3Do%26m3%3Dc%26m4%3Do%26m5%3Dc%26m6%3Do%26m7%3Do%26m8%3Dc%26m9%3Dc%26m10%3Dc%26m11%3Do%26m12%3Dc%26m13%3Dc%26m14%3Dc%26m15%3Dc; wp-settings-time-1=1270384700; subscribe_checkbox_=unchecked; PHPSESSID=o70hjpjf7uj2hb4doe4k0o5co5; wordpress_test_cookie=WP+Cookie+check; xumgeqhxmhohxipF=Erjixxeeskfgnlba; SJECT=CKON; wordpress_=admin%7C1271592539%7C392c555d9051c6fa184074d8441cc472; wordpress_logged_in_=admin%7C1271592539%7C0e7a92bda53cc2f5afc32962237a1037; rcBDvgtspmuEsyzp=rmqjtFbCfheGCjBw; prli_click_15=creatingstandard; prli_visitor=m7928r

    Note that the Referer is my own stylesheet and the Accept header asks for an image. The X-Pingback header merely advertises xmlrpc.php; it is not what issued the request. Can somebody explain this to me?

  • Debugging Post Request with Chrome Dev Tools

    - by benek
    I am trying to use Chrome DevTools to debug the following Angular POST request:

        $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader)

    After running the statement with right-click / evaluate, I can see the POST in the network panel in a pending state. How can I get the result, or "commit" the request and easily leave this pending state from the dev console? I am not yet very familiar with JS callbacks; some code is expected. Thanks.

    EDIT: I have tried to run this from the console:

        $scope.$apply(function() {
            $http.post("http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader", flowHeader)
                .success(function(data) { console.log("success " + data) })
                .error(function(data) { console.log("error " + data) })
        })

    It returns: undefined

    EDIT: The POST I am trying to debug produces an HTTP 400. Here is the result:

        Request URL: http://picjboss.puma.lan:8880/fluxpousse/api/flow/createOrUpdateHeader
        Request Method: POST
        Status Code: 400 Mauvaise Requête (Bad Request)

    Request headers:

        Accept: application/json, text/plain, */*
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
        Connection: keep-alive
        Content-Length: 5354
        Content-Type: application/json;charset=UTF-8
        Cookie: JSESSIONID=285AF523EA18C0D7F9D581CDB2286C56
        Host: picjboss.puma.lan:8880
        Origin: http://picjboss.puma.lan:8880
        Referer: http://picjboss.puma.lan:8880/fluxpousse/
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/30.0.1599.101 Safari/537.36
        X-Requested-With: XMLHttpRequest

    Request payload:

        {refHeader: "IDSFP", idEntrepot: 619, codeEntreprise: null, codeBanniere: null, codeArticle: "7", …}
        cessionPrice: 78
        codeArticle: "7"
        codeBanniere: null
        codeDateAppro: null
        codeDateDelivery: null
        codeDatePrepa: null
        codeEntreprise: null
        codeFournisseur: null
        codeUtilisateur: null
        codeUtilisateurLastUpdate: null
        createDate: null
        dateAppro: null
        dateDelivery: null
        datePrepa: null
        hasAssortControl: null
        hasCadenceForce: null
        idEntrepot: 619
        isFreeCost: null
        labelArticle: "Mayonnaise de DIJON"
        labelFournisseur: null
        listDetail: [,…]
        pcbArticle: 12
        pvc: 78
        qte: 78
        refCommande: "ref"
        refHeader: "IDSFP"
        state: "CREATED"
        stockArticle: 1200
        updateDate: null

    Response headers:

        Connection: close
        Content-Length: 996
        Content-Type: text/html;charset=utf-8
        Date: Fri, 08 Nov 2013 15:19:30 GMT
        Server: Apache-Coyote/1.1
        X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1

  • How do I set the proxy and SOCKS in libcurl?

    - by acidzombie24
    I am trying to configure my .NET app to use a proxy. My source is in C#, but I learned cURL via C++. My question: where do I put the SOCKS IP and port? I looked through the documentation and didn't see it, and I believe that is what is causing my problems. When I run this code, it quite literally times out and never calls my header function or writer function. If I comment out the first two CURLOPT lines (the two proxy lines), the code runs with no problems. In Firefox I set the HTTP proxy and the SOCKS host separately; they are different IPs and ports. How do I set the SOCKS part? The code below has a dummy proxy set, but I can't figure out the SOCKS part.

        static void Main(string[] args)
        {
            SeasideResearch.LibCurlNet.Curl.GlobalInit((int)SeasideResearch.LibCurlNet.CURLinitFlag.CURL_GLOBAL_ALL);
            var curl = new Easy();
            {
                curl.SetOpt(CURLoption.CURLOPT_PROXY, "http://127.0.0.1:1234");
                curl.SetOpt(CURLoption.CURLOPT_PROXYTYPE, CURLproxyType.CURLPROXY_SOCKS5);
                curl.SetOpt(CURLoption.CURLOPT_URL, "http://whatismyipaddress.com/ip-lookup");
                curl.SetOpt(CURLoption.CURLOPT_FOLLOWLOCATION, 1);
                curl.SetOpt(CURLoption.CURLOPT_USERAGENT, @"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2b5) Gecko/20091204 Firefox/3.6b5");
                curl.SetOpt(CURLoption.CURLOPT_HEADERFUNCTION, hf);
                curl.SetOpt(CURLoption.CURLOPT_HEADERDATA, data);
                curl.SetOpt(CURLoption.CURLOPT_WRITEFUNCTION, wf);
                curl.SetOpt(CURLoption.CURLOPT_WRITEDATA, sw);
                curl.SetOpt(CURLoption.CURLOPT_SSL_VERIFYPEER, 0);
                curl.Perform();
                var sz = sw.ToString();
                var myrealip = sz.IndexOf("12.34.56.78") != -1;
            }
            //Console.WriteLine(sz);
            SeasideResearch.LibCurlNet.Curl.GlobalCleanup();
        }
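
    libcurl has no separate option for the SOCKS endpoint: CURLOPT_PROXY carries the host and port, and CURLOPT_PROXYTYPE tells libcurl to speak SOCKS5 to that address rather than HTTP, so the http:// scheme prefix should go and the value should be the SOCKS host:port. For reference, a sketch of the same pair of options in PHP's cURL binding (the address is a placeholder):

        <?php
        // SOCKS5 proxy: plain host:port; the type option switches the protocol.
        curl_setopt($ch, CURLOPT_PROXY, '127.0.0.1:1080');
        curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_SOCKS5);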

  • Image in table cell doesn't scale down in IE8/7

    - by monks1975
    Can anyone help me troubleshoot my website? http://www.andrewstonyer.co.uk/test/

    My problem: in IE8/7, if you click on a thumbnail (only 'Pulse' and 'Time Within The Hour' are wired in right now), an overlay appears with detail of that piece. What should happen, and does in Gecko/WebKit, is that the overlay contains a table with a heading, a scaled, centered image, and a nav menu. There is a toggle for text, which pushes up the image cell and makes the image smaller, keeping its proportions. (I know the overlay looks like ass right now; those are just placeholder colours.)

    In IE, the image doesn't fit in the table cell, which means everything is pushed down outside the window and I can't see the nav menu. IE appears to render the image at its actual pixel size, even though the img element's height is set to 100% in the CSS. The text cell is toggled with jQuery; when toggled on in IE, it doesn't 'squeeze' the cell above it, which is what I want to happen. Could any experts help? Regards, Jon

  • Ajax Request using jQuery in Rails

    - by Steve
    Hi. I am sending an Ajax request using jQuery, and I get a "405 Method Not Allowed" error. I am just posting a form, which takes the details from the form and inserts them into the DB, the usual stuff. I am using WEBrick, which comes as the default with the Rails package. Can somebody please tell me how to fix this? This is the code that triggers the Ajax request:

        $.post($(this).attr("action") + ".js", $(this).serialize(), null, "script");

    Response headers:

        Cache-Control: no-cache
        Allow: GET, PUT, DELETE
        Content-Type: text/html; charset=utf-8
        Content-Length: 9502
        Server: WEBrick/1.3.1 (Ruby/1.9.1/2009-12-07)
        Date: Wed, 02 Jun 2010 20:41:33 GMT
        Connection: Keep-Alive

    Request headers:

        Host: localhost:3000
        User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: application/json, text/javascript, */*
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Content-Type: application/x-www-form-urlencoded; charset=UTF-8
        X-Requested-With: XMLHttpRequest
        Referer: http://localhost:3000/viewspot/3
        Content-Length: 141
        Pragma: no-cache
        Cache-Control: no-cache

  • cURL sets a negative cookie expire date

    - by Joe Doe
    I am having problems with cookies in cURL. After running into trouble, I turned on the verbose option and discovered that cURL stores cookies with a negative expiry date, even though the server sends a positive one. Example:

        * Added cookie _c_sess=""test"" for domain test.com, path /, expire -1630024962
        < Set-Cookie: _c_sess="test"; Domain=test.com; HttpOnly; expires=Mon, 26-Mar-2012 14:52:47 GMT; Max-Age=1332773567; Path=/

    As you can see, both expires and Max-Age are positive, but cURL sets the expiry to a negative value. Does anybody have an idea? EDIT: Here is the PHP code I use:

        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, "http://site.com/");
        curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20100101 Firefox/11.0');
        curl_setopt($ch, CURLOPT_COOKIEJAR, $cookiepath);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $cookiepath);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_STDERR, $f);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
        $data = curl_exec($ch);

    Data from the cookie jar:

        #HttpOnly_.test.com	TRUE	/	FALSE	-1630016318	_test_sess	"test"
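
    The arithmetic points at a 32-bit overflow: Max-Age=1332773567 is an absolute Unix timestamp (26 March 2012, matching the expires date), not a duration, and when libcurl adds it to the current time the sum no longer fits in a signed 32-bit time_t, wrapping to the negative expiry in the jar. A sketch that reproduces the wrap (assuming a 32-bit libcurl build on the affected machine):

        <?php
        // Simulate the signed 32-bit addition a 32-bit libcurl performs.
        $maxAge  = 1332773567;                        // from the Set-Cookie header
        $expires = time() + $maxAge;                  // exceeds 2^31 - 1
        $wrapped = unpack('l', pack('l', $expires));  // truncate to signed 32 bits
        echo $wrapped[1];                             // negative, like the jar entry

    The server is arguably at fault for sending a timestamp in Max-Age, but a 64-bit build or a fixed server value makes the symptom disappear.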

  • Converting Source ASCII Files to JPEGs

    - by CommonsWare
    I publish technical books in print, PDF, and Kindle/MOBI, with EPUB on the way. The Kindle does not support monospace fonts, which are kinda useful for source code listings, so the only way to get monospace output is to convert the text (Java source, HTML, XML, etc.) into JPEG images. More specifically, due to pagination issues, a given input ASCII file needs to be split into slices of roughly six lines each, with each slice turned into a JPEG, so listings can span a screen. This is a royal pain. My current mechanism involves:

        1. running expand to set a consistent 2-space tab size, which pipes to...
        2. a2ps, which pipes to...
        3. a small Perl snippet that adds a "%%LanguageLevel: 3\n" line, which pipes to...
        4. ImageMagick's convert, to take the (E)PS and make a JPEG out of it, with an appropriate background, cropped to 575x148+5+28, etc.

    That used to work 100% of the time. It now works 95% of the time; the rest of the time I get "convert: geometry does not contain image" errors, which I cannot seem to get rid of, in part because I don't understand what the problem is. Before this process, I used a pretty-print engine (source-highlight) to get HTML out of the source code, but the only thing I could find to convert the HTML into JPEGs was automating screen-grabs from an embedded Gecko engine, and its reliability stank, which is why I switched to the current mechanism.

    So, if you were me, and you needed to turn source listings into JPEG images in an automated fashion, how would you do it? Bonus points if it offers some sort of pretty-print process (e.g., bolded keywords)! Or, if you know what typically causes "convert: geometry does not contain image", that might help. My current process is ugly, but if I could get it back to 100% reliability, that would be just fine for now. Thanks in advance!
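
    One way to cut PostScript out of the loop entirely is to do the slicing and rendering in a single script. A hedged sketch using PHP's GD extension (assumes GD with FreeType support and a monospace TTF on disk; the font path and sizes are placeholders, with 575x148 borrowed from the crop geometry above):

        <?php
        // Render 6-line slices of an ASCII source file straight to JPEGs via GD.
        $lines = file('Listing.java', FILE_IGNORE_NEW_LINES);
        $font  = '/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf'; // placeholder path
        foreach (array_chunk($lines, 6) as $i => $chunk) {
            $img   = imagecreatetruecolor(575, 148);
            $white = imagecolorallocate($img, 255, 255, 255);
            $black = imagecolorallocate($img, 0, 0, 0);
            imagefilledrectangle($img, 0, 0, 575, 148, $white);
            foreach ($chunk as $row => $text) {
                // Expand tabs to 2 spaces, then draw the line at a fixed baseline.
                imagettftext($img, 12, 0, 5, 20 + $row * 22, $black, $font,
                             str_replace("\t", '  ', $text));
            }
            imagejpeg($img, sprintf('listing-%02d.jpg', $i));
            imagedestroy($img);
        }

    Keyword bolding could be layered on by drawing tokens with a bold face, though that is beyond this sketch. As for the convert error, "geometry does not contain image" usually means the crop rectangle falls outside the image's virtual canvas; inserting +repage before the crop is a common fix.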

  • cURL and file_get_contents time out when loading a page

    - by Joseph
    I'm trying to grab the content of this page (http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html) using cURL or file_get_contents, but it doesn't work. The page loads when I open it in the browser, but not otherwise. Here are my cURL settings:

        curl_setopt($ch1, CURLOPT_INTERFACE, "$use_proxy");
        curl_setopt($ch1, CURLOPT_URL, $url);
        curl_setopt($ch1, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch1, CURLOPT_REFERER, 'http://' . $domain);
        curl_setopt($ch1, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.1) Gecko/20061204 Firefox/2.0.0.1");
        curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, TRUE);
        echo curl_setopt($ch1, CURLOPT_HEADER, 1);
        curl_setopt($ch1, CURLOPT_VERBOSE, true);

    It works fine for other sites, just not this one, for some reason. Any clue as to how to make it work? Thanks. Here is the info from curl_getinfo($ch1):

        [url] => http://www.alluc.org/movies/watch-hot-tub-time-machine-2010-online/186214.html
        [content_type] =>
        [http_code] => 0
        [header_size] => 0
        [request_size] => 0
        [filetime] => -1
        [ssl_verify_result] => 0
        [redirect_count] => 0
        [total_time] => 0
        [namelookup_time] => 0.002578
        [connect_time] => 0
        [pretransfer_time] => 0
        [size_upload] => 0
        [size_download] => 0
        [speed_download] => 0
        [speed_upload] => 0
        [download_content_length] => -1
        [upload_content_length] => -1
        [starttransfer_time] => 0
        [redirect_time] => 0
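
    The curl_getinfo() output is all zeros: connect_time, starttransfer_time and total_time never moved, so the TCP connection itself never completed rather than the site refusing the request. One oddity worth checking first: CURLOPT_INTERFACE binds an outgoing network interface; it does not configure a proxy, so if $use_proxy holds a proxy address the connect will fail exactly like this. A hedged sketch of the adjustments:

        <?php
        // If $use_proxy is really a proxy address, use the proxy option instead
        // of CURLOPT_INTERFACE, and fail fast rather than hanging:
        curl_setopt($ch1, CURLOPT_PROXY, $use_proxy);   // e.g. "1.2.3.4:8080"
        curl_setopt($ch1, CURLOPT_CONNECTTIMEOUT, 10);  // give up connecting after 10s
        curl_setopt($ch1, CURLOPT_TIMEOUT, 30);         // cap the whole transfer
        if (curl_exec($ch1) === false) {
            echo 'cURL error: ' . curl_error($ch1);     // surface the real failure
        }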

  • Error in Firefox loading images

    - by Brian
    Hello. Details of the app: an ASP.NET project on a local web server, hosted locally in IIS, using the latest Firefox, with forms authentication. I'm getting a logon user name/password box when trying to access my local web server. Using the Net panel in Firebug, I see the issue is with an animated GIF showing up as 401 Unauthorized. Checking the details, this is what I see for the URL http://localhost/<virtual>/Images/loading.gif:

    Response headers:

        Server: Microsoft-IIS/5.1
        Date: Thu, 17 Jun 2010 19:02:58 GMT
        WWW-Authenticate: Negotiate NTLM
        Connection: close
        Content-Length: 4046
        Content-Type: text/html

    Request headers:

        Host: localhost
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729; .NET4.0E)
        Accept: image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: <correct referer>
        Cookie: ASP.NET_SessionId=ux4bt345qjz4p3u1wm1zgmez; .ASPXFormsAuth=33F12B444040827B8ABF154EE4EDE43B6CA532432EB846987B355097E00256DF0955C76A37BC593EAA961747BF1CC1D8949FF63C6F2CA69D77213EB15B4EDFAF57A83D9E1F88AB8D821C3A09C07EA2EE; .ASPROLES=qTFrGteJydYAE3118WGXbhJthTDdjdtuQ06t4bYVrM1BwIfcEHU1HhnEcs7TqSOaV-fIN5MH3uO57oNVWXDvrhkZ8gQuURuUk_K0TpoR-DEFXuF953Gl9aIilKAdV211jutMNQmhkt2rdPE2tEhHs3pz953fADxjAOyZl7K-AqNvMk3yqJshhKHhJIf-ALMhWIYlrrKy0WsYznUwh3WCtPfzEBD5XzmXU8HVMJ2-ArLjBISuegvSmxvK1PuXBPhoMRMi9Ynaw6xi9ypGk-R6uN0ljOMCGkB2-20WUlFuP0xWTfac_zCTDT00pbpnyjtygnM-LShOXTrZ_mhoRuXfKYEYSodNihwD6SRr19Nm-8uZ5BQ-W81svM17S2C0vc0FaxtiuAcN_vHcsN1OEJeCuVfRjeqzo9xWEViupP3Vh6aOcCm6yrftgw5x94piuCJO7tCfXjJAw5RVUWDBBWv5gmid171F0k-_XZ0CSv7Gm2Eai1BRfogAqQ_MV3tyPv7XVEyJXRXqYGlf1JpkfTW8S8On4E05v9gx9RcdnKHZebiOZwbP1_ho9nG7pMwXysbhjxtxwZ-zLx-v11_rhZw_i5m7iNcLtt4BbFU-sb_crzMpCKGywHIc452Zp1E0kx1Rfx-2eUnaiLiCfGed-QqelO88NYTpJHttGKEfhFrDgmaIXZPJRtuZ-GrS6t3Vla-8qDAVb1p6ovPwoVT4z4BhQyFsk542gDx-uQDw6D0B6zo7lXfcOjtolUxDcLbETsNlYsexZaxFpRSbw7M1ldwL_k92P9wLPlv9mw4NtyhXKJesMu7GjquZuoBN3hO00AqJEe1tKFFtfrvbE5ZH7uNu7myNdtlxRPe3WZe7qukbqHo1

    Any ideas? Thanks.

  • PHP infinite loop problem

    - by Ashwin
    This code sometimes goes into an infinite loop. What's wrong here?

        function httpGet($url, $followRedirects = true)
        {
            global $final_url;
            $url_parsed = parse_url($url);
            if (empty($url_parsed['scheme'])) {
                $url_parsed = parse_url('http://' . $url);
            }
            $final_url = $url_parsed;

            $port = $url_parsed["port"];
            if (!$port) {
                $port = 80;
            }
            $rtn['url']['port'] = $port;

            $path = $url_parsed["path"];
            if (empty($path)) {
                $path = "/";
            }
            if (!empty($url_parsed["query"])) {
                $path .= "?" . $url_parsed["query"];
            }
            $rtn['url']['path'] = $path;

            $host = $url_parsed["host"];
            $foundBody = false;

            $out = "GET $path HTTP/1.0\r\n";
            $out .= "Host: $host\r\n";
            $out .= "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\n";
            $out .= "Connection: Close\r\n\r\n";

            if (!$fp = @fsockopen($host, $port, $errno, $errstr, 30)) {
                $rtn['errornumber'] = $errno;
                $rtn['errorstring'] = $errstr;
            }
            fwrite($fp, $out);

            while (!@feof($fp)) {
                $s = @fgets($fp, 128);
                if ($s == "\r\n") {
                    $foundBody = true;
                    continue;
                }
                if ($foundBody) {
                    $body .= $s;
                } else {
                    if (($followRedirects) && (stristr($s, "location:") != false)) {
                        $redirect = preg_replace("/location:/i", "", $s);
                        return httpGet(trim($redirect));
                    }
                    $header .= $s;
                }
            }
            fclose($fp);
            return trim($body);
        }
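
    Two likely culprits, sketched below: when fsockopen() fails, the function records the error but falls through and keeps calling fgets() on an invalid handle, and a relative Location: header makes the recursive httpGet() call re-request the same page forever. Hedged fixes for both:

        <?php
        // 1) Bail out when the socket cannot be opened.
        if (!$fp = @fsockopen($host, $port, $errno, $errstr, 30)) {
            $rtn['errornumber'] = $errno;
            $rtn['errorstring'] = $errstr;
            return $rtn;  // was missing: the loop continued on $fp === false
        }

        // 2) Resolve relative redirects against the current host before recursing.
        if ($followRedirects && stristr($s, 'location:') !== false) {
            $redirect = trim(preg_replace('/location:/i', '', $s));
            if (parse_url($redirect, PHP_URL_HOST) === null) {
                $redirect = 'http://' . $host . $redirect;  // relative -> absolute (sketch)
            }
            return httpGet($redirect);
        }

    A redirect-count limit (e.g., an extra $depth parameter that aborts after 10 hops) would also guard against genuine server-side redirect loops.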

  • Python - Problems using mechanize to log into a difficult website

    - by user1781599
    I am trying to log in to betfair.com using mechanize. I have tried several ways, but it always fails. This is the code I have developed so far; can anyone help me identify what is wrong with it and how I can improve it to log in to my Betfair account? Thanks.

        import cookielib
        import urllib
        import urllib2
        from BeautifulSoup import BeautifulSoup
        import mechanize
        from mechanize import Browser
        import re

        bf_username_name = "username"
        bf_password_name = "password"
        bf_form_name = "loginForm"
        bf_username = "xxxxx"
        bf_password = "yyyyy"

        urlLogIn = "http://www.betfair.com/"
        # This URL I will use to verify whether the login succeeded
        accountUrl = "https://myaccount.betfair.com/account/home?rlhm=0&"

        br = mechanize.Browser(factory=mechanize.RobustFactory())
        br.addheaders = [("User-Agent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.90 Safari/537.1")]
        br.open(urlLogIn)

        br.select_form(nr=0)
        print br.form
        br.form[bf_username_name] = bf_username
        br.form[bf_password_name] = bf_password
        print br.form  # just to check username and password have been recorded correctly

        responseSubmit = br.submit()
        response = br.open(accountUrl)

        # This file should show the home page with me logged in,
        # but it shows the home page as if I were not logged in.
        text_file = open("LogInResponse.html", "w")
        text_file.write(responseSubmit.read())
        text_file.close()

        # This file should show my account page, but it shows a popup with an error.
        text_file = open("Account.html", "w")
        text_file.write(response.read())
        text_file.close()
