Search Results

Search found 38010 results on 1521 pages for 'page curl'.

  • How to perform an external request in Kohana 3?

    - by alex
    I've always used cURL for this sort of thing, but this article got me thinking I could request another page easily using the Request object in Kohana 3:

        $url = 'http://www.example.com';
        $update = Request::factory($url);
        $update->method = 'POST';
        $update->post = array('key' => 'value');
        $update->execute();
        echo $update->response;

    However, I get the error:

        Accessing static property Request::$method as non static

    From this I assume the method property is static, but that doesn't help me much. I also copied and pasted the example from that article, and it threw the same error. Basically, I'm trying to POST to a page on an external server, and do it the Kohana way. So, am I doing this correctly, or should I just use cURL (or file_get_contents() with a stream context)?
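
    For reference, a minimal plain-cURL sketch of the same POST (reusing the placeholder URL and fields from the question; this illustrates the fallback, not the Kohana way the question asks about):

        $ch = curl_init('http://www.example.com');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('key' => 'value'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the body instead of echoing it
        $response = curl_exec($ch);
        if ($response === false) {
            echo 'cURL error: ' . curl_error($ch);
        }
        curl_close($ch);
        echo $response;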

    Read the article

  • Getting search results from Twitter in PHP

    - by Mark Mayo
    I'm attempting to put together a little mashup with some Twitter APIs. However, the whole area is new to me (I'm more of an embedded developer dabbling), and frustratingly, every tutorial I have tried in PHP is either out of date, not doing what it claims to do, or broken. Essentially, I just want a nice bit of example code - say, an HTML file, a connection.js for the jQuery magic, and a PHP file, 'getsearch', which contains the relevant cURL calls to the API to just return the results for a given search term. I followed the tutorial at http://www.reynoldsftw.com/2009/02/using-jquery-php-ajax-with-the-twitter-api/ to the letter and even downloaded the guy's code and chucked it on my webserver, but it just seems to sit there. I'm relatively competent at PHP and HTML, but the cURL and jQuery side of things is new to me, and I would appreciate any thoughts, links, or code suggestions. I've attempted reading the API docs - but even they seem sparse - and several links to their own tutorials are broken, so that's put me off a bit for now.
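
    For the 'getsearch' piece, a hedged sketch of the cURL call (the endpoint and the 'results' field follow Twitter's original v1 search API, which has since been retired; treat them as placeholders for whatever the current API expects):

        $term = urlencode('kittens');
        $url  = 'http://search.twitter.com/search.json?q=' . $term; // retired v1 endpoint

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the JSON as a string
        $json = curl_exec($ch);
        curl_close($ch);

        $data    = json_decode($json, true);
        $results = isset($data['results']) ? $data['results'] : array(); // v1 wrapped matches in 'results'
        foreach ($results as $tweet) {
            echo htmlspecialchars($tweet['text']) . "<br />\n";
        }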

    Read the article

  • How to handle the HTTP "100 Continue" message?

    - by Stephane
    Hello, I'm writing a simplistic HTTP server that will accept PUT requests, mostly from cURL as the client, and I'm having a bit of an issue with handling the "Expect: 100-continue" header. As I understand it, the server is supposed to read the headers, send back an "HTTP/1.1 100 Continue" response on the connection, read the stream up to the value of Content-Length, and then send back the real response code (usually "HTTP/1.1 200 OK", but any other valid HTTP answer should do). Well, that's exactly what my server does. The problem is that, apparently, if I send a "100 Continue" answer, cURL fails to report any subsequent HTTP error code and assumes the upload was a success. For instance, if the upload is rejected due to the nature of the content (there is a basic data check happening), I want the calling client to detect the problem and act accordingly. Am I missing something obvious? Thanks
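
    For concreteness, a rough PHP sketch of the handshake sequence the question describes (one request per connection; short reads and error handling omitted; the 422 status is just an example rejection):

        // Minimal sketch of the 100-continue handshake on a raw socket server.
        $server = stream_socket_server('tcp://127.0.0.1:8080', $errno, $errstr);

        while ($conn = stream_socket_accept($server, -1)) { // wait indefinitely
            // Read the request line and headers up to the blank line.
            $headers = '';
            while (($line = fgets($conn)) !== false && $line !== "\r\n") {
                $headers .= $line;
            }

            // Acknowledge the expectation before the client sends the body.
            if (stripos($headers, 'Expect: 100-continue') !== false) {
                fwrite($conn, "HTTP/1.1 100 Continue\r\n\r\n");
            }

            // Read exactly Content-Length bytes of body.
            $length = preg_match('/Content-Length:\s*(\d+)/i', $headers, $m) ? (int) $m[1] : 0;
            $body   = $length > 0 ? fread($conn, $length) : '';

            // Send the real status; a 4xx here should still reach the client.
            fwrite($conn, "HTTP/1.1 422 Unprocessable Entity\r\nContent-Length: 0\r\n\r\n");
            fclose($conn);
        }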

    Read the article

  • How to convert a URL to a browser-like URL? (...%20...)

    - by Kaoukkos
    I want to get data using cURL, but I have a problem. When I set the URL like this:

        $url = "https://graph.facebook.com/fql?q=SELECT name FROM page"; // continues

    nothing is returned. When I copy the URL from the browser instead, which is

        $url = "https://graph.facebook.com/fql?q=SELECT%20name%20FROM%20page";

    I get the results through cURL. I tried htmlentities and htmlspecialchars without luck. What am I missing here?

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $content = curl_exec($ch);
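
    The browser percent-encodes the spaces in the query string before sending the request; cURL does not, so the URL has to be encoded first. A sketch using http_build_query() (rawurlencode() on just the query value works too, producing %20 instead of +):

        $fql = 'SELECT name FROM page'; // query from the question
        $url = 'https://graph.facebook.com/fql?' . http_build_query(array('q' => $fql));
        // => https://graph.facebook.com/fql?q=SELECT+name+FROM+page

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $content = curl_exec($ch);
        curl_close($ch);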

    Read the article

  • How to auto-submit a session-based form?

    - by hd
    I have a form and want to submit it with a script. I'm going to use PHP's cURL functions to do it, but the form is not submitted directly: it has 3 steps, and at the end of each step the entered values are stored in session variables; at the final step a record is inserted into the database using the values read from the sessions. Is it possible to auto-submit this form using cURL or not? What is the best solution for it?
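
    It should be possible as long as the script carries the session cookie across all three steps, since that is what ties the requests to the same server-side session. A hedged sketch using cURL's cookie jar (the step URLs and field names are made up for illustration):

        // Hypothetical step URLs and fields; substitute the real form's.
        $jar   = tempnam(sys_get_temp_dir(), 'cookies');
        $steps = array(
            'http://example.com/form/step1' => array('name'    => 'John'),
            'http://example.com/form/step2' => array('email'   => 'john@example.com'),
            'http://example.com/form/step3' => array('confirm' => '1'),
        );

        foreach ($steps as $url => $fields) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
            curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);  // save the session cookie
            curl_setopt($ch, CURLOPT_COOKIEFILE, $jar); // send it on the next step
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_exec($ch);
            curl_close($ch);
        }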

    Read the article

  • Data is not returned when I call a web service

    - by rash111
    I am using cURL for a POST web service call. Locally I get data, but when I moved my code and the web service to the server, I stopped getting data. When I call the service from the REST Client add-on for Firefox, I get data; when I hit it through code, I get the following: 1. When using cURL for the POST, the reply is: Not Found. 2. When using file_get_contents, it gives: failed to open stream: HTTP request failed! HTTP/1.0 404 Not Found. What can I do now?
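
    A 404 from code but not from a browser-based client usually means the request the code sends differs from the one the browser sends (wrong URL or host, or an unfollowed redirect). One way to see what cURL actually requested and received, sketched with a placeholder URL:

        $ch = curl_init('http://example.com/service/endpoint'); // placeholder URL
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('param' => 'value'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects like the browser does
        $body = curl_exec($ch);

        echo curl_getinfo($ch, CURLINFO_HTTP_CODE), "\n";     // status actually returned
        echo curl_getinfo($ch, CURLINFO_EFFECTIVE_URL), "\n"; // URL actually requested
        echo curl_error($ch), "\n";                           // transport-level errors, if any
        curl_close($ch);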

    Read the article

  • RESTful client on CodeIgniter issue

    - by user1852837
    This is weird. I don't know what the problem with my website is. It works on my local server but not on the live server. The login page works on the first sign-in, but after logging out and re-logging in, the message says "invalid username and password", even though it worked on the first attempt. While debugging I found out that http://xxxxx.com/api/authentication/sign is not found: it displays a 404 Page Not Found. Sometimes you can log in and sometimes not. Locally it works. I contacted the web server admin and asked what the status of sessions on the server is and how it executes its web requests (sockets, file_get_contents, cURL?). They said no problems could be reproduced with server sessions and that PHP cURL works fine. I know it's weird, but can somebody here figure out what the problem behind it is?

    Read the article

  • Dynamic web widget

    - by user1824996
    My vendor offers a widget creation service: I can log in to their page, set the initial values of a search form, and after the save button is clicked, copy and paste the script code onto my website to display a product search result widget. I am thinking of changing this static widget into a dynamic one. Since my programming knowledge is limited, can experts tell me if it's possible to log in to the HTTPS site remotely (using cURL) and set the search form values equal to values on my page (every time my page content changes, it would change the form values), then save the form, so that the widget script I pasted on my page is always refreshed to the new search result? The issue involves cross-domain requests, form submission, and server/browser communication. I know a little jQuery, PHP, Ajax, and cURL, but so far I'm stuck with just having an idea and not really being sure how to implement it.

    Read the article

  • pagination - 10 pages per page

    - by arthur
    I have a pagination script that displays a list of all pages like so:

        prev [1][2][3][4][5][6][7][8][9][10][11][12][13][14] next

    But I would like to only show ten of the numbers at a time:

        prev [3][4][5][6][7][8][9][10][11][12] next

    How can I accomplish this? Here is my code so far:

        <?php
        /* Set current, prev and next page */
        $page = (!isset($_GET['page'])) ? 1 : $_GET['page'];
        $prev = ($page - 1);
        $next = ($page + 1);

        /* Max results per page */
        $max_results = 2;

        /* Calculate the offset */
        $from = (($page * $max_results) - $max_results);

        /* Query the db for total results. You need to edit the sql to fit your needs */
        $result = mysql_query("select title from topics");
        $total_results = mysql_num_rows($result);
        $total_pages = ceil($total_results / $max_results);
        $pagination = '';

        /* Create a PREV link if there is one */
        if ($page > 1) {
            $pagination .= '<a href="?page=' . $prev . '">Previous</a> ';
        }

        /* Loop through the total pages */
        for ($i = 1; $i <= $total_pages; $i++) {
            if ($page == $i) {
                $pagination .= $i;
            } else {
                $pagination .= '<a href="index.php?page=' . $i . '">' . $i . '</a>';
            }
        }

        /* Print NEXT link if there is one */
        if ($page < $total_pages) {
            $pagination .= '<a href="?page=' . $next . '">Next</a>';
        }

        /* Now we have our pagination links in a variable ($pagination) ready to
           print to the page. I put it in a variable because you may want to show
           them at the top and bottom of the page */

        /* Below is how you query the db for ONLY the results for the current page */
        $result = mysql_query("select * from topics LIMIT $from, $max_results");
        while ($i = mysql_fetch_array($result)) {
            echo $i['title'] . '<br />';
        }
        echo $pagination;
        ?>
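
    One way to get the sliding window: clamp the page-number loop to a ten-link window centred on the current page. A sketch reusing the variable names from the code above (it would replace the existing for loop):

        /* Show at most 10 page links, centred on the current page */
        $window = 10;
        $start  = max(1, $page - (int) floor($window / 2));
        $end    = min($total_pages, $start + $window - 1);
        $start  = max(1, $end - $window + 1); /* re-anchor near the last page */

        for ($i = $start; $i <= $end; $i++) {
            if ($page == $i) {
                $pagination .= $i;
            } else {
                $pagination .= '<a href="index.php?page=' . $i . '">' . $i . '</a>';
            }
        }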

    Read the article

  • How to customize the content of each page using Page Control and UIScrollView?

    - by viper15
    I have a problem customizing each page using a page control and UIScrollView. I'm customizing the Page Control sample from Apple. Basically, I would like each page to be different, with text and images alternating on different pages: Page 1 will have all text, Page 2 will have just images, Page 3 will have all text, and so on. This is the original code:

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];
        }

    As you can see, this code shows only Page 1, Page 2, etc. as you scroll right. I tried to put in this new code, but it didn't make any difference. There's no error, and I know this is pretty simple code; I don't know why it doesn't work. I declare pageText as a UILabel.

        // Set the label and background color when the view has finished loading.
        - (void)viewDidLoad {
            pageNumberLabel.text = [NSString stringWithFormat:@"Page %d", pageNumber + 1];
            self.view.backgroundColor = [MyViewController pageControlColorWithIndex:pageNumber];
            if (pageNumber == 1) {
                pageText.text = @"Text in page 1";
            }
            if (pageNumber == 2) {
                pageText.text = @"Image in page 2";
            }
            if (pageNumber == 3) {
                pageText.text = @"Text in page 3";
            }
        }

    Also, if you have a better way to do it, let me know. Thanks.

    Read the article

  • Error installing FeedZirra

    - by Gautam
    Hi, I am new to Ruby on Rails. I am excited about feed parsing, but when I install FeedZirra I get this error. I use Windows 7 and Ruby 1.8.7. Please help. Thanks in advance.

        C:\Ruby187>gem sources -a http://gems.github.com
        http://gems.github.com added to sources

        C:\Ruby187>gem install pauldix-feedzirra
        Building native extensions. This could take a while...
        ERROR: Error installing pauldix-feedzirra:
        ERROR: Failed to build gem native extension.

        C:/Ruby187/bin/ruby.exe extconf.rb
        checking for curl-config... no
        checking for main() in -lcurl... no
        *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of necessary
        libraries and/or headers. Check the mkmf.log file for more details. You may
        need configuration options.

        Provided configuration options:
          --with-opt-dir
          --without-opt-dir
          --with-opt-include
          --without-opt-include=${opt-dir}/include
          --with-opt-lib
          --without-opt-lib=${opt-dir}/lib
          --with-make-prog
          --without-make-prog
          --srcdir=.
          --curdir
          --ruby=C:/Ruby187/bin/ruby
          --with-curl-dir
          --without-curl-dir
          --with-curl-include
          --without-curl-include=${curl-dir}/include
          --with-curl-lib
          --without-curl-lib=${curl-dir}/lib
          --with-curllib
          --without-curllib

        extconf.rb:12: Can't find libcurl or curl/curl.h (RuntimeError)
        Try passing --with-curl-dir or --with-curl-lib and --with-curl-include
        options to extconf.

        Gem files will remain installed in C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0 for inspection.
        Results logged to C:/Ruby187/lib/ruby/gems/1.8/gems/taf2-curb-0.5.4.0/ext/gem_make.out

    Read the article

  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX; the deadline is approaching and I am struggling with layout problems. I am using the document class book. I am posting both problems in this one thread because I am not sure whether their solutions are related. The problems are:

    1. The page numbers are mostly located at the top right of each page (this is correct and where I want them to be). However, on the first page of each chapter and on the first page of what I call "special chapters", the page number is located at the bottom center. By "special chapters" I mean: Table of Contents, List of Figures, List of Tables, References, Index. My university will not accept the thesis like this. The page number must ALWAYS be at the top right of each page, even if the page is the first page of a chapter or the first page of something like the Table of Contents. How can I fix this?

    2. On the first page of chapters and "special chapters" (Table of Contents, ...), the chapter title is located far too low on the page. This is the standard layout of LaTeX with the book class, I think. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title, not the header: if there is a chapter called "Chapter 1 Dynamics of foobar under mechanical stress", then that text has to start at the top of the page, but right now it starts several centimeters below the top. How can I fix this?

    I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.

    Read the article

  • Since Google reduces the value of links alongside nofollow links, what is an alternative?

    - by SharkTheDark
    Since 2009, Google has counted nofollow links as outgoing links as well, and thus reduces the value of the other links. What are some alternatives to stop Google from counting outside links on my page? If I make links appear in my page source like this:

        <span hrefs="http://link" rel="nofollow" link="true">Link Name</span>

    and then in JavaScript replace the span with an a tag and hrefs with href for every span tag that has link="true" - will this help?

    Read the article

  • Enable cURL in the cPanel control panel of a shared host for my account

    - by Jayapal Chandran
    I host my site in a shared environment. Recently, for security reasons, the hosting company disabled the socket functions. When I enquired, they said they will enable them for people who personally request that option. They said it is a matter of 2 minutes' work and asked for my control panel username and password, saying it is just a matter of updating the php.ini for my account. So I want to know how to do it myself: if it can be done by them in 2 minutes, then why can't a developer do it? I asked them, but they mumbled, telling me not to give myself trouble. So I want to know how to edit php.ini, or do something like what is stated above. My hosting uses the cPanel control panel. Suggestions please.
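
    Before editing anything, it helps to confirm what is actually disabled and which php.ini is in effect; a small PHP probe dropped into any page on the account will show this (all standard functions, nothing host-specific assumed):

        <?php
        // What did the host actually turn off, and via which php.ini?
        var_dump(function_exists('curl_init'));  // false if the cURL extension is missing or disabled
        var_dump(function_exists('fsockopen'));  // false if the socket functions are disabled
        echo ini_get('disable_functions'), "\n"; // the host's disabled-function list
        echo php_ini_loaded_file(), "\n";        // path of the php.ini in effect
        ?>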

    Read the article

  • Establishing a web page bookmarking process - looking for ideas to improve

    - by Matt
    Like many others, I have a process for bookmarking web pages to read later. My requirements for web page bookmarking are:

    1. Ability to bookmark pages must be available from all (within reason) platforms - PC/browser, mobile device, etc.
    2. Bookmarks must be centrally stored (implicit from #2) so that I can read the bookmarks from anywhere/any device
    3. Full text of web pages must be stored

    Bonus features would be:

    - Bookmarks and page content should be full text searchable
    - Maintain an archive indefinitely
    - Distinguish between what's read vs. unread
    - Bookmarked page content is cleaned up, e.g. ads eliminated, unnecessary html removed, pages better formatted for reading

    My current process (which addresses most of these requirements) is as follows:

    1. I set up a Gmail account with 2 labels, "Bookmarks Unread" and "Bookmarks Read"
    2. Gmail filters are set up such that, depending on the form of the address (using Gmail's '+string' functionality in addresses), the incoming bookmark gets labeled appropriately
    3. On each of my browsers/devices, I have an address book entry for [email protected] and [email protected]
    4. If I want to clean up the page content, I use the Readability bookmarklet, which does a great job of giving me the essential content only
    5. Anywhere I have Firefox, I use the Send Page by Email extension which, with 2 clicks, allows me to send the cleaned-up Readability page URL and content to one of the above email addresses
    6. Where I don't have Firefox (e.g. iPhone or other mobile device), I use the native ability to send the current link via email (most/all apps have it, including the browser, RSS readers, NYTimes, etc.). In most cases (unless it's built into the particular app), this won't include the page body.

    The process is almost perfect. I've got the central access and ubiquitous access of Gmail as the storage mechanism, full text searchability (due to Gmail, but of course only for the URLs I send from that Firefox extension), a cleaned-up page due to Readability, the ability to read offline (assuming I use an IMAP client against Gmail), and permanent archiving of content, including what's been read vs. unread. The missing pieces are:

    - The Send Page by Email Firefox extension seems to send only some portion of a web page, which limits my full text searchability.
    - Where I don't have Firefox, I can only send the link, so no full text search at all in those cases.

    Instapaper looks like it meets most of my requirements (and bonus items). The only downside to me (personal preference) is that central storage is based on Instapaper vs. something broader like Gmail, which, as a generalized service with Google behind it, pretty much means it's permanent. I'm not too hung up on this, but I would definitely prefer to keep Gmail if possible. An upside of Instapaper is that it does the page clean-up as well as storing the entire page content, unlike my Firefox extension.

    Thoughts on addressing the gaps and improving this process further?

    Read the article

  • Formula-based Excel page headers

    - by Jake Krohn
    I'm using the "Rows to repeat at top" function in Excel's "Page Setup" dialog to ensure that a multi-row header block appears on every printed page of my worksheet. However, I'd like to be able to change certain bits of the header based on the content of the current page. I would simply like to display the value of one cell in the first row that is printed on the page. If this is my header: Section: xx And the data looks like this (columns are Section and Name): 1 Foo 1 Bar 2 Baz I want the "xx" in the header to be "1". If, further down on the next page, the value in the Section column is "3", I want that printed in the header of the next page. I originally thought that using the "OFFSET" function might help, e.g. ="Section: "&OFFSET(A2, 1, 0) But it only shows the offset from the original placement of the header, thus only working on page 1. The end document is a PDF, so right now I'm able to go back in with the "TouchUp Text Tool" in Acrobat and add the numbers page by page. But it gets to be a tedious process with 70+ page reports. Anyone have any better ideas that don't require me mucking up the original Excel document with inserted headers every N lines? This is Excel 2008 for Mac, if it makes a difference.

    Read the article

  • How to use wget to grab a copy of Google Code site documents?

    - by Alex Reynolds
    I have a Google Code project which has a lot of wiki'ed documentation. I would like to create a copy of this documentation for offline browsing, using wget or a similar utility. I have tried the following:

        $ wget --no-parent \
               --recursive \
               --page-requisites \
               --html-extension \
               --base="http://code.google.com/p/myProject/" \
               "http://code.google.com/p/myProject/"

    The problem is that links within the mirrored copy look like:

        file:///p/myProject/documentName

    This renaming of links causes 404 (not found) errors, since the links point nowhere valid on the filesystem. What options should I use instead with wget, so that I can make a local copy of the site's documentation and other pages?

    Read the article
