Search Results

Search found 1194 results on 48 pages for 'curl'.

Page 40 of 48

  • Why does $_GET in PHP wrongly decode the slash?

    - by Boaz
    Hi, today I ran into some oddity with PHP which I failed to find a proper explanation for in the documentation. Consider the following code: <?php echo $_GET['t'] . PHP_EOL; ?> The code is simple - it takes a single t parameter on the URL and echoes it back. So if you call it with test.php?t=%5Ca (%5C is a '\'), I expected to see: \a However, this is what I got: $ curl http://localhost/~boaz/test.php?t=%5Ca \\a Notice the double slash. Can anyone explain what's going on and give a recipe for retrieving the string exactly as it was supplied on the URL? Thanks, Boaz
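
    A minimal sketch of one way to recover the raw value, assuming the extra backslash is added by the legacy magic_quotes_gpc setting (that cause is an assumption; the script and parameter names are taken from the question):

        <?php
        // Sketch: recover the 't' parameter exactly as supplied on the URL.
        // Assumption: the doubled backslash comes from magic_quotes_gpc
        // (legacy PHP), which escapes backslashes and quotes in $_GET/$_POST.
        $t = $_GET['t'];
        if (function_exists('get_magic_quotes_gpc') && get_magic_quotes_gpc()) {
            $t = stripslashes($t);
        }
        echo $t . PHP_EOL;   // expected output for test.php?t=%5Ca is: \a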

    Read the article

  • What's the best way to write a maintainable web scraping app?

    - by Benj
    I wrote a Perl script a while ago which logged into my online banking and emailed me my balance and a mini-statement every day. I found it very useful for keeping track of my finances. The only problem is that I wrote it just using Perl and curl, and it was quite complicated and hard to maintain. After a few instances of my bank changing their webpage I got fed up with debugging it to keep it up to date. So what's the best way of writing such a program so that it's easy to maintain? I'd like to write a nice, well-engineered version in either Perl or Java which will be easy to update when the bank inevitably fiddles with their web site.

    Read the article

  • How to programmatically launch a chromecast app from command line

    - by pushmatrix
    I want to launch a Chromecast app, but NOT using the Chrome extension or iOS or Android - I want to do this from the command line. I noticed that you can send a POST to your Chromecast and it will launch an app. For example if I do curl -H "Content-Type: application/json" http://CHROMECAST_IP:8008/apps/YouTube -X POST -d 'v=oHg5SJYRHA0' then it will start up YouTube. But for some reason I can't do this with custom apps (in dev mode). I thought I'd be able to send a POST to http://CHROMECAST_IP:8008/apps/MY_REGISTERED_APP_ID, but no luck. I just get a 404 response. Hmmm... My app is just a simple webpage (it is not streamed media). I want to run a little headless server that starts my Chromecast app every day via a CRON task. Any help is greatly appreciated! Thanks :)
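
    For reference, a rough sketch of issuing that same launch POST from PHP's cURL extension (the device IP and app name are placeholders carried over from the shell command above):

        <?php
        // Sketch: the same DIAL-style launch request, issued from a script that
        // a cron job could run. CHROMECAST_IP and the app name are placeholders.
        $ch = curl_init('http://CHROMECAST_IP:8008/apps/YouTube');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, 'v=oHg5SJYRHA0');
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);   // a 404 here is the symptom described above
        curl_close($ch);
        echo $status . PHP_EOL;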

    Read the article

  • php DOM, get values from xml document, php xml

    - by Michael
    I'm trying to get some information (itemID, title, price and mileage) for multiple listings from the eBay website using their API. So far I have this link working: http://open.api.ebay.com/shopping?callname=GetMultipleItems&responseencoding=XML&appid=Morcovar-c74b-47c0-954f-463afb69a4b3&siteid=0&version=525&IncludeSelector=ItemSpecifics&ItemID=220617293997,250645537939,230485306218 I've saved the document as an .xml file using PHP cURL and now I need to extract the values (itemID, title, price and mileage) into arrays and store them in a database. Unfortunately I've never worked with PHP DOM and I can't figure out how to extract the values. I tried to follow the tutorial found on the IBM website http://www.ibm.com/developerworks/library/os-xmldomphp/ but I had no success. Some help would be highly appreciated.
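
    A rough DOM sketch for pulling the fields out of the saved response. The element names (Item, ItemID, Title, ConvertedCurrentPrice) are assumptions about the Shopping API XML and should be checked against the actual file; mileage normally sits inside the ItemSpecifics name/value pairs:

        <?php
        // Sketch: walk the saved GetMultipleItems response and collect rows
        // ready for a database insert. 'items.xml' is a hypothetical local copy.
        $doc = new DOMDocument();
        $doc->load('items.xml');

        $rows = array();
        foreach ($doc->getElementsByTagName('Item') as $item) {
            $rows[] = array(
                'ItemID' => $item->getElementsByTagName('ItemID')->item(0)->nodeValue,
                'Title'  => $item->getElementsByTagName('Title')->item(0)->nodeValue,
                'Price'  => $item->getElementsByTagName('ConvertedCurrentPrice')->item(0)->nodeValue,
                // mileage: look under ItemSpecifics / NameValueList in the same way
            );
        }
        print_r($rows);   // insert into the database from here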

    Read the article

  • Is there a way to send tracking info to Google Analytics from PHP?

    - by seatoskyhk
    I have PHP code that returns an image. The link is given to a 3rd party, so I need to keep track of where the PHP requests are coming from. Because the PHP script only returns the image, I cannot use the JavaScript code for Google Analytics. I know that I can get the information from the access.log, but I think I can't dump the access.log into GA for analyzing, right? So, is there something I can do in PHP (e.g. sending a cURL request) to send something to Google Analytics for tracking?
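
    One server-side option is Google's Measurement Protocol, which accepts hits over plain HTTP; a minimal sketch, assuming it is available for the property in question (the property ID and page path below are placeholders):

        <?php
        // Sketch: record a pageview hit server-side before returning the image.
        $params = http_build_query(array(
            'v'   => 1,                    // Measurement Protocol version
            'tid' => 'UA-XXXXX-Y',         // placeholder property ID
            'cid' => uniqid('', true),     // anonymous client ID
            't'   => 'pageview',
            'dp'  => '/image-endpoint',    // logical page path to report
        ));
        $ch = curl_init('https://www.google-analytics.com/collect');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        curl_close($ch);
        // ...then send the image to the client as usual.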

    Read the article

  • Fetch HTML page and store it in MySQL - how to?

    - by codemaker
    Hi, What's the best way to store a formatted HTML page with CSS in a MySQL database? Is it possible? What should the column type be? How do I retrieve the stored formatted HTML and display it correctly using PHP? What if the page I would like to fetch has pics and videos - should I store the page as a BLOB? What's the best way to fetch a page using PHP (cURL, fopen, ...)? Many questions guys, but I really need your help to put me on the right way to do it. Thanks a lot.
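
    A minimal sketch of the fetch-and-store part, assuming a table with a MEDIUMTEXT column for the markup (table, column and connection details are placeholders; images and videos are usually better kept as separate files or BLOBs rather than inside the HTML row):

        <?php
        // Sketch: fetch a page with cURL and store the raw HTML via PDO.
        $url = 'http://example.com/page.html';

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $html = curl_exec($ch);
        curl_close($ch);

        $pdo  = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');
        $stmt = $pdo->prepare('INSERT INTO pages (url, html) VALUES (?, ?)');
        $stmt->execute(array($url, $html));

        // Display later: SELECT html FROM pages WHERE url = ? and echo it.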

    Read the article

  • Integrating Incoming Email Into a php/mysql App

    - by phirschybar
    I am looking to create an incoming email daemon/switchboard that I can integrate with various remote PHP/MySQL apps. Ideally I want to check the 'to' address to see if it is in a MySQL database and, if it is, have the email parsed and posted via cURL to a target destination, as well as have attachments saved somewhere locally. I will likely set up a Rackspace cloud server dedicated to this task (just accepting emails and posting to 3rd party APIs). However, I do not know where to start. Which server platform / distribution should I go with? Which software needs to be customized, etc.?
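
    A rough sketch of one common pattern, assuming the MTA (e.g. Postfix or Exim) is configured to pipe each incoming message to a script on stdin; every name here (table, columns, endpoint) is a placeholder, and a real MIME parser would be preferable to the crude header match:

        <?php
        // Sketch: read the raw message from the MTA, look the recipient up in
        // MySQL, and relay the message to the configured endpoint via cURL.
        $raw = file_get_contents('php://stdin');

        preg_match('/^To:\s.*?([\w.+-]+@[\w.-]+)/mi', $raw, $m);   // crude To: extraction
        $to = isset($m[1]) ? strtolower($m[1]) : '';

        $pdo  = new PDO('mysql:host=localhost;dbname=switchboard', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT target_url FROM routes WHERE address = ?');
        $stmt->execute(array($to));

        if ($route = $stmt->fetch(PDO::FETCH_ASSOC)) {
            $ch = curl_init($route['target_url']);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, array('raw_email' => $raw));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_exec($ch);
            curl_close($ch);
        }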

    Read the article

  • parsing an xml to get some values

    - by Joan Silverstone
    Hello, I have this xml: http://www.managerleague.com/export_data.pl?data=transfers&output=xml&hide_header=0 These are player sales from a browser game. I want to save some fields from these sales. I am fetching that xml with curl and storing it on my server. Then I do the following: $xml_str = file_get_contents('salespage.xml'); $xml = new SimpleXMLElement($xml_str); $items = $xml->xpath('*/transfer'); print_r($items); foreach($items as $item) { echo $item['buyerTeamname'], ': ', $item['sellerTeamname'], "\n"; } The array is empty and I can't seem to get anything from it. What am I doing wrong?
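
    A hedged variation on the snippet above. Two frequent causes of an empty result are an XPath that doesn't match the feed's actual nesting, and reading values as attributes when they are child elements; the element names here are guesses about the feed and may need adjusting:

        <?php
        // Sketch: match <transfer> nodes at any depth and try both child-element
        // and attribute access for the team names.
        $xml_str = file_get_contents('salespage.xml');
        $xml = new SimpleXMLElement($xml_str);

        $items = $xml->xpath('//transfer');
        foreach ($items as $item) {
            $buyer  = isset($item->buyerTeamname)  ? (string) $item->buyerTeamname  : (string) $item['buyerTeamname'];
            $seller = isset($item->sellerTeamname) ? (string) $item->sellerTeamname : (string) $item['sellerTeamname'];
            echo $buyer, ': ', $seller, "\n";
        }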

    Read the article

  • Good simple C/C++ FTP and SFTP client library recommendation for embedded Linux

    - by Roman Nikitchenko
    Could anyone recommend an FTP / SFTP client C/C++ library for a Linux-based embedded system? I know about the cURL library, but I need something as simple as possible, just to download files from FTP / SFTP servers. Is there any recommendation to look at? Yes, SFTP support is critical. Actually I can even sacrifice multi-threading because I need only one stream at a time. And I'd like it to be able to work through memory buffers, but this should not be a problem. Thank you in advance.

    Read the article

  • Retrieving a page that has a redirect

    - by Dmitry Makovetskiyd
    I get my page content with this function: private function fetch_url($url){ $ch=curl_init($url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); curl_setopt($ch, CURLOPT_TIMEOUT, 320); $this->doc = curl_exec($ch); $this->status_code= curl_getinfo($ch, CURLINFO_HTTP_CODE); // echo $this->doc; curl_close($ch); } The problem is that some URLs don't exist on the site and there is a redirect to another page. So say if I put in the parameter: http://example.com/uncategorized/ it redirects me to: http://example.com/mature/ The problem is that with curl I don't get any content, but my aim is to get the content of the page it redirects to. Is there an easy way to get the function to work the way I want?
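
    A minimal sketch of the same function with redirect-following switched on, which is the usual way to make cURL return the body of the page the server redirects to (the extra $final_url property is an addition for illustration):

        <?php
        // Sketch: follow 3xx responses so curl_exec() returns the final page.
        private function fetch_url($url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);   // follow redirects
            curl_setopt($ch, CURLOPT_MAXREDIRS, 5);           // guard against loops
            curl_setopt($ch, CURLOPT_TIMEOUT, 320);
            $this->doc = curl_exec($ch);
            $this->status_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
            $this->final_url   = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
            curl_close($ch);
        }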

    Read the article

  • REST client website login

    - by Jordan
    I have written a REST service that uses WSSE as an authentication method, but I want to be able to use this REST service through a browser by creating a website around the service. I want the user to be able to log in on the website; then, when they view, for example, the "view users" page, an AJAX request is made to test.com/users and back comes the list. The part I'm trying to get my head around is the logging in/out on the website and keeping the user logged in across pages. Since in a true REST implementation there's no state held on the server, I can't use $_SESSION, and now I don't know where to start! What is the best way to go about this? Do I still need to store session information on the server and then possibly use cURL to make the request? Thanks, Jay

    Read the article

  • input URL, output contents of "view page source", i.e. after javascript / etc, library or command-line

    - by Ryan Berckmans
    I need a scalable, automated, method of dumping the contents of "view page source" (DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but do not execute javascript or any of that 'fancy stuff'. My ideal solution looks like any of the following (fantasy solutions): cat urls.txt | google-chrome --quiet --no-gui \ --output-sources-directory=~/urls-source (fantasy command line, no idea if flags like these exist) or cat urls.txt | python -c "import some-library; \ ... use some-library to process urls.txt ; output sources to ~/urls-source" As a secondary concern, I also need: dump all included javascript source to file (a la firebug) dump pdf/image of page to file (print to file)

    Read the article

  • Writing a script to bypass college login page

    - by gtredcvb
    My college has a silly login page that requires you to download a whole bunch of garbage that a lot of us don't need (Norton Anti-virus, antispyware software, etc.). We have to have them running to get on the internet on campus. Though, if you are on Linux, or at least set your user-agent to Linux, the requirements are gone. We could easily use Firefox with the user-agent switcher to bypass this, but it'd be nice to create a script that automates this. How would this be possible? I figure this could be written in Python, and it could grab the webpage with cURL, specifying a user agent? How would I go about posting the data back to the servers? Thanks
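
    A small sketch of the idea using PHP's cURL extension (any language with a cURL binding would look much the same): present a Linux user agent, keep cookies across requests, and POST the form back. The URLs and form field names are placeholders; the real ones would come from inspecting the college's login page.

        <?php
        // Sketch: spoof a Linux user agent, keep a cookie jar, POST the login form.
        $jar = tempnam(sys_get_temp_dir(), 'cookies');
        $ua  = 'Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/3.6';

        $ch = curl_init('https://login.example.edu/portal');
        curl_setopt($ch, CURLOPT_USERAGENT, $ua);
        curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
        curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $loginPage = curl_exec($ch);   // grab the page first (hidden fields, cookies)

        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array('username' => 'me', 'password' => 'secret'));
        $result = curl_exec($ch);      // post the credentials back
        curl_close($ch);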

    Read the article

  • JavaScript apparently waits for each AJAX call before sending another in a loop

    - by itako
    Hello. Straight to the point: I have this JavaScript: for(item=1;item<5;item++) { xmlhttp=new XMLHttpRequest(); xmlhttp.open("GET",'zzz.php', true); xmlhttp.send(); } And in the PHP file something like this: usleep(5);die('ok'); Now the problem is that JavaScript seems to be waiting for each AJAX call to complete before sending another one. So the first response gets back after approx. 5 seconds, the next after 10 seconds, and so on. That's a very simplified version of what I do, since the real script involves using cURL in PHP and jQuery as the JS lib. But the problem remains the same. Why do the responses come back at 5-second intervals?

    Read the article

  • Need a very simple bash-based webserver for logging XML in HTTP POST

    - by Syffys
    As in the title, it's for testing purposes and I need it to be extremely light (one line, or one single light file). Here is an XML query sample:

        XML_QUERY=$(cat <<EOF
        <?xml version='1.0' encoding='UTF-8'?>
        <Test></Test>
        EOF
        )
        curl -H "Content-type: text/xml; charset=utf-8" -H "Soapaction: \"\"" -k -d "${XML_QUERY}" http://localhost:8088

    Here are some of the tracks I have found so far, even if I wasn't able to adapt them to work as I expect: a netcat minimal webserver (problem: my nc does not have the -q option, so the connection closes before delivering the XML content); a netcat-only webserver (same as above); a Python-based one (but it does not handle POST). Thanks in advance!

    Read the article

  • PHP - How to determine if request is coming from a specific file.

    - by John
    I have fileA.php on SERVER_A and fileB.php on SERVER_B. fileB.php makes a curl request to fileA.php for its contents. How can fileA.php determine that the request is coming specifically from fileB.php? -- I was thinking about sending the $_SERVER['SCRIPT_NAME'] in fileB.php to fileA.php, but since someone can go into fileB.php, or any file in general, and just do $_SERVER['SCRIPT_NAME'] = 'fileB.php'; it's not really that secure. So how can I determine, for security reasons, that the request is coming from a specific file on a different server?
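
    One pattern worth considering (not from the question itself) is a shared secret known only to the two servers, with fileB signing each request and fileA verifying the signature; strictly speaking this authenticates the server holding the secret rather than a particular file. A rough sketch with placeholder names:

        <?php
        // fileB.php on SERVER_B - sketch: sign the request with a shared secret.
        $secret = 'replace-with-a-long-random-string';         // known only to both servers
        $fields = array('from' => 'fileB.php', 'ts' => time());
        $fields['sig'] = hash_hmac('sha256', $fields['from'] . '|' . $fields['ts'], $secret);

        $ch = curl_init('http://server-a.example.com/fileA.php');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $contents = curl_exec($ch);
        curl_close($ch);

        // fileA.php on SERVER_A - sketch: verify the signature before answering.
        $expected = hash_hmac('sha256', $_POST['from'] . '|' . $_POST['ts'], $secret);
        if ($expected !== $_POST['sig'] || abs(time() - (int) $_POST['ts']) > 300) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // ...return the contents...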

    Read the article

  • How do I post a link to the feed of a page via the Graph API *as* the page?

    - by jsdalton
    I'm working on a plugin for a Wordpress blog that posts a link to every article published to a Facebook Page associated with the blog. I'm using the Graph API and I have authenticated myself, for the time being, via OAuth. I can successfully post a message to the page using curl via a POST request to https://graph.facebook.com/{mypageid}/feed with e.g. message = "This is a test", and it publishes the message. The problem is that the message is "from" my user account. I'm an admin on this test page, and when I go to Facebook and post an update from the web, the link comes "from" my page. Is there a way to authenticate myself as a page? Or is there an alternate way to POST to a page feed that doesn't end up being interpreted as a comment from a user? Thanks for any thoughts or suggestions.
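
    The usual route, worth verifying against the Graph API docs for your version, is to fetch a page access token from the authenticated user's /me/accounts and post to the feed with that token, so the story is attributed to the page. A sketch with placeholder IDs and tokens:

        <?php
        // Sketch: post to the page feed with a page access token.
        $userToken = 'USER_ACCESS_TOKEN';   // placeholder
        $pageId    = 'MY_PAGE_ID';          // placeholder

        // 1. List pages the user administers and pick out this page's token.
        $accounts = json_decode(file_get_contents(
            'https://graph.facebook.com/me/accounts?access_token=' . urlencode($userToken)), true);
        $pageToken = null;
        foreach ($accounts['data'] as $acct) {
            if ($acct['id'] == $pageId) { $pageToken = $acct['access_token']; }
        }

        // 2. Post with the page token so the link comes "from" the page.
        $ch = curl_init("https://graph.facebook.com/$pageId/feed");
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, array(
            'message'      => 'New post on the blog',
            'link'         => 'http://example.com/my-article',
            'access_token' => $pageToken,
        ));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        echo curl_exec($ch);
        curl_close($ch);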

    Read the article

  • how to verify that an old session is really destroyed?

    - by jodeci
    Um, this might sound a bit weird. We were having some problems with a specific browser under a very specific condition, and finally narrowed down the problem to the fact that we were not properly destroying the old sessions after doing session_regenerate_id(). I believe I have solved this problem by doing session_regenerate_id(true) now, but how does one verify that the previous sessions really do not exist any more? Someone suggested cURL but I cannot find my way around their docs. Sadly(?) the boss does not take 'it just works' for an answer so I'd really appreciate any advice!
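
    One concrete check is to capture the session ID before regenerating, then replay it with cURL against a small test endpoint and see whether the server still recognises it. A sketch; check_session.php is a hypothetical test script and PHPSESSID is assumed to be the session cookie name:

        <?php
        // check_session.php (hypothetical test endpoint) would contain:
        //   session_start();
        //   echo empty($_SESSION) ? 'empty/new session' : 'old session still alive';

        // Replay the pre-regeneration session ID and inspect the answer.
        $oldId = 'PASTE_THE_OLD_SESSION_ID_HERE';

        $ch = curl_init('http://localhost/check_session.php');
        curl_setopt($ch, CURLOPT_COOKIE, 'PHPSESSID=' . $oldId);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        echo curl_exec($ch), PHP_EOL;   // expect 'empty/new session' if destruction worked
        curl_close($ch);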

    Read the article

  • How to work around a site forbidding me to scrape their images with PHP

    - by Petruza
    I'm scraping a site, searching for JPGs to download. Scraping the site's HTML pages works fine. But when I try getting the JPGs with cURL, copy(), fopen(), etc., I get a 403 Forbidden status. I know that's because the site owners don't want their images scraped, so I understand a good answer would be "just don't do it, because they don't want you to". OK, but let's say it's OK and I try to work around this: how could this be achieved? If I get the same URL with a browser, I can open the image perfectly; it's not that my IP is banned or anything, and I'm testing the scraper one file at a time, so it's not blocking me because I make too many requests too often. From my understanding, it could be that the site is checking for some cookies that confirm that I'm using a browser and browsing their site before I download a JPG. Or maybe PHP is using some user agent for the requests that the server can detect and filter out. Anyway, any ideas?
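
    Both guesses point at the same work-around: make the image request look like it came from a browser that was just on the page. A sketch, sending a browser-like User-Agent, a Referer pointing at the HTML page, and carrying over whatever cookies that page set (all URLs are placeholders):

        <?php
        // Sketch: fetch the HTML page first so cookies get set, then the JPG.
        function fetch($url, $referer, $jar) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1) Firefox/3.6');
            curl_setopt($ch, CURLOPT_REFERER, $referer);
            curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
            curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);
            $data = curl_exec($ch);
            curl_close($ch);
            return $data;
        }

        $jar = tempnam(sys_get_temp_dir(), 'cookies');
        fetch('http://example.com/gallery/page1.html', 'http://example.com/', $jar);
        $jpg = fetch('http://example.com/images/photo1.jpg', 'http://example.com/gallery/page1.html', $jar);
        file_put_contents('photo1.jpg', $jpg);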

    Read the article

  • Best practice to detect iPhone app only access for web services?

    - by Gaius Parx
    I am developing an iPhone app together with web services. The iPhone app will use GET or POST to retrieve data from the web services, such as http://www.myserver.com/api/top10songs.json to get data for the top ten songs, for example. There is no user account and password for the iPhone app. What is the best practice to ensure that only my iPhone app has access to the web API http://www.myserver.com/api/top10songs.json? The iPhone SDK's UIDevice uniqueIdentifier is not sufficient, as anyone can fake the device id as a parameter when making the API call using wget, curl or web browsers. The web services API will not be published. The data of the web services is not secret and private; I just want to prevent abuse, as there are also APIs to write some data to the server, such as usage logs.

    Read the article

  • Can a plain servlet be configured as a seam component?

    - by stacker
    I created a plain servlet within a seam-gen (2.1.2) application; now I would like to use injection. Thus I annotated it with @Name and it's recognized as a component: INFO [Component] Component: ConfigReport, scope: EVENT, type: JAVA_BEAN, class: com.mycompany.servlet.ConfigReport Unfortunately the injection of the logger doesn't work (NullPointerException in init()): import org.jboss.seam.annotations.Logger; import org.jboss.seam.annotations.Name; import org.jboss.seam.log.Log; @Name("ConfigReport") public class ConfigReport extends HttpServlet { @Logger private Log log; public void init(ServletConfig config) throws ServletException { log.info( "BOOM" ); } } Is my approach abusive? What would be the alternatives (the client sending requests to the servlet is curl, not a browser)?

    Read the article

  • how can I reliably check that requests to my service file have come from my website?

    - by woot586
    I have a service.php script that I use to service AJAX calls from my website. To prevent other people accessing the service using PHP cURL, I would normally check that the request has come from my site, and if it hasn't, just redirect to my home page, e.g. if($_SERVER['HTTP_REFERER'] != "http://www.mysite.com"){ header('location: http://www.mysite.com'); exit; } I read in the PHP holy bible, http://www.php.net/manual/en/reserved.variables.server.php, that "Not all user agents will set this, and some provide the ability to modify HTTP_REFERER as a feature. In short, it cannot really be trusted." So if this method is not reliable, my question is: how can I reliably check that requests to my service file have come from my website? Thanks for any help you can provide!
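
    A common alternative to the Referer check (a different technique from the one in the question) is a per-session token: the page that fires the AJAX calls embeds a token tied to the visitor's session, and service.php rejects any call that doesn't echo it back. A minimal sketch with placeholder names:

        <?php
        // page.php - the page on www.mysite.com that makes the AJAX calls
        session_start();
        if (empty($_SESSION['ajax_token'])) {
            $_SESSION['ajax_token'] = bin2hex(openssl_random_pseudo_bytes(16));
        }
        // embed $_SESSION['ajax_token'] in the page and send it with every AJAX request

        // service.php - reject calls that don't carry the token issued to this session
        session_start();
        if (empty($_POST['token']) || $_POST['token'] !== $_SESSION['ajax_token']) {
            header('location: http://www.mysite.com');
            exit;
        }
        // ...service the AJAX request...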

    Read the article

  • Running CodeIgniter cron on localhost

    - by stef
    I'm trying to get a cron job to run every 5 min on my localhost. Using the Cronnix app I entered the following command: 0,5 * * * * root curl http://localhost:8888/site/ > /dev/null The script runs fine when I visit http://localhost:8888/site/ in my browser. I've read some stuff about getting CI to run on cron, using wget and various other options, but none of it makes a lot of sense. In another SO post I found the following command: wget -O - -q -t 1 http://www.example.com/cron/run What is the "-O - -q -t 1" syntax exactly? Are there other options?

    Read the article

  • How to execute the following command in Android?

    - by Aung Pyae
    I would like to create a new App Links object for sharing posts from my Android application to Facebook. I looked around and I found this. My app is "mobile-only", so it seems I am supposed to send the command below to Facebook. How can I send it? FYI, I have set up the Facebook app and successfully integrated "Sharing Post on Facebook via Android App". I am quite new to the Graph API of the Facebook SDK. Thanks.

        curl https://graph.facebook.com/app/app_link_hosts \
          -F access_token="APP_ACCESS_TOKEN" \
          -F name="Android App Link Object Example" \
          -F android=' [ { "url" : "sharesample://story/1234", "package" : "com.facebook.samples.sharesample", "app_name" : "ShareSample", }, ]' \
          -F web=' { "should_fallback" : false, }'

    Read the article

  • Prevent .htaccess syntax error

    - by seengee
    Hi, As part of one of our systems we enable a user in the backoffice to add a block of 301 redirects should they need to. This is just a textarea which then populates a specific area of a .htaccess file. As much as this may seem insecure, it has previously only been used internally by people who know what they are doing, but for various reasons they cannot access the specific file. We now need to allow more access to this function, not for the general public, but for people who probably have far less knowledge of regexp, .htaccess syntax, etc. Obviously the major concern here is that the user enters some bad syntax and makes their entire site, including the backoffice where they could fix the issue, totally inaccessible without manual intervention. What approaches can I take to make sure that they do not break their site? An .htaccess syntax check? Copy the file elsewhere and check it doesn't generate a 500 error (with cURL or similar)? Would welcome any ideas. Thanks.
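
    A sketch of the second idea from the question: write the submitted rules into a throwaway directory's .htaccess, hit a test URL with cURL, and only promote the rules if the response isn't a 500. The paths and URLs are placeholders, and the test directory is assumed to contain a dummy index.html:

        <?php
        // Sketch: trial the submitted redirect rules in an isolated directory.
        $rules   = $_POST['redirect_rules'];
        $testDir = '/var/www/example.com/htaccess-test/';
        $testUrl = 'http://www.example.com/htaccess-test/index.html';

        file_put_contents($testDir . '.htaccess', $rules);

        $ch = curl_init($testUrl);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_NOBODY, true);          // only the status code matters
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);

        if ($status == 500) {
            unlink($testDir . '.htaccess');              // bad syntax: discard it
            // report the error back to the backoffice user
        } else {
            // safe to copy the block into the live .htaccess
        }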

    Read the article
