Search Results

Search found 64048 results on 2562 pages for 'http post'.

  • HTTP Header Issue

    - by Josh
    We've been having weird issues with our web server. Sometimes when you click around you will get the HTTP response headers in plain text on the web page:

        HTTP/1.1 200 OK
        Cache-Control: private
        Content-Type: text/html; charset=utf-8
        Server: Microsoft-IIS/7.0
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 26 Oct 2009 21:43:57 GMT
        Content-Length: 13633

    Anyone know why this would show up in the page content?

    Read the article

  • See full HTTP traffic in Tomcat logs

    - by maayank
    For debugging purposes, I need a way to see all HTTP traffic between our Tomcat installation and the test clients. How can I configure Tomcat to trace all HTTP traffic (and not just the headers)? We tried to sniff the traffic using Wireshark, but since the server and the clients are on the same Windows machine this proved problematic, because the traffic never leaves the loopback interface.

    Read the article

  • PHP Form POST to external URL with Redirect to another URL

    - by Marlon
    So, what I am trying to accomplish is to have a self-posting PHP form POST to an external page (using cURL), which in turn redirects to another page. Currently, what is happening is that once I click "Submit" on the form (in contact.php) it will POST to itself (as it is a self-posting form). The script then prepares the POST using cURL and performs it. The external page does its processing and is then supposed to redirect back to another page on a referring domain. However, what happens instead is that contact.php seems to load the HTML from the page the external site redirected to, and then contact.php's own HTML loads after that, ON THE SAME PAGE. The effect is what looks like two separate pages rendered as one page. Naturally, I just want to perform the POST and have the browser render the page it is supposed to redirect to, as specified by the external page. Here is the code I have so far:

        <?php
        if (isset($_POST['submit'])) {
            doCURLPost();
        }

        function doCURLPost() {
            $emailid = "2, 4";
            $hotel = $_POST['hotel'];

            // you will need to setup an array of fields to post with
            // then create the post string
            $data = array(
                "recipient" => $emailid,
                "subject"   => "Hotel Contact Form",
                "redirect"  => "http://www.localhost.com/thanx.htm",
                "Will be staying in hotel: " => $_POST['hotel'],
                "Name"     => $_POST['Name'],
                "Phone"    => $_POST['Phone'],
                "Comments" => $_POST['Comments']);

            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, "http://www.externallink.com/external.aspx");
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
            curl_setopt($ch, CURLOPT_HTTPHEADER, array("Referer: http://www.localhost.com/contact.php"));
            $output = curl_exec($ch);
            $info = curl_getinfo($ch);
            curl_close($ch);
        }
        ?>
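    Since contact.php performs the remote fetch itself, the browser never sees the external site's redirect; one possible approach is to let cURL follow the redirect server-side and then send the browser to the final URL instead of rendering anything in contact.php. A rough sketch (variable names are illustrative, not from the original code):

        <?php
        // ... perform the cURL POST as above, with CURLOPT_FOLLOWLOCATION enabled ...
        $output = curl_exec($ch);

        // Ask cURL where it ended up after following the external site's redirect.
        // CURLINFO_EFFECTIVE_URL is the last URL fetched, i.e. the redirect target.
        $finalUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
        curl_close($ch);

        // Send the browser there instead of printing any fetched HTML into contact.php.
        header('Location: ' . $finalUrl);
        exit;
        ?>

    This only works if contact.php has not produced any output before header() is called, and it assumes the external page answers with a normal HTTP redirect rather than JavaScript or a meta refresh.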

    Read the article

  • Protecting Apache with Fail2Ban

    - by NetStudent
    Having checked my Apache logs for the last two days I have noticed several attempts to access URLs such as /phpmyadmin, /phpldapadmin:

        121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 415 "-" "ZmEu"
        121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "ZmEu"
        121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "ZmEu"
        121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "ZmEu"
        121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu"
        121.14.241.135 - - [09/Jun/2012:04:37:37 +0100] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu"
        66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        188.132.178.34 - - [09/Jun/2012:08:39:05 +0100] "HEAD /manager/html HTTP/1.0" 404 166 "-" "-"
        95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)"
        194.128.132.2 - - [09/Jun/2012:16:04:41 +0100] "HEAD / HTTP/1.0" 200 260 "-" "-"
        66.249.68.176 - - [09/Jun/2012:18:08:12 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.68.176 - - [09/Jun/2012:18:08:13 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        212.3.106.249 - - [09/Jun/2012:18:12:33 +0100] "GET / HTTP/1.1" 200 388 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/ HTTP/1.1" 404 379 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/htdocs/ HTTP/1.1" 404 386 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:35 +0100] "GET /phpldap/ HTTP/1.1" 404 374 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /phpldap/htdocs/ HTTP/1.1" 404 381 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /admin/ HTTP/1.1" 404 372 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/ HTTP/1.1" 404 377 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/htdocs/ HTTP/1.1" 404 384 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/phpldap/ HTTP/1.1" 404 380 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldap/htdocs/ HTTP/1.1" 404 387 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldapadmin/htdocs/ HTTP/1.1" 404 392 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /admin/phpldapadmin/ HTTP/1.1" 404 385 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /openldap HTTP/1.1" 404 374 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:41 +0100] "GET /openldap/htdocs HTTP/1.1" 404 381 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:42 +0100] "GET /openldap/htdocs/ HTTP/1.1" 404 382 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/ HTTP/1.1" 404 371 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/htdocs/ HTTP/1.1" 404 378 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:45 +0100] "GET /ldap/phpldapadmin/ HTTP/1.1" 404 384 "-" "-"
        212.3.106.249 - - [09/Jun/2012:18:12:46 +0100] "GET /ldap/phpldapadmin/htdocs/ HTTP/1.1" 404 391 "-" "-"

    Is there any way I can use Fail2Ban or any other similar software to ban these IPs in situations when my server is being abused this way (by trying several "common" URLs)?
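    A minimal Fail2Ban setup along these lines might look like the sketch below. The filter name, regular expression and log path are assumptions chosen to illustrate the idea, not values taken from the post above, and the regex would need testing against the actual log format:

        # /etc/fail2ban/filter.d/apache-probe.conf  (hypothetical filter name)
        [Definition]
        failregex = ^<HOST> .* "(GET|POST|HEAD) /(w00tw00t|phpMyAdmin|phpmyadmin|pma|myadmin|MyAdmin|phpldap|phpldapadmin|manager/html)
        ignoreregex =

        # /etc/fail2ban/jail.local
        [apache-probe]
        enabled  = true
        port     = http,https
        filter   = apache-probe
        logpath  = /var/log/apache2/*access.log
        maxretry = 2
        bantime  = 86400

    When a host matches the filter maxretry times, Fail2Ban inserts a temporary firewall rule banning it for bantime seconds; the URL list and logpath would need adjusting to the local setup.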

    Read the article

  • Laptop Acer Travelmate 4050 takes over 10 Mins to POST

    - by Belliez
    Hi, I am a computer tech and have received a laptop for repair. I noticed that when I turned it on, the laptop would not do anything for a minute or two (the fan would run up and stop, the power LED would shine, there would be some CD-ROM activity, and then nothing). It would sit there with a black screen. Suddenly, after a random number of minutes (anywhere between 1 and 20!), the Acer BIOS screen would display and POST would happen before booting into Windows XP. It has frozen in XP at various times, which pointed towards a CPU fault and overheating. The fan was on its last legs and sounded like a car engine, so I replaced it. Still the same issues. I next replaced the CPU like for like. Same problems. I also applied new thermal paste between the CPU and heatsink; when running, the fan kicks in occasionally (not as often as I thought it would), and I left the machine playing MP3s and online radio and updating to Service Pack 3 and it wouldn't freeze. Shutting down is fine; a cold start is not, and it again waits before showing the BIOS screen. The hard disk was also making a screaming noise (the SMART test and chkdsk passed), but I replaced this too. The laptop powers up with and without the battery, so I don't think it's a battery issue. I'm running out of ideas and wondered if anyone had any advice. Thanks

    Read the article

  • PC in POST loop

    - by Antony Scott
    Hi, I have a custom-built PC using a Gigabyte GA-EP35-DS3P motherboard with a Q6600 CPU. For the last two days it has been getting stuck in a POST loop. Saying that, I don't think it actually got into the BIOS. It repeatedly lit up the LEDs and then did not do much more. Sometimes I could see the CPU fan twitch. Today I re-seated the DIMMs and it powered up straight away. Could this be a sign of an impending hardware failure? The PC is hooked up to a UPS, so I don't think it's a power spike or anything like that, as I have two other PCs on the same UPS and they're both fine. Yesterday, the first time this happened, I was getting a message which I think said "Scanning BIOS image on hard drive". I've been building and using PCs for well over 25 years and that's a new one on me! I don't think it's an overheating problem, as when the PC does finally boot up the CPU is running at 35-40°C. Any help or suggestions would be greatly appreciated.

    Read the article

  • How to prevent blocking http auth popups on firefox restart with many tabs open

    - by Glen S. Dalton
    I am using the latest Firefox with Tab Mix Plus and TabGroups Manager. I have maybe 50 or 100 tabs open in different tab groups. When I shut down Firefox and start it again, all tabs and tab groups are perfectly rebuilt. But I also have many pages open that sit behind standard HTTP auth, and these pages all request their usernames and passwords. So during startup Firefox pops up all these pages' HTTP auth windows, and they block everything else in Firefox; they behave like modal windows. (I am involved in website development and the beta versions are behind Apache HTTP auth.) I have to click the OK button in the popups many times before I can do anything, even though all the usernames and passwords are already filled in. (The Firefox taskbar entry blinks, the Firefox window heading also blinks, and focus switches back and forth, which also annoys me. And sometimes the popups do not react to my clicks, because Firefox is maybe just switching focus somewhere else. This is the worst.) I want a plugin or some other way to skip those popups. There are some plugins I tried some time ago, but they did not do what I need, because they require a mouse click for each login, which is no improvement over the situation as it is. This is not about password storage (Firefox already stores them). But of course, if some password-storing plugin could fix this it would be great.

    Read the article

  • Setting up a purely Node.js http server on port 80

    - by Luke Burns
    I'm using a fresh install of CentOS 5.5. I have Node installed and working (I'm just using Node: no Apache or nginx), but I cannot figure out how to make a simple server work on port 80. Node is running and is listening on port 80. I'm just using the demo app:

        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello World\n');
        }).listen(80, "x.x.x.x");
        console.log('Server listening to port 80.');

    When I visit my IP, it does not work. I obtained my IP address using ifconfig. I've tried different ports. So there must be something I am missing. What do I need to configure on my server to make this work? I would like to do this without installing Apache or nginx. Luke

    Edit: OK, so I installed nginx and started it up to see whether or not the problem is related to Node, and I don't see its welcome page either. So it definitely has something to do with the server. Am I retrieving the IP address correctly by running ifconfig and then reading the inet addr under eth0?
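    Given that neither Node nor nginx is reachable from outside while both work locally, a likely suspect is the host firewall rather than the applications: a stock CentOS 5.5 iptables policy only opens SSH. A hedged check and fix, assuming that default setup:

        # List current rules; look for an ACCEPT on tcp dpt:80 before the catch-all REJECT
        iptables -L -n --line-numbers

        # If port 80 is not open, add a rule ahead of the REJECT and persist it
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save

    (Reading the inet addr under eth0 is the right way to get the machine's own address; if the box sits behind NAT the public address will differ, but the firewall check above applies either way.)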

    Read the article

  • Conditionally permitting HTTP-only requests to Tomcat?

    - by Mike
    I have two versions of a system:

        1. A Tomcat web server
        2. An nginx reverse proxy sitting in front of a Tomcat web server

    In version 2, nginx only ever talks to Tomcat over HTTP. A user could configure the system so that only HTTPS requests are allowed. If the user does this in version 1, the XML configuration files for Tomcat take care of it; in version 2, nginx takes care of it. The problem is this: I cannot force a user to update their Tomcat XML config files when they upgrade from version 1 to version 2 (it will be recommended that they do so), because this is done as part of a larger process. This means that if they upgrade and don't update the Tomcat config, an HTTPS request will arrive at nginx, which will proxy it over HTTP to Tomcat, which will reject the request because it is not HTTPS. So I can't force an update to the Tomcat XML, and I have to use HTTP between nginx and Tomcat. Any ideas? Is there some way I can affect how Tomcat reads its config in version 2 so that it ignores the HTTPS-only section?
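    One hedged option, assuming the HTTPS-only enforcement in the old config is the usual security-constraint/connector arrangement: rather than making Tomcat ignore that section, let nginx tell Tomcat the original scheme so the old rules are satisfied even though the hop is plain HTTP. A sketch (addresses and ports are placeholders):

        # nginx: forward the original scheme with each proxied request
        location / {
            proxy_pass        http://127.0.0.1:8080;
            proxy_set_header  X-Forwarded-Proto $scheme;
            proxy_set_header  X-Forwarded-For   $remote_addr;
        }

        <!-- Tomcat server.xml: trust the proxy's headers so the request reports itself as secure -->
        <Valve className="org.apache.catalina.valves.RemoteIpValve"
               internalProxies="127\.0\.0\.1"
               protocolHeader="X-Forwarded-Proto" />

    With RemoteIpValve in place, requests that arrived at nginx over HTTPS report isSecure() as true inside Tomcat, so an HTTPS-only constraint in the otherwise unchanged XML should no longer reject them. Adding the valve is still a one-line Tomcat change, though, so this only helps if that change can be folded into the upgrade process.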

    Read the article

  • nginx: dump HTTP requests for debugging

    - by Alexander Gladysh
    Ubuntu 10.04.2, nginx 0.7.65. I see some weird HTTP requests coming to my nginx server. To better understand what is going on, I want to dump the whole HTTP request data for such queries (i.e. dump all request headers and the body somewhere I can read them). Can I do this with nginx? Alternatively, is there some HTTP server that allows me to do this out of the box, to which I can proxy these requests by means of nginx?

    Update: Note that this box has a bunch of normal traffic, and I would like to avoid capturing all of it at a low level (say, with tcpdump) and filtering it out later. I think it would be much easier to filter the good traffic first in a rewrite rule (fortunately I can write one quite easily in this case), and then deal with the bogus traffic only. And I do not want to channel the bogus traffic to another box just to be able to capture it there with tcpdump.

    Update 2: To give a bit more detail, the bogus requests have a parameter named (say) foo in their GET query (the value of the parameter can differ). Good requests are guaranteed never to have this parameter. If I can filter by this in tcpdump or ngrep somehow, no problem, I'll use those.
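    Since the bogus requests can be recognised by a query parameter, ngrep can match on the request line directly and ignore the normal traffic. A hedged example (the interface name and the parameter name foo are placeholders):

        # Print headers and body of HTTP requests whose request line contains a foo= parameter,
        # in line-oriented output, on eth0, port 80 only
        ngrep -q -W byline -d eth0 'GET /[^ ]*[?&]foo=' 'tcp dst port 80'

    One caveat: ngrep prints the packets that match, so a request body split across several packets may not appear in full; for small bogus GET requests that is usually not an issue.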

    Read the article

  • Host name change breaking http? Fedora

    - by Dave
    OK, so I have been messing around on my development server. It has been a while since I have had my head in Linux and I suspect I have broken something. I have SSH running and that is working fine. I also have HTTP, and I had FTP running as well. Earlier today I decided I wanted to rename the machine, so I updated the /etc/hosts file and /etc/sysconfig/network. I also changed the server name in httpd.conf. I rebooted the machine and reconnected over SSH fine. Later I was messing around with the FTP service (trying to tighten up the user security), and when I tried to connect remotely to FTP, no joy: it said it cannot connect. I thought that was weird, but I had planned to remove FTP anyway as we will be using GitHub, so I removed FTP and moved on. Then I tried to connect to the website: major fail. Even connecting to the IP address fails. I used lynx to connect to localhost and there was my site, so something is going on at server level. I thought maybe something was up with iptables, but I have not changed them; I tried adding http but still no joy. This is the system:

        Fedora release 17 (Beefy Miracle)
        NAME=Fedora
        VERSION="17 (Beefy Miracle)"
        ID=fedora
        VERSION_ID=17
        PRETTY_NAME="Fedora 17 (Beefy Miracle)"
        ANSI_COLOR="0;34"
        CPE_NAME="cpe:/o:fedoraproject:fedora:17"
        Fedora release 17 (Beefy Miracle)
        Fedora release 17 (Beefy Miracle)
        Linux version 3.3.4-5.fc17.x86_64 ([email protected]) (gcc version 4.7.0 20120504 (Red Hat 4.7.0-4) (GCC) ) #1 SMP Mon May 7 17:29:34 UTC 2012

    This is my iptables -L:

        Chain INPUT (policy ACCEPT)
        target     prot opt source      destination
        ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
        ACCEPT     icmp --  anywhere    anywhere
        ACCEPT     all  --  anywhere    anywhere
        ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:ssh
        REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited

        Chain FORWARD (policy ACCEPT)
        target     prot opt source      destination
        REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited

        Chain OUTPUT (policy ACCEPT)
        target     prot opt source      destination

    Like I say, I can use SSH with no issue, but HTTP, although running, is a no-go from a remote computer. Any ideas?
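    Looking at the INPUT chain above, there is an ACCEPT for ssh but nothing for port 80 before the final catch-all REJECT, which would produce exactly these symptoms (lynx on localhost works, remote browsers fail) independently of the hostname change. The post mentions that adding http was already tried; note that a rule appended after the REJECT line has no effect, so the insert position matters. A hedged fix, assuming the stock iptables service is in use:

        # Insert an HTTP accept rule above the catch-all REJECT (position 5 in the INPUT chain shown above)
        iptables -I INPUT 5 -p tcp -m state --state NEW --dport 80 -j ACCEPT

        # Make the rule survive a reboot
        service iptables save

    If HTTP starts working after this, the hostname change was a red herring; if not, the next thing to check would be which addresses httpd is actually bound to (netstat -tlnp).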

    Read the article

  • Manual NAT on Checkpoint (Redirect all http requests to a local web server)

    - by B. Kulakli
    We have a proxy server on our internal network and I want to redirect all Internet HTTP requests to a web server on the local network. It will act like a network billboard that says "No direct connection is available. Set up your proxy, etc." For example:

        1. A user starts the computer
        2. Opens the browser
        3. Tries to open www.google.com
        4. Should see the web server output on the local network
        5. Tries another web site on the Internet
        6. Should see the web server output on the local network
        7. Sets up the proxy
        8. Tries to connect to a web site
        9. The web site should be loaded

    I have added a simple manual NAT rule to address translation in the Checkpoint firewall, but it simply does not work. Here is my address translation rule:

        Source   Destination   Service   T.Source   T.Destination   T.Service
        MY_PC    A_GOOGLE_IP   ALL       ORIGINAL   INT_WEB_SRV     ORIGINAL

    When I ping A_GOOGLE_IP, replies come from INT_WEB_SRV, as I expected. However, when I try to connect to A_GOOGLE_IP from a browser (http://A_GOOGLE_IP), no replies come back: the connection stays in SYN_SENT and falls into timeout. When I look at the firewall log of INT_WEB_SRV, I can see the incoming connection requests from MY_PC are accepted and there are NO denies. By the way, there is no problem seeing INT_WEB_SRV (http://INT_WEB_SRV) from a browser. My understanding is that my NAT rule on Checkpoint NGX R60 does not include return packets. I definitely need some help.

    Read the article

  • insert null character using tamper data

    - by Jeremy Comulu
    Using the Firefox extension Tamper Data (for modifying HTTP requests that Firefox makes), how do I insert a null character into a POST field? I can enter normal characters, but binary characters are not URL-encoded and are shown as-is, so how do I enter the null character into a field? If you know of a Firefox extension like Tamper Data with which I can do this, or a way to do it with Tamper Data itself, please post.

    Read the article

  • Apache works on http and https, SVN only on http

    - by user27880
    I asked a question about this before, and got most of it fixed. If I switch off the HTTPS redirect and go to http://mydomain.com/svn/test0, I get the authentication window popping up, I can enter my AD credentials, and bingo. Switching the HTTPS redirect back on, if I go to http://mydomain.com I am automatically redirected to HTTPS, which is what I want, and the 'CentOS test page' pops up. Perfect. The problem occurs when I want to go to one of my test repos via HTTPS. Here is my httpd.conf file, with confidential information suitably hosed:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName svn.mycompany.com
            ErrorLog logs/subversion-error_log
            CustomLog logs/subversion-access_log common
            Redirect permanent / https://svn.mycompany.com
        </VirtualHost>

        <VirtualHost svn.mycompany.com:443>
            SSLEngine On
            SSLCertificateFile /etc/httpd/ssl/wildcard.mycompany.com.crt
            SSLCertificateKeyFile /etc/httpd/ssl/wildcard.mycompany.com.key
            SSLCertificateChainFile /etc/httpd/ssl/intermediate.crt
            ServerName svn.mycompany.com
            ServerAdmin [email protected]
            ErrorLog logs/subversion-error_log
            CustomLog logs/subversion-access_log common
            <Location /svn>
                DAV svn
                SVNParentPath /usr/local/subversion
                SVNListParentPath off
                AuthName "Subversion Repositories"
                # NT Logon Details
                Require valid-user
                AuthBasicProvider file ldap
                AuthType Basic
                AuthzLDAPAuthoritative off
                AuthUserFile /etc/httpd/conf/svnpasswd
                AuthName "Subversion Server II"
                AuthLDAPURL "ldap://our-pdc:389/OU=Company Name,DC=com,DC=co,DC=uk?sAMAccountName?sub?(objectClass=*)"
                AuthLDAPBindDN "DOMAIN\subversion"
                AuthLDAPBindPassword XXXXXXX
                AuthzSVNAccessFile /etc/httpd/conf/svnaccessfile
            </Location>
        </VirtualHost>

    Now, in ssl_error_log, I get:

        ==> /etc/httpd/logs/ssl_error_log <==
        [Fri Nov 01 16:07:55 2013] [error] [client XXX.XXX.XXX.XXX] File does not exist: /var/www/html/svn

    This comes from the DocumentRoot directive further up the httpd.conf file, which of course points to /var/www/html. I know that this location is wrong, but how can I get SVN to serve the repo? I tried an Alias directive as so:

        Alias /svn /usr/local/subversion

    ...but this didn't work. I tried to alter the Location directive. That didn't work either. Can someone help? I sense that this is so close to being solved... Thanks.

    Edit: apachectl -S output:

        [root@svn conf]# apachectl -S
        VirtualHost configuration:
        127.0.0.1:443          svn.mycompany.com (/etc/httpd/conf/httpd.conf:1020)
        wildcard NameVirtualHosts and default servers:
        default:443            svn.mycompany.com (/etc/httpd/conf.d/ssl.conf:74)
        *:80                   is a NameVirtualHost
                default server svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
                port 80 namevhost svn.mycompany.com (/etc/httpd/conf/httpd.conf:1012)
        Syntax OK
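    A hedged reading of the apachectl -S output above: <VirtualHost svn.mycompany.com:443> resolves to 127.0.0.1:443, so HTTPS requests arriving on the public address never match it and fall through to the default SSL vhost from ssl.conf, whose DocumentRoot is /var/www/html, hence "File does not exist: /var/www/html/svn". If that is the case, matching the SSL vhost by address instead of by name should route the requests to the block that actually contains the <Location /svn> configuration. A sketch, untested against this setup:

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName svn.mycompany.com
            SSLEngine On
            # ... same SSL, logging and <Location /svn> directives as in the block above ...
        </VirtualHost>

    Alternatively, the default <VirtualHost _default_:443> block in /etc/httpd/conf.d/ssl.conf could be disabled or given the SVN configuration, so the two vhosts stop competing for port 443.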

    Read the article

  • XML over HTTP with JMS and Spring

    - by Will Sumekar
    I have a legacy HTTP server to which I need to send an XML file over an HTTP POST request using Java (not a browser), and the server will respond with another XML document in its HTTP response. It is similar to a web service, but there is no WSDL and I have to follow the existing XML structure when constructing the XML to be sent. I have done some research and found an example that matches my requirement here. The example uses HttpClient from Apache Commons. (There are also other examples I found, but they use the java.net networking package (like URLConnection), which is tedious, so I don't want to use them.) But it is also my requirement to use Spring and JMS. I know from Spring's reference that it is possible to combine HttpClient, JMS and Spring. My question is, how? Note that it is NOT a requirement to use HttpClient. If you have a better suggestion, you're welcome to share it. Appreciate it. For your reference, here's the XML-over-HTTP example I've been talking about:

        /*
         * $Header: $
         * $Revision$
         * $Date$
         * ====================================================================
         *
         * Copyright 2002-2004 The Apache Software Foundation
         *
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         *     http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         * ====================================================================
         *
         * This software consists of voluntary contributions made by many
         * individuals on behalf of the Apache Software Foundation. For more
         * information on the Apache Software Foundation, please see
         * <http://www.apache.org/>.
         *
         * [Additional notices, if required by prior licensing conditions]
         *
         */
        import java.io.File;
        import java.io.FileInputStream;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.InputStreamRequestEntity;
        import org.apache.commons.httpclient.methods.PostMethod;

        /**
         * This is a sample application that demonstrates
         * how to use the Jakarta HttpClient API.
         *
         * This application sends an XML document
         * to a remote web server using HTTP POST
         *
         * @author Sean C. Sullivan
         * @author Ortwin Glück
         * @author Oleg Kalnichevski
         */
        public class PostXML {

            /**
             * Usage:
             *   java PostXML http://mywebserver:80/ c:\foo.xml
             *
             * @param args command line arguments
             *             Argument 0 is a URL to a web server
             *             Argument 1 is a local filename
             */
            public static void main(String[] args) throws Exception {
                if (args.length != 2) {
                    System.out.println(
                        "Usage: java -classpath <classpath> [-Dorg.apache.commons." +
                        "logging.simplelog.defaultlog=<loglevel>]" +
                        " PostXML <url> <filename>]");
                    System.out.println("<classpath> - must contain the " +
                        "commons-httpclient.jar and commons-logging.jar");
                    System.out.println("<loglevel> - one of error, " +
                        "warn, info, debug, trace");
                    System.out.println("<url> - the URL to post the file to");
                    System.out.println("<filename> - file to post to the URL");
                    System.out.println();
                    System.exit(1);
                }
                // Get target URL
                String strURL = args[0];
                // Get file to be posted
                String strXMLFilename = args[1];
                File input = new File(strXMLFilename);
                // Prepare HTTP post
                PostMethod post = new PostMethod(strURL);
                // Request content will be retrieved directly
                // from the input stream.
                // Per default, the request content needs to be buffered
                // in order to determine its length.
                // Request body buffering can be avoided when
                // content length is explicitly specified
                post.setRequestEntity(new InputStreamRequestEntity(
                    new FileInputStream(input), input.length()));
                // Specify content type and encoding
                // If content encoding is not explicitly specified
                // ISO-8859-1 is assumed
                post.setRequestHeader(
                    "Content-type", "text/xml; charset=ISO-8859-1");
                // Get HTTP client
                HttpClient httpclient = new HttpClient();
                // Execute request
                try {
                    int result = httpclient.executeMethod(post);
                    // Display status code
                    System.out.println("Response status code: " + result);
                    // Display response
                    System.out.println("Response body: ");
                    System.out.println(post.getResponseBodyAsString());
                } finally {
                    // Release current connection to the connection pool
                    // once you are done
                    post.releaseConnection();
                }
            }
        }

    Read the article

  • Prevent apache http server changing response code

    - by Brad
    Hi all, I have a servlet providing a REST-based service running on Tomcat, which I am accessing through Apache HTTP Server v2.2. My problem is that the response code for one of the service methods is being changed when it passes through the HTTP server. I have a curl script which I use to test the service. It is supposed to return a 204 No Content response, which it does when I hit the servlet directly. When I hit Apache with the script, the response gets changed to a 200 OK. Can anyone with experience of configuring Apache advise me how to fix this? Thanks, Brad.

    Read the article

  • Returning "200 OK" in Apache on HTTP OPTIONS requests

    - by i.
    I'm attempting to implement cross-domain HTTP access control without touching any code. I've got my Apache (2) server returning the correct access control headers with this block:

        Header set Access-Control-Allow-Origin "*"
        Header set Access-Control-Allow-Methods "POST, GET, OPTIONS"

    I now need to prevent Apache from executing my code when the browser sends an HTTP OPTIONS request (the method is stored in the REQUEST_METHOD environment variable), and instead return 200 OK. How can I configure Apache to respond "200 OK" when the request method is OPTIONS? I've tried this mod_rewrite block, but the access control headers are lost:

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]
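    A hedged guess at why the headers get lost: "Header set" only populates the normal success-response header table, which Apache does not apply to internally generated responses such as the one produced by the [R=200] short-circuit. Adding the "always" condition attaches the headers to those responses as well. A sketch, assuming mod_headers and mod_rewrite are both enabled:

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "POST, GET, OPTIONS"

        RewriteEngine On
        RewriteCond %{REQUEST_METHOD} OPTIONS
        RewriteRule ^(.*)$ $1 [R=200,L]

    If the headers still do not appear, another option is to point OPTIONS requests at a tiny static file instead of short-circuiting in mod_rewrite, so the normal onsuccess headers are kept intact.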

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that running the lighttpd module mod_access is causing the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or a bug?)... Has anyone come across this before, or do you know what configuration value I've got wrong?

    Versions:
        Debian: Debian GNU/Linux 6.0.6 (squeeze)
        Lighttpd: lighttpd/1.4.28 (ssl)
        PHP: PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)

    Read the article

  • Tunneling HTTP traffic from a particular host/port

    - by knoopx
    Hello, I'm trying to figure out how to access, from my development machine (Devel), a third-party web service (www.domain.com) which I am not allowed to contact directly from my office IP address. Here's a basic diagram (I'm not allowed to post images...): http://yuml.me/diagram/scruffy/class/%5BDevel%5D-%5BA%5D,%20%5BA%5D-%5BB%5D,%20%5BB%5D-%5Bwww.domain.com%5D The only machine allowed to access that service is B (the production server), but I cannot reach B directly from my development machine (Devel) either. So in order to access the web service I have to ssh into A, and then from A into B, to access www.domain.com. Is there any way of tunneling traffic from B to A and then back to my development machine, so I can directly access www.domain.com without having to ssh into every box?

        Devel: my development machine
        A, B: Linux servers; I have root access on both
        B: the production server
        www.domain.com: the third-party HTTP API the production server uses
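    Assuming ordinary SSH access to both A and B, chained local port forwards can expose www.domain.com on a local port of Devel, so the hops are set up once rather than logged into interactively each time. A sketch with placeholder usernames and ports:

        # Step 1, on Devel: forward local port 8080 to port 8080 on A
        ssh -L 8080:localhost:8080 user@A

        # Step 2, in that session on A: forward A's port 8080 on to the service, going out via B
        ssh -N -L 8080:www.domain.com:80 user@B

        # Now, from Devel, http://localhost:8080/ reaches www.domain.com through A and B

    The two steps can also be collapsed into a single command from Devel (ssh -L 8080:localhost:8080 -t user@A ssh -N -L 8080:www.domain.com:80 user@B), and with a recent OpenSSH the ProxyJump/-J option achieves the same thing more cleanly.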

    Read the article

  • No HTTP Response from Tomcat 7 EC2 instance

    - by David Kaczynski
    I am new to EC2 (and Tomcat, for that matter), and I am trying to deploy a vanilla Tomcat 7 server to an Ubuntu 12.04.1 EC2 instance and access the default test site over HTTP. My EC2 instance is running, and the Security Group includes port 80 (screenshot not reproduced here). My /etc/tomcat7/server.xml config has been edited to listen for HTTP requests on port 80 (connector snippet not reproduced here). I have restarted my Tomcat 7 server via sudo service tomcat7 restart. However, according to sudo netstat -lnp, Tomcat is not listed as listening on port 80. I am unable to get any response from going to the ...amazonaws.com public DNS in a web browser. What am I missing?
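    A common explanation on a stock Ubuntu install, offered here as a hedged guess: the tomcat7 package runs Tomcat as the unprivileged tomcat7 user, which cannot bind to ports below 1024, so a port-80 connector fails to open while the rest of the server starts normally (which would match netstat showing no listener on 80). Two typical workarounds, assuming the connector is put back on 8080 for the first one:

        # Option 1: keep Tomcat on 8080 and redirect port 80 to it in the kernel
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080

        # Option 2: let the tomcat7 user bind port 80 through authbind
        sudo apt-get install authbind
        sudo touch /etc/authbind/byport/80
        sudo chown tomcat7 /etc/authbind/byport/80
        sudo chmod 500 /etc/authbind/byport/80
        # then set AUTHBIND=yes in /etc/default/tomcat7 and keep the connector on port 80

    Checking /var/log/tomcat7/catalina.out for a permission error on the port-80 connector would confirm or rule this out.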

    Read the article

  • Lightweight tool for viewing raw HTTP messages?

    - by rewbs
    Hi, I'm investigating differences in behaviour between a couple of web servers. I need to see the raw response data from the servers (i.e. before the response is de-chunked if it has "Transfer-Encoding: chunked", and before it is decompressed if it has "Content-Encoding: gzip"). I can find plenty of simple HTTP clients that nearly do what I need (e.g. Poster, RESTClient), but they tend to decode the response one step too far. Network analysers like Wireshark give me what I need but are a bit heavyweight. Telnet is my best bet so far, but it is a bit too simplistic (actions like capturing data or entering requests are laborious). Can anyone recommend a good, lightweight tool for sending and viewing the raw data that constitutes HTTP messages? Edit: I should add that I'm on Windows. Also, the tool would need to work with both remote and local servers.
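    One lightweight option worth noting, hedged because it depends on the curl build installed on Windows: curl's --raw switch turns off its internal content and transfer decoding, so the body is written out exactly as the server sent it (chunk markers and gzip bytes included), while -v shows the headers as exchanged. For example:

        curl -v --raw -H "Accept-Encoding: gzip" http://server.example/page -o raw-body.bin

    Here server.example and the output filename are placeholders; the saved file can then be inspected in a hex editor or compared between the two servers.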

    Read the article

  • HTTP redirects showing ip address

    - by DrKarl
    I have a domain name with 1&1 and a VPS at Linode. I noticed that my site was enclosed in a frameset which I didn't create. I checked nginx and Jetty on the VPS, but neither of them created the frameset. Then I checked the domain control panel at 1&1 and saw that the redirection could be either a frame redirect or an HTTP redirect. I changed it to an HTTP redirect and the frameset was gone; everything was fine except that the URL bar of the browser now shows the IP address of the server instead of my domain. How can I avoid the frameset and still have the proper URL displayed instead of an IP?

    Read the article

  • Does Basic User Authentication require 2-Phase communiation?

    - by RED SOFT ADAIR-StefanWoe
    My application connects to HTTP services on the Internet using boost::asio. Recently we added support for HTTP proxies and Basic user authentication. We implemented Basic authentication by simply sending authentication parameters with every HTTP call whenever a user has configured a proxy in our program. The parameters are sent as described here: Authorization: Basic <base64-encoded username:password>. This works for at least one user and his proxy server. Other users report that their proxy server replies with 407 Proxy Authentication Required. My guess is that some proxy servers accept one-phase authentication and that others don't. I cannot find any information stating that a two-phase exchange is required, where the first call is always denied with 407 and only a second call is accepted. Our program does not yet retry the call if a 407 has been returned. Do we have to add this? I asked this question before on Stack Overflow but did not get a sufficient answer.
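    For reference, the usual challenge/response flow for proxy authentication looks roughly like the generic exchange below (not a capture from the setup described above). Two details matter: proxies expect the credentials in a Proxy-Authorization header rather than Authorization, and many proxies will answer the first, unauthenticated request with 407 before accepting a retry, so a client generally does need to handle that second round trip:

        GET http://example.com/ HTTP/1.1
        Host: example.com

        HTTP/1.1 407 Proxy Authentication Required
        Proxy-Authenticate: Basic realm="proxy"

        GET http://example.com/ HTTP/1.1
        Host: example.com
        Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==

        HTTP/1.1 200 OK

    (The Base64 string above encodes the placeholder credentials user:password.)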

    Read the article
