Search Results

Search found 50062 results on 2003 pages for 'http 1 1'.


  • Static file download from browser breaking in varnish but works fine in Apache

    - by Ron
     I would first like to thank everyone at Server Fault for this great website; I often end up here when searching Google for various server-related issues and setups. I have an issue today and hope someone more experienced can help me out.

     A few days ago I set up a website on a dedicated server, using Varnish 3 as the frontend to Apache 2 on Debian Lenny because traffic was fairly high. The site offers several static file downloads of around 10-20 MB each. Everything looked fine for the first few days: testing from a 5 Mbps+ broadband connection, the downloads completed in seconds.

     Today I realized that on a slow connection the downloads break off. Downloading the files in a browser, the transfer stops after a minute or so, and it happens again and again, so it is not the internet connection. The connection is around 512 kbps, so not dial-up speed either; the files should download fine, just not quickly.

     I then tried the Apache backend port directly, and with the Apache port added to the download URL the files downloaded easily and never broke. I repeated this several times to rule out coincidence: every download through the Apache port completed, while every download through the normal link (routed through Varnish, I suppose) broke. So Varnish somehow seems to be behind the broken downloads.

     To clarify with an example, Apache listens on port 8008 and Varnish on port 80:

         http://mywebsite.com/directory/filename.extension        (through Varnish: breaks after a minute or so)
         http://mywebsite.com:8008/directory/filename.extension   (direct to Apache: downloads fine)

     I cannot be sure the break is related to time rather than size; I am just assuming, and it may be some other reason entirely. Could anyone give any idea as to why this is happening and how to fix it? Any help would be highly appreciated. My Varnish default.vcl is:

         backend apache {
             set backend.host = "127.0.0.1";
             set backend.port = "8008";
         }

         sub vcl_deliver {
             remove resp.http.X-Varnish;
             remove resp.http.Via;
             remove resp.http.Age;
             remove resp.http.Server;
             remove resp.http.X-Powered-By;
         }
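
     One possible direction, offered as an assumption rather than a confirmed diagnosis: Varnish 3's send_timeout parameter (as low as 60 seconds by default in some 3.x builds) drops clients that have not finished receiving the response in time, which would match a 512 kbps client losing a 10-20 MB download after about a minute, and large objects can also be streamed rather than fully buffered. A minimal sketch of both tweaks follows; the 600-second value and the 10 MB threshold are examples only, and note that Varnish 3 backend declarations use .host/.port without "set", as shown here:

         # /etc/default/varnish (Debian): raise the delivery timeout
         DAEMON_OPTS="-a :80 -T localhost:6082 -f /etc/varnish/default.vcl \
                      -p send_timeout=600"

         # default.vcl, Varnish 3 syntax
         import std;

         backend apache {
             .host = "127.0.0.1";
             .port = "8008";
         }

         sub vcl_fetch {
             # stream large static files to the client instead of buffering them fully
             if (std.integer(beresp.http.Content-Length, 0) > 10485760) {
                 set beresp.do_stream = true;
             }
         }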

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
     I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

         Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
         Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

     I checked Apache's error log, but it didn't say anything (it does now; see the first edit). My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

         <Location />
             DAV svn
             SVNParentPath /var/www/svn/repos/
             AuthType Basic
             AuthName "Authorization Realm"
             AuthUserFile /var/www/svn/auth/svn.htpasswd
             Require valid-user
         </Location>

     Authentication works fine. Does anyone know what might be causing this?

     -- Edit: I restarted Apache (again), tried it again, and now it gives me an error message, but it doesn't really help. Does anyone have an idea what it means?

         [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] Could not fetch resource information.  [403, #0]
         [Wed Mar 31 23:41:55 2010] [error] [client my.ip.he.re] (2)No such file or directory: The URI does not contain the name of a repository.  [403, #190001]

     -- Edit 2: svn info doesn't give anything useful either:

         [root@eduro eduro.nl]# svn info http://svn.domain.com/repos/
         Username: username
         Password for 'username':
         svn: Repository moved permanently to 'http://svn.domain.com/repos/'; please relocate

     A local checkout (svn checkout file:///var/www/svn/repos/reposname) works fine, and adding/committing works fine too, so it seems to have something to do with Apache. Some other information: I'm running CentOS 5.3, Plesk 9.3 and Subversion 1.6.9 (r901367).

     -- Edit 3: I tried moving the repositories, but it didn't make any difference. SELinux is disabled, so that isn't it either.

     -- Edit 4: Really? Nobody :(?
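
     One commonly suggested cause, and only an assumption here since the Plesk-generated vhost configuration is not shown, is that the repository path overlaps with content Apache also serves as ordinary files, so the request never reaches mod_dav_svn and the client only gets the trailing-slash redirect. Keeping the repositories outside any DocumentRoot and giving them a dedicated URL prefix avoids that overlap; a minimal sketch, with the paths as examples:

         # e.g. /etc/httpd/conf.d/subversion.conf
         <Location /svn>
             DAV svn
             SVNParentPath /srv/svn/repos
             AuthType Basic
             AuthName "Authorization Realm"
             AuthUserFile /srv/svn/auth/svn.htpasswd
             Require valid-user
         </Location>

         # the checkout URL then includes the repository name:
         #   svn co http://svn.host.com/svn/reposname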

    Read the article

  • Unable to install vlc and mplayer after update on fedora 18

    - by mahesh
     I just updated Fedora 18 using:

         # yum update

     If I then try:

         # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm

     I get:

         Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
         warning: /var/tmp/rpm-tmp.0K5pWw: Header V3 RSA/SHA256 Signature, key ID 172ff33d: NOKEY
         error: Failed dependencies:
                 system-release >= 19 is needed by rpmfusion-free-release-19-1.noarch

     So I tried the development (rawhide) release package instead:

         # rpm -ivh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm

     and I get:

         Retrieving http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-rawhide.noarch.rpm
         warning: /var/tmp/rpm-tmp.WZC0gw: Header V3 RSA/SHA256 Signature, key ID 6446d859: NOKEY
         error: Failed dependencies:
                 system-release >= 21 is needed by rpmfusion-free-release-21-0.1.noarch

     There is no system release after 20, so what does this mean?
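
     A sketch of what is usually suggested in this situation: the stable and rawhide symlinks point at release packages built for newer Fedora versions, so fetch the release package that matches the installed version instead. The $(rpm -E %fedora) expansion evaluates to 18 here; the URL layout follows RPM Fusion's published instructions, but treat it as an assumption if the layout has changed:

         # install the RPM Fusion free repository matching the running release (18)
         su -c 'yum localinstall --nogpgcheck \
             http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm'

         # then install the players
         yum install vlc mplayer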

    Read the article

  • How can I use my own ASP page as the 404 error page on a local IIS 7.5 installation (Windows 7 Pro)?

    - by Edelcom
     My setup:

         - Windows 7 Pro, running a local IIS 7.5 (for web development purposes)
         - hosts file edited so I can use http://www.sitestepper.dev
         - site built in a subfolder of inetpub/wwwroot/staplijst
         - pages are viewed as http://www.sitestepper.dev/staplijst/whatever-page.htm

     I have a page at http://www.sitestepper.dev/p.asp that I would like to be called whenever an unknown page is requested. This technique works fine on the deployed version of this site, which runs on a Windows 2003 server with IIS 7 (not sure about the 7; it is whatever version ships with Windows 2003 Server).

     In the Edit Custom Error dialog I tried:

         - "Execute a URL on this site" with the value /staplijst/p.asp
         - "Respond with a 302 redirect" with the value http://www.sitestepper.dev/staplijst/p.asp

     I tried this in the properties of the Default Web Site and at the staplijst level, and even with values without the /staplijst prefix. I restarted the default web site after each change, and also stopped and started the web service. Nothing seems to work: I keep getting 'Server Error in Application "DEFAULT WEB SITE"', HTTP Error 404.0.

     What am I missing here? Probably something obvious, but I don't see it. Thanks in advance.
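
     On IIS 7.5 the same behaviour can also be set from a web.config in the site (or staplijst) root instead of the dialog, which makes it easier to see exactly what is configured; whether this fits your deployment is an assumption, but a minimal sketch looks like this (if IIS reports the httpErrors section as locked, it first has to be unlocked at server level in applicationHost.config):

         <?xml version="1.0" encoding="UTF-8"?>
         <configuration>
           <system.webServer>
             <httpErrors errorMode="Custom" existingResponse="Replace">
               <remove statusCode="404" subStatusCode="-1" />
               <!-- path is relative to the site root; ExecuteURL keeps the request inside the site -->
               <error statusCode="404" path="/staplijst/p.asp" responseMode="ExecuteURL" />
             </httpErrors>
           </system.webServer>
         </configuration>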

    Read the article

  • Best way to use mod_rewrite to replace WordPress pages with static files

    - by David Moles
     Here's the situation: I've got an old WordPress installation that I'd like to archive as static files, but I'd also like to preserve the old URLs. I've already created the static archive with wget and sorted out the filenames and links. Now I'd like to configure Apache to intercept requests for the old dynamic URLs and redirect them to the new static ones, e.g.:

         http://www.example.org/log/?p=1234
         http://www.example.org/log/index.php?p=1234

     should redirect to

         http://www.example.org/log/archives/1234.html

     I've tried adding the following to the VirtualHost config for example.org, but to no effect; I just get the PHP page:

         RewriteCond %{REQUEST_URI} /log/
         RewriteCond %{QUERY_STRING} p=([^&;]*)
         RewriteRule ^/$ http://%{SERVER_NAME}/log/archives/%1.html [R,L]

     I've enabled logging and I can see what look like other rules being applied, but not this one. None of my other guesses at match patterns for %{REQUEST_URI} seem to have any effect either (log, log/, log.*, even .*). I'm new to mod_rewrite and this is mostly cargo cult, so I'm pretty sure I've gotten it wrong. Anyone know what I should be doing here?
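
     A sketch of one way to write the rule, assuming the rules stay in the VirtualHost context (where the pattern sees a leading slash) and that discarding the query string with a trailing ? on the substitution is acceptable:

         RewriteEngine On
         # match /log/ and /log/index.php when a numeric p= parameter is present
         RewriteCond %{QUERY_STRING} (?:^|&)p=([0-9]+)
         RewriteRule ^/log/(?:index\.php)?$ /log/archives/%1.html? [R=301,L]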

    Read the article

  • Connection timed out on Node.js app running under CentOS

    - by ss1271
     I followed this tutorial to create a simple Node.js app on my CentOS server. The Node.js version is:

         $ node -v
         v0.10.28

     Here's my app.js:

         // Include the http module,
         var http = require("http"),
             // and the url module, which is very helpful in parsing request parameters.
             url = require("url");

         // Show a message on the console.
         console.log('Node.js app is running.');

         // Create the server.
         http.createServer(function (request, response) {
             request.resume();

             // Attach a listener on the end event.
             request.on("end", function () {
                 // Parse the request for arguments and store them in _get.
                 // This function parses the url from the request and returns an object representation.
                 var _get = url.parse(request.url, true).query;

                 // Write headers to the response.
                 response.writeHead(200, { 'Content-Type': 'text/plain' });

                 // Send data and end the response.
                 response.end('Here is your data: ' + _get['data']);
             });

         // Listen on port 8080.
         }).listen(8080);

     However, when I uploaded this app to my remote server (assume the address is 123.456.78.9), I couldn't reach it from my browser at

         http://123.456.78.9:8080/?data=123

     The browser returned ERR_CONNECTION_TIMED_OUT. The same app.js runs fine on my local machine, and the server's address responds to ping. Is there anything I am missing? Thanks.
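
     Since the same code works locally, the usual first checks are whether the process is listening on all interfaces and whether a host firewall (or the provider's security group, if there is one) allows port 8080. A few diagnostic commands; the iptables rule is only an example and assumes iptables is what is filtering:

         # is node listening on 0.0.0.0:8080 (and not only 127.0.0.1)?
         netstat -tlnp | grep 8080

         # temporarily allow the port (persist it with your distro's tooling if this helps)
         iptables -I INPUT -p tcp --dport 8080 -j ACCEPT

         # test from the server itself, then from outside
         curl -v "http://127.0.0.1:8080/?data=123"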

    Read the article

  • Is there any valid reason for users to request phpinfo()?

    - by The Journeyman geek
     I'm working on writing a set of rules for fail2ban to make life a little more interesting for whoever is trying to brute-force their way into my system. A good majority of the attempts revolve around trying to reach phpinfo() via my web server, as below:

         GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1
         GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1
         GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1
         GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1

     I'm wondering if there's any valid reason for a user to attempt to access phpinfo() via Apache. If not, I can simply use that, or more specifically the regex

         GET //[^>]+=phpinfo\(\)

     as a filter to eliminate these attacks.
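
     For reference, a minimal sketch of such a filter; the file name and the jail wiring are assumptions, while the failregex format with the <HOST> tag is standard fail2ban:

         # /etc/fail2ban/filter.d/apache-phpinfo.conf
         [Definition]
         failregex = ^<HOST> -.*"(GET|POST) [^"]*=phpinfo\(\)
         ignoreregex =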

    Read the article

  • Using VLC to Unicast High Definition Webcam over local gigabit LAN with low/zero delay

    - by Robin Day
     We're setting up a webcam "window" between two offices in the same building. The two PCs are connected to the same gigabit switch. We're using VLC to stream the webcam over HTTP with the following commands:

         vlc dshow:// :dshow-caching="0" :dshow-size="640x480" :sout=#transcode{vcodec=h264,vb=0,scale=0}:http{mux=ffmpeg{mux=flv},dst=:8080/} :no-sout-rtp-sap :no-sout-standard-sap :ttl=1 :sout-keep

         vlc http://192.168.0.1:8080 :http-caching="0"

     Even with caching set to zero, the delay in the image is a good 2-3 seconds, and the CPU usage on each PC is maxed out, so I'm guessing the transcoding is causing much of the delay. Can anyone suggest changes to these command lines that would reduce the transcoding load, send the webcam over a different protocol, or otherwise reduce the delay of the cameras? Bandwidth is not an issue at all, as the PCs can be put on a dedicated switch/VLAN if required.

    Read the article

  • Is it possible to use Google Docs Viewer to view files already in Google Docs?

    - by john2x
     The title is a little confusing, so I'll elaborate. As far as I can tell, the Google Docs Viewer tool accepts a link to a raw document file (e.g. .doc, .pdf, et al.) and renders its contents in the browser. For example, this URL to a PDF

         http://research.google.com/archive/bigtable-osdi06.pdf

     when passed to the Viewer, returns this link:

         http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf

     What I'm trying to achieve is to use the Viewer to view a document already hosted in Google Docs (i.e. no longer a raw document file). When I pass a link to a Google Docs document to the Viewer, the result is not what I expect: it renders the link's HTML source instead of the document's contents. The reason I want to do this is to be able to use the "embed" feature of the Viewer with Google Docs documents. Does Google Docs have a "link to embeddable view" feature?

     P.S. Here is a sample snippet for an embedded document. This is what I want, but pointing to an existing Google Docs document:

         <iframe src="http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.google.com%2Farchive%2Fbigtable-osdi06.pdf&embedded=true" width="600" height="780" style="border: none;"></iframe>

    Read the article

  • Grizzly server - works with IP, but not with domain name

    - by Hitchhiker
    I'm hosting a grizzly web service on a Windows 7 Pro machine (embedded in a regular Java process), and it is binding to http://my-domain-name. When trying to hit the service from another machine, requests to http://my-domain-name fail (fiddler shows error code 502), but requests to http://my-ip work. When the service runs on a Windows Server 2008 machine, this doesn't happen (both requests succeed). What could be the issue?

    Read the article

  • uWSGI and python virtual env

    - by user27512
     I'm trying to use uWSGI with a virtualenv in order to run the Trac bug tracker on it. I've installed uwsgi system-wide via pip, then installed Trac in a virtualenv:

         $ virtualenv venv
         $ . venv/bin/activate
         $ pip install trac

     I've then written a simple uWSGI configuration:

         [uwsgi]
         master = true
         processes = 1
         socket = localhost:3032
         home = /srv/http/trac/venv/
         no-site = true
         gid = www-data
         uid = www-data
         env = TRAC_ENV=/srv/http/trac/projects/my_project
         module = trac.web.main:dispatch_request

     But when I try to launch it, it fails:

         $ uwsgi --http :8000 --ini /etc/uwsgi/vassals-available/my_project.ini --gid www-data --uid www-data
         ...
         Set PythonHome to /srv/http/trac/venv/
         ...
         *** Operational MODE: single process ***
         ImportError: No module named trac.web.main
         unable to load app 0 (mountpoint='') (callable not found or import error)

     I think uWSGI isn't using the virtualenv: from inside the virtualenv I can import trac.web.main without an ImportError. How can I fix this? Thanks.
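
     A sketch of what is often recommended when a system-wide uwsgi cannot see packages installed in a virtualenv. The option names are real uWSGI options (virtualenv is an alias for home), but the plugin line and the paths are assumptions that depend on how uwsgi was built; alternatively, installing uwsgi inside the venv itself with pip install uwsgi avoids the mismatch entirely:

         [uwsgi]
         # load the Python plugin if this uwsgi build is modular
         plugins = python
         virtualenv = /srv/http/trac/venv
         env = TRAC_ENV=/srv/http/trac/projects/my_project
         module = trac.web.main:dispatch_request
         master = true
         processes = 1
         socket = localhost:3032
         uid = www-data
         gid = www-data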

    Read the article

  • How to determine if my AWS/EC2 server has been compromised / resolution?

    - by ElHaix
     I have recently seen an increase in network in/out activity on my server and am trying to determine whether my AWS/EC2 instance has been compromised, and if so, how to resolve it. In my security group I have:

         Inbound:  80 (HTTP)   0.0.0.0/0
         Outbound: 80 (HTTP)   0.0.0.0/0
                   443 (HTTPS) 0.0.0.0/0

     Using TCP-UDP Endpoint Viewer I see a lot of w3wp.exe TCP processes with varying local ports (http and numbered) as well as varying remote ports; some processes flash red/yellow/green on updates. The remote address for most w3wp processes is my EC2 instance, but I am also seeing several to *.deploy.akamaitechnologies.com and *.deploy.static.akamaitechnologies.com with received bytes varying between 4 and 11 MB. I also see:

         Ec2Config.exe,  remote address: 169.254.169.254
         System Process, remote address: fetcher4-4.p.mail.ru, local port: http, remote port: 33432   (how can I get rid of this one?!)
         System Process, remote address: 114.216-244-93-rdns.wowrack.com, protocol: TCP, local port: http, remote port: varying
         several baiduspider "System Process" entries

     I'm afraid my system may have been compromised and am wondering whether these results are any indication of that. If so, how can I eliminate these possible threats? I have MS Security Essentials installed.
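
     Before drawing conclusions, it helps to tie every connection to a concrete process with the built-in tools and keep the output for comparison; two commands to run from an elevated prompt (the interpretation is still manual):

         :: map every connection to its owning executable and PID
         netstat -abno > C:\temp\connections.txt

         :: list processes together with the services they host (identifies w3wp.exe app pools)
         tasklist /svc > C:\temp\tasklist.txt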

    Read the article

  • Exclude regular expression from virtual host

    - by Joao Trindade
     I have a virtual host in Apache which redirects requests to another web server:

         <VirtualHost *:80>
             DocumentRoot /var/www
             ServerName another.host
             ProxyPass / http://another.host2:8081/
             ProxyPassReverse / http://another.host2:8081/
         </VirtualHost>

     I need to exclude a URL pattern from being caught by this virtual host. Basically I don't want requests with the URL

         http://another.host:8081/~username

     to be forwarded to the other server. Can this be done?
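
     mod_proxy can exclude a path by mapping it to "!" before the catch-all rule. The literal /~username form below is the documented usage; the commented regex variant covering all home directories is an assumption:

         <VirtualHost *:80>
             DocumentRoot /var/www
             ServerName another.host

             # exclusions must appear before the general ProxyPass
             ProxyPass /~username !
             # ProxyPassMatch ^/~ !

             ProxyPass / http://another.host2:8081/
             ProxyPassReverse / http://another.host2:8081/
         </VirtualHost>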

    Read the article

  • Websocket handshake response not forwarded from TCP to client

    - by Saharsh
     I am trying to create a WebSocket server. I can see the WebSocket client's opening handshake, and my response to it is received by the client laptop (I can see this in Wireshark), so the TCP connection has been established. But the client (a Chrome WebSocket client extension) never receives the handshake packet. What could cause TCP not to forward the handshake to the client, or the client to be unable to read the TCP message?

     Client handshake:

         GET HTTP/1.1
         Upgrade: websocket
         Connection:Upgrade
         Cache-Control:no-cache
         Host:192.168.0.101
         Origin:http://www.websocket.org
         Pragma:no-cache
         Sec-WebSocket-Extensions:permessage-deflate; client_max_window_bits, x-webkit-deflate-frame
         Sec-WebSocket-Key: qrmw/m+BoZije6h9HYKmVw==
         Sec-WebSocket-Version:13
         Upgrade:websocket

     Server response:

         HTTP/1.1 101 Switching Protocols
         Upgrade: websocket
         Connection: Upgrade
         Sec-WebSocket-Accept: jj1g5Io57m9ks8cme3jkbyo2asc=
         Access-Control-Allow-Origin: http://www.websocket.org
         Server: xyz
         Sec-WebSocket-Extensions:

     Thanks!
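
     Two server-side details worth double-checking, both guesses since the server code is not shown: every header line must end with \r\n and the response must finish with an extra empty line (\r\n\r\n), and Sec-WebSocket-Accept must equal base64(SHA-1(client key + the fixed GUID)). The expected value for the key in this handshake can be computed like this:

         # expected Sec-WebSocket-Accept for the client key above
         echo -n "qrmw/m+BoZije6h9HYKmVw==258EAFA5-E914-47DA-95CA-C5AB0DC85B11" \
             | sha1sum | cut -d' ' -f1 | xxd -r -p | base64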

    Read the article

  • What to have in sources.list on an Ubuntu LTS server (production)?

    - by nbr
     I have several Ubuntu 10.04 LTS servers in production and I'm using apticron to check that my software is up to date, security-wise. However, by default, Ubuntu has the lucid-updates repository enabled. This means lots of low-priority updates (such as this) that I don't need and thus, extra work for me.

     Is it okay to just remove the lucid-updates line(s) in sources.list? I still get security updates via lucid-security, right? So, this is what my sources.list would look like:

         deb http://se.archive.ubuntu.com/ubuntu/ lucid main restricted
         deb http://se.archive.ubuntu.com/ubuntu/ lucid universe
         deb http://security.ubuntu.com/ubuntu lucid-security main restricted
         deb http://security.ubuntu.com/ubuntu lucid-security universe

    Read the article

  • SVN Connection Not Successful

    - by user66850
     I am getting the following message when attempting to connect to our company's SVN repository; the same error occurs whether I try from the OS X command line or from Eclipse. Any ideas on where to troubleshoot? I can access the repository from other similar computers and nobody else on my team has the problem; the issue started on my MacBook Pro yesterday afternoon with no known changes to the OS beforehand.

         $ svn co http://example.ca/cwl/tags/app
         svn: OPTIONS of 'http://example.ca/cwl/tags/app': Could not read status line: connection was closed by server (http://example.ca)

    Read the article

  • Can Sphider be used as a search engine for an intranet?

    - by garcon1986
     Hello. Sphinx is a kind of search engine, but it has to be installed on the server, and I can't install it on ours, so I have to find another solution. I have tested Sphider on a small site, but when I try to integrate it into my intranet it doesn't work. The error output is:

         1. Retrieving: http://localhost/XXX at 16:44:20.
            Updated Link To http://localhost/XXX
            Size of page: 1.54kb. Starting indexing at 16:44:20.
            Page contains less than 10 words
            Links found: 1. New links: 1

         2. Retrieving: http://localhost/XXX.php at 16:44:20.
            Unreachable: http 404
            Links found: 0. New links: 0

     Does anyone have any ideas? Thanks.

    Read the article

  • SSH not working through Double NAT

    - by d_inevitable
     I am trying to set up port forwarding for SSH through two NATs. The first router translates my internet IP to my outer network (10.1.7.0). In the outer network there's a second router that does NAT to my inner network (192.168.1.0). The target server is connected to both the outer network and the inner network. I cannot change the port forwarding options on the outer router; it is currently configured to forward the SSH and HTTP ports to the inner router.

     Topology (simplified from the original ASCII diagram):

         Internet -> Outer Router -> (forwards SSH and HTTP) -> Inner Router -> SSH Server
         The SSH Server sits on both networks: eth0 = 10.1.7.1 (outer), eth1 = 192.168.1.2 (inner).
         An Outer Workstation is on 10.1.7.0/24; an Inner Workstation is on 192.168.1.0/24.

     When connecting from an outer workstation to the address of the inner router, both SSH and HTTP work fine. When connecting from the internet to my public IP, HTTP works fine as well, but SSH just times out, most likely because the reply is not routed back properly. I suspect it's either SSH itself or the fact that the server is connected to both the inner and outer networks. Any ideas how I could resolve this? The routes on the server are currently:

         $ ip route show
         default via 10.1.7.254 dev eth0  metric 100
         10.1.7.0/24 dev eth0  proto kernel  scope link  src 10.1.7.1
         192.168.1.0/24 dev eth1  proto kernel  scope link  src 192.168.1.2

     Do I have to change this? If so, how?
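
     Given those routes, one plausible explanation (an assumption based only on what is shown) is asymmetric routing: SSH forwarded by the inner router reaches the server on eth1, but the replies leave through the default route on eth0, so the inner router's NAT state never sees them. Source-based policy routing sends replies back out the interface their connection arrived on; a sketch with iproute2, where 192.168.1.1 is assumed to be the inner router's address on the inner network:

         # give traffic sourced from the inner address its own routing table
         echo "200 inner" >> /etc/iproute2/rt_tables
         ip route add default via 192.168.1.1 dev eth1 table inner
         ip rule add from 192.168.1.2 table inner
         ip route flush cache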

    Read the article

  • wget-ing protected content with exported cookies

    - by XXL
     I have exported a pair of cookies from Firefox that are valid for the URL in question and tried accessing/downloading the protected content from that address, but the end result is a return to the login page. I have tried doing the same thing on three other websites with a similar outcome. Any clues as to what I might be doing wrong? The syntax I'm using:

         wget --load--cookies=FILE URL

     The debug output:

         DEBUG output created by Wget 1.12 on linux-gnu.
         Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_login lz8xZQ%3D%3D
         Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_pass 2fd4e1c67a2d28fced849ee1bb76e74a
         Stored cookie www.x.org -1 (ANY) / <permanent> <insecure> [expiry 1901-12-13 22:25:44] c_secure_uid GZX4TDA%3D
         --2011-01-14 13:57:02--  www.x.org/download.php?id=397003
         Resolving www.x.org... 1.1.1.1
         Caching www.x.org => 1.1.1.1
         Connecting to www.x.org|1.1.1.1|:80... connected.
         Created socket 5.
         Releasing 0x0943ef20 (new refcount 1).
         ---request begin---
         GET /download.php?id=397003 HTTP/1.0
         User-Agent: Wget/1.12 (linux-gnu)
         Accept: */*
         Host: www.x.org
         Connection: Keep-Alive
         ---request end---
         HTTP request sent, awaiting response...
         ---response begin---
         HTTP/1.1 302 Found
         Date: Fri, 14 Jan 2011 11:26:19 GMT
         Server: Apache
         X-Powered-By: PHP/5.2.6-1+lenny8
         Set-Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765; path=/
         Expires: Thu, 19 Nov 1981 08:52:00 GMT
         Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
         Pragma: no-cache
         Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003
         Vary: Accept-Encoding
         Content-Length: 0
         Keep-Alive: timeout=15, max=100
         Connection: Keep-Alive
         Content-Type: text/html
         ---response end---
         302 Found
         Stored cookie www.x.org -1 (ANY) / <session> <insecure> [expiry none] PHPSESSID 5f2fd97103f8988554394f23c5897765
         Registered socket 5 for persistent reuse.
         Location: www.x.org/login.php?returnto=download.php%3Fid%3D397003 [following]
         Skipping 0 bytes of body: [] done.
         --2011-01-14 13:57:02--  www.x.org/login.php?returnto=download.php%3Fid%3D397003
         Reusing existing connection to www.x.org:80.
         Reusing fd 5.
         ---request begin---
         GET /login.php?returnto=download.php%3Fid%3D397003 HTTP/1.0
         User-Agent: Wget/1.12 (linux-gnu)
         Accept: */*
         Host: www.x.org
         Connection: Keep-Alive
         Cookie: PHPSESSID=5f2fd97103f8988554394f23c5897765
         ---request end---
         HTTP request sent, awaiting response...
         ---response begin---
         HTTP/1.1 200 OK
         Date: Fri, 14 Jan 2011 11:26:20 GMT
         Server: Apache
         X-Powered-By: PHP/5.2.6-1+lenny8
         Expires: Thu, 19 Nov 1981 08:52:00 GMT
         Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
         Pragma: no-cache
         Vary: Accept-Encoding
         Content-Length: 2171
         Keep-Alive: timeout=15, max=99
         Connection: Keep-Alive
         Content-Type: text/html
         ---response end---
         200 OK
         Length: 2171 (2.1K) [text/html]
         Saving to: `x.out'
         0K ..   100% 18.7M=0s
         2011-01-14 13:57:02 (18.7 MB/s) - `x.out' saved [2171/2171]
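
     Two details stand out, offered as observations rather than certainties: the syntax line shows --load--cookies with a doubled dash (the actual option is --load-cookies), and the three c_secure_* cookies were stored with a 1901 expiry, so wget treats them as already expired and only sends the freshly issued PHPSESSID. Re-exporting the cookies in Netscape cookies.txt format with valid expiry dates usually fixes this, or wget can perform the login itself; a sketch of the latter, where the form field names are placeholders:

         # log in with wget and keep the session cookies for the download
         wget --save-cookies=cookies.txt --keep-session-cookies \
              --post-data='username=USER&password=PASS' \
              'http://www.x.org/login.php' -O /dev/null

         wget --load-cookies=cookies.txt 'http://www.x.org/download.php?id=397003'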

    Read the article

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
     Hi, I'm having loads of trouble trying to get proxying, rewriting and SSL to work together in Apache 2. A brief history: my application runs on port 8080, and before adding SSL I used mod_proxy to strip the 8080 from the URLs to and from the server, so instead of www.example.com:8080/myapp the client accessed everything via www.example.com/myapp. Here is the conf that accomplished this:

         ProxyRequests Off
         <Proxy */myapp>
             Order deny,allow
             Allow from all
         </Proxy>
         ProxyPass /myapp http://www.example.com:8080/myapp
         ProxyPassReverse /myapp http://www.example.com:8080/myapp

     What I'm trying to do now is force all requests to /myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number. Simply changing 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are reaching the end of the line. Here is what I have:

         RewriteEngine On
         RewriteCond %{HTTPS} off
         RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}

         ProxyRequests Off
         <Proxy */myapp>
             Order deny,allow
             Allow from all
         </Proxy>
         SSLProxyEngine on
         ProxyPass /myapp https://www.example.com:8443/mloyalty
         ProxyPassReverse /myapp https://www.example.com:8433/mloyalty

     As this stands, requests to anything on the server other than /myapp load fine over HTTP. An HTTP request to /myapp redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (HTTP and HTTPS) to anything one level beyond /myapp, e.g. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information to give, but I think my goals should be clear.
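
     Two small things in the pasted config are worth noting first, as observations only: the ProxyPass lines mix /myapp with /mloyalty, and one line says port 8433 instead of 8443. Beyond that, a common arrangement is to keep the port 80 vhost purely as a redirect and terminate SSL in a 443 vhost that proxies to the plain HTTP backend, so no SSLProxyEngine is needed; a sketch, with the certificate paths and the backend port as assumptions:

         <VirtualHost *:80>
             ServerName www.example.com
             RewriteEngine On
             # force HTTPS for the application only
             RewriteRule ^/myapp(/.*)?$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
         </VirtualHost>

         <VirtualHost *:443>
             ServerName www.example.com
             SSLEngine on
             SSLCertificateFile    /etc/ssl/certs/example.crt
             SSLCertificateKeyFile /etc/ssl/private/example.key

             ProxyRequests Off
             ProxyPreserveHost On
             ProxyPass        /myapp http://www.example.com:8080/myapp
             ProxyPassReverse /myapp http://www.example.com:8080/myapp
         </VirtualHost>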

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
     I encountered something strange in the access log of our Apache server which I cannot explain: requests for web pages that I or my colleagues make from the office's Windows network get repeated by another IP (one we don't know) a couple of seconds later. The user agent repeating our requests is:

         Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)

     Has anyone an idea?

     Update: I've got some more information now. The referrer of the replicated request is set to the URL I requested before, and it's not the exact same request: the protocol version changes from HTTP/1.1 to HTTP/1.0. The IP is not just one; it's always one out of the subnet 80.40.134.*. Only the first request to a resource gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky: I tried random URLs with different HTTP status codes and different file patterns, and 301s and 200s are redone while 404s are not; image extensions seem to be ignored.

     While doing these tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

         66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
         50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

     I wasn't aware of this practice, so I no longer see it as much of a threat. I still want to find out who this is, so any further help is appreciated. I'll check later whether this also happens on another server where I have access to the access logs, and will update here then.

    Read the article

  • How can I get rid of the long Google results URLs?

    - by Teifi
     google.com is always shielded by our firewall. When I search for something on google.com, a result list appears. When I click a link, the URL changes to a processed URL like:

         http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDcQFjAA&url=http%3A%2F%2Fwww.amazon.com%2F&ei=PE_AUMLmFKW9iAfrl4HoCQ&usg=AFQjCNGcA9BfTgNdpb6LfcoG0sjA7hNW6A&cad=rjt

     and then my browser gets blocked, I guess because of google.com. The only useful information in that long processed URL is http%3A%2F%2Fwww.amazon.com (i.e. http://www.amazon.com).

     My questions: What is the meaning of that long processed URL? And is there a way to remove the google.com/url?sa.. prefix each time I click a search result?

    Read the article

  • Reverse Proxy to filter out js files from multiple hosts in nginx

    - by stwissel
     I have a website http://someplace.acme.com that I want my users to access via http://myplace.mycorp.com, a pretty standard reverse proxy setup. The special requirement: any JS file, identified by the .js extension and/or (if possible) the text/javascript MIME type, needs to be served from a different location, a local tool that inspects the JS for potential threats. So I have:

         location / {
             proxy_pass http://someplace.acme.com;
             proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
             proxy_redirect off;
             proxy_buffering off;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }

         location ~* \.(js)$ {
             proxy_pass http://127.0.0.1:8188/filter?source=$1;
             proxy_redirect off;
             proxy_buffering off;
         }

     The JS is still served from the remote host, and I have no idea how to check for the MIME type. What am I missing?
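
     nginx cannot route a request to a different location based on the response's MIME type (the type is only known after proxying), so matching on the URI is the practical option. Note also that in the config above, $1 captures only the literal "js", not the request path. A named capture can pass the full URI to the filter; the filter's query interface below is copied from the question, and the rest is an assumption:

         location ~* ^(?<jsuri>.+\.js)$ {
             # hand the original URI to the local inspection tool
             proxy_pass http://127.0.0.1:8188/filter?source=$jsuri;
             proxy_set_header Host $host;
             proxy_redirect off;
             proxy_buffering off;
         }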

    Read the article

  • Using sed to Download ComboFix automatically

    - by user901398
     I'm trying to write a shell script to grab the dynamic URL at which ComboFix is located on BleepingComputer.com/download/combofix. However, I can't seem to get my regex to match the "click here" download link offered when the download doesn't start automatically. A regex tester says the pattern matches the link, but when I execute the script the result is empty. Here's the entire script:

         #!/bin/bash
         # Download latest ComboFix from BleepingComputer
         wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv
         downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
         echo "DL Page: $downloadpage"
         secondpage="$downloadpage"
         wget -O Download.html $secondpage -nv
         file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
         echo "File: $file"
         wget -O "ComboFix.exe" "$file" -nv
         rm Listing.html
         rm Download.html
         mkdir Tools
         mv "ComboFix.exe" "Tools/ComboFix.exe" -f

     The first two downloads work successfully, and I end up with:

         http://www.bleepingcomputer.com/download/combofix/dl/12/

     but the final sed fails to match, so I never get the download link. The code it's supposed to match is:

         <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
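
     One way to sidestep the brittle whole-line sed match is to let grep -o pull the first absolute ComboFix URL out of the page; a sketch of a replacement for the second extraction step, with the URL pattern copied from the question (so it is only as current as that page layout):

         file=$(grep -oE 'http://download\.bleepingcomputer\.com/dl/[0-9A-Fa-f]+/[0-9A-Fa-f]+/windows/security/anti-virus/c/combofix/ComboFix\.exe' Download.html | head -n1)
         echo "File: $file"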

    Read the article

  • Running localhost webapp projects under domain name using fiddler2

    - by user01
     I have a Tomcat server running on my local dev machine (Windows 8), and I use Fiddler2 to assign an alias to localhost as my domain name (www.mydomainName.com), so my application's pages open in the browser as

         http://www.mydomainName.com/myAppName/welcome.html

     instead of

         http://localhost:8080/myAppName/welcome.html

     I would like the URLs to omit 'myAppName' altogether and look like:

         http://www.mydomainName.com/welcome.html

     How can I configure this?
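
     Dropping the context path is a Tomcat-side change rather than a Fiddler one: the application has to be deployed as the ROOT context. The simplest route is renaming myAppName.war (or the exploded folder) to ROOT under webapps/; alternatively the ROOT context can point at the existing app from conf/server.xml, as sketched below, with autoDeploy off to avoid deploying the app twice (treat the exact attributes as an assumption for your setup):

         <!-- conf/server.xml, inside the <Engine> element -->
         <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="false">
             <Context path="" docBase="myAppName" />
         </Host>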

    Read the article
