Search Results

Search found 15021 results on 601 pages for 'location aware'.

  • apache proxypass to webmin

    - by Ricardo
    I have a problem with an apache2-to-Webmin redirect. My proxy configuration is:

        ProxyRequests Off
        ProxyPreserveHost On
        SSLProxyEngine On
        ProxyPass /admin/webmin/ https://localhost:10000/
        ProxyHTMLURLMap https://localhost:10000 /admin/webmin
        <Location /admin/webmin/>
            ProxyHTMLExtended On
            SetOutputFilter proxy-html
            ProxyPassReverse https://localhost:10000/
            ProxyPassReverse https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/
            Order allow,deny
            Allow from all
        </Location>

    When I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/ there is no problem. But when I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com/admin/webmin the page loses its CSS, and after login I get the error: "The requested URL /session_login.cgi was not found on this server." I think there is an error in my ProxyPass setup, but I don't know what it is.
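    A likely cause is that Webmin itself does not know it is being served under the /admin/webmin/ prefix, so its login form posts to the absolute path /session_login.cgi, which Apache never maps back to the backend. Below is a minimal sketch of the usual Webmin-side fix, assuming a standard install with its configuration in /etc/webmin/config (the paths and the referers value are assumptions, not taken from the question):

        # /etc/webmin/config -- tell Webmin about the external prefix so the links
        # and form actions it generates start with /admin/webmin
        webprefix=/admin/webmin
        webprefixnoredir=1
        # accept requests whose Referer header carries the proxy's hostname
        referers=xxxxxxxxxxxxxxxxxxxx.amazonaws.com

    Webmin's miniserv usually needs a restart after changing these options.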

    Read the article

  • Rsync-like Windows backup tool

    - by Halfgaar
    Hi, I need to back up some Windows machines and have been unable to find the proper tool. What I need is a tool that copies changed files efficiently to a Windows network location, like rsync does. The server will then back that up using rdiff-backup, a tool which does very clever incremental backups. Right now I'm using the backup feature included with Windows 7, but I don't really get along with it; explaining why is too much off-topic, but it doesn't suffice (and seems buggy as well). I looked into Amanda, but as soon as it wanted to install MySQL, I aborted. I also tried DeltaCopy, but unfortunately I don't remember what the problem with that was... Any advice for an rsync-like tool that just does daily syncs to a network location?

    Read the article

  • Apache server as reverse proxy is removing xmlns info from html tag

    - by Johnco
    I have a Java application running in Tomcat, with an Apache HTTP server in front of it as a reverse proxy. However, the proxy is removing all xmlns attributes from the html tag, which breaks Facebook's FBML because it is never parsed. My current config is as follows:

        ProxyRequests off
        ProxyHTMLDocType XHTML
        ProxyPassReverseCookiePath /cas /
        <Location />
            ProxyPass http://localhost:8080/cas
            ProxyPassReverse http://localhost:8080/cas
        </Location>
        ProxyHTMLURLMap /cas /
        SetOutputFilter proxy-html
        <Proxy *>
            Order deny,allow
            Allow from all
            Satisfy all
        </Proxy>

    Thanks in advance.
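    The attribute is most likely being dropped by the proxy-html output filter itself: mod_proxy_html rewrites the HTML it passes through and discards markup it does not recognise. One hedged workaround, sketched below, is to keep the application on the same /cas context path on both sides, so the body filter and its ProxyHTMLURLMap are not needed at all; this assumes the Tomcat context path can stay /cas:

        ProxyRequests Off
        # same path on Apache and Tomcat, so no HTML rewriting is required and the
        # html tag (including its xmlns attributes) passes through untouched
        <Location /cas>
            ProxyPass http://localhost:8080/cas
            ProxyPassReverse http://localhost:8080/cas
        </Location>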

    Read the article

  • nginx redirect TLD to TLD with virtual folder (example.com => example.com/test)

    - by Amund
    I'm running nginx, and in the config file I need the domain example.com to always redirect to example.com/test. I tried various methods of achieving this but I always got a redirect error. What is the correct way to do this? nginx.conf snippet:

        server {
            server_name example.com www.example.com;
            location / {
                rewrite ^.+ /test permanent;
            }
        }

        server {
            listen 80;
            server_name www.example.com example.com;
            location / {
                root /var/www/apps/example/current/public;
                passenger_enabled on;
                rails_env production;
            }
        }

    Thanks!
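    Two things in that snippet tend to cause the loop: the two server blocks share the same server_name (so only one of them is ever used), and the catch-all rewrite inside location / also matches /test itself, redirecting it to /test forever. A minimal sketch of one way around it, assuming a single server block and that only the bare root should be redirected:

        server {
            listen 80;
            server_name example.com www.example.com;

            # exact match: only "/" is redirected, so /test is never rewritten again
            location = / {
                return 301 /test;
            }

            location / {
                root /var/www/apps/example/current/public;
                passenger_enabled on;
                rails_env production;
            }
        }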

    Read the article

  • Java and Tomcat issue

    - by Steven Peake
    All, I'm at the end of my tether with this and would really appreciate any pointers. I have a system that was installed with Tomcat 5.5.9 and JRE 6 update 13. My issue is that someone has come along and installed Tomcat 6 and JRE 5. By this simple action they have destroyed the apps that were originally running on the machine. I have tried to remove the Tomcat and Java installs, and from what I can see it has all been removed. I am now trying to re-install the original app, but the installer now throws up a file location error and will not re-install the Apache part of the package. Can anyone advise of any hidden location where the Apache software may have installed components? This is all installed on Windows Server 2003 R2. Many thanks for your help in advance.

    Read the article

  • Apache server-status when running as proxy server

    - by f-z-N
    We are running Apache as a proxy server with Tomcat behind it. We are using the server-status module (mod_status), but when we try to access the status page at https://host.com/server-status the request is proxied to Tomcat and we get a 404 error. I am quite new to this and have tried going through the Apache docs, but I am unable to figure out the solution. FYI, we have SSL enabled. Current ssl.conf settings:

        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy http://localhost:8081/*>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/
        ProxyPassReverse / http://myhost:8081/
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 10.90
        </Location>
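    The catch-all ProxyPass / sends every path, including /server-status, to Tomcat before the Location block is consulted. A minimal sketch of the usual fix is to exclude that path from proxying with a ! rule placed before the catch-all (ProxyPass rules are applied in configuration order, so the exclusion must come first):

        # do not proxy the status page; the exclusion must precede the catch-all
        ProxyPass /server-status !
        ProxyPass / http://localhost:8081/
        ProxyPassReverse / http://localhost:8081/

        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 10.90
        </Location>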

    Read the article

  • Monit won't start/stop any processes

    - by Vaughan Magnusson
    Hi, I've got monit running on a Linux vserver, installed in a custom location (/home/user/bin/monit), as that is the only suitable location according to the web host provider. When I installed monit I used ./configure --prefix=/home/user. Monit itself runs, sends me emails about its activity, and the control file syntax is correct. However, monit cannot seem to start or stop anything, or even run the simplest of scripts. For example, using 'monit stop all', I try to run the following stop command:

        stop = "/bin/bash /home/user/simple_script.sh"

    This fails (and says so in the log). I can't figure out why it is failing, can anyone help with this?

    Read the article

  • How Do I Disable URL Prepending in the Firefox 3 Title Bar When Opening a New Window With JavaScript

    - by N Rahl
    For (understandable) security reasons, Firefox does not allow JavaScript to open a new window without the address/location bar AND without prepending the page's URL to the title in the title bar. For example, when you set <title>My Site</title> in the header and open the page using location=no, Firefox changes the title to read "http://www.mysite.com - My Site - Mozilla Firefox". I would like it to simply say "My Site". Everything I've read suggests this behaviour can't be altered with scripting, and as such, this is not a scripting question. What I would like to know is: which setting(s) can I change in the browser itself to disable URL prepending in the title of new windows? This is for a company intranet, and I control all of the computers/browsers that connect to the application.

    Read the article

  • How to stop Nginx sending static file requests to the CakePHP app controller when running Cake in a subfolder

    - by Throlkim
    I'm trying to run a CakePHP app from within a subfolder on Nginx, but the static files are not being found and are instead being passed to the app controller. Here's my current config:

        location /uniquetv {
            index index.php index.html;
            if (-f $request_filename) {
                break;
            }
            if (!-e $request_filename) {
                rewrite ^/uniquetv(.+)$ /uniquetv/webroot/$1 last;
                break;
            }
        }

        location /uniquetv/webroot {
            index index.php;
            if (!-e $request_filename) {
                rewrite ^/uniquetv/webroot/(.+)$ /uniquetv/webroot/index.php?url=$1 last;
                break;
            }
        }

    Any ideas? :)

    Read the article

  • Mercurial not receiving push

    - by Jeffrey04
    I have a Mercurial web frontend (hgwebdir.cgi) installed on a server, with nginx in front of it as a reverse proxy, as my friend suggested. However, whenever a large changeset is pushed (via a script), it fails. I found an issue ticket on Google Code describing a similar problem, and there is a solution in comment #39 that says:

        So the server side answer is: don't send the 401 back early. Be as slow/dumb as 'hg serve' and make the hg client send the bundle twice.

    How do I do that? My current nginx config:

        location /repo/testdomain.com {
            rewrite ^(.*) http://bpj.kkr.gov.my$1/hgwebdir.cgi;
        }
        location /repo/testdomain.com/ {
            rewrite ^(.*) http://bpj.kkr.gov.my$1hgwebdir.cgi;
        }
        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_buffering on;
            client_max_body_size 4096M;
            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }

    From the access log we keep seeing 408 entries:

        incoming.ip.address - - [18/Nov/2009:08:29:31 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"
        incoming.ip.address - - [18/Nov/2009:08:37:14 +0800] "POST /repo/testdomain.com/hgwebdir.cgi/example_repository?cmd=unbundle&heads=73121b2b6159afc47cc3a028060902883d5b1e74 HTTP/1.1" 408 0 "-" "mercurial/proto-1.0"

    Is there anything else I can do on the server? Solving it on the server side is preferable :/

    Further findings: Bitbucket seems to have this solved on the server side (check the liquidhg Bitbucket project and its Diagnosis wiki page), although I can't find the config anywhere :/

        What happens next varies depending on your server. Some servers refuse the BODY, simply closing the pipe from the client and causing Mercurial to fail. Some, like Apache (at least the way I configure it, and that could be part of the problem) and nginx (the way BitBucket.org configures it), accept the BODY, though it may take a few retries. Bottom line: if Mercurial doesn't fail the push, it sends the changeset data at least once to a server that has already told it it lacks credentials (more on this at Blame). Assuming Mercurial is still running, it resends the "unbundle" request and data, this time with authentication. Finally, Apache accepts the data successfully. Nginx, OTOH, at least under BitBucket's configuration, seems to reassemble the previous body (the one that lacked authentication) and somehow keep Mercurial from re-sending the whole body.
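    A 408 in the access log means nginx gave up waiting for the request body, which fits the retransmit-the-bundle-after-401 pattern quoted above. One hedged server-side tweak, sketched here, is simply to give the client more time to send the body by raising client_body_timeout in the proxied location (the 300-second value is an arbitrary example, not taken from the question):

        location /repo/testdomain.com/hgwebdir.cgi {
            proxy_pass http://localhost:81/repo/testdomain.com/hgwebdir.cgi;
            proxy_set_header Host $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            client_max_body_size 4096M;

            # nginx replies 408 when it waits longer than client_body_timeout
            # (60s by default) for the next chunk of the request body; large
            # bundles that get sent twice need considerably longer
            client_body_timeout 300s;

            proxy_read_timeout 30000;
            proxy_send_timeout 30000;
        }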

    Read the article

  • 502 Bad Gateway with nginx + apache + subversion + ssl (SVN COPY)

    - by theplatz
    I've asked this on stackoverflow, but it may be better suited for serverfault... I'm having a problem running Apache + Subversion with SSL behind an Nginx proxy and I'm hoping someone might have the answer. I've scoured google for hours looking for the answer to my problem and can't seem to figure it out. What I'm seeing are "502 (Bad Gateway)" errors when trying to MOVE or COPY using subversion; however, checkouts and commits work fine. Here are the relevant parts (I think) of the nginx and apache config files in question:

    Nginx:

        upstream subversion_hosts {
            server 127.0.0.1:80;
        }

        server {
            listen x.x.x.x:80;
            server_name hostname;

            access_log /srv/log/nginx/http.access_log main;
            error_log /srv/log/nginx/http.error_log info;

            # redirect all requests to https
            rewrite ^/(.*)$ https://hostname/$1 redirect;
        }

        # HTTPS server
        server {
            listen x.x.x.x:443;
            server_name hostname;

            passenger_enabled on;
            root /path/to/rails/root;

            access_log /srv/log/nginx/ssl.access_log main;
            error_log /srv/log/nginx/ssl.error_log info;

            ssl on;
            ssl_certificate server.crt;
            ssl_certificate_key server.key;

            add_header Front-End-Https on;

            location /svn {
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

                set $fixed_destination $http_destination;
                if ( $http_destination ~* ^https(.*)$ ) {
                    set $fixed_destination http$1;
                }
                proxy_set_header Destination $fixed_destination;

                proxy_pass http://subversion_hosts;
            }
        }

    Apache:

        Listen 127.0.0.1:80

        <VirtualHost *:80>
            # in order to support COPY and MOVE, etc - over https (443),
            # ServerName _must_ be the same as the nginx servername
            # http://trac.edgewall.org/wiki/TracNginxRecipe
            ServerName hostname
            UseCanonicalName on

            <Location /svn>
                DAV svn
                SVNParentPath "/srv/svn"

                Order deny,allow
                Deny from all
                Satisfy any
                # Some config omitted ...
            </Location>

            ErrorLog /var/log/apache2/subversion_error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/subversion_access.log combined
        </VirtualHost>

    From what I could tell while researching this problem, the server name has to match on both the apache server as well as the nginx server, which I've done. Additionally, this problem seems to stick around even if I change the configuration to use http only.

    Read the article

  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I get a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

        SSLProxyEngine On
        <Location />
            ProxyPass https://localhost:8443/
            ProxyPassReverse https://localhost:8443/
        </Location>

    I have more than enough RAM, CPU and HDD (and I have checked); there are no spikes. Also, the posted information does save; it just errors when trying to show me a "This information has been saved." green/red block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

        2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"

    Read the article

  • Status code in nginx try_files directive

    - by Hamish
    Is it possible to use the current status code as a parameter in try_files? For example, we try to provide a host-specific 503 static response, or a server-wide fallback if it isn't found:

        error_page 503 @error503;

        location @error503 {
            root /path_to_static_root/;
            try_files /$host/503.html /503.html =503;
        }

    There are a number of these directives, so it would be convenient to do something like:

        error_page 404 @error;
        error_page 500 @error;
        error_page 503 @error;

        location @error {
            root /path_to_static_root/;
            try_files /$host/$status.html /$status.html =$status;
        }

    But the Variables documentation doesn't list anything that we could use to do this. Is it possible, or is there an alternative way to do this?
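    As far as I know, nginx does not expose the response status as a variable that try_files can use in this context, so the working (if repetitive) form is one named location per code, following the pattern of the first snippet above; a sketch for the three codes mentioned:

        error_page 404 @error404;
        error_page 500 @error500;
        error_page 503 @error503;

        # one named location per status code; each falls back from the
        # host-specific page to the server-wide page to a bare status
        location @error404 { root /path_to_static_root/; try_files /$host/404.html /404.html =404; }
        location @error500 { root /path_to_static_root/; try_files /$host/500.html /500.html =500; }
        location @error503 { root /path_to_static_root/; try_files /$host/503.html /503.html =503; }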

    Read the article

  • Nginx Rewrite to Previous Directory

    - by ThinkBohemian
    I am trying to move my blog from blog.example.com to example.com/blog. To do this I would rather not move anything on disk, so instead I changed my nginx configuration file to the following:

        location /blog {
            if (!-e $request_filename) {
                rewrite ^.*$ /index.php last;
            }
            root /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html index.html;
            passenger_enabled off;
            index index.html index.htm index.php;
            try_files $uri $uri/ @blog;
        }

    This works great, but when I visit example.com/blog nginx looks for /home/demo/public_html/blog.example.com/current/public/blog/index.php instead of /home/demo/public_html/blog.example.com/current/public/index.php. Is there a way to put in a rewrite rule so that the server automatically takes out the /blog/ directory? Something like:

        location /blog {
            rewrite \\blog\D \;
        }
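    With root, nginx always appends the full request URI to the directory, which is why /blog ends up under .../public/blog/. A minimal sketch of the usual alternative is alias, which substitutes the matched /blog prefix instead of appending it; the index, passenger and fallback lines are simply carried over from the question:

        location /blog {
            # alias replaces the /blog prefix with this path rather than appending it
            alias /home/demo/public_html/blog.example.com/current/public/;
            index index.php index.html index.htm;
            passenger_enabled off;
            try_files $uri $uri/ @blog;
        }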

    Read the article

  • PHP at the root directory using Nginx on Linode and Ubuntu 12.04

    - by Steve Kinney
    I originally set up my Linode for the Sinatra applications I was developing, served with Phusion Passenger, and I have it working great for that. However, as time goes on, I find myself needing just a wee bit of PHP to do a server-side thing here or there. My basic setup was based off of this Linode recipe (I copied and pasted the parts that I needed; I did not install Redis and Node). If I go to http://scholarsnyc.com/index.php everything works great. If I just go to the base URL, however, I get a 403 Forbidden error (I have a vanilla HTML page there for now). I've played with file permissions, and the same file will work if I call it directly. I've done my homework and nothing I try seems to work. I'm sure there is an obvious error, and I'm also sure there are some rookie mistakes in my Nginx configuration (some of those mistakes are artifacts of trying different fixes from my research):

        user www-data www-data;
        worker_processes 1;

        events {
            worker_connections 1024;
        }

        upstream php {
            server 127.0.0.1:9001;
        }

        http {
            passenger_root /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12;
            passenger_ruby /usr/local/bin/ruby;

            include mime.types;
            default_type application/octet-stream;
            index index.php index.html index.htm;
            sendfile on;
            keepalive_timeout 65;

            server {
                server_name localhost scholarsnyc.com www.scholarsnyc.com;
                root /srv/www/scholarsnyc.com/public;
                location / {
                    index index.php;
                }
                location ~ \.php$ {
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                    include fastcgi_params;
                }
            }

            server {
                server_name data.scholarsnyc.com;
                root /srv/www/data.scholarsnyc.com/public;
                passenger_enabled on;
            }

            server {
                server_name tech.scholarsnyc.com;
                root /srv/www/tech.scholarsnyc.com/public;
                location / {
                    root /srv/www/tech.scholarsnyc.com/public;
                    index index.php index.html index.htm;
                }
            }
        }

    Any other optimizations are also appreciated. I literally don't know what to do at this point.
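    One detail worth checking (an observation, not a verified diagnosis): the location / block for scholarsnyc.com narrows index to index.php only, overriding the http-level "index index.php index.html index.htm;", so the plain HTML page at the root can never be picked as an index file and a bare "/" request can end in 403. A sketch of that block with the wider index list and an explicit fallback:

        location / {
            # let the static HTML page satisfy "/" until a PHP front page exists
            index index.php index.html index.htm;
            try_files $uri $uri/ /index.php?$args;
        }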

    Read the article

  • DFS-R + VSS - Run VSS at all locations or one?

    - by pbarranis
    I have 4 servers, one at each office location, each sharing read/write copies of 1.5TB of data. Changes are replicated 24/7 between all 4 servers via DFS-R. All servers run Windows 2008 R2. I'm interested in implementing VSS (Volume Shadow Services) on this data. I've read that DFS-R and VSS play nicely together, but I'm left with one unanswered question: turn on VSS at all locations or just one (the headquarters). Can I run VSS at all 4 locations safely, or is it wiser to run it at just 1 location? Thanks!

    Read the article

  • Can't Create New folder in my video folder

    - by tiki
    My OS is Windows 7 Ultimate 32-bit. My video folder location is C:\Users\User\Documents\Downloads\Video. I can't create a folder in the Video folder or see the files I saved. But I can see the files in the root location of the folder, i.e. by going to the folder from My Computer: I can create and find files when I go to My Computer > C > Users > User > Documents > Downloads > Video. This is an unusual problem I have never seen. Even though I am a system administrator, I can't fix the problem. Please help me out. Thanks in advance.

    Read the article

  • Scripts under pm/sleep.d are not getting called when suspending with KDE 4.3

    - by Richard Corden
    Fujitsu-Siemens H240, Slackware-current, KDE 4.3.2. I would like to perform some additional steps when my laptop suspends. I found this SU question which is very close to what I am asking; however, the scripts that I placed in that directory are not being called for me. This could be a Slackware thing, or it's possible that KDE has a different location for these scripts. I am suspending by using the "Suspend" radio button on the "Guidance Power Manager" dialog of KDE. Is there a standard location where I can place my scripts so they'll be run before and then after the machine has suspended?

    Read the article

  • VPN trace route

    - by Jake
    I am inside an Active Directory (AD) domain and trying to trace the route to another AD domain at a remote site, with the two sites supposedly connected by a VPN. The local domain can be accessed at 192.168.3.x and the remote location at 192.168.2.x. When I do a tracert, I am surprised to see that the results did not show the intermediate ISP nodes. If I used the public IP of the remote location, then a normal tracert going through every intermediate node would show.

        1    <1 ms    <1 ms    <1 ms  192.168.3.1
        2     1 ms    <1 ms    <1 ms  192.168.3.254
        3     7 ms     7 ms     7 ms  212.31.2xx.xx
        4   197 ms   201 ms   196 ms  62.6.1.2xx
        5   201 ms   201 ms   210 ms  vacc27.norwich.vpn-acc.bt.net [62.6.192.87]
        6   209 ms   209 ms   209 ms  81.146.xxx.xx
        7   209 ms   209 ms   209 ms  COMPANYDOMAIN [192.168.2.6]

    Can someone explain how this VPN tunnelling works? Does this mean the VPN is technically faster than going without?

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

        <Location /svn>
            DAV svn
            SVNParentPath /var/svn/repos

            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dev.passwd
            Require valid-user
        </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication... It does the exact same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give permission to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.

    Read the article

  • Lightweight monitoring for a Windows XP laptop

    - by kazanaki
    Hello, I have a Windows XP laptop in a remote location. I would like to get an overview of its CPU/memory statistics remotely. Monitoring a specific service (a Tomcat instance) would be nice but not essential. I have looked at the usual monitoring solutions (Nagios, Cacti, etc.) and they are all very heavy. I do not want to install MySQL, a web server and other stuff like that on the laptop. I don't even need a web solution at all. It could just be a simple command-line app listening on a port, and on my machine another GUI application (not a web browser) would connect to it. Is there something like this available?

    Read the article

  • Restoring Box Sync (box.com) using physical media after OS reinstall

    - by sjrm
    I have my Box folder on a different drive (D:) than my operating system drive (C:). I followed these instructions to set the local folder on the second drive: https://support.box.com/hc/en-us/articles/200852947-How-Can-I-Change-the-Default-Folder-Location-for-Box-Sync- I had to format and reinstall Windows 7, leaving the other drive (D:) untouched. Now I have reinstalled Box Sync and directed it to the same location where I already have all my files. However, Box Sync renamed the existing folder to "Box ([email protected])" as a backup, and is downloading everything again. As I have around 30 GB of files, and it takes many hours to download them, I just want to use the files that I already have on my local disk. I tried copying all the files into the new local Box folder, but files started to get duplicated. Is there any way to avoid downloading everything again each time I reinstall my OS? Thanks.

    Read the article

  • How do you change the "scan this dir for additional ini files" path?

    - by amvx
    I managed to get the custom INI to load, but it's still loading other .ini files from the default location. I created an fcgi wrapper that passed the ini path as a parameter, and that worked. Now these other .ini files just need to be loaded from the same dir as my custom ini. The problem is that the other .ini files are overriding the settings in my custom php.ini =/ I realize the problem now is that the php.fcgi was compiled with a custom path parameter, so that's a problem. I might have to recompile it using a different location, or none at all. I'd hate to have to compile an fcgi for each domain =/
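    A recompile may not be necessary: the scan directory baked in with --with-config-file-scan-dir can be overridden per process through the PHP_INI_SCAN_DIR environment variable, and an empty value disables the extra scan entirely. Below is a minimal sketch of such an fcgi wrapper; the binary path and ini path are assumptions, not taken from the question:

        #!/bin/sh
        # Override the compiled-in "scan this dir for additional .ini files" path.
        # An empty PHP_INI_SCAN_DIR means: scan no extra directory at all; point it
        # at a per-domain directory instead if extra .ini files are wanted.
        PHP_INI_SCAN_DIR=""
        export PHP_INI_SCAN_DIR

        # -c selects the custom php.ini (this path is hypothetical)
        exec /usr/bin/php-cgi -c /home/user/custom/php.ini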

    Read the article

  • Rails with phusion passenger and wordpress

    - by Venu
    We had a site developed using Ruby on Rails. It had a website, web services for a mobile app, and an admin panel to manage data. We then started using WordPress to manage the site content. We have finished development and now have to move to production. This is the current virtual host code that makes WordPress work under the /wordpress URI:

        <Location /wordpress>
            PassengerEnabled off
            <IfModule mod_rewrite.c>
                RewriteEngine On
                RewriteBase /wordpress/
                RewriteCond %{REQUEST_FILENAME} !-f
                RewriteCond %{REQUEST_FILENAME} !-d
                RewriteRule . /wordpress/index.php [L]
            </IfModule>
        </Location>

    I want to make Phusion Passenger handle the /admin and /api URIs, and / to go to WordPress. Can we change the document root based on the URI, or is there a better solution?
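    One pattern worth sketching (hedged: the paths, and the assumption that /admin and /api both belong to the same Rails app, are mine) is to keep WordPress as the DocumentRoot and mount the Rails app at sub-URIs using Passenger's sub-URI deployment, rather than switching document roots:

        # 1) expose the Rails app's public/ directory inside the WordPress docroot
        #    (done once on the filesystem, outside Apache config):
        #      ln -s /var/www/railsapp/public /var/www/wordpress/admin
        #      ln -s /var/www/railsapp/public /var/www/wordpress/api

        DocumentRoot /var/www/wordpress

        # 2) tell Passenger that a Rails app is mounted at these sub-URIs
        #    (RailsBaseURI on Passenger 3 and earlier; newer releases use PassengerBaseURI)
        RailsBaseURI /admin
        RailsBaseURI /api

    The existing <Location /wordpress> block with PassengerEnabled off can stay as it is, since Passenger is only engaged for the two sub-URIs.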

    Read the article
