Search Results

Search found 10931 results on 438 pages for 'struts config'.


  • Remove Content-Length header in nginx proxy_pass

    - by Luc
    I use nginx with a proxy_pass directive. When the application the request is proxied to returns a response, nginx seems to add a Content-Length header. Is it possible to remove this additional header?

    UPDATE: I have re-installed nginx with the headers-more module, but I still get the same result. My config is:

        upstream my_sock {
            server unix:/tmp/test.sock fail_timeout=0;
        }

        server {
            listen 11111;
            client_max_body_size 4G;
            server_name localhost;
            keepalive_timeout 5;

            location / {
                more_clear_headers 'Content-Length';
                proxy_pass http://my_sock;
                proxy_redirect off;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
            }
        }
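
    One thing that may be worth trying (a sketch, not a confirmed fix: proxy_hide_header only drops headers received from the upstream, so it won't help if nginx itself produces the length after buffering):

        location / {
            proxy_pass http://my_sock;
            proxy_hide_header Content-Length;   # drop the upstream's Content-Length, if that is where it comes from
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }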

    Read the article

  • forward same port but for two different IPs (cisco)

    - by Colin
    Hi! I have a Cisco router running IOS 12.0(25) responding to two different IP addresses: IP_A and IP_B. Behind this router I also have two different servers: server_A and server_B. What I want is to forward port 22 to both servers, so:

        IP_A, port 22 -> server_A, port 22
        IP_B, port 22 -> server_B, port 22

    At the moment this only works for one of them (server_A). This is my config:

        interface Ethernet0/0
         description Internet
         ip address IP_A 255.255.255.0
         ip address IP_B 255.255.255.0 secondary
         no ip directed-broadcast
         ip nat outside
         no ip mroute-cache
         no cdp enable

        ip nat pool pool_A IP_A IP_A netmask 255.255.255.0
        ip nat pool pool_B IP_B IP_B netmask 255.255.255.0
        ip nat inside source list A pool pool_A overload
        ip nat inside source list B pool pool_B overload
        ip nat inside source static tcp server_B 22 IP_B 22 extendable
        ip nat inside source static tcp server_A 22 IP_A 22 extendable
        access-list A permit server_A
        access-list B permit server_B

    Read the article

  • I cannot cd into "LaunchAgents" on my MacBook

    - by why
    After installing mongodb on my MacBook Pro, it tells me:

        If this is your first install, automatically load on login with:
          cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
          launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist

        If this is an upgrade and you already have the org.mongodb.mongod.plist loaded:
          launchctl unload -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
          cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents
          launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist

        Or start it manually:
          mongod run --config /usr/local/Cellar/mongodb/1.6.3-x86_64/mongod.conf

    But after I copy org.mongodb.mongod.plist to ~/Library/LaunchAgents, it tells me:

        launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist
        launchctl: Couldn't stat("/Users/liuqiang/Library/LaunchAgents/org.mongodb.mongod.plist"): Not a directory

    Also, I cannot cd into ~/Library/LaunchAgents, but I can ls it. ~/Library/LaunchAgents is a strange directory on my Mac.
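
    The "Not a directory" error usually means some component of that path is a plain file rather than a directory; a hedged diagnostic sketch:

        ls -ld ~/Library/LaunchAgents        # is it really a directory, or a regular file?
        file ~/Library/LaunchAgents
        # if it turns out to be a file, move it aside and recreate the directory:
        mv ~/Library/LaunchAgents ~/Library/LaunchAgents.bak
        mkdir ~/Library/LaunchAgents
        cp /usr/local/Cellar/mongodb/1.6.3-x86_64/org.mongodb.mongod.plist ~/Library/LaunchAgents/
        launchctl load -w ~/Library/LaunchAgents/org.mongodb.mongod.plist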

    Read the article

  • Apache rewrite rules behind a nginx proxy

    - by Tuinslak
    Hi, I am running nginx (:80) in front of an Apache webserver (:8080). Nginx config (snippet):

        location / {
            proxy_pass http://www.domain.tld:8080;
            proxy_set_header X-Real-IP $remote_addr;

    If I set localhost instead of www.domain.tld, my browser gets redirected to http://localhost:8080. Apache rewrite rules:

        RewriteEngine On
        Options +FollowSymlinks
        RewriteBase /

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !\..+$
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule (.*) http://%{HTTP_HOST}/$1/ [L,R=301]

        RewriteCond %{REQUEST_URI} !v2/
        RewriteRule ^(.*)$ v1/$1 [L]

    So far, so good. However, every link (which uses relative paths) appears as http://www.domain.tld:8080/page instead of staying on port 80. Is there any way to solve this through the rewrite rules? I don't want to use absolute paths. Thanks
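
    One likely culprit (a sketch, assuming default proxy settings): without an explicit Host header, nginx sends $proxy_host, which includes :8080, so Apache's %{HTTP_HOST} drags the port into every redirect. Passing the original host through may be enough:

        location / {
            proxy_pass http://www.domain.tld:8080;
            proxy_set_header Host $host;              # keep the client's host, without :8080
            proxy_set_header X-Real-IP $remote_addr;
        }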

    Read the article

  • Linux router with different gateways for incoming and outgoing connections

    - by nkout
    I have the following topology:

        LAN users: 192.168.1.2-254 (192.168.1.0/24)
        gateway1:  192.168.2.2/24 - used for all outgoing connections of LAN users (default gateway)
        gateway2:  192.168.3.2/24 - used for incoming services (destination NAT, ports 80 and 443 are forwarded to 192.168.2.1)

        Linux router/server R:
          eth0 192.168.1.1/24: LAN
          eth1 192.168.2.1/24: WWAN1
          eth2 192.168.3.1/24: WWAN2

    I want to route all outgoing traffic coming from the LAN and from R via 192.168.2.2, and route the responses to incoming connections via 192.168.3.2. My config:

        ifconfig eth0 up 192.168.1.1 netmask 255.255.255.0
        ifconfig eth1 up 192.168.2.1 netmask 255.255.255.0
        ifconfig eth2 up 192.168.3.1 netmask 255.255.255.0
        echo 0 >/proc/sys/net/ipv4/ip_forward
        route add default gw 192.168.2.2
        iptables -t nat -A POSTROUTING -d !192.168.0.0/16 -j MASQUERADE

    I want to add an iptables rule to mark incoming traffic from WWAN2 and send the responses back out via WWAN2, while keeping the default gateway on WWAN1.
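
    A hedged sketch of the usual connection-marking approach (the table name and mark value are arbitrary; note that a router also needs ip_forward set to 1):

        echo 1 > /proc/sys/net/ipv4/ip_forward
        echo "200 wwan2" >> /etc/iproute2/rt_tables          # add a routing table for the second uplink
        ip route add default via 192.168.3.2 dev eth2 table wwan2

        # mark connections that arrive via WWAN2 and restore the mark on their reply packets
        iptables -t mangle -A PREROUTING -i eth2 -m state --state NEW -j CONNMARK --set-mark 2
        iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark

        # route anything carrying that mark through the WWAN2 table
        ip rule add fwmark 2 table wwan2
        ip route flush cache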

    Read the article

  • tailwatchd - chkservd on host.domain.com status: hang

    - by Zim3r
    I was notified of this error by email on the destination server while transferring a server:

        The chkservd sub-process with pid 17420 was running for 602 seconds. The sub-process was terminated as it exceeded the time between checks of 300 seconds. Please check /var/log/chkservd.log and /usr/local/cpanel/logs/tailwatchd_log to discover the

    What does it mean? This also happened:

        ftpd failed @ Wed Aug 8 11:26:38 2012. A restart was attempted automagically.
        Service Check Method: [socket connect]
        Reason: Timeout while trying to get data from service: Died at /usr/local/cpanel/Cpanel/TailWatch/ChkServd.pm line 607.
        Number of Restart Attempts: 1
        Startup Log:
        Starting pure-config.pl: Running: /usr/sbin/pure-ftpd -O clf:/var/log/xferlog --daemonize -A -c50 -B -C8 -D -fftp -H -I15 -lextauth:/var/run/ftpd.sock -L10000:8 -m4 -s -U133:022 -u100 -Oxferlog:/usr/local/apache/domlogs/ftpxferlog -k99 -Z -Y1 -JHIGH:MEDIUM:+TLSv1:!SSLv2:+SSLv3 [ OK ]
        Starting pure-authd:

    Read the article

  • Running a home mail server using dynamic dns

    - by user4009
    Hi, is it possible to run an email server on my home box using dynamic DNS? The scenario is: I want to auto-CC all incoming and outgoing emails from one of my accounts to another, via some server-side config instead of configuring email client rules. I have tried Google Apps Mail, but it doesn't allow auto-CC of outgoing emails. After having read tons of blogs, forum messages, etc. (hope I have been reading the correct info :) ), the only option to achieve what I need seems to be to set up my own mail server, but the cost of getting a static IP doesn't fit my budget. Can someone please point me in the correct direction? Platform doesn't matter, I can set up a Windows or Linux server. Many thanks

    Read the article

  • Performance & Security Factors of Symbolic Links

    - by Stoosh
    I am thinking about rolling out a very stripped-down version of release management for some PHP apps I have running. Essentially the plan is to store each release in /home/release/1.x etc. (exported from a tag in SVN), then symlink it to /live_folder and change the document root in the Apache config. I don't have a problem with setting all this up (I've actually got it working at the moment); however, I'm a developer with only basic knowledge of the server-admin side of things. Is there anything I need to be aware of from a security or performance perspective when using this method of release management? Thanks
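
    For reference, a sketch of the switch step itself, assuming the DocumentRoot stays pointed at the /live_folder symlink (the mv trick keeps the swap atomic so requests never see a half-updated link; Apache needs Options +FollowSymLinks for the directory):

        ln -s /home/release/1.4 /live_folder.new      # 1.4 is just an example release
        mv -T /live_folder.new /live_folder           # atomic replace of the symlink (GNU mv)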

    Read the article

  • Authentication in Apache2 with mod_dav_svn

    - by Poita_
    I'm having some trouble setting up authentication in Apache2 for an SVN repository that's being served using mod_dav_svn. Here is my Apache config for the directory:

        <Location /svn>
            DAV svn
            SVNParentPath /var/svn/repos
            AuthType Basic
            AuthName "Subversion Repository"
            AuthUserFile /etc/apache2/dev.passwd
            Require valid-user
        </Location>

    I can use svn with the projects under /var/svn/repos, so I know that the DAV is working, but when I do svn updates or commits (or anything), Apache doesn't ask for any authentication. It does the exact same thing whether the Auth directives are there or not. The permissions on the repository directory (and all subdirectories/files) only give permission to www-data (the Apache2 user/group). I have also ensured that all relevant modules are enabled (in particular mod_auth is enabled, as are all mod_dav* modules). Any ideas why svn commands aren't authenticating? Thanks in advance.
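
    A hedged checklist sketch for this kind of setup (Debian/Ubuntu-style commands; the username is a placeholder) — the most common cause is a working copy checked out over svn:// or file://, which never touches Apache at all:

        svn info | grep ^URL                              # should show http://your.server/svn/..., not svn:// or file://
        sudo htpasswd -c /etc/apache2/dev.passwd alice    # create the password file if it doesn't exist yet
        sudo a2enmod dav dav_svn auth_basic authn_file
        sudo apache2ctl configtest && sudo apache2ctl graceful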

    Read the article

  • AWS VPC - why have a private subnet at all?

    - by jkim
    In Amazon VPC, the VPC creation wizard allows one to create either a single "public subnet" or a "public subnet" plus a "private subnet". Initially, the public + private subnet option seemed good for security reasons, allowing web servers to be put in the public subnet and database servers in the private subnet. But I've since learned that EC2 instances in the public subnet are not reachable from the Internet unless you associate an Amazon Elastic IP with the instance. So it seems that with just a single public-subnet configuration, one could simply opt not to associate an Elastic IP with the database servers and end up with the same sort of security. Can anyone explain the advantages of a public + private subnet configuration? Are the advantages of this config more to do with auto-scaling, or is it actually less secure to have a single public subnet?

    Read the article

  • 500 internal server error php long running process

    - by Sabirul Mostofa
    I am trying to run a long PHP process and it ends with a 500 Internal Server Error. It executes fine for about 8 minutes. I have rebooted the machine after changing the PHP settings.

    PHP config:

        max_execution_time: 3600

    After around 10 minutes, ps ax | grep php shows:

        19007 ?  S  0:08 /usr/bin/php /home/gypsy/public_html/index.php

    I have set ignore_user_abort to true. The process gets stuck at 00:08 and isn't executed further. The Apache error log shows the error:

        Script timed out before returning headers: index.php

    It seems that somehow max_execution_time isn't working. Any suggestion would be a great help.
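
    The "Script timed out before returning headers" message typically comes from the Apache side rather than from PHP, so raising max_execution_time alone may not help. A hedged sketch of the directives involved if PHP runs through mod_fcgid (directive names are for mod_fcgid 2.3+; older releases use IPCCommTimeout/BusyTimeout):

        # Apache core request timeout
        Timeout 3600

        <IfModule mod_fcgid.c>
            FcgidIOTimeout   3600     # how long Apache waits for output from the FastCGI process
            FcgidBusyTimeout 3600     # how long a request may keep a process busy before it is killed
        </IfModule>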

    Read the article

  • How to open a program on particular desktop?

    - by Vi.
    When I start a GUI program, its window appears on the currently active desktop (essentially, on a random desktop). How can I make it appear on a specified desktop? For example, at startup I want certain programs to be started and distributed across desktops. I've already set up the Openbox config file to force some programs to always start on a specific desktop. Ideally it would be something like:

        start_on_desktop 1 gnome-terminal --tab -e program1 --tab -e program2
        start_on_desktop 2 gnome-terminal --tab -e program3 --tab -e program4
        start_on_desktop 3 firefox

    It should also be able to start the same program on another desktop. I also dislike it when I start a program while on desktop X, switch to desktop Y, and SUDDENLY a program which should be on X appears on Y. When I start lots of programs and switch often between desktops, they end up in chaos and I need to collect them together and redistribute them sanely. Also, I want the first initial gnome-terminal to be on desktop 3, but I want subsequent gnome-terminals to be on the desktop where I pressed the keystroke (also configured in Openbox) that launches gnome-terminal.
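
    A hedged sketch of one common workaround using wmctrl in an autostart script (wmctrl -s switches desktops, 0-based; the sleeps are a timing hack, since windows that map later still land wherever the focus is by then):

        #!/bin/sh
        wmctrl -s 0; gnome-terminal --tab -e program1 --tab -e program2 &
        sleep 2
        wmctrl -s 1; gnome-terminal --tab -e program3 --tab -e program4 &
        sleep 2
        wmctrl -s 2; firefox &
        # windows can also be moved after the fact: wmctrl -r <window title> -t <desktop>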

    Read the article

  • Why is my nginx alias not working?

    - by Rob
    I'm trying to set up an alias so that when someone accesses /phpmyadmin/, nginx will serve it from /home/phpmyadmin/ rather than from the usual document root. However, every time I pull up the URL, it gives me a 404 on all items not served through FastCGI. FastCGI seems to be working fine, whereas the rest is not. strace is telling me nginx is trying to pull everything else from the usual document root, yet I can't figure out why. Can anyone provide some insight? Here is the relevant part of my config:

        location ~ ^/phpmyadmin/(.+\.php)$ {
            include fcgi.conf;
            fastcgi_index index.php;
            fastcgi_pass unix:/tmp/php-cgi.sock;
            fastcgi_param SCRIPT_FILENAME /home$fastcgi_script_name;
        }

        location /phpmyadmin {
            alias /home/phpmyadmin/;
        }
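
    One hedged guess: if the server block has other regex locations (a typical static-file block like "location ~* \.(css|js|png)$", for instance), those win over the plain /phpmyadmin prefix and fall back to the main root. Using the ^~ modifier and nesting the PHP handling keeps everything under the alias; a sketch:

        location ^~ /phpmyadmin {
            alias /home/phpmyadmin/;

            location ~ \.php$ {
                include fcgi.conf;
                fastcgi_index index.php;
                fastcgi_pass unix:/tmp/php-cgi.sock;
                fastcgi_param SCRIPT_FILENAME $request_filename;   # resolves against the alias
            }
        }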

    Read the article

  • How to generate good serials for DNS zones with Puppet?

    - by Bittrance
    My tradition is to set all zone serials to the timestamp at modification. Now that Puppet is my new religion, I want to set serial timestamps when building zone files from exported resources. A somewhat trivialized example may look like this:

        file { "/tmp/dafile":
            content => inline_template("<%= Time.now.to_i %>"),
        }

    The problem with this approach is that the content will be different every time, which will (ultimately) provoke a rebuild of the zone files on each Puppet config poll. Is there some way I can insert a timestamp without it being included in the data that is compared against the previous state?
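
    A hedged sketch of one way around it: keep the serial out of the Puppet-managed content entirely and only bump it when the zone data actually changes, via a refreshonly exec (the rebuild-zone helper here is hypothetical — anything that rewrites the SOA serial to the current timestamp would do):

        file { '/etc/bind/zones/db.example.com.data':
          content => template('bind/zone-records.erb'),   # zone body without the serial
          notify  => Exec['bump-serial-example.com'],
        }

        exec { 'bump-serial-example.com':
          command     => '/usr/local/bin/rebuild-zone example.com',  # hypothetical helper: writes SOA serial = current timestamp
          refreshonly => true,
        }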

    Read the article

  • NGINX - Two different rails apps under same domain

    - by Murkin
    I have two different Rails (Passenger) apps that I want to host on one server:

        somehost.com/       <-- App #1
        somehost.com/admin  <-- App #2

    I tried playing with the 'location' directive, but failed to get both to work. Can someone suggest the correct approach? (I would prefer both to share the same environment, just launched from different directories.)

    EDIT: Sample (desired) config. I am trying to do something like:

        server {
            listen 80;
            server_name myhost.com;
            rails_env production;
            passenger_enabled on;

            location / {
                root /opt/main_site/public/;
            }

            location /dev {
                root /opt/admin_site/public/;
            }
        }
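
    For what it's worth, Passenger's own mechanism for sub-URI apps is passenger_base_uri plus a symlink, rather than per-location roots; a hedged sketch (details vary by Passenger version):

        # ln -s /opt/admin_site/public /opt/main_site/public/admin
        server {
            listen 80;
            server_name myhost.com;
            root /opt/main_site/public;
            rails_env production;
            passenger_enabled on;
            passenger_base_uri /admin;     # tells Passenger the symlinked sub-URI is a second app
        }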

    Read the article

  • How can I disable chrome extensions (when Chrome is unresponsive)?

    - by John
    I'm having a problem where Chrome won't ever fully start (the cursor just spins indefinitely and I can't use any menus or buttons in Chrome). I think it is due to an extension I installed, but since the browser is unresponsive, I can't do anything through Chrome itself. Is there a flag to disable all extensions for Chrome, or a config file I can manually edit to disable extensions, so I can figure out what exactly is causing it? I'd prefer not to have to blow it away and reinstall, as I might just install the offending extension again (assuming that is what is causing the issue).
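
    Chrome does have such a flag: launching it with --disable-extensions starts the browser with all extensions off, after which the offending one can be removed from chrome://extensions. A quick sketch (install paths are the usual defaults and may differ):

        # Windows (shortcut target or Run dialog):
        "C:\Program Files\Google\Chrome\Application\chrome.exe" --disable-extensions
        # Linux:
        google-chrome --disable-extensions
        # macOS:
        open -a "Google Chrome" --args --disable-extensions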

    Read the article

  • Do best-practices say to restrict the usage of /var to sudoers?

    - by NewAlexandria
    I wrote a package and would like to use /var to persist some data. The data I'm storing could perhaps even be thought of as an addition to /var/db. The pattern I observe is that files in /var/db, and its surroundings, are owned by root. The primary (intended) use of the package is filtering cron jobs - meaning you would need permissions to edit the crontab. Should I presume a sudo install of the package? Should I have the package gracefully degrade to a /usr subdir, and if so, which one? If I 'opinionate' that any non-sudo install requires a configrc (with paths), where should the package look (presuming a shared-host environment) for that config file? Incidentally, this package is a Ruby gem, and you can find it here.
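
    Since the package is a gem, a hedged sketch of the graceful-degradation idea in Ruby (the ENV variable and directory names are hypothetical placeholders):

        require 'fileutils'

        # first writable candidate wins: explicit override, system-wide /var/db, per-user fallback
        def data_dir
          candidates = [ENV['MYPKG_DATA_DIR'], '/var/db/mypkg', File.join(Dir.home, '.mypkg')].compact
          dir = candidates.find do |path|
            begin
              FileUtils.mkdir_p(path)
              File.writable?(path)
            rescue SystemCallError    # EACCES, EROFS, ...
              false
            end
          end
          dir or raise "no writable data directory found (tried #{candidates.join(', ')})"
        end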

    Read the article

  • Disable CTRL + ALT + [LETTER] to produce accented variations

    - by Barney
    After an unknown config change, CTRL + ALT + [LETTER] has started producing common accented versions of [LETTER]. I'm not a big fan of this arrangement, seeing as I've memorized all my favourite ALT + [NUMPAD SEQUENCE] references and was used to using CTRL + ALT + [LETTER] for various other application-specific commands in my text editor. The prominent result from my searching suggests that this has something to do with a switch to an 'international keyboard', and says this can be removed in the control panel or toggled by hitting ALT + SHIFT, but I can't get my system to confirm this, and the solutions (or close approximations thereof) don't work. Specifically, I've been to Control Panel\Clock, Language, and Region\Language\Advanced settings and switched the override for default input method from 'language list' to English and I've been to Control Panel\Clock, Language, and Region\Language\Language options and made sure that I only have my one input method (UK). Other than that I'm not quite sure where to look. Any ideas?

    Read the article

  • MySQL Optimizing

    - by Thoman
    Hello, my website runs on a dedicated Intel(R) Xeon(R) E5620 (8 cores), 12 GB RAM, CentOS 32-bit / DirectAdmin, 80 GB SAS disk, PHP as php-cgi. This dedicated server runs one website using WordPress 2.92 (plus a cache plugin, etc.), with a database size of 600 MB and only about 100 users online, but my website is running very slow. Please help me with my config file (my.cnf):

        [mysqld]
        user=mysql
        key_buffer=128M
        set-variable = max_connections=1000
        socket = /var/lib/mysql/mysql.sock
        key_buffer =32M
        table_cache = 1024
        open_files_limit = 16344
        join_buffer_size = 8M
        read_buffer_size = 8M
        sort_buffer_size = 8M
        tmp_table_size=512M
        read_rnd_buffer_size=8M
        max_heap_table_size=256M
        #myisam_sort_buffer_size=256M
        thread_cache_size=8
        thread_cache=32
        query_cache_type=1
        query_cache_limit=1024M
        query_cache_size=1024M
        thread_concurrency = 16
        wait_timeout = 10
        connect_timeout = 10
        interactive_timeout = 10
        long_query_time=1
        log-slow-queries = /var/log/mysqlslowqueries.log
        max_allowed_packet=32M
        skip-innodb

        [myisamchk]
        key_buffer = 64M
        sort_buffer = 64M
        read_buffer = 16M
        write_buffer = 16M

        [isamchk]
        key_buffer=64M
        sort_buffer=64M
        read_buffer=16M
        write_buffer=16M

    And Apache
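
    Before tuning further, it may help to see what MySQL is actually spending its time on; a quick sketch of the usual first checks (the slow-query log path is the one from the config above):

        mysqladmin -u root -p extended-status processlist
        mysqldumpslow -s t -t 10 /var/log/mysqlslowqueries.log    # top 10 queries by total time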

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of backend Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users will display the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of backends, so they can not be explicitly listed anywhere other than a database. This is no problem for mod_rewrite using a prg:/ rewrite-map reverse proxy. The question is, the URLs returned by Rails helpers have the form /controller/action, making them absolute to the root. This is a problem for the page served via mod_rewrite, because links on the proxied page appear absolute to the domain. I.e.: http://www.my_host.com/app_one/controller/action has links that end up looking like /controller/action/ when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't seem to be as dynamic as I would need, since the rules need to be hard-coded into the config files. Is there a way to fix this server-side, so that the links will be routed correctly?
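
    One hedged alternative is to fix it on the backend apps rather than in the proxy: Rails (2.x-era API) can be told which prefix it is mounted under, so its URL helpers emit the /app_one/... form themselves. A sketch, assuming each backend knows its own prefix:

        # config/environments/production.rb of the app served under /app_one
        config.action_controller.relative_url_root = "/app_one"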

    Read the article

  • cPanel IPTables custom rules

    - by James Haigh
    Hi, I'm trying to allow a host access to port 3306 by IP. I've added the rule and run iptables-save and also service iptables save. These commands complete with "OK" and no reported errors, and the rule works absolutely fine. Now, the server hasn't been restarted at all since I've been having this problem, but every day when I start developing on the server that needs MySQL access, it reports that the connection is refused. Back on the MySQL server, all I need to do is service iptables restart and everything then works as normal. The MySQL server is a CentOS cPanel VPS running on OpenVZ. Does anyone know how I can make these rules persist? Is it something cPanel is doing overnight that is messing with my config? Thanks.
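
    A hedged sketch of the persistence checks (the source IP is a placeholder); it is also worth checking whether a firewall script such as CSF, if installed, rewrites the rules on a schedule:

        iptables -I INPUT -p tcp -s 203.0.113.10 --dport 3306 -j ACCEPT   # 203.0.113.10 = the developer host, placeholder
        service iptables save                       # writes /etc/sysconfig/iptables on CentOS
        grep 3306 /etc/sysconfig/iptables           # confirm the rule was actually persisted
        chkconfig --list iptables                   # and that the iptables service starts on boot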

    Read the article

  • Does nginx auth_basic work over HTTPS?

    - by monde_
    I've been trying to set up a password-protected directory in an SSL website as follows:

    /etc/nginx/sites-available/default:

        server {
            listen 443;
            ssl on;
            ssl_certificate /usr/certs/server.crt;
            ssl_certificate_key /usr/certs/server.key;
            server_name server1.example.com;
            root /var/www/example.com/htdocs/;
            index index.html;

            location /secure/ {
                auth_basic "Restricted";
                auth_basic_user_file /var/www/example.com/.htpasswd;
            }
        }

    The problem is that when I try to access the URL https://server1.example.com/secure/, I get a "404: Not Found" error page. My error.log shows the following error:

        2011/11/26 03:09:06 [error] 10913#0: *1 no user/password was provided for basic authentication, client: 192.168.0.24, server: server1.example.com, request: "GET /secure/ HTTP/1.1", host: "server1.example.com"

    However, I was able to set up password-protected directories for a normal HTTP virtual host without any problems. Is it a problem with the config or something else?
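
    auth_basic itself behaves the same over HTTPS; the 404 plus that log line suggests the request reaches the right location but the file being asked for doesn't exist, or the htpasswd file isn't usable. A hedged sketch of quick checks (assuming the nginx worker runs as www-data):

        ls -l /var/www/example.com/htdocs/secure/index.html       # the 404 may just be a missing index file
        ls -l /var/www/example.com/.htpasswd                      # exists and is readable by the worker?
        sudo -u www-data head -1 /var/www/example.com/.htpasswd
        curl -k -u someuser:somepass https://server1.example.com/secure/ -v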

    Read the article

  • How to monitor Windows Server events with Centreon Nagios

    - by Miss M
    I want to monitor events on a Windows Exchange server (Windows 2008 R2) and have installed NSClient++ so I can use Centreon/Nagios to monitor it. I did a bit of research and came across this question, which I found somewhat helpful: How to monitor Windows host with Nagios? Nick Kavadias gave a good answer, but it did not explain how to configure the Nagios config file so that it would monitor a specific service on the server. So, how do I set up a service in Nagios so that it will detect when a Windows event occurs on the server?
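
    For the "specific service" part, the stock Nagios approach is a service definition that calls check_nt against NSClient++; a hedged sketch (the host name, service description and generic-service template are placeholders taken from the standard sample configs):

        define service {
            use                  generic-service
            host_name            exchange01
            service_description  MSExchangeIS service state
            check_command        check_nt!SERVICESTATE!-d SHOWALL -l MSExchangeIS
        }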

    Read the article

  • Why doesn't phpMyAdmin connect to MySQL server?

    - by Grafica
    I'm running XAMPP on Windows 7, and when I type localhost/phpmyadmin, there is an error:

        phpMyAdmin tried to connect to the MySQL server, and the server rejected the connection. You should check the host, username and password in your configuration and make sure that they correspond to the information given by the administrator of the MySQL server.

    Here's what I did, but I'm still not able to connect:

    In config.inc.php, changed from true to false:

        $cfg['Servers'][$i]['AllowNoPassword'] = false;

    Changed the password here: localhost/security/xamppsecurity.php

    In resetroot.bat, typed the new password where it says 'password':

        echo REPLACE INTO user VALUES ('localhost', 'pma', 'password', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', 'N', '', '', '', '', 0, 0, 0, 0, '', ''); >>resetroot.sql

    Restarted Apache and MySQL. I still get the same error message. Thanks in advance!
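
    One hedged thing to double-check: with XAMPP's default auth_type of 'config', phpMyAdmin logs in with the credentials stored in config.inc.php, so after changing the MySQL password those entries have to match. A sketch of the relevant lines:

        $cfg['Servers'][$i]['auth_type'] = 'config';
        $cfg['Servers'][$i]['user']      = 'root';
        $cfg['Servers'][$i]['password']  = 'your-new-password';   // must match the password set via xamppsecurity.php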

    Read the article

  • Apache deny access to images folder, but still able to display via <img> on site

    - by jeffery_the_wind
    I have an images folder on my site, let's call it /images/, where I keep a lot of images. I don't want anyone to have direct access to the images via the web, so I put a new directive in my Apache config that achieves this:

        <Directory "/var/www/images/">
            Options Includes
            AllowOverride All
            Order allow,deny
            Deny from All
        </Directory>

    This is working, but it is blocking out ALL access, and I can't show the images anymore through my web pages. I guess this makes sense. So how do I selectively control access to these images? Basically I only want to display certain images through certain web pages and to certain users. What is the best way to do this? Do I need to save the images to the database? Tim
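
    The usual pattern here is to leave the directory blocked and stream the files through a small gatekeeper script that enforces your own rules, then reference that script in the <img> tags (e.g. <img src="/image.php?img=photo.jpg">). A hedged sketch — is_logged_in() and user_may_view() are hypothetical placeholders for whatever checks apply:

        <?php
        session_start();
        if (!is_logged_in() || !user_may_view($_GET['img'])) {   // hypothetical auth/permission checks
            http_response_code(403);
            exit;
        }
        $file = basename($_GET['img']);                 // strip any path components
        $path = '/var/www/images/' . $file;
        if (!is_file($path)) {
            http_response_code(404);
            exit;
        }
        header('Content-Type: ' . mime_content_type($path));
        header('Content-Length: ' . filesize($path));
        readfile($path);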

    Read the article
