Search Results

Search found 7082 results on 284 pages for 'trs 80'.


  • setting up tracd behind mod_proxy?

    - by FilmJ
    I'm having trouble setting up mod_proxy and tracd. Almost all the search results for this problem lead to the built-in Trac documentation page that mentions it as an option. I have several virtual hosts already running on the box in question, so running tracd on port 80 or 443 is not an option, but I do want to make my Trac server accessible on this machine without exposing an additional port through the firewall. Making things even more complicated, I have multiple Trac repositories being served by the same instance of tracd, so I want to set it up so that http://trac.abc.com is proxied to localhost:8000/projects/abcproject and http://trac.def.com is proxied to localhost:8000/projects/defproject. Currently, the setup I have below results in 100% 403 errors. The server is running as www-data, the directory where all Trac files are stored is owned by www-data, AND tracd (as shown below) is running as www-data, so I'm not sure where it's getting hung up.

    The relevant configuration in /var/apache2/sites-enabled/trac.abc.com:

        ProxyPass / http://localhost:8000/abcproject
        ProxyPassReverse / http://localhost:8000/abcproject

    The relevant configuration in /var/apache2/sites-enabled/trac.def.com:

        ProxyPass / http://localhost:8000/defproject
        ProxyPassReverse / http://localhost:8000/defproject

    The command used to start tracd:

        tracd -a defproject,/var/www/vhosts/trac-common/users.htdigest,DEFProject \
              -a abcproject,/var/www/vhosts/trac-common/users.htdigest,ABCProject \
              -p 8000 -b localhost -e /var/www/vhosts/trac-common/projects

    If I access the site at http://localhost:8000/ everything works fine, but if I try to access it via any of the proxied hosts I end up with a 403 at every turn. I've used mod_proxy successfully as described above for other servers, such as CouchDB, so maybe this has to do with the headers sent by tracd?
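
    A minimal sketch of one proxied vhost, offered as a guess rather than a confirmed fix: two details that commonly produce blanket 403s in this kind of setup are missing trailing slashes on the ProxyPass targets and a <Proxy> section that denies by default.

        <VirtualHost *:80>
            ServerName trac.abc.com
            ProxyRequests Off
            # grant access to the reverse-proxied backend
            <Proxy http://localhost:8000/*>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / http://localhost:8000/abcproject/
            ProxyPassReverse / http://localhost:8000/abcproject/
        </VirtualHost>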

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need Apache to serve static files from a directory other than /public and to give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using:

        <VirtualHost *:80>
            ServerName domain.com
            DocumentRoot /home/user/apps/testapp/public
            RewriteEngine On
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f
            RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d
            RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
        </VirtualHost>

    This serves the Rails application fine but gives a 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
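
    One detail worth checking: consecutive RewriteCond lines are ANDed by default, and no path can be both a regular file (-f) and a directory (-d), so as written the rule can never fire. A hedged sketch of the same idea with an explicit [OR] flag, testing against %{REQUEST_URI} since %{REQUEST_FILENAME} already expands to an absolute path:

        RewriteEngine On
        RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
        RewriteCond /home/user/public_html%{REQUEST_URI} -d
        RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]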

  • Apache 2.4 with PHP-FPM

    - by tubaguy50035
    I'm trying to set up Apache 2.4 with PHP-FPM 5.4 using the new modules in Apache 2.4. The following is what I currently have in my virtual host file:

        <VirtualHost *:80>
            ServerAdmin root@localhost
            DocumentRoot /var/www
            # Directory permissions
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Require all granted
            </Directory>
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    I have PHP-FPM running over a Unix socket, with the sock file located at /var/run/php5-fpm.sock. How do I proxy my requests to this socket? I've seen some sites say to use ProxyPassMatch and others say to use a RewriteRule. Are there pros and cons to either side? Also, most sites I'm seeing show ProxyPassMatch with a regex that only passes .php files. Could I also send it .html files? For whatever reason, we have a ton of PHP inside .html files.

    Edit: As noted in the comments, it looks like mod_proxy_fcgi doesn't support Unix sockets. Is there another module I should be using?
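
    For context on that edit: mod_proxy_fcgi gained Unix-socket support in Apache 2.4.9, so on earlier 2.4 builds the options were a TCP socket for PHP-FPM or a third-party module such as mod_fastcgi. On 2.4.10 or later, a sketch like this (hedged, untested) proxies both extensions to the socket without any regex:

        <FilesMatch "\.(php|html)$">
            # requires Apache >= 2.4.10 for the SetHandler proxy syntax
            SetHandler "proxy:unix:/var/run/php5-fpm.sock|fcgi://localhost"
        </FilesMatch>

    Routing every .html file through PHP-FPM works, but it does mean purely static HTML pays the FastCGI round trip as well.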

  • ffmpeg cutting video duration

    - by Steve Spence
    When using ffmpeg on Linux, my 4.3 GB, 2:21-long video is being chopped down to 1:56. I'm trying to reduce the file size, but not lose frames.

        steve@steve-OptiPlex-170L:~/Desktop$ ffmpeg -i microbe.avi microbe.mp4
        ffmpeg version 0.8.3-4:0.8.3-0ubuntu0.12.04.1, Copyright (c) 2000-2012 the Libav developers
          built on Jun 12 2012 16:37:58 with gcc 4.6.3
        * THIS PROGRAM IS DEPRECATED *
        This program is only provided for compatibility and will be removed in a future release. Please use avconv instead.
        Input #0, avi, from 'microbe.avi':
          Duration: 00:02:21.80, start: 0.000000, bitrate: 242311 kb/s
            Stream #0.0: Video: rawvideo, bgr24, 1280x960, 10 tbr, 10 tbn, 10 tbc
        Incompatible pixel format 'bgr24' for codec 'mpeg4', auto-selecting format 'yuv420p'
        [buffer @ 0x9f861e0] w:1280 h:960 pixfmt:bgr24
        [avsink @ 0x9f86440] auto-inserting filter 'auto-inserted scaler 0' between the filter 'src' and the filter 'out'
        [scale @ 0x9f7d800] w:1280 h:960 fmt:bgr24 -> w:1280 h:960 fmt:yuv420p flags:0x4
        Output #0, mp4, to 'microbe.mp4':
          Metadata:
            encoder : Lavf53.21.0
            Stream #0.0: Video: mpeg4, yuv420p, 1280x960, q=2-31, 200 kb/s, 10 tbn, 10 tbc
        Stream mapping:
          Stream #0.0 -> #0.0
        Press ctrl-c to stop encoding
        frame= 1164 fps=  6 q=31.0 Lsize=    3775kB time=116.40 bitrate= 265.7kbits/s
        video:3765kB audio:0kB global headers:0kB muxing overhead 0.272870%
        steve@steve-OptiPlex-170L:~/Desktop$
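
    The final status line is the telling detail: 1164 frames at the input's 10 fps is 116.4 seconds, i.e. exactly the 1:56 that remains, so the encode is stopping early rather than dropping frames mid-stream. A hedged next step, since this build announces itself as deprecated, is to retry with the tool the banner recommends and an explicit quality setting (the quality values here are illustrative):

        avconv -i microbe.avi -c:v mpeg4 -qscale 5 microbe.mp4

    or, with a newer ffmpeg that has x264 support built in:

        ffmpeg -i microbe.avi -c:v libx264 -crf 23 -pix_fmt yuv420p microbe.mp4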

  • Graphite not running

    - by River
    I'm currently trying to install Graphite 0.9.9 on a Gentoo box using these instructions from the Graphite wiki. Essentially, it fronts Graphite with Apache and mod_wsgi. Everything seems to have gone well, except that Apache / the Graphite webapp never returns a response to the web browser (the browser waits indefinitely for the page to load). I've turned on Graphite's debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up Graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the Graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
            ServerName # Server name
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            # I've found that an equal number of processes & threads tends
            # to show the best performance for Graphite (ymmv).
            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            # must change @DJANGO_ROOT@ to be the path to your django
            # installation, which is probably something like:
            # /usr/lib/python2.6/site-packages/django
            Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache. It won't
            # be visible to clients because of the DocumentRoot though.
            <Directory /opt/graphite/conf/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
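
    A hedged place to start, given that the hang is silent: with WSGISocketPrefix set to /etc/httpd/wsgi/, mod_wsgi must be able to create its daemon sockets in that directory, and requests stall in exactly this way when it can't. The constantly changing pid also suggests the daemon processes are dying and being respawned, which a crashing C extension would explain. Quick checks (the apache user/group name is a guess; match your distro's):

        ls -ld /etc/httpd/wsgi/        # must exist and be writable by the Apache user
        sudo chown apache:apache /etc/httpd/wsgi/
        dmesg | tail                   # segfaulting daemon processes show up here
        sudo tail -f /opt/graphite/storage/log/webapp/error.log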

  • RTNETLINK answers: File exists... maybe because I assigned a new MAC address

    - by steven
    I got a "RTNETLINK answers: File exists Failed to bring up eth0:1" on "ifup eth0:1". I suspect it happens because i assigned a new mac adress in my VM's network adapter. Can you tell me how to fix the issue? My configuration looks like this: # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 allow-hotplug eth0 iface eth0 inet static address 192.168.1.80 netmask 255.255.255.0 gateway 192.168.1.1 dns-nameservers 192.168.1.1 # Alias being connected to 192.168.10.x Network auto eth0:1 allow-hotplug eth0:1 iface eth0:1 inet static address 192.168.10.83 netmask 255.255.255.0 gateway 192.168.10.10 dns-nameservers 192.168.10.1 Why do I get "RTNETLINK answer: File exists.." suddenly? I worked with this configuration before without problems. All i did in the past is to renew the adapters mac adress. At the moment I am connected to the 192.168.10.x Network and if I do /etc/init.d/networking stop /etc/init.d/networking start then i got "RTNETLINK [...] falied to bring up eth0:1" but the strage thing is that i am able to connect to 192.168.10.83 via ssh from my host machine. But I cannot reach the internet from the debian client. I hope it is clear what my problem is, now. update if i change my /etc/network/interfaces like this then "ifup eth0" fails, too with the same error! # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 allow-hotplug eth0 iface eth0 inet static address 192.168.10.83 netmask 255.255.255.0 gateway 192.168.10.10 dns-nameservers 192.168.10.1 with verbose option enabled i got: Configuring interfache eth0=eth0 (inet) run-parts --verbose /etc/network/if-pre-up.d ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0 RTNETLINK answers: File exists Failed to bring up eth0. same if i type this manually: ip addr add 192.168.10.83/255.255.255.0 broadcast 192.168.10.255 dev eth0 label eth0

  • nginx phpmyadmin 404

    - by borannb
    I am trying to install phpMyAdmin on my nginx web server. I installed phpMyAdmin without a problem and created a subdomain for it. For security reasons I didn't call the subdomain "phpmyadmin"; I used a different name. Then I used this config for the subdomain:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            access_log off;
            error_log /srv/www/myphpmyadminsubdomain/error.log;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                include fastcgi_params;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                fastcgi_pass php;
            }

            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }

            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }

            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }
        }

    Then I enabled it like this:

        /etc/nginx/sites-available/myphpmyadminsubdomain -> /etc/nginx/sites-enabled/myphpmyadminsubdomain

    I have restarted nginx and gone to myphpmyadminsubdomain.domain.com, and it gives me an nginx 404 Not Found error. What am I doing wrong?
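
    A hedged reading of the config above: root is declared only inside location /, so in the location ~ \.php$ block $document_root is empty and try_files $uri =404 can never find the script, hence the 404. A sketch of the fix is simply hoisting root and index to server level so every location inherits them:

        server {
            listen 80;
            server_name myphpmyadminsubdomain.domain.com;
            root /usr/share/phpmyadmin;
            index index.php;
            ...
        }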

  • Java website on Tomcat, PHP website on Apache - how to get PHP web pages into Java web pages?

    - by Venkat
    We have a Java web application deployed on Tomcat. We also set up Apache and mod_proxy_ajp to route web requests (ports 80/443) to Tomcat. We would now like to deploy a PHP application on the same Apache server, probably under a subdirectory (/var/www/ourapp), and then access & display web pages from the PHP application within web pages generated by the Java application. We are planning to implement single sign-on as well.

    Example: a web page from Java has jQuery tabs, and we would like to display the PHP web page within a tab while all the other HTML comes from the Java application.

    Can you please give an overall picture of how to proceed? Mainly:

    1. How should we install/set up the PHP application on the same Apache server that routes web requests to Tomcat - as a subdomain, or installed in a subdirectory (the sketch below shows the latter)?
    2. How do we bring the PHP pages into the existing (Java-generated) web pages? Can we use AJAX requests, or should we go for a Java-PHP bridge such as Quercus?

    Thank you for your time in advance. Regards.
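
    For point 1, the usual Apache pattern is to exempt the PHP application's path from the catch-all proxying to Tomcat; mod_proxy exclusions must appear before the rule they carve out of. A hedged sketch (the /phpapp path is a placeholder):

        Alias /phpapp /var/www/ourapp
        ProxyPass /phpapp !
        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    Serving the PHP app from the same host name also keeps AJAX calls (point 2) same-origin, so a jQuery tab can simply load /phpapp/page.php without needing a bridge.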

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then seven times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
            -T 127.0.0.1:6082 \
            -s malloc,1GB \
            -f /home/deploy/mysite.vcl \
            -u deploy \
            -g deploy \
            -p obj_workspace=4096 \
            -p sess_workspace=262144 \
            -p listen_depth=2048 \
            -p overflow_max=2000 \
            -p ping_interval=2 \
            -p log_hashstring=off \
            -h classic,5000009 \
            -p thread_pool_max=1000 \
            -p lru_interval=60 \
            -p esi_syntax=0x00000003 \
            -p sess_timeout=10 \
            -p thread_pools=1 \
            -p thread_pool_min=100 \
            -p shm_workspace=32768 \
            -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
            .host = "127.0.0.1";
            .port = "81";
        }

        sub vcl_recv {
            # Don't cache the /useradmin or /admin path
            if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
                pipe;
            }
            # If cache is 'regenerating' then allow for old cache to be served
            set req.grace = 2m;
            # Forward to cache lookup
            lookup;
        }

        # This should be obvious
        sub vcl_hit {
            deliver;
        }

        sub vcl_fetch {
            # See link #16, allow for old cache serving
            set obj.grace = 2m;
            if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
                deliver;
            }
            remove obj.http.Set-Cookie;
            remove obj.http.Etag;
            set obj.http.Cache-Control = "no-cache";
            set obj.ttl = 7d;
            deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.
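
    One hedged thing to rule out before digging into the config: a silent death with nothing in syslog is the classic signature of the kernel OOM killer, and -s malloc,1GB plus a classic hash sized at 5,000,009 buckets is a lot to ask of a small test VM. Quick checks after a failed start:

        dmesg | grep -i -E 'oom|kill'    # OOM kills are logged by the kernel, not by varnish
        free -m                          # how much headroom does the VM actually have?

    If memory turns out to be the culprit, shrinking -s malloc and dropping the -h classic,... override (the default hash size is fine for most workloads) would be the cheapest experiment.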

  • linux routing issue

    - by Duc To
    Hi! I have two Linksys routers running Linux with the Tomato firmware. Both have internet lines plugged in, but only one acts as the DHCP server (router 1). What I need to achieve is this: all packets from internal IPs that reach router 1 for internet access should go out on its internet line, except that if router 1 detects packets for one specific port (for example HTTP, port 80), it should redirect those packets to router 2, which sends them out to the internet from there.

    I have found some documents whose solution requires a Linux server with two ethernet cards, with both internet lines plugged into that server and routing done there, but I do not want to do that: my boss does not want the extra work of maintaining that server, and besides, he says, the router itself is already a Linux box, so why not use it? I tend to agree with his points.

    Can it be done, or is a separate Linux server acting as a router a must? Thank you all in advance; I really look forward to your replies. I am a newbie to Linux networking and this seems to be beyond my capacity to solve :( Yours sincerely, Duc To
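
    This is doable on router 1 itself with iptables packet marking plus policy routing, both of which Tomato's Linux kernel supports. A hedged sketch - the interface name and router 2's address are assumptions (br0 is Tomato's usual LAN bridge; replace 192.168.1.2 with wherever router 2 sits on the LAN):

        # mark LAN packets destined for port 80
        iptables -t mangle -A PREROUTING -i br0 -p tcp --dport 80 -j MARK --set-mark 1

        # send marked packets through a second routing table whose default gateway is router 2
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.2 table 100
        ip route flush cache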

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services listen on only one of them, and to check them correctly I'd like to know whether it's possible to have multiple IP addresses configured for a single host in Icinga. Here's a minimal example.

    Remote server:

        eth0: 1.2.3.4 (public IP)
        eth1: 10.1.2.3 (private IP, secure tunnel)

        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

        eth0: 10.2.3.4 (private IP, internet access)

    Now if I define a host:

        define host {
            use        generic-host
            host_name  server1
            alias      server1.gertvandijk.net
            address    10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use        generic-host
            host_name  server1-public
            alias      server1.gertvandijk.net
            address    1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts so they show up as a single host, while still providing an easy configuration to check the services on their proper addresses. What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out what the current way to address it is. Solutions go back to 2003, but I'm running Icinga 1.7.1, which is already capable of the address6 option - yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as the one on 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
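
    One idiom that keeps a single host object is to hang the second address off the host as a custom variable and reference the resulting macro from the checks that need it; custom variables (any directive starting with _) are standard object syntax in Nagios/Icinga 1.x. A hedged sketch - the variable and command names are my own choices, not built-ins:

        define host {
            use              generic-host
            host_name        server1
            alias            server1.gertvandijk.net
            address          10.1.2.3    ; tunnel address, used by default
            _public_address  1.2.3.4     ; custom variable, exposed as $_HOSTPUBLIC_ADDRESS$
        }

        define command {
            command_name  check_http_public
            command_line  $USER1$/check_http -I $_HOSTPUBLIC_ADDRESS$
        }

        define service {
            use                  generic-service
            host_name            server1
            service_description  HTTP
            check_command        check_http_public
        }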

  • How to avoid index.php in Zend Framework route using Nginx rewrite

    - by Adam Benayoun
    I am trying to get rid of index.php in the default Zend Framework route. I think it should be corrected at the server level and not in the application (correct me if I am wrong, but I think doing it on the server side is more efficient). I run nginx 0.7.1 and php-fpm 5.3.3. This is my nginx configuration:

        server {
            listen *:80;
            server_name domain;
            root /path/to/http;
            index index.php;
            client_max_body_size 30m;

            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            location /min {
                try_files $uri $uri/ /min/index.php?q=;
            }

            location /blog {
                try_files $uri $uri/ /blog/index.php;
            }

            location /apc {
                try_files $uri $uri/ /apc.php$args;
            }

            location ~ \.php {
                include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SERVER_NAME $http_host;
                fastcgi_pass 127.0.0.1:9000;
            }

            location ~* ^.+\.(ht|svn)$ {
                deny all;
            }

            # Static files location
            location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                expires max;
            }
        }

    Basically www.domain.com/index.php/path/to/url and www.domain.com/path/to/url serve the same content. I'd like to fix this using an nginx rewrite. Any help will be appreciated.
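
    Since try_files already routes clean URLs to the front controller internally, what's left is to 301 external /index.php/... requests back to the canonical form. A hedged, untested sketch; the trick is guarding on $request_uri, which holds the original client request and is not changed by internal redirects, so the try_files fallback to /index.php is unaffected. It must be placed above the existing location ~ \.php block, since the first matching regex location wins:

        location ~ ^/index\.php(/|$) {
            if ($request_uri ~ "^/index\.php(/.*)?$") {
                rewrite ^/index\.php/?(.*)$ /$1 permanent;
            }
            include /usr/local/etc/nginx/conf.d/params/fastcgi_params_local;
            fastcgi_param SCRIPT_FILENAME $document_root/index.php;
            fastcgi_param SERVER_NAME $http_host;
            fastcgi_pass 127.0.0.1:9000;
        }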

  • TCPDump and IPTables DROP by string

    - by Tiffany Walker
    By using

        tcpdump -nlASX -s 0 -vvv port 80

    I get something like:

        14:58:55.121160 IP (tos 0x0, ttl 64, id 49764, offset 0, flags [DF], proto TCP (6), length 1480)
            206.72.206.58.http > 2.187.196.7.4624: Flags [.], cksum 0x6900 (incorrect -> 0xcd18), seq 1672149449:1672150889, ack 4202197968, win 15340, length 1440
            0x0000:  4500 05c8 c264 4000 4006 0f86 ce48 ce3a  E....d@.@....H.:
            0x0010:  02bb c407 0050 1210 63aa f9c9 fa78 73d0  .....P..c....xs.
            0x0020:  5010 3bec 6900 0000 0f29 95cc fac4 2854  P.;.i....)....(T
            0x0030:  c0e7 3384 e89a 74fa 8d8c a069 f93f fc40  ..3...t....i.?.@
            0x0040:  1561 af61 1cf3 0d9c 3460 aa23 0b54 aac0  .a.a....4`.#.T..
            0x0050:  5090 ced1 b7bf 8857 c476 e1c0 8814 81ed  P......W.v......
            0x0060:  9e85 87e8 d693 b637 bd3a 56ef c5fa 77e8  .......7.:V...w.
            0x0070:  3035 743a 283e 89c7 ced8 c7c1 cff9 6ca3  05t:(>........l.
            0x0080:  5f3f 0162 ebf1 419e c410 7180 7cd0 29e1  _?.b..A...q.|.).
            0x0090:  fec9 c708 0f01 9b2f a96b 20fe b95a 31cf  ......./.k...Z1.
            0x00a0:  8166 3612 bac9 4e8d 7087 4974 0063 1270  .f6...N.p.It.c.p

    What do I pull from that to block via string with iptables? Or is there a better way to block attacks that have something in common? The question is: can I pick any piece of that IP packet and call it a string?

        iptables -A INPUT -m string --algo bm --string attack_string -j DROP

    In other words: in some cases I can ban with TTL=xxx and use that, should an attack have the same TTL. Sure, it will block some legitimate packets, but if it means keeping the box up, it works until the attack goes away. I would like to LEARN how to FIND other common things in a packet to block on with iptables.
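
    To answer the literal question: the string match scans packet payload for a byte sequence, so the thing to pull from the capture is a stable token visible in the right-hand ASCII column (a request path, header, or user-agent fragment that only the attack traffic carries), not arbitrary bytes from the IP/TCP headers. Hedged examples - the matched values are placeholders:

        # drop packets to port 80 whose payload contains a distinctive token
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "attack_string" -j DROP

        # non-printable payload bytes can be matched with --hex-string
        iptables -A INPUT -p tcp --dport 80 -m string --algo bm --hex-string "|0f2995cc|" -j DROP

        # the TTL idea, with the dedicated match
        iptables -A INPUT -m ttl --ttl-eq 64 -j DROP

    Header fields like TTL have their own matches (-m ttl, -m length, -m u32) rather than going through the string matcher.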

  • iptables -- OK, **now** am I doing it right?

    - by Agvorth
    This is a follow-up to a previous question where I asked whether my iptables config is correct. It's a CentOS 5.3 system. Intended result: block everything except ping, ssh, Apache, and SSL. Based on xenoterracide's advice and the other responses to the question (thanks, guys), I created this script:

        # Establish a clean slate
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -F # Flush all rules
        iptables -X # Delete all chains

        # Disable routing. Drop packets if they reach the end of the chain.
        iptables -P FORWARD DROP

        # Drop all packets with a bad state
        iptables -A INPUT -m state --state INVALID -j DROP

        # Accept any packets that have something to do with ones we've sent on outbound
        iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

        # Accept any packets coming or going on localhost (this can be very important)
        iptables -A INPUT -i lo -j ACCEPT

        # Accept ICMP
        iptables -A INPUT -p icmp -j ACCEPT

        # Allow ssh
        iptables -A INPUT -p tcp --dport 22 -j ACCEPT

        # Allow httpd
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT

        # Allow SSL
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT

        # Block all other traffic
        iptables -A INPUT -j DROP

    Now when I list the rules I get:

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target  prot opt in   out  source    destination
            0     0 DROP    all  --  any  any  anywhere  anywhere     state INVALID
            9   612 ACCEPT  all  --  any  any  anywhere  anywhere     state RELATED,ESTABLISHED
            0     0 ACCEPT  all  --  lo   any  anywhere  anywhere
            0     0 ACCEPT  icmp --  any  any  anywhere  anywhere
            0     0 ACCEPT  tcp  --  any  any  anywhere  anywhere     tcp dpt:ssh
            0     0 ACCEPT  tcp  --  any  any  anywhere  anywhere     tcp dpt:http
            0     0 ACCEPT  tcp  --  any  any  anywhere  anywhere     tcp dpt:https
            0     0 DROP    all  --  any  any  anywhere  anywhere

        Chain FORWARD (policy DROP 0 packets, 0 bytes)
         pkts bytes target  prot opt in   out  source    destination

        Chain OUTPUT (policy ACCEPT 5 packets, 644 bytes)
         pkts bytes target  prot opt in   out  source    destination

    I ran it and I can still log in, so that's good. Anyone notice anything major out of whack?

  • AJP Connector Apache-Tomcat with php and java application

    - by Safari
    I have a question about the proxy and AJP modules. On my machine I have an Apache web server and a Tomcat servlet container. On Tomcat a Java web application of mine is running. On Apache I have some services, which I can call this way:

        http://myhost/service1
        http://myhost/service2
        http://myhost/service3

    I would like to configure an AJP connector so that http://myhost calls my Tomcat web application from Apache. So I configured Apache this way, and I get what I wanted: I can use http://myhost to view the Tomcat webapp through Apache.

        <VirtualHost *:80>
            ProxyRequests off
            ProxyPreserveHost On
            ServerAlias myserveralias
            ErrorLog logs/error.log
            CustomLog logs/access.log common

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            ProxyPass /server-status !
            ProxyPass /balancer-manager !
            ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1

            <Proxy balancer://mycluster>
                BalancerMember ajp://myIp:8009 min=10 max=100 route=portale loadfactor=1
                ProxySet lbmethod=bytraffic
            </Proxy>

            <Location /balancer-manager>
                SetHandler balancer-manager
                Order deny,allow
                Allow from localhost
            </Location>

            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            LogFormat "%{Referer}i -> %U" referer
            LogFormat "%{User-agent}i" agent
        </VirtualHost>

    But now I can't use the Apache services: if I request http://myhost/service1, I get an error because Apache tries to find service1 on the Tomcat webapp. Is there a way to fix this?
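
    The config already uses the needed mechanism for /server-status and /balancer-manager: a ProxyPass exclusion. mod_proxy evaluates ProxyPass directives in order, so adding ! rules for the services above the catch-all should hand them back to Apache. A sketch with the paths from the question:

        ProxyPass /service1 !
        ProxyPass /service2 !
        ProxyPass /service3 !
        ProxyPass / balancer://mycluster/ stickysession=JSESSIONID nofailover=Off maxattempts=1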

  • Having trouble keeping a 1GB RAM Centos server running

    - by Josh
    This is my first time configuring a VPS server and I'm having a few issues. We're running Wordpress on a 1 GB CentOS server configured per the internet (online research). No custom queries or anything crazy, but we're closing in on 8K posts. At arbitrary intervals, the server just goes down. From the client side, it just says "Loading..." and will spin more or less indefinitely. On the server side, the shell locks completely. We have to do a hard reboot from the control panel and then everything is fine.

    Watching "top" I see it hovering between 35-55% memory usage generally, with occasional spikes up to around 80%. When I saw it go down, there were about 30-40 Apache processes showing, which pushed memory over the edge. "error_log" tells me that MaxClients was reached right before each reboot instance. I've tried tinkering with that, but to no avail. I think we'll probably need to bump the server up to the next RAM level, but with ~120K pageviews per month that seems like overkill, since it was running fairly well on a shared server before. Any ideas? httpd.conf and my.cnf values to add? I'll update this with the current ones if that helps. Thanks in advance! This has been a fun and important learning experience but, overall, quite frustrating!

    Edit: quick top snapshot:

        top - 15:18:15 up 2 days, 13:04,  1 user,  load average: 0.56, 0.44, 0.38
        Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
        Cpu(s):  6.7%us,  3.5%sy,  0.0%ni, 89.6%id,  0.0%wa,  0.0%hi,  0.1%si,  0.0%st
        Mem:   2051088k total,   736708k used,  1314380k free,   199576k buffers
        Swap:  4194300k total,        0k used,  4194300k free,   287688k cached
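
    The usual arithmetic for this symptom, hedged because the per-process size depends on the PHP setup: take the RAM you can spare for Apache, divide by the resident size of one httpd process (check the RES column in top; 30-50 MB is common for Wordpress under mod_php), and cap MaxClients there so the box can never swap itself into a hard lock. Illustrative prefork values for roughly 700 MB of Apache headroom at ~35 MB per process:

        # httpd.conf (prefork MPM) - values are examples, size them from top
        <IfModule prefork.c>
            StartServers            4
            MinSpareServers         4
            MaxSpareServers         8
            ServerLimit            20
            MaxClients             20
            MaxRequestsPerChild  1000
        </IfModule>

    MaxRequestsPerChild recycles processes that grow over time, and a low MaxClients makes clients queue briefly instead of taking the whole machine down.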

  • Multiple rack apps on nginx + passenger, one as root, the other not...config help

    - by cannikin
    So I've got two apps I want to run on a server. One app I would like to be the "default" app - that is, all URLs should be sent to this app by default, except for a certain path, let's call it /foo:

        http://mydomain.com/        -> app1
        http://mydomain.com/apples  -> app1
        http://mydomain.com/foo     -> app2

    My two rack apps are installed like so:

        /var
          /www
            /apps
              /app1
                app.rb
                config.ru
                /public
              /app2
                app.rb
                config.ru
                /public
            app1 -> apps/app1/public
            app2 -> apps/app2/public

    (app1 and app2 are symlinks to their respective apps' public directories). This is the Passenger setup for sub-URIs described here: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#deploying_rack_to_sub_uri

    With the following config I've got /foo going to app2:

        server {
            listen 80;
            server_name mydomain.com;
            root /var/www;
            passenger_enabled on;
            passenger_base_uri /app1;
            passenger_base_uri /app2;

            location /foo {
                rewrite ^.*$ /app2 last;
            }
        }

    Now, how do I get app1 to pick up everything else? I've tried the following (placed after the location /foo directive), but I get a 500 with an infinite internal redirect in error.log:

        location / {
            rewrite ^(.*)$ /app1$1 last;
        }

    I hoped that the last directive would prevent that infinite redirect, but I guess not. /foo gets the same error. Any ideas? Thanks!
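
    The infinite redirect follows from how rewrite ... last works: it restarts location matching, the rewritten /app1/... URI matches location / again, gains another /app1 prefix, and so on until nginx gives up with a 500. A hedged restructuring that needs no rewrites at all: let app1 own the root by pointing the vhost root at its public directory, and mount app2 at /foo by naming its symlink foo, since Passenger's documented sub-URI mechanism keys off the symlink name inside the document root:

        # ln -s /var/www/apps/app2/public /var/www/apps/app1/public/foo
        server {
            listen 80;
            server_name mydomain.com;
            root /var/www/apps/app1/public;
            passenger_enabled on;
            passenger_base_uri /foo;
        }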

  • Guide To Setting Up And Accessing PhpMyAdmin On NGINX, Ubuntu 11.04, EC2 With A Remote MySQL Instance

    - by darkAsPitch
    I have set up a domain name on Amazon EC2 running Ubuntu 11.04, nginx, and php5-fpm. The domain name works great: I have set up its own sites-available configuration file and symlinked it to sites-enabled. I installed phpmyadmin via sudo apt-get install phpmyadmin and followed the instructions. I then added this server block to my /etc/nginx/nginx.conf file and restarted nginx:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;

            location / {
                root /usr/share/phpmyadmin;
                index index.php;
            }

            # make sure all php files are processed by fast_cgi
            location ~ \.php {
                # try_files $uri =404;
                fastcgi_index index.php;
                fastcgi_pass 127.0.0.1:9000;
                include fastcgi_params;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I have also added the appropriate DNS A records for phpmyadmin.domain.com, but phpmyadmin.domain.com just shows a 404 error code. All other subdomains do not respond at all, so at least something is working here. FYI, I have edited the /etc/phpmyadmin/config.inc.php file so that I can connect to a remote MySQL database. What else do I need to do?
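
    Two hedged guesses from the block as written. First, a server block pasted straight into nginx.conf only counts if it lands inside the http { } context; putting it in its own file under /etc/nginx/sites-available/ with a symlink in sites-enabled/, like the working domain, avoids that trap, and nginx -t will confirm the file actually parses. Second, root lives only in location /, so the PHP location resolves SCRIPT_FILENAME against an empty document root and phpMyAdmin's index.php is never found; hoisting it to server level fixes both locations at once:

        server {
            listen 80;
            server_name phpmyadmin.domain.com;
            root /usr/share/phpmyadmin;
            index index.php;
            ...
        }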

  • How to host multiple FLEX applications in IIS7

    - by Devtron
    Hello, I manually deploy a FLEX application to my web server (IIS 7). There are two virtual directories: 1.) Default and 2.) myFlexApp1. myFlexApp1 is where my working FLEX application resides. I now need to deploy a different FLEX application (let's call it myFlexApp2) to the same web server. I set up a virtual directory for [myFlexApp2] and it complains about the bindings using port 80, which is already used by [myFlexApp1]. I have tried to give them separate host names in their binding properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get [myFlexApp2] to display from an external browser. I was able to get only one or the other to display, but never could run both. Here is what I need:

        myFlexApp1.mydomain.com -> myFlexApp1
        calendar.mydomain.com   -> myFlexApp2
        test.mydomain.com       -> myFlexApp1

    where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried to edit the hosts file in [C:\Windows\System32\drivers\etc], but that didn't work either. How can I serve up two FLEX applications on IIS 7? It shouldn't be this hard!
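
    What makes this work in IIS 7 is giving each site its own host-name binding on port 80; two sites can share the port as long as no site claims a blank host name on *:80. A hedged sketch using appcmd (the site names are placeholders for whatever the sites are called in IIS Manager):

        %windir%\system32\inetsrv\appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:myflexapp1.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site "myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site "myFlexApp2" /+bindings.[protocol='http',bindingInformation='*:80:calendar.mydomain.com']

    The DNS A records for all three names must point at the server's public IP; editing the server's own hosts file only affects lookups made from the server itself, never from external browsers.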

  • Can't get DNS Alias work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use a DNS alias to point one of my domains at a specific directory on the server. Here is what I've done. First, I changed the IP address in the domain's settings, and that works:

        $ ping www.example.com
        PING example.com (124.205.62.xxx): 56 data bytes
        64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms
        64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms
        ^C
        --- example.com ping statistics ---
        2 packets transmitted, 2 packets received, 0.0% packet loss
        round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms

    Then I added the site to sites-available and sites-enabled:

        $ ls -l /etc/apache2/sites-available/
        total 16
        -rw-r--r-- 1 root root  948 2010-04-14 03:27 default
        -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl
        -rw-r--r-- 1 root root  365 2010-06-09 18:27 example.com

        $ ls -l /etc/nginx/sites-enabled/
        total 0
        lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default
        lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com

    But it doesn't work, and when I open www.example.com in the browser it shows a 111 error:

        The following error was encountered:
        Connection to 124.205.62.48 Failed
        The system returned: (111) Connection refused

    Here is example.com's config:

        $ cat /etc/apache2/sites-enabled/001-example.com
        <VirtualHost *:80>
            DocumentRoot "/vhosts/example.com/htdocs/"
            ServerName www.example.com
            ServerAlias example.com

            <Location />
                Order Deny,Allow
                Deny from None
                Allow from all
            </Location>

            #Include /etc/phpmyadmin/apache.conf
            ErrorLog /vhosts/example.com/logs/error.log
            CustomLog /vhosts/example.com/logs/access.log combined

    Could you please tell me how to solve this?
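
    Error 111 means the TCP connection was refused outright, so nothing is listening on port 80 at that IP: this is an Apache-not-running (or not-listening) problem rather than a vhost-routing one. Hedged first checks:

        sudo apache2ctl configtest      # a parse error stops Apache from starting at all
        sudo netstat -tlnp | grep :80   # is anything bound to port 80?
        sudo /etc/init.d/apache2 restart
        tail /var/log/apache2/error.log

    Note that as pasted above, the vhost file has no closing </VirtualHost> tag; if that is true of the real file and not just the paste, configtest will say so and Apache will refuse to start, which would produce exactly this symptom.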

  • Hard Disk Not Counting Reallocated Sectors

    - by MetaNova
    I have a drive that reports a current pending sector count of 45. I have used badblocks to identify the sectors, and I have been trying to write zeros to them with dd. From what I understand, when I attempt to write data directly to a bad sector, it should trigger a reallocation, reducing the current pending sector count by one and increasing the reallocated sector count. However, on this disk both the Reallocated_Sector_Ct and Reallocated_Event_Count raw values are 0, and dd fails with I/O errors when I attempt to write zeros to the bad sectors. dd works fine, however, when I write to a good sector.

        # dd if=/dev/zero of=/dev/sdb bs=512 count=1 seek=217152
        dd: error writing '/dev/sdb': Input/output error

    Does this mean that my drive, in some way, has no spare sectors to be used for reallocation? Is my drive just in general a terrible person? (The drive isn't actually mine; I'm helping a friend out. They might have just gotten a cheap drive or something.) In case it is relevant, here is the output of smartctl -i:

        Model Family:     Western Digital Caviar Green (AF)
        Device Model:     WDC WD15EARS-00Z5B1
        Serial Number:    WD-WMAVU3027748
        LU WWN Device Id: 5 0014ee 25998d213
        Firmware Version: 80.00A80
        User Capacity:    1,500,301,910,016 bytes [1.50 TB]
        Sector Size:      512 bytes logical/physical
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   ATA8-ACS (minor revision not indicated)
        SATA Version is:  SATA 2.6, 3.0 Gb/s
        Local Time is:    Fri Oct 18 17:47:29 2013 CDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

    UPDATE: I have run shred on the disk, which has caused Current_Pending_Sector to go to zero. However, Reallocated_Sector_Ct and Reallocated_Event_Count are still zero, and dd is now able to write data to the sectors it was previously unable to. This leaves me with several other questions:

    1. Why aren't the reallocations being recorded by the disk? I'm assuming the reallocation took place, as I can now write data directly to the sector and couldn't before.
    2. Why did shred cause reallocation and not dd? Does the fact that shred writes random data instead of just zeros make a difference?
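
    A hedged explanation for the zero counters: a drive only increments Reallocated_Sector_Ct when it actually remaps a sector to a spare. If the rewrite of the original location succeeds after all - which happens when the pending sector was a transient ("soft") error rather than physical damage - the pending count drops and nothing is reallocated, which matches what shred produced here. That it was shred's multiple full-disk passes rather than its random data that did the trick is a reasonable guess; a single whole-disk zeroing pass often has the same effect. For poking at one sector in isolation, hdparm can read and write below the filesystem and cache layers (destructive to that sector's contents):

        sudo hdparm --read-sector 217152 /dev/sdb
        sudo hdparm --write-sector 217152 --yes-i-know-what-i-am-doing /dev/sdb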

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up two Squid servers to act as a reverse proxy and cache for a webserver on our intranet. Load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try to contact each other to see if they have the requested object in cache before contacting the webserver for it (the network serving the webserver is the bottleneck, and I'm trying to eliminate it). Both squids are configured the same; here are the relevant config lines:

        acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com
        acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com
        acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com

        http_access allow dvr1_cache_it_best_tv_com
        http_access allow squid1_it_best_tv_com
        http_access allow squid2_it_best_tv_com
        http_access allow all

        http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com

        cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com
        cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com
        cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com

        cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com
        cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com

    Just to make it clear: dvr1.cache is the alias for the proxy servers, and dvr1.origin is the web server. Both servers work; both serve content, cache it, and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (dvr1.origin) instead of going to the sibling squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the squid manuals, and as far as I can see I did it all by the book, yet it doesn't work right. Any help will be appreciated!
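
    Two hedged observations. First, sibling lookups ride on ICP, and naming port 3130 in cache_peer only tells each squid where to send queries; the other squid also has to listen there and permit them, roughly:

        icp_port 3130
        acl other_squid src 10.0.0.2/32    # placeholder for the sibling's IP
        icp_access allow other_squid

    Second, the cache_peer_access lines reference Proxy_squid1_... and Proxy_squid2_... while the peers are actually named Proxy_Squid1_... and Proxy_Squid2_... (capital S); if those names are matched case-sensitively, the access rules never attach to the sibling peers at all, and traffic falls through to the parent.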

  • Nginx configuration leads to endless redirect loop

    - by brianthecoder
    So I've looked at every sample configuration I could find, and yet every time I try to view a page that requires SSL, I end up in a redirect loop. I'm running nginx/0.8.53 and Passenger 3.0.2. Here's the SSL config:

        server {
            listen 443 default ssl;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;
            ssl_certificate /home/app/ssl/<redacted>.com.pem;
            ssl_certificate_key /home/app/ssl/<redacted>.key;

            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X_FORWARDED_PROTO https;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header X-Url-Scheme $scheme;
            proxy_redirect off;
            proxy_max_temp_file_size 0;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Here's the non-SSL config:

        server {
            listen 80;
            server_name <redacted>.com www.<redacted>.com;
            root /home/app/<redacted>/public;
            passenger_enabled on;
            rails_env production;

            location /blog {
                rewrite ^/blog(/.*)?$ http://blog.<redacted>.com/$1 permanent;
            }

            location ~* \.(js|css|jpg|jpeg|gif|png)$ {
                if (-f $request_filename) {
                    expires max;
                    break;
                }
            }

            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    Let me know if there's any additional info I can give to help diagnose the issue.
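
    A hedged diagnosis: with passenger_enabled on there is no proxy_pass in play, so all of the proxy_set_header lines are inert - Rails never sees X-Forwarded-Proto, concludes every request is plain HTTP, and its SSL enforcement keeps redirecting to https in a loop. Passenger 3 has its own directive for injecting request environment into the app; in the 443 server block, something like:

        passenger_set_cgi_param HTTP_X_FORWARDED_PROTO https;

    should let Rails recognize the request as already secure.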

  • Apache DAV at `/` with normal hosting at `/foo` - how?

    - by mandrake
    Should I not be able to have a configuration where I serve SVN repositories with SVNParentPath at <Location /> and then override DAV and host normal files at <Location /foo>? I wish to host my XSLT files on the same subdomain and still host repositories at the root. Of course, if I were to have a repository called foo, it would not be accessible, and that's OK.

        <VirtualHost *:80>
            ...
            # Host XSLT files here
            <Location /foo>
                DAV Off
            </Location>

            # Host my repos relative to root, such as /my_repo/
            <Location />
                DAV svn
                SVNParentPath "myrepos"
                SVNListParentPath on
                SVNIndexXSLT "/foo/my.xsl"
                ...
            </Location>
        </VirtualHost>

    But DAV SVN still looks for a repository:

        <?xml version="1.0" encoding="utf-8"?>
        <D:error xmlns:D="DAV:" xmlns:m="http://apache.org/dav/xmlns" xmlns:C="svn:">
            <C:error/>
            <m:human-readable errcode="720003">
                Could not open the requested SVN filesystem
            </m:human-readable>
        </D:error>
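
    A hedged workaround, since <Location> sections merge shortest-prefix-first (so /foo should normally win): mod_dav_svn installs itself as the content handler for the whole tree, and the /foo block only turns DAV off without reclaiming the handler. Being explicit about both, plus mapping the path to a real directory, may settle it:

        Alias /foo /var/www/foo
        <Location /foo>
            DAV off
            SetHandler none
        </Location>

    If that still routes into mod_dav_svn, the more robust layout is repositories under something like <Location /svn> with SVNParentPath, since a parent path mounted at / is a known awkward case for mixing in non-SVN content.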

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?
