Search Results

Search found 4830 results on 194 pages for 'conf'.


  • Configure nginx for multiple node.js apps with own domains

    - by udo
    I have a node webapp up and running with nginx on Debian Squeeze. Now I want to add a second one with its own domain, but when I do so, only the first app is served: even if I go to the second domain I simply get redirected to the first webapp. Hope you see what I did wrong here. example1.conf:

        upstream example1.com {
            server 127.0.0.1:3000;
        }
        server {
            listen 80;
            server_name www.example1.com;
            rewrite ^/(.*) http://example1.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example1.com;
            access_log /var/log/nginx/example1.com/access.log;
            # pass the request to the node.js server with the correct headers;
            # much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example1.com;
                proxy_redirect off;
            }
        }

    example2.conf:

        upstream example2.com {
            server 127.0.0.1:1111;
        }
        server {
            listen 80;
            server_name www.example2.com;
            rewrite ^/(.*) http://example2.com/$1 permanent;
        }
        # the nginx server instance
        server {
            listen 80;
            server_name example2.com;
            access_log /var/log/nginx/example2.com/access.log;
            # pass the request to the node.js server with the correct headers;
            # much more can be added, see nginx config options
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://example2.com;
                proxy_redirect off;
            }
        }

    curl simply does this:

        zazzl:Desktop udo$ curl -I http://example2.com/
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.2
        Date: Sat, 04 Aug 2012 13:46:30 GMT
        Content-Type: text/html
        Content-Length: 184
        Connection: keep-alive
        Location: http://example1.com/

    Thanks :)
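
    One likely culprit, offered as a guess rather than a confirmed diagnosis: the 301 to example1.com means the request for example2.com is being answered by the www.example1.com rewrite block, which is exactly what happens when the second config was never loaded, so nginx falls back to the first server block as the default. A quick check, assuming Debian's sites-available/sites-enabled layout:

        # confirm example2.conf is actually enabled and loads cleanly
        ls -l /etc/nginx/sites-enabled/
        sudo ln -s /etc/nginx/sites-available/example2.conf /etc/nginx/sites-enabled/example2.conf
        sudo nginx -t && sudo /etc/init.d/nginx reload
        # then verify which server block answers for that Host header
        curl -I -H 'Host: example2.com' http://127.0.0.1/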

  • Cannot get mod_rewrite to work on Mac OSX Mountain Lion

    - by Joel Joel Binks
    I have tried everything I can think of and it still doesn't work. I am trying to get the example code from Larry Ullman's Advanced PHP book to work. His instructions were a bit lacking, so I had to do some research. Here is what I have configured. username.conf:

        <Directory "/Users/me/Sites/">
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    httpd.conf:

        LoadModule rewrite_module libexec/apache2/mod_rewrite.so
        DocumentRoot "/Users/me/Sites"
        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order deny,allow
            Allow from all
        </Directory>
        <Directory "Users/me/Sites">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    .htaccess:

        <IfModule mod_rewrite.so>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            # Redirect certain paths to index.php:
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
            RewriteLog "/var/log/apache/rewrite.log"
            RewriteLogLevel 2
        </IfModule>

    Nothing has worked, and it won't even log to the rewrite.log file. What have I done wrong? FYI, even when I set up an extremely simple rule or use the root as the rewrite base, it still fails. I have also verified that the mod_rewrite module is running. I am really angry.
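
    Two things stand out, offered as guesses rather than a confirmed diagnosis: <IfModule mod_rewrite.so> tests the wrong name (the test uses the module's source-file name, mod_rewrite.c), so the whole block is silently skipped; and RewriteLog/RewriteLogLevel are only valid in server or vhost context, not in .htaccess. A corrected .htaccess might look like this (path assumed from the post):

        cat > /Users/me/Sites/phplearning/ADVANCED/ch02/.htaccess <<'EOF'
        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteBase /phplearning/ADVANCED/ch02/
            RewriteRule ^(about|contact|this|that|search)/?$ index.php?p=$1
        </IfModule>
        EOF
        sudo apachectl configtest && sudo apachectl restart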

  • Running Upstart user jobs on startup

    - by dgel
    I am running Ubuntu Server 11.04. I have created an Upstart user job as described here. I have the following file at /home/myuser/.init/sensors.conf:

        start on started mysql
        stop on stopping mysql
        chdir /home/myuser/mydir/project
        exec /home/myuser/mydir/env/bin/python /home/myuser/mydir/project/manage.py sensors
        respawn
        respawn limit 10 90

    As myuser I can start, stop, and reload the job fine - it works perfectly:

        $ start sensors
        sensors start/running, process 1332
        $ stop sensors
        sensors stop/waiting

    The problem is that the job is not starting automatically at boot when mysql starts. After a fresh boot, mysql is running but my sensors job is not. What's strange is that although the job doesn't begin on bootup, if I use sudo to restart mysql it does indeed start my job. The following commands are run as myuser from a fresh startup:

        $ status sensors
        sensors stop/waiting
        $ sudo restart mysql
        mysql start/running, process 1209
        $ status sensors
        sensors start/running, process 1229

    The documentation for Upstart user jobs is pretty limited. What is the correct technique to have a user job start automatically on startup of the system? I know I can just throw something in rc.local to start it, or I could move my sensors.conf to /etc/init, but I'm curious if there is a way to do it using just Upstart.
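
    A sketch of the asker's own fallback, not a pure-Upstart answer: moving the job to /etc/init as a system job, dropping privileges explicitly since Upstart on 11.04 predates the setuid stanza (paths taken from the post; the su incantation is an assumption):

        sudo tee /etc/init/sensors.conf <<'EOF'
        start on started mysql
        stop on stopping mysql
        respawn
        respawn limit 10 90
        chdir /home/myuser/mydir/project
        exec su -s /bin/sh -c 'exec /home/myuser/mydir/env/bin/python manage.py sensors' myuser
        EOF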

  • Nexus functionality is limited after installation

    - by Dmitriy Sukharev
    I have a CentOS-based server with Sonatype Nexus 2.0.4-1 installed. The issue is that the standard "Artifact Search", "Advanced Search", "Browse Index", and "Refresh Index" Nexus features are missing, as is the Artifact Information tab after selecting any artifact (only the Maven Information tab shows). I tried to Google, but was amazed to find no information about this issue. Actually, all the actions I've done are:

        wget http://www.sonatype.org/downloads/nexus-2.0.4-1-bundle.tar.gz
        tar -xvf nexus-2.0.4-1-bundle.tar.gz
        cp -r nexus-2.0.4-1 sonatype-work /opt/
        ln -s /opt/nexus-2.0.4-1/* /opt/nexus
        ln /opt/nexus/bin/nexus /etc/init.d/
        chmod 755 /etc/init.d/nexus
        vim /etc/init.d/nexus
            NEXUS_HOME="/opt/nexus"
            RUN_AS_USER="nexus"
        useradd -s /sbin/nologin -d /var/lib/nexus nexus
        chown -R nexus /opt/nexus/
        chown -R nexus /opt/nexus-2.0.4-1/
        sudo -u nexus cp /opt/nexus/conf/examples/proxy-https/jetty.xml /opt/nexus/conf/

    To force Nexus to be available through HTTPS, I went to Administration - Server - Application Server Settings as admin, changed Base URL to https://<external IP>/nexus, and set Force Base URL to true. Any ideas how to get the missing Nexus features back?

  • LDAP Authentication woes

    - by Marcelo de Moraes Serpa
    Hello list, I have a local OpenLDAP server with a couple of users. I'm using it for development purposes. Here's the ldif:

        # Top level - the organization
        dn: dc=site, dc=com
        dc: site
        description: My Organization
        objectClass: dcObject
        objectClass: organization
        o: Organization

        # Top level - manager
        dn: cn=Manager, dc=site, dc=com
        objectClass: organizationalRole
        cn: Manager

        # Second level - organizational units
        dn: ou=people, dc=site, dc=com
        ou: people
        description: All people in the organization
        objectClass: organizationalunit

        dn: ou=groups, dc=site, dc=com
        ou: groups
        description: All groups in the organization
        objectClass: organizationalunit

        # Third level - people
        dn: uid=celoserpa, ou=people, dc=site, dc=com
        objectclass: pilotPerson
        objectclass: uidObject
        uid: celoserpa
        cn: Marcelo de Moraes Serpa
        sn: de Moraes Serpa
        userPassword: secret_12345
        mail: [email protected]

    So far, so good. I can bind with "cn=Manager,dc=site,dc=com" and the 12345678 password (the local server password, set up in slapd.conf). However, I would like to bind with any user under the people OU. In this case, I'd like to bind with:

        dn: uid=celoserpa, ou=people, dc=site, dc=com
        userPassword: secret_12345

    But I'm getting a "(49) - Invalid Credentials" error every time. I have tried through CLI tools (such as ldapadd, ldapwhoami, etc.) and also ruby/ldap. The bind with these credentials fails with an invalid credentials error. I thought that it could be an ACL issue; however, the ACLs in slapd.conf seem to be right:

        access to attrs=userPassword
            by self write
            by dn.sub="ou=people,dc=site,dc=com" read
            by anonymous auth

        access to *
            by * read

    I was suspecting that maybe OpenLDAP doesn't compare against userPassword? Or maybe some ACL configuration I am missing is somehow affecting read access to userPassword for the specific DN. I'm really lost here, any suggestion appreciated! Cheers, Marcelo.
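
    A quick way to separate credential problems from ACL problems, as a sketch (host, config path, and values assumed from the post): reproduce the exact simple bind with ldapwhoami, and if it still fails, run slapd in the foreground with ACL tracing to watch the bind decision:

        ldapwhoami -x -H ldap://localhost \
            -D "uid=celoserpa,ou=people,dc=site,dc=com" -w secret_12345
        # -d 128 traces ACL processing; config path is an assumption
        sudo slapd -d 128 -f /etc/ldap/slapd.conf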

  • Munin with postgresql 9.2

    - by jreid9001
    I am trying to set up Munin to collect stats on a server with PostgreSQL 9.1 and 9.2 (the server is currently running 9.1; I have tested on a fresh VM with 9.2 to rule out some weird problem on the running server. I had to patch some of the plugins for 9.2 due to renamed columns (e.g. procpid to pid), but that's no problem). Munin is installed from the EPEL repos, Postgres from the official one. Both are up to date. When I try to run munin-node-configure --suggest, I get this output:

        # The following plugins caused errors:
        # postgres_bgwriter:        Junk printed to stderr
        # postgres_cache_:          Junk printed to stderr
        # postgres_checkpoints:     Junk printed to stderr
        # postgres_connections_:    Junk printed to stderr
        # postgres_connections_db:  Junk printed to stderr
        # postgres_locks_:          Junk printed to stderr
        # postgres_querylength_:    Junk printed to stderr
        # postgres_scans_:          Junk printed to stderr
        # postgres_size_:           Junk printed to stderr
        # postgres_transactions_:   Junk printed to stderr
        # postgres_tuples_:         Junk printed to stderr
        # postgres_users:           Junk printed to stderr
        # postgres_xlog:            Junk printed to stderr

    After a lot of searching around, I edited /etc/munin/plugin-conf.d/munin-node and added the following:

        [postgres*]
        user postgres

    This stops munin-node-configure complaining about stderr and lets me add the plugins, but when I telnet to the server on 4949 and try to fetch the stats, I just get "Bad exit". When I run a plugin individually via munin-run (e.g. munin-run postgres_size_ALL), it works completely fine. Looking at /var/log/munin/munin-node.log, this is the output:

        Error output from postgres_size_ALL:
        DBI connect('dbname=template1','',...) failed: could not connect to server:
        Permission denied. Is the server running locally and accepting connections
        on Unix domain socket "/tmp/.s.PGSQL.5432"?
        at /usr/share/perl5/vendor_perl/Munin/Plugin/Pgsql.pm line 377
        Service 'postgres_size_ALL' exited with status 1/0.

    I am now out of ideas... the socket definitely exists, and pg_hba.conf is set to allow all users/databases from localhost with trust.
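
    Since the failure is "Permission denied" on the unix socket only when the plugin runs under munin-node, one hedged workaround is to force the plugin over TCP, which pg_hba.conf already trusts for localhost. env.PGHOST is standard libpq plumbing exposed through Munin's plugin config, but treating it as the fix here is an assumption:

        # extend the existing section in /etc/munin/plugin-conf.d/munin-node:
        #   [postgres*]
        #   user postgres
        #   env.PGHOST localhost
        sed -i '/^user postgres/a env.PGHOST localhost' /etc/munin/plugin-conf.d/munin-node
        service munin-node restart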

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OS X Mountain Lion. I'd prefer to do it manually instead of installing OS X Server, but if that's the only option (i.e., OpenLDAP on OS X isn't meant to be used without Server) then I'll just install it. I've seen guides that mostly just say to change the password in slapd.conf and then start the server, and it should work. However, whenever I try to do anything with the client, it tells me this:

        ldap_bind: Invalid credentials (49)

    I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OS X. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. Anyway, I tried setting a password in there but it didn't make a difference. So ... I'm really confused. Has anyone gotten OpenLDAP working on OS X Mountain Lion without the server tools?
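
    A minimal sanity check, as a sketch (the rootdn below is a placeholder; use whatever suffix/rootdn slapd.conf actually declares, and the slapd path is an assumption about OS X's layout): hash the password with slappasswd, set it as rootpw, run slapd in the foreground, and bind fully explicitly so client defaults can't interfere:

        sudo slappasswd                  # paste the {SSHA}... output as the rootpw value
        # in slapd.conf:
        #   rootdn "cn=Manager,dc=example,dc=com"
        #   rootpw {SSHA}...
        sudo /usr/libexec/slapd -d 256   # foreground, with connection tracing
        ldapwhoami -x -H ldap://127.0.0.1 -D "cn=Manager,dc=example,dc=com" -W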

  • Location of index.html on CentOS 6

    - by user2118559
    Based on this tutorial (http://www.servermom.com/how-to-add-new-site-into-your-apache-based-centos-server/454/) I installed an Apache-based CentOS server. I use putty.exe as my editor. With vi /etc/httpd/conf/httpd.conf I added this at the very bottom:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /var/www/fikitipis.com/public_html
            ServerName www.fikitipis.com
            ServerAlias fikitipis.com
            ErrorLog /var/www/fikitipis.com/error.log
            CustomLog /var/www/fikitipis.com/requests.log common
        </VirtualHost>

    So I expect the index to be at /var/www/fikitipis.com/public_html. When I type the server's IP address in a browser, I see the "Apache 2 Test Page powered by CentOS" and so on: "You may now add content to the directory /var/www/html/". Then:

        [root@vps ~]# ls /var/www/
        cgi-bin  domain.com  error  fikitipis.com  html  icons

    Checking the contents of the directories, /var/www/domain.com/public_html, /var/www/fikitipis.com/public_html, and /var/www/html/ are all empty. Where is index.html? I did touch /var/www/fikitipis.com/public_html/index1.html, then vi /var/www/fikitipis.com/public_html/index1.html, typed a, wrote some text in the file, then Escape and Shift+ZZ. Then http://111.111.11.111/index1.html in the browser shows what I had written. So until now it seems that everything works.
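
    A guess consistent with the symptoms: the CentOS test page is what Apache serves when the DocumentRoot has no index file at all; nothing ships an index.html for you. Creating one should make the site root resolve (filename content is of course arbitrary):

        echo '<h1>fikitipis.com works</h1>' > /var/www/fikitipis.com/public_html/index.html
        # confirm the vhost answers by name, not only by IP
        curl -H 'Host: www.fikitipis.com' http://127.0.0.1/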

  • Name resolution not working with ipv6 on centos

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because it tries to use IPv6, since ping google.com works and curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface and it does not change anything. This is very problematic, since most system tools like yum currently fail at name resolution. Browsers like Firefox work, because they might be using another tool for name resolution than the one used by curl. I managed to fix this on workstations by completely disabling IPv6 following tutorials like this one, or by hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would like not to mess up, understand what is going on, and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed. The current network configuration is a small enterprise network, with a DNS server (let's call it A) configured once a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS. But this is also true for my workstation on which curl is working (and yes, they both use the same A DNS server). Indeed, this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one works fine for both of them, so if I remove A from my resolv.conf everything works fine! Regards, Olivier
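
    One glibc-level workaround worth trying, offered as an assumption about the cause rather than a diagnosis: by default the resolver sends the A and AAAA queries in parallel over one socket, which some older or misconfigured DNS servers (like the troublesome A server described here) mishandle, producing exactly this "ping works, curl doesn't" pattern. The single-request option serializes the two lookups:

        echo 'options single-request' >> /etc/resolv.conf
        curl -v google.com    # re-test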

  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed VirtualMin. The LAMP stack and the website files have been set up properly, and I can access the website by its IP address.

    Problem: Now it's time to point my domain mydomain.com to my new server. After reading many sites describing how to set up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where it's really simple to set this up.

    Attempts: I tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com? When specifying the DNS for mydomain.com, I've set the first one to ns1.apadment.com. On the manager/admin page of my webhost provider, I am given the option to create a secondary slave DNS, which I assigned to the IP address of my server - though I am not sure how the slave DNS will copy the info from my web server? I have assigned this secondary DNS ns.hostprovider.com as the second DNS for mydomain.com. I tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf. I edited the .conf file to point DocumentRoot back to where it's supposed to be, /var/www/mydomain, instead of /user/mydomain.com. I believe the next step is to set up the zone. Virtualmin has already created a Master Zone with 8 different addresses (www.mydomain.com, ftp.mydomain.com...). Under Nameservers, there are already 2 records: one is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
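
    A few verification commands, as a sketch (names taken from the post; whether ns2 is required depends on the registrar), once the registrar has the nameserver/glue records in place:

        dig +trace NS mydomain.com              # follow the delegation down from the roots
        dig @ns1.mydomain.com mydomain.com A    # ask the new server directly
        named-checkconf -z                      # sanity-check the zones BIND loaded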

  • Broken fonts in Konsole KDE 4.3.4

    - by depesz
    I have a strange situation - after some upgrades a couple of days ago fonts in KDE Konsole broke. To make it more specific - standard fonts look more or less OK, but when I use my national characters (like ąćęłńśóżź) they all look broken - like from another font, or badly scaled. The same problem doesn't exist in GNOME Terminal. I usually use the Terminus font, so I used this for demonstration, but it shows in other fonts as well - if that will be necessary I will provide list. Konsole shot: [screenshot] GNOME Terminal shot: [screenshot] As for my settings:

        =$ cat /etc/X11/xorg.conf
        Section "Device"
            Identifier "Builtin Default intel Device 0"
            Driver     "intel"
        EndSection
        Section "Monitor"
            Identifier "Monitor0"
            VendorName "Monitor Vendor"
            ModelName  "Monitor Model"
        EndSection
        Section "Screen"
            Identifier "Builtin Default intel Screen 0"
            Device     "Builtin Default intel Device 0"
            Monitor    "Monitor0"
        EndSection
        Section "InputDevice"
            Identifier "touchpad"
            Driver     "synaptics"
            Option     "CorePointer"
        EndSection
        Section "ServerLayout"
            Identifier  "Builtin Default Layout"
            Screen      "Builtin Default intel Screen 0"
            InputDevice "touchpad"
        EndSection

        =$ xdpyinfo | grep -E resolution\|dimensions
        dimensions:    1680x1050 pixels (444x277 millimeters)
        resolution:    96x96 dots per inch

    I tried forcing DPI in system settings (to 120), or adding monitor size to xorg.conf - so far nothing helped. Any idea on what should I do to make it work sanely again?

  • How to enable Ipv6 on my ubunutu 11.04 virtual machine

    - by liv2hak
    I have installed 3 VMs on my PC (Ubuntu 11.04). I want to set up an IPv6 network to review and test some of the IPv6 tools like NDPMonitor (monitors ICMP messages of the Neighbour Discovery Protocol). The IPv6 addresses are as follows:

        linux_router - fe80::a00:27ff:fed5:f7e9/64
        labhack1     - fe80::a00:27ff:fed2:8bd1/64
        labhack2     - fe80::a00:27ff:fed7:2f2d/64

    The commands below have been run on both linux_router and labhack1:

        sudo ip r a 2001:468:181:f100::/64 dev eth0
        sudo vim /etc/radvd.conf

    The file looks like this:

        interface eth0
        {
            AdvSendAdvert on;     /* means that we are sending out advertisements */
            MinRtrAdvInterval 5;  /* these options control how often advertisements are sent; */
            MaxRtrAdvInterval 15; /* they are not mandatory but valuable settings */
            prefix fe80::a00:27ff::/64
            {
                AdvOnLink on;     /* says to the host: everyone sharing this prefix is on the same local link as you */
                AdvAutonomous on; /* says to a host: "Use this prefix to autoconfigure your address" */
            };
        };

        sudo sysctl -w net.ipv6.conf.all.forwarding=1

    I try to do a ping6 -I eth0 fe80::a00:27ff:fed5:f7e9 and I get "Destination unreachable: Address unreachable". I am not sure what I am doing wrong here. I am a beginner at Linux administration. Basically I think I am missing whatever is analogous to physically connecting the VMs. Any help that would point me in the right direction would be greatly appreciated.
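
    Two hedged observations that may explain the failure (assumptions, since the VM wiring isn't shown in the post): link-local fe80:: addresses only reach hosts on the same link, so the VMs need to sit on the same internal/host-only network in the hypervisor; and advertising the link-local fe80::/64 prefix in radvd doesn't give hosts usable addresses - the routable prefix already added with ip route is the one to advertise. A corrected radvd.conf along those lines:

        cat > /etc/radvd.conf <<'EOF'
        interface eth0 {
            AdvSendAdvert on;
            MinRtrAdvInterval 5;
            MaxRtrAdvInterval 15;
            prefix 2001:468:181:f100::/64 {
                AdvOnLink on;
                AdvAutonomous on;
            };
        };
        EOF
        sudo sysctl -w net.ipv6.conf.all.forwarding=1
        sudo service radvd restart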

  • Active directory Kerberos OSX problems

    - by Temotodochi
    I'll try to keep this short, but informative. I'm currently unable to bind OS X Lion (10.7.4) machines to our AD. OS X Kerberos (Heimdal) is unable to locate the KDC service. However, I can bind Linux and Windows machines to the AD without any problems on the same network. AD controls the domain DNS, and all the relevant _kerberos._tcp.x.domain.com and _kpasswd SRV DNS records are there; they resolve fine when tried from the OS X machines. The defined ports are open for the service and manually accessible from OS X. When I try kinit on OS X, I can get the first auth through (wrong passwords fail instantly), but when supplied with the correct password, kinit fails after some waiting with "unable to reach KDC". All machines run NTP and have correct time. During testing, the network is not firewalled between the machines. Linux and Windows machines have no problems whatsoever. I have tried with and without /etc/krb5.conf - OS X by default does not need it; in krb5.conf I used a working config from one of our Linux machines. dsconfigad fails with a simple "connection failed to the directory server". I'm a bit baffled by this. For OS X it's as if the KDC is nowhere to be found, while at the same time my test machines with Windows 7 and some Linux (CentOS 6 & Debian 6) machines have no problems whatsoever. Same network, same configurations. I'm missing some vital piece of configuration somewhere, and I can't find out what it is.
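
    Since Linux boxes on the same network resolve and reach the KDC fine, one classic OS X-specific culprit (an educated guess, not a confirmed diagnosis) is UDP fragmentation of AD's large tickets - which matches "first auth works, then a long wait and 'unable to reach KDC'". Heimdal can be forced onto TCP by naming the KDC with the tcp/ prefix (realm and host are placeholders):

        sudo tee /etc/krb5.conf <<'EOF'
        [libdefaults]
            default_realm = AD.EXAMPLE.COM
        [realms]
            AD.EXAMPLE.COM = {
                kdc = tcp/dc1.ad.example.com
            }
        EOF
        kinit someuser@AD.EXAMPLE.COM
        # and confirm the SRV records resolve from the Mac itself
        dig +short SRV _kerberos._tcp.ad.example.com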

  • Avoiding DNS timeouts when a dns server fails

    - by Neil Katin
    We have a small datacenter with about a hundred hosts pointing to 3 internal dns servers (bind 9). Our problem comes when one of the internal dns servers becomes unavailable. At that point all the clients that point to that server start performing very slowly. The problem seems to be that the stock linux resolver doesn't really have the concept of "failing over" to a different dns server. You can adjust the timeout and number of retries it uses (and set rotate so it will work through the list), but no matter what settings one uses our services perform much more slowly if a primary dns server becomes unavailable. At the moment this is one of the largest sources of service disruptions for us. My ideal answer would be something like "RTFM: tweak /etc/resolv.conf like this...", but if that's an option I haven't seen it. I was wondering how other folks handled this issue? I can see 3 possible types of solutions:

    1. Use linux-ha/Pacemaker and failover ips (so the dns IP VIPs are "always" available). Alas, we don't have a good fencing infrastructure, and without fencing pacemaker doesn't work very well (in my experience Pacemaker lowers availability without fencing).

    2. Run a local dns server on each node, and have resolv.conf point to localhost. This would work, but it would give us a lot more services to monitor and manage.

    3. Run a local cache on each node. Folks seem to consider nscd "broken", but dnrd seems to have the right feature set: it marks dns servers as up or down, and won't use 'down' dns servers.

    Any-casting seems to work only at the ip routing level, and depends on route updates for server failure. Multi-casting seemed like it would be a perfect answer, but bind does not support broadcasting or multi-casting, and the docs I could find seem to suggest that multicast dns is more aimed at service discovery and auto-configuration rather than regular dns resolving. Am I missing an obvious solution?
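
    A sketch of option 3 with dnsmasq as the per-host cache - the draw being that dnsmasq tracks upstream liveness, and all-servers races every upstream so one dead server costs nothing (addresses are placeholders; treating dnsmasq as the right cache here is an assumption):

        sudo tee /etc/dnsmasq.conf <<'EOF'
        no-resolv          # don't read /etc/resolv.conf for upstreams
        server=10.0.0.1
        server=10.0.0.2
        server=10.0.0.3
        all-servers        # query all upstreams in parallel, take the first answer
        cache-size=1000
        EOF
        # then point the host at itself
        echo 'nameserver 127.0.0.1' | sudo tee /etc/resolv.conf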

  • Subdomains and address bar

    - by Priednis
    I have a fairly noob question about how subdomains work. As I understand it, first the DNS server specifies that a request for a certain subdomain.domain.com has to go to the IP address of domain.com, and the webserver at domain.com further processes the request and displays the needed subdomain page. It is not entirely clear to me how the server (for example Apache) does it. As I understand it, there can be entries in vhosts.conf which specify folders that contain the subdomain data. Something like:

        <VirtualHost *>
            ServerName www.domain.com
            DocumentRoot /home/httpd/htdocs/
        </VirtualHost>
        <VirtualHost *>
            ServerName subdomain.domain.com
            DocumentRoot /home/httpd/htdocs/subdomain/
        </VirtualHost>

    There can also be redirect entries in .htaccess files, like:

        rewritecond %{http_host} ^subdomain.domain.com [nc]
        rewriterule ^(.*)$ http://www.domain.com/subdomain/ [r=301,nc]

    However, in this case the user gets directed to the directory which contains the subdomain data, but the user gets "out" of the subdomain. I would like to know: how, when going to subdomain.domain.com, does subdomain.domain.com remain visible at the beginning of the address in the address bar of the browser? Can it be done by an alternate entry in the .htaccess file? If a VirtualHost entry is specified in vhosts.conf, does it mean that a new user account has to be specified for access to this directory?

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found show how to fresh-install worker MPM + PHP-FPM, but my WordPress blog is already up and running with prefork MPM. Correct me if I'm wrong in the simulated installation process: I'm on Ubuntu, and according to some tutorials the following lines would do all the tricks:

        apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
        a2enmod actions fastcgi alias

    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:

        <IfModule mod_fastcgi.c>
            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        </IfModule>

    After all these, restart:

        service apache2 restart && service php5-fpm restart

    Questions: 1) Would the whole process cause any downtime for sites previously running under prefork MPM? 2) Do you have to change any existing configuration files like php, mysql, or apache2 (would they take effect immediately after the switch without you doing anything)? 3) I already have APC up and running; do you have to re-install/re-configure it after the switch? 4) How do you find out if apache2 is working in worker MPM mode as expected? Thanks a lot!
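
    On question 4, a quick check using the standard Debian/Ubuntu control script (shown as a sketch):

        apache2ctl -V | grep -i mpm      # expect: Server MPM:     Worker
        apache2ctl -M | grep -i fastcgi  # confirm mod_fastcgi is loaded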

  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php-fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is in the PHP processes that it's spawning. In looking at the mod_fcgid documentation, it appears that the default for killing idle processes is 300 seconds; I've changed that to 20. At this point, I'd be fine if 300 would work - but it's not happening. It's been running for nearly a day now, and server-status shows 12 active processes:

        Process name: php5
        Pid     Active  Idle    Accesses  State
        19243   84879   14420   11        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        20954   82143   149     22        Ready
        20947   82149   149     22        Ready
        20953   82143   149     13        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        20589   82765   23644   72        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        17663   86103   2034    117       Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        19862   83961   1976    91        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        18495   85825   5164    18        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        25463   75109   23948   24        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        2466    60019   60016   2         Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        20729   82541   12592   23        Ready

        Process name: php5
        Pid     Active  Idle    Accesses  State
        22135   80616   46361   6         Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is with the PHP. /etc/apache2/mods-available/fcgid.conf:

        <IfModule mod_fcgid.c>
            AddHandler fcgid-script .fcgi
            FcgidConnectTimeout 10
            FcgidMaxRequestsPerProcess 500
            FcgidIdleTimeout 20
            FcgidFixPathinfo 1
            FcgidMaxProcesses 10
        </IfModule>

    (heh - MaxProcesses isn't being respected either...) Of course, fcgid.conf is symlinked in mods-enabled.
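
    One frequent cause when mod_fcgid limits seem ignored, offered as a hypothesis rather than a diagnosis for this server: php-cgi managing its own children via PHP_FCGI_CHILDREN, outside mod_fcgid's accounting. A wrapper script that hands process management back to mod_fcgid (wrapper path is an assumption):

        sudo tee /usr/local/bin/php-fcgi-wrapper <<'EOF'
        #!/bin/sh
        export PHP_FCGI_CHILDREN=0        # let mod_fcgid spawn and kill processes
        export PHP_FCGI_MAX_REQUESTS=500  # keep >= FcgidMaxRequestsPerProcess
        exec /usr/bin/php-cgi
        EOF
        sudo chmod +x /usr/local/bin/php-fcgi-wrapper
        # then point the handler at it in the vhost/fcgid config:
        #   FcgidWrapper /usr/local/bin/php-fcgi-wrapper .php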

  • Connecting XRAID to Sunfire via LSI FC card - Drives not showing up

    - by Matthew Watson
    Hi, I've got a Sun Fire T1000 machine (Solaris 10 10/09 s10s_u8wos_08a SPARC) with an LSI7404EP-LC fibre channel card in it. This is plugged into an XRAID. The system seems to have picked up the card:

        > /usr/platform/`uname -i`/sbin/prtdiag
        IO
        Location   Type Slot Path                              Name                  Model
        ---------- ---- ---- --------------------------------- --------------------- --------
        MB/PCIE0   PCIE 0    /pci@780/fibre-channel            fibre-channel
        MB/PCIE0   PCIE 0    /pci@780/fibre-channel            fibre-channel
        MB/NET0    PCIE MB   /pci@7c0/pci@0/network@4          network-pci14e4,1668
        MB/NET1    PCIE MB   /pci@7c0/pci@0/network@4,1        network-pci14e4,1668
        MB/NET2    PCIX MB   /pci@7c0/pci@0/pci@8/network@1    network-pci108e,1648
        MB/NET3    PCIX MB   /pci@7c0/pci@0/pci@8/network@1,1  network-pci108e,1648
        MB/PCIX    PCIX MB   /pci@7c0/pci@0/pci@8/scsi@2       scsi-pci1000,50       LSI,1064

    However, it doesn't seem to be able to see the XRAID attached to it; lsiutil only reports the onboard SAS controller:

        > /usr/local/bin/lsiutil

        LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

        1 MPT Port found

        Port Name   Chip Vendor/Type/Rev  MPT Rev  Firmware Rev  IOC
        1.  mpt0    LSI Logic SAS1064 A3  105      010a0000      0

        Select a device:  [1-1 or 0 to quit]

    I've tried adding the configuration to /kernel/drv/sd.conf and /kernel/drv/ssd.conf as per this thread; however, format still cannot see any drives on the XRAID. I'm not sure where to go next. Any suggestions? From what I've read, this should pretty much just be plug it in and they show up in format.
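
    Some generic Solaris 10 steps that often surface newly attached FC LUNs, offered as a sketch rather than a known fix for this case (the controller number is a placeholder):

        cfgadm -al                 # look for fc-fabric attachment points (cN)
        cfgadm -c configure c2     # c2 is hypothetical; use the FC controller shown above
        devfsadm -Cv               # rebuild /dev links
        luxadm probe               # list FC devices the HBA can actually see
        echo | format              # re-check visible disks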

  • OpenVPN bad source address from client

    - by Bogdan
    I have a problem with OpenVPN. There are a lot of dropped-packet records in the OpenVPN log file on the server:

        Mon Oct 22 10:14:41 2012 us=726541 laptop/???:1194 MULTI: bad source address from client [192.168.1.107], packet dropped

        grep -E "^[a-z]" server.conf
        -----
        port 1194
        proto udp
        dev tun
        ca data/ca.crt
        cert data/server.crt
        key data/server.key
        dh data/dh1024.pem
        tls-server
        tls-auth data/ta.key 0
        remote-cert-tls client
        cipher AES-256-CBC
        tun-mtu 1200
        server 10.10.10.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 8.8.8.8"
        client-to-client
        client-config-dir /etc/openvpn/ccd
        route 10.10.10.0 255.255.255.0
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        max-clients 5
        status /var/log/status-openvpn.log
        log /var/log/openvpn.log
        verb 4
        auth-user-pass-verify /etc/openvpn/verify.sh via-file
        tmp-dir /tmp
        script-security 2
        -----

        cat ccd/laptop
        -----
        iroute 10.10.10.0 255.255.255.0
        -----

        cat client.conf
        -----
        remote server ip 1194
        client
        dev tun
        ping 10
        comp-lzo
        proto udp
        tls-client
        tls-auth data/ta.key 1
        pkcs12 data/vpn.laptop.p12
        remote-cert-tls server
        #ns-cert-type server
        persist-key
        persist-tun
        cipher AES-256-CBC
        verb 3
        pull
        auth-user-pass /home/user/.openvpn/users.db
        -----

    According to "Jan Just Keijser - OpenVPN 2 Cookbook", the root of the problem is incorrect config options (see the screenshot). But, as you see, my config has such options. Could you please help me to solve this problem?

    @week: verb level = 6; client log:

        Mon Oct 22 16:06:02 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0
        Mon Oct 22 16:06:02 2012 /sbin/ifconfig tun0 10.10.10.3 pointopoint 10.10.10.5 mtu 1500
        Mon Oct 22 16:06:02 2012 /sbin/route add -net xxxx netmask 255.255.255.255 gw 192.168.1.1
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 0.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 /sbin/route add -net 128.0.0.0 netmask 128.0.0.0 gw 10.10.10.5
        Mon Oct 22 16:06:02 2012 Initialization Sequence Completed

        cat ccd/laptop
        iroute 10.10.10.0 255.255.255.0
        ifconfig-push 10.10.10.3 10.10.10.5
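
    A hedged reading of the log: the dropped source address 192.168.1.107 is on the client's LAN (192.168.1.0/24), while both the server's route and the ccd iroute cover only the VPN subnet 10.10.10.0/24 itself, so packets sourced from the LAN get rejected. If the intent is to reach the client's LAN through the tunnel, the usual shape is a route/iroute pair for that LAN (addresses taken from the post, but the intent is an assumption):

        # server.conf - tell the kernel/OpenVPN about the client LAN:
        #   route 192.168.1.0 255.255.255.0
        # ccd/laptop - tell OpenVPN which client owns that LAN:
        #   iroute 192.168.1.0 255.255.255.0
        grep -E '^i?route' /etc/openvpn/server.conf /etc/openvpn/ccd/laptop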

  • Apache2: How to split out the SSL configuration?

    - by Klaas van Schelven
    In Apache2, I'd like to define my SSL-related stuff once, and in a separate file from the rest of the configuration. This is mostly a matter of taste, but it also allows me to include the rest of the configuration in my automatic deployment process. I.e., current situation:

        # in file: 0000-ourdomain.com.conf (number needs to be low)
        <VirtualHost xx.xx.xx.xx:443>
            # SSL part
            SSLEngine on
            SSLCertificateFile ....crt
            SSLCACertificateFile ...pem
            SSLCertificateChainFile ...intermediate.pem
            SSLCertificateKeyFile ....wildcard.ourdomain.com.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

            ServerName www.ourdomain.com
            ServerAlias ourdomain.com

            # the actual configuration, as found for xx.xx.xx.xx:80, repeated
        </VirtualHost>

    I'd like:

        # in file: 0000-ssl-stuff
        <VirtualHost xx.xx.xx.xx:443>
            # SSL part
            SSLEngine on
            SSLCertificateFile ....crt
            SSLCACertificateFile ...pem
            SSLCertificateChainFile ...intermediate.pem
            SSLCertificateKeyFile ....wildcard.ourdomain.com.key
            SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown

            ServerName www.ourdomain.com
            ServerAlias ourdomain.com
        </VirtualHost>

        # in file: ourdomain.com.conf
        <VirtualHost xx.xx.xx.xx:443>
            # the actual configuration, as found for xx.xx.xx.xx:80, repeated
        </VirtualHost>

    Unfortunately, this does not seem to work. Apache SSL fails, though it does not give an error message at reload or syntax check. My best workaround found so far is to use an Include directive from the 0000-ssl file. Many thanks!
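
    For what it's worth, splitting one vhost's directives across two <VirtualHost> blocks for the same address can't work: each block is a separate virtual host, and only one of them serves any given request, so the second block never inherits the SSL directives. The Include workaround the post ends on is the idiomatic fix; a sketch with assumed file names:

        # /etc/apache2/ssl-common.conf holds only the shareable TLS directives:
        #   SSLEngine on
        #   SSLCertificateFile /etc/ssl/certs/wildcard.ourdomain.com.crt
        #   SSLCertificateKeyFile /etc/ssl/private/wildcard.ourdomain.com.key
        # and the per-site file pulls it in:
        #   <VirtualHost xx.xx.xx.xx:443>
        #       Include /etc/apache2/ssl-common.conf
        #       ServerName www.ourdomain.com
        #       ...
        #   </VirtualHost>
        apache2ctl configtest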

  • Apache2 Manage Server default

    - by Jaime E. Valdez
    I'm trying to set up two domains correctly. I have some issues I hope you can help me with. Site one's conf:

        <VirtualHost myipaddress:80>
            ServerName www.domain1.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain1/public_html
        </VirtualHost>

    My other domain's conf is:

        <VirtualHost myipaddress:80>
            ServerName www.domain2.com
            ServerAlias *.domain2.com domain2.com
            ServerAdmin [email protected]
            DocumentRoot /home/domain2/public_html
        </VirtualHost>

    The default site is disabled. The problem is that when accessing "domain2.com" from my browser, it always redirects to "www.domain1.com". It only works when I explicitly access "www.domain2.com". By the way, I also have other domains like "domain1.net" and "domain1.info" pointing to my server; at this moment they are not configured or set up in Apache, yet I can access them from a browser and always land on "www.domain1.com". Also, is there any possible configuration in Apache to handle IP-only access? I mean, if I type "http://myipaddress/" I get "www.domain1.com"... Arrgh.
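
    Two hedged checks, since the configs aren't shown in full: without a NameVirtualHost/name-based setup for that address, Apache hands every request to the first vhost it loaded, which would produce exactly this behavior; and any unmatched Host header (bare IP, domain1.net, ...) always falls through to the first vhost unless a catch-all loads before the real sites. A sketch in Apache 2.2-era syntax (names and paths assumed):

        apache2ctl -S    # shows which vhost is the default for myipaddress:80
        # a catch-all vhost placed so it loads first:
        #   NameVirtualHost myipaddress:80
        #   <VirtualHost myipaddress:80>
        #       ServerName catchall.invalid
        #       DocumentRoot /var/www/default
        #   </VirtualHost>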

  • Ruby Passenger + Nginx or lighttpd + fcgi for shared hosting

    - by devnull
    I have set up a Passenger + nginx stack, and I plan to offer free non-commercial hosting (or in fact on-the-fly deployment) for Rack-based frameworks (e.g. Camping, Sinatra). I am facing an "issue" with Passenger. For each application you need to configure nginx.conf (it would be the same with Apache, so it is not an nginx issue) with:

        server {
            ...
            passenger_base_uri /app1;
            passenger_base_uri /app2;
            passenger_base_uri /app3;
        }

    Now this is not inherently bad as, in theory, I could allow a user to run just one app on his webspace, but even in this case I need to create a new server directive in nginx (e.g. user.domain.com). As this will mainly be used to deploy apps, the behavior I am looking for is the possibility to auto-map several apps (e.g. app1, app2, app3, app4) under the same server (yourapp.com/app1, yourapp.com/app2) without having to update the nginx or apache file each time. This seems to be a limitation in Passenger. As such I am thinking about an alternative with lighttpd and FastCGI. Would this allow immediate deployment without touching the lighttpd config file, e.g. I create a new directory with app2 and it will run immediately? What is your experience of the performance difference between Passenger + nginx vs. lighttpd + FastCGI? Thanks in advance. Scenario details: on nginx + Passenger, a user cannot add a new sub-folder and run another Sinatra/Camping app without declaring the path in nginx.conf and restarting the server. Wished behavior with the new setup: a user can add a new folder with a new app and it will run on lighttpd + fcgi without any extra configuration of the web server.

  • 403 forbidden for cgi-bin/ and cannot protect site with password

    - by gasgdasdgasdg
    The first problem I have is that I am getting a 403 Forbidden error for cgi-bin/. I have created a new /var/www2/ and I can access it fine; PHP runs fine. The second problem is that I cannot password-protect the site. I first tried htpasswd: it asks for a login, but every time I log in it keeps asking again. It's getting frustrating; I have tried all the tricks and nothing seems to work. This is the virtual host config inside sites-available (httpd.conf is empty, but I have apache2.conf):

        NameVirtualHost 12.12.12.12
        <VirtualHost 12.12.12.12>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www2/
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /var/www2/cgi-bin/
            <Directory "/var/www2/cgi-bin/">
                AllowOverride Options
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                AddHandler cgi-script cgi pl
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            ServerSignature On
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>
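
    Hedged suggestions for both symptoms, as guesses from the config shown: the cgi-bin 403 is often filesystem permissions or scripts missing the execute bit, and the auth loop is often a wrong AuthUserFile path, or auth directives sitting in a .htaccess that AllowOverride None makes Apache ignore. Putting the auth in the vhost sidesteps .htaccess entirely:

        sudo htpasswd -c /etc/apache2/.htpasswd someuser
        # inside <Directory /var/www2/> in the vhost:
        #   AuthType Basic
        #   AuthName "Restricted"
        #   AuthUserFile /etc/apache2/.htpasswd
        #   Require valid-user
        chmod -R a+rx /var/www2/cgi-bin     # CGI scripts need exec for the apache user
        sudo apache2ctl configtest && sudo service apache2 restart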

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install on my Ubuntu 10.4 machine a CGI app, one of those built from the samples that come with the gSOAP toolkit. My intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this "How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server" guide. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. He was able to access his CGI from my ASP.NET machine, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    And looking more carefully, whenever I restarted Apache I got this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock.
        cgid daemon failed to initialize

    I am pretty new to Ubuntu and couldn't believe that Apache and Synaptic had made a mistake in the installation process of the server, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution but only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root), and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I wonder why the folder was not created in the first place. Best, Julen.
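
    A hedged footnote on the fix: on Ubuntu releases of that era /var/run may live on tmpfs, in which case a hand-made directory disappears at reboot; recreating it at boot keeps the fix persistent (rc.local shown as one option, not the only one):

        sudo mkdir -p /var/run/apache2
        # to survive reboots, e.g. in /etc/rc.local before 'exit 0':
        #   mkdir -p /var/run/apache2
        sudo service apache2 restart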
