Search Results

Search found 1902 results on 77 pages for 'nginx'.

Page 73 of 77

  • MySQL encoding problem after site move

    - by Quan Zhou
    Guys, I need your help. Last month my friend lost his database on Dreamhost, so he decided to move his WordPress-based blog (written in Chinese) to my server. He uses a plugin called wp-db-backup to take regular database backups. The two servers are:

        Dreamhost: Linux 2.6.31.5-modsign-aufs2-grsec-2-opt, mysql Ver 14.12 Distrib 5.0.16, for pc-linux-gnu (i386) using readline 5.0, apache2 (unknown version)
        My server:  Linux li159-46 2.6.32.12-x86_64-linode12, mysql Ver 14.14 Distrib 5.1.45, for debian-linux-gnu (x86_64) using readline 6.1, nginx 0.8.36

    His site's encoding was UTF-8 in both wp-config.php and the database. I imported his backup file as UTF-8 (the default), synced the files from Dreamhost with rsync, and changed only the database address, nothing more. But when I first looked at the "new" site it was full of unreadable characters. I have met this problem before; I changed the charset options in the browser, but none of them made the page display properly. Then I converted his database to GB18030, which works only if the browser charset is set to GB18030 or GBK, but by default browsers detect the charset as UTF-8. I tried editing the headers but it doesn't work. What can I do now? Thx~~
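
    A minimal sketch of one way to redo the import with an explicit connection charset, assuming the dump really is UTF-8 (the database name "blog" is a placeholder):

        # recreate the database with a UTF-8 default
        mysql -u root -p -e "DROP DATABASE blog; CREATE DATABASE blog CHARACTER SET utf8;"

        # re-import, forcing the client connection to UTF-8 so the bytes are not
        # re-interpreted under a different connection charset
        mysql --default-character-set=utf8 -u root -p blog < backup.sql

    If the connection charset during the original import did not match the dump, MySQL converts the text a second time, producing exactly the kind of garbage that no browser charset setting can undo; re-importing cleanly is usually simpler than converting the damaged tables afterwards.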

    Read the article

  • Apache2/mod_fcgid/PHP Process Limits Not Respected

    - by Daniel
    I've recently moved to Apache2 / mod_fcgid / PHP from nginx / php_fpm. This is the second server on which I've made this migration, but it's used much less frequently than the first, which is working like a charm. The problem is in the PHP processes that it's spawning. Looking at the mod_fcgid documentation, the default for killing idle processes appears to be 300 seconds; I've changed that to 20. At this point I'd be fine if 300 worked - but it's not happening. The server has been running for nearly a day now, and server-status shows 12 active processes:

        Process name: php5
          Pid    Active  Idle   Accesses  State
          19243  84879   14420  11        Ready
        Process name: php5
          20954  82143   149    22        Ready
          20947  82149   149    22        Ready
          20953  82143   149    13        Ready
        Process name: php5
          20589  82765   23644  72        Ready
        Process name: php5
          17663  86103   2034   117       Ready
        Process name: php5
          19862  83961   1976   91        Ready
        Process name: php5
          18495  85825   5164   18        Ready
        Process name: php5
          25463  75109   23948  24        Ready
        Process name: php5
          2466   60019   60016  2         Ready
        Process name: php5
          20729  82541   12592  23        Ready
        Process name: php5
          22135  80616   46361  6         Ready

    PHP applications are not being served at this point - Apache is returning a 503. However, it is still serving the server-status module, and mod_mono/Mono 2.10 applications are still being served. The problem is only with PHP. /etc/apache2/mods-available/fcgid.conf:

        <IfModule mod_fcgid.c>
          AddHandler fcgid-script .fcgi
          FcgidConnectTimeout 10
          FcgidMaxRequestsPerProcess 500
          FcgidIdleTimeout 20
          FcgidFixPathinfo 1
          FcgidMaxProcesses 10
        </IfModule>

    (heh - MaxProcesses isn't being respected either...) Of course, fcgid.conf is symlinked in mods-enabled.
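
    One thing worth checking, sketched here with illustrative values rather than recommendations: several mod_fcgid limits are applied per process class (per wrapper/vhost) rather than globally, and idle processes are only reaped when the idle scanner runs:

        <IfModule mod_fcgid.c>
          FcgidMaxProcesses          10    # global ceiling
          FcgidMaxProcessesPerClass  4     # ceiling per wrapper/vhost class
          FcgidMinProcessesPerClass  0     # allow a class to drain to zero
          FcgidIdleTimeout           20
          FcgidIdleScanInterval      30    # idle processes are only killed when this scan fires
          FcgidProcessLifeTime       3600
        </IfModule>

    If the per-class minimum stays at its default, a handful of processes per class will linger regardless of the idle timeout, which can look a lot like the limits being ignored.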

    Read the article

  • How have I locked myself out of my Ubuntu VPS?

    - by Sanoj
    I have an Ubuntu Server VPS (OpenVZ), and yesterday I installed php-fpm, but I guess something went wrong with the installation, because since then I cannot log in to the server over SSH with PuTTY or WinSCP. The message I get when connecting is Network error: Connection timed out. Immediately after the installation I was also not able to use emacs; I had to re-install it with apt-get install emacs. I have tried clearing the firewall and rebooting the server from my web-based "control panel", but it doesn't help. The commands I used to install PHP-fpm are from Installing PHP 5.3, Nginx And PHP-fpm On Ubuntu/Debian, and I guess it has something to do with these:

        cd /tmp
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/k/krb5/libkrb53_1.6.dfsg.4~beta1-5ubuntu2_i386.deb
        wget http://us.archive.ubuntu.com/ubuntu/pool/main/i/icu/libicu38_3.8-6ubuntu0.2_i386.deb
        sudo dpkg -i *.deb
        sudo echo "deb http://php53.dotdeb.org stable all" >> /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install php5-cli php5-common php5-suhosin
        sudo apt-get install php5-fpm php5-cgi

    The websites hosted on the server still work fine. Has anyone had the same experience, or does anyone know how this could happen? I guess I will have to re-install Ubuntu Server from the "control panel" now, but I would like to avoid this situation in the future. Finally, I have backups of everything, so nothing is lost if I have to re-install the machine.
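
    A minimal checklist to run from the provider's emergency/VNC console (not over SSH), assuming such console access is available, to see whether sshd is still listening and whether a firewall rule is dropping the connection:

        service ssh status               # is the daemon running at all?
        netstat -tlnp | grep ':22'       # is anything listening on port 22?
        iptables -L -n -v                # any DROP/REJECT rules added during the install?
        tail -n 50 /var/log/auth.log     # do the connection attempts even reach sshd?

    A timeout (rather than "connection refused") usually points at a firewall or at sshd not listening, rather than at an authentication problem.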

    Read the article

  • Passing PATH through sudo

    - by whitequark
    In short: how do I stop sudo from resetting PATH every time? I have some websites deployed on my server (Debian testing), written with Ruby on Rails. I use Mongrel+Nginx to host them, but one problem comes up whenever I need to restart Mongrel (e.g. after making some changes). All sites are checked into VCS (git, but that is not important) and have their owner and group set to my user, whereas Mongrel runs under the, huh, mongrel user, which is severely restricted in its rights. So Mongrel must be started as root (it can change UID automatically) or as mongrel. To manage Mongrel I use the mongrel_cluster gem, because it allows starting or stopping any number of Mongrel servers with a single command. But it needs the directory /var/lib/gems/1.8/bin to be in PATH: starting it by absolute path alone is not enough. Modifying PATH in root's .bashrc changed nothing, and tweaking sudo's env_reset and env_keep didn't help either. So the question: how do I add a directory to PATH, or keep the user's PATH, under sudo?
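
    A minimal sketch of the usual sudoers approach (edit with visudo; the gem path comes from the question, the rest of the PATH is illustrative):

        # Option 1: extend the PATH sudo builds for every command
        Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/gems/1.8/bin"

        # Option 2: keep the calling user's PATH instead of resetting it
        Defaults env_keep += "PATH"

    When secure_path is set it overrides anything coming from the environment or from root's .bashrc, which is why changing shell profiles has no visible effect on sudo.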

    Read the article

  • Apache 2.4, mod_proxy_fcgi not honouring .htaccess, workaround needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi to pass PHP through to php-fpm (this will be used in a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies PHP requests, the .htaccess is ignored. I know why it is happening; the question is how to work around it. How do I force Apache to treat a request for a PHP file as a request for a local file, and only then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions:

        1) "Use the Apache configuration instead of .htaccess" - a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).
        2) "Don't use .htaccess, it has performance/security/other issues" - well, how else would shared hosting customers control access and URL rewriting on their sites? Besides, if .htaccess were not a requirement I would simply use nginx.
        3) "Put the rewrite rule for the proxy inside of ..." - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
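
    One workaround worth sketching, with the backend address as a placeholder: newer Apache 2.4 releases (2.4.10 and later) let mod_proxy_fcgi be driven through a handler instead of a RewriteRule/ProxyPass URL match, which keeps normal per-directory .htaccess processing:

        <FilesMatch "\.php$">
            SetHandler "proxy:fcgi://127.0.0.1:9000"
            # or, for a socket-based php-fpm pool:
            # SetHandler "proxy:unix:/run/php-fpm/site.sock|fcgi://localhost/"
        </FilesMatch>

    Because the request is mapped to a local file first and only handed to the proxy at handler time, .htaccess rules in the file's directory still apply; on 2.4.7 this handler syntax is not available, so it would mean upgrading or backporting.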

    Read the article

  • Chrome caching 302 redirects

    - by Thermionix
    I have a PHP script which is used to rotate banner images on a site. Under Firefox/IE, page refreshes make another request and a different image is returned. Under Chrome, the request seems to be cached, and only opening the page in a new tab causes it to actually query the script. I believe this used to work in older versions of Chrome. I've tried a few different types of redirect codes, all with the same result. Any tips?

        <img class="banner" src="/inc/banner.php" alt="">

        ~$ cat /var/www/inc/banner.php
        <?php
        header("HTTP/1.1 302 Redirect");
        header("Cache-Control: max-age=0, no-cache, no-store, must-revalidate");
        //header('HTTP/1.1 307 Temporary Redirect');
        //header("expires: none");
        //header("expires: max");
        //header("Cache-Control: public");
        $folder = '../img/banner/';
        $exts = 'jpg jpeg png gif';
        $files = array();
        $i = -1;
        if ('' == $folder) $folder = './';
        $handle = opendir($folder);
        $exts = explode(' ', $exts);
        while (false !== ($file = readdir($handle))) {
            foreach($exts as $ext) { // for each extension check the extension
                if (preg_match('/\.'.$ext.'$/i', $file, $test)) { // faster than ereg, case insensitive
                    $files[] = $file; // it's good
                    ++$i;
                }
            }
        }
        closedir($handle); // We're not using it anymore
        mt_srand((double)microtime()*1000000); // seed for PHP < 4.2
        $rand = mt_rand(0, $i); // $i was incremented as we went along
        header('Location: '.$folder.$files[$rand]);
        flush();
        ?>

    curl output:

        ~$ curl -I -k https://example.net/inc/banner.php
        HTTP/1.1 302 Redirect
        Server: nginx/1.1.14
        Date: Fri, 24 Feb 2012 03:23:46 GMT
        Content-Type: text/html
        Connection: keep-alive
        X-Powered-By: PHP/5.3.10-1ubuntu1
        Cache-Control: max-age=0, no-cache, no-store, must-revalidate
        Location: ../img/banner/2.jpg
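
    A minimal workaround sketch, assuming the markup can be changed: if Chrome is reusing the cached redirect for an identical image URL, making each request URL unique sidesteps the cache entirely, and a 303 plus an Expires header in the past gives the browser less excuse to cache the redirect:

        <img class="banner" src="/inc/banner.php?r=<?php echo mt_rand(); ?>" alt="">

        // in banner.php, instead of the 302:
        header('HTTP/1.1 303 See Other');
        header('Cache-Control: no-cache, no-store, must-revalidate');
        header('Expires: Thu, 01 Jan 1970 00:00:00 GMT');

    This is a sketch rather than a guaranteed fix; the query-string trick works regardless of how a particular Chrome version treats 302 responses.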

    Read the article

  • postfix (for sending mail only) multiple domain setup

    - by seanl
    I have the following problem: I have a CentOS 5.4 VPS hosting a few nginx sites (some static, some CakePHP). I would like to be able to send email from each site's contact page through Postfix to my Google Apps hosted email (a different account for each site), so that Apps can then send an auto email to the person filling in the contact form, etc. I have a bare-bones Postfix installation with the following added to the main.cf config file, from using this guide:

        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps

    (both of these files have been converted into .db files using postmap). I have configured DNS correctly for each site and set up SPF records. (I'm aware reverse DNS will still reference my actual hostname, not the domain name, and may cause a spam issue, but one thing at a time.) I can telnet localhost and helo localhost, and send a command-line email from an address in virtual_alias_domains to an address in the virtual_alias_maps file; it seems to send without giving an error, but it is delivered to my local Linux account, not the email address specified. My question is: am I approaching this the wrong way in terms of the virtual alias mapping, or is this even possible to do in the manner I'm trying? Any help is greatly appreciated, thanks. My postconf -n output looks like this:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = myactual hostname
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        unknown_local_recipient_reject_code = 550
        virtual_alias_domains = hash:/etc/postfix/virtual_alias_domains
        virtual_alias_maps = hash:/etc/postfix/virtual_alias_maps
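
    A minimal sketch of what the two lookup tables usually look like for this kind of catch-all forwarding, with example.com and the Gmail address standing in as placeholders; the Postfix documentation also warns that a domain should appear in only one address class, so a domain listed in virtual_alias_domains should not be in mydestination as well:

        # /etc/postfix/virtual_alias_domains
        example.com     OK

        # /etc/postfix/virtual_alias_maps
        @example.com    yourname@gmail.com

        # rebuild the .db files and reload after any change
        postmap /etc/postfix/virtual_alias_domains
        postmap /etc/postfix/virtual_alias_maps
        postfix reload

    Testing with a recipient written exactly as user@example.com (rather than user@hostname) helps confirm whether the alias lookup is being hit at all; mail addressed to the bare hostname falls under mydestination and is delivered to the local account.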

    Read the article

  • How should I manage VPS email?

    - by Xeoncross
    I have been slowly learning how to run a Linux VPS for a while now. Since I build websites I'm confident with running and securing a web server like nginx... or at least there haven't been any casualties yet. However, email scares me. Almost all websites require email to communicate with users. Most of the time email is only needed on my sites during registration, as a means of verification. I hardly ever need to accept incoming mail back. Nevertheless, my lack of understanding of how email servers can be abused worries me. Not only do you need to secure email servers - you also have to prove to the world that your emails are legitimate and constantly fight against being blacklisted. Ensuring my emails' 'good name' is not something I want to devote my life to. What should someone like me do to send emails from my VPS? Should I look for a company to send email through that can worry about this for me? Should I just use Google Apps until my sites are large enough to worry about it? Or is all this just ignorant fear, and running your own email server (that actually works) really is easy?

    Read the article

  • Long access time for static web page on virtual machine

    - by Karol
    My setup: Windows 7 on a workstation that I use at work (with a domain) and at home (no domain), and a virtual machine (VMware) that runs Arch Linux (I will call it just "Linux") with its network interface in bridged mode. Linux serves web pages with Nginx. The IP address of the Linux machine is 192.168.0.16 and is added to C:\windows\system32\drivers\etc\hosts:

        192.168.0.16 bridged bri

    The IP address of the Windows workstation is added to /etc/hosts:

        192.168.0.10 workstation

    I can add more details to my setup description (I am not sure what is relevant). The question: often (but not always) it takes a long time for a web browser (Firefox) to open a static web page served by Linux. I am sure it is not a performance issue. To be more specific: it takes about 20 seconds to resolve(?) the address http://bridged in the browser. Additionally, I have just installed a Samba service and noticed a similar problem, so it is not specific to the browser and HTTP. Initial access to Samba shares also takes a long time.
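
    A small sketch for narrowing down where those ~20 seconds go, assuming curl is available wherever the delay is observed (on Windows, replace /dev/null with NUL): curl can report the name-lookup and connect phases separately, which distinguishes a name-resolution stall from a slow TCP connect or a slow server:

        curl -o /dev/null -s -w "dns: %{time_namelookup}s  connect: %{time_connect}s  total: %{time_total}s\n" http://bridged/

    If time_namelookup accounts for nearly all of the delay, the client is probably trying other resolution methods (DNS suffixes, NetBIOS/LLMNR, IPv6) before falling back to the hosts file entry; if the connect phase is slow instead, the bridged interface is the first suspect.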

    Read the article

  • Running WordPress and Ghost on Apache with mod_proxy

    - by Jack Perry
    I currently have three WordPress sites hosted on Apache with virtual host files to direct the right domain to the right DocumentRoot. Ghost (node.js) just came out and I've wanted to tinker with it and just play around on one of my spare domains. I'm not really interested in moving over to nginx, so I'm trying to get Ghost working on Apache via mod_proxy. I've managed to get Ghost working on my spare domain, but I think there's a problem with my virtual host files, as all of my other domains start pointing to Ghost as well. Here are two virtual host files, one for my main WordPress site that works fine, and the second for Ghost. Domains removed and replaced with DOMAIN and DOMAIN2.

    DOMAIN

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName DOMAIN.com
            ServerAlias www.DOMAIN.com
            DocumentRoot /var/www/DOMAIN.com/public_html
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/DOMAIN.com/public_html>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    DOMAIN2

        <VirtualHost IP:80>
            ServerAdmin EMAIL
            ServerName DOMAIN2.com
            ServerAlias www.DOMAIN2.com
            ProxyPreserveHost on
            ProxyPass / http://IP:2368/
        </VirtualHost>

    I get the feeling I'm not working with virtual hosts or mod_proxy right, and Google-fu has let me down after many suggested attempts. Any ideas? Thanks!
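
    A likely culprit, sketched under the assumption that all sites share one IP address: mixing <VirtualHost *:80> and <VirtualHost IP:80> on the same address makes Apache treat them as different address sets, so name-based matching can break down and one vhost ends up catching everything. Keeping every vhost on the same *:80 form (and adding ProxyPassReverse for the proxied one) usually restores per-domain routing:

        <VirtualHost *:80>
            ServerName DOMAIN2.com
            ServerAlias www.DOMAIN2.com
            ProxyPreserveHost On
            ProxyPass        / http://127.0.0.1:2368/
            ProxyPassReverse / http://127.0.0.1:2368/
        </VirtualHost>

    On Apache 2.2 this also requires a matching NameVirtualHost *:80 line; 2.4 does name-based matching automatically.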

    Read the article

  • SFTP ChRoot results in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access to a sub-folder. For this I've decided to use a chroot with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/). So far I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this:

        -rw-r--r--  1 root root   27 2012-02-01 14:23 index.html
        -rw-r--r--  1 root root   21 2012-02-01 14:24 info.php
        drwx------ 15 root root 4096 2012-02-25 00:31 magento

    As you can see, I've chowned the magento folder to root:root, since I want to jail the user in there - and everything else too, by the way. Inside the magento folder I chowned everything to sio2104:magento so they can access what they need. Finally, I've added this to the sshd_config file:

        #Subsystem sftp /usr/lib/openssh/sftp-server
        Subsystem sftp internal-sftp
        Match Group magento
          ChrootDirectory /usr/share/nginx/www/magento
          ForceCommand internal-sftp
          AllowTCPForwarding no
          X11Forwarding no
          PasswordAuthentication yes
        #UsePAM yes

    And the result is... well, I can enter my login and password, and then it all ends with a "broken pipe" error:

        $ sftp [email protected]
        [....some debug....]
        [email protected]'s password:
        debug1: Authentication succeeded (password).
        Authenticated to 10.20.0.50 ([10.20.0.50]:22).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        Write failed: Broken pipe
        Connection closed

    Verbose mode gives nothing to help. Does anyone have an idea of what I've done wrong? If I log in with ssh or sftp as my personal user, everything works fine.
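
    A sketch of the usual fix, using the paths from the question: sshd refuses to chroot into a directory unless every component of the path is owned by root and not writable by group or others, and the jail directory itself must still be traversable by the user, so a root-owned 0700 magento directory kills the session right after authentication (which shows up client-side as a broken pipe). The common pattern is a root-owned, world-readable jail with a user-owned subdirectory for writing:

        chown root:root /usr/share/nginx/www/magento
        chmod 755 /usr/share/nginx/www/magento
        mkdir -p /usr/share/nginx/www/magento/files   # "files" is just an example name
        chown sio2104:magento /usr/share/nginx/www/magento/files

    The Match block also needs to sit at the end of sshd_config (everything after Match is conditional), and sshd must be reloaded after the change; /var/log/auth.log on the server usually names the exact permission complaint.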

    Read the article

  • Microsoft, Hotmail, Live, MSN, Outlook: unable to send emails, and no support received from Microsoft in the 3 months we have been asking

    - by bombastic
    OK, this is something unbelievable. We have a website where users sign up and receive links to confirm they signed up, BUT:

        1 - Microsoft blocked our IP (no one with a Microsoft email account can receive our emails)
        2 - We tried contacting Microsoft by submitting the detailed form about our problem
        3 - We posted 3 times in their community about our problem
        4 - We tweeted at them about our problem
        5 - We tried to find a telephone support number (the few there are aren't helping at all)

    Do you think we solved it? The answer is NO :/ We are still unable to send emails from our IP to Microsoft email accounts, and have been for 3 months now. Our emails are fine; we checked all the email headers following Microsoft's guidelines, but it seems that is not enough. Checking our IP reputation, everything seems OK, and indeed we can send email easily to any other provider - Gmail, Yahoo, etc. Do you know any other way to try to get help? Full error from Microsoft:

        host mx1.hotmail.com[65.55.37.120] said: 550 SC-001 (COL0-MC4-F28)
        Unfortunately, messages from 94.23.***** weren't sent. Please contact
        your Internet service provider since part of their network is on our
        block list. You can also refer your provider to
        http://mail.live.com/mail/troubleshooting.aspx#errors.
        (in reply to MAIL FROM command)

    We are running a Virtual Private Server (so no shared hosting), using nginx too.

    Read the article

  • Linux/Unix in Windows

    - by Dmitriy Nagirnyak
    Hi, what would be the best way to get a full-blown Unix/Linux bash inside Windows? I don't mean a virtual machine, but rather just the terminal with the NTFS drives mounted. This way I could use the power of Unix/Linux while still being on Windows. The things I want to be able to do from the terminal:

        Package management (apt-get in Debian).
        SSH.
        File operations (including grub and similar).
        Run a web server (Apache, nginx) for testing purposes.
        Easy to use: start the terminal - Linux is on; end the terminal - Linux is shut down.
        Would be nice to be able to copy-paste from Windows into the terminal and vice versa.

    This really feels like a separate OS, and I realize that a VM would probably be the best thing. But I guess it should be possible to have a lighter installation. THE NOTE: I cannot just use Linux, because I still need to do development on Windows. Also, I am a Linux noobie - just getting started with it - so sorry if I'm asking something obvious/stupid. Thanks, Dmitriy.

    Read the article

  • How to debug slow queries in Django+Postgres

    - by lacker
    My database queries from Django are starting to take 1-2 seconds and I'm having trouble figuring out why. Not too big a site, about 1-2 requests per second (that hit Django; static files are just served from nginx.) The thing that confuses me is, I can replicate the slowness in the Django shell using debug mode. But when I issue the exact same queries at an SQL prompt they are fast. It takes about a second for a query to return, but when I check connection.queries it reports the time as under 10 ms. Here's an example (from the Django shell):

        >>> p = PlayerData.objects.get(uid="100000521952372")
        >>> a = time.time(); p.save(); print time.time() - a
        1.96812295914
        >>> for d in connection.queries: print d["time"]
        ...
        0.002
        0.000
        0.000

    How can I figure out where this extra time is being spent? I'm using Apache+mod_wsgi in daemon mode, but this happens with just the Django shell as well, so I figure it is not Apache-related.
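
    A quick way to see where the wall-clock time goes, sketched with the standard-library profiler (run in the same Django shell as above, with PlayerData already imported and the uid taken from the question):

        import cProfile
        # assumes the same shell session as above, so PlayerData is available
        p = PlayerData.objects.get(uid="100000521952372")
        cProfile.run('p.save()', sort='cumulative')

    If the ORM query itself is as fast as connection.queries claims, the profile usually points at something else triggered by save() - signal handlers, extra lookups, or DEBUG-mode query logging overhead - rather than at PostgreSQL.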

    Read the article

  • APC uptime 0 because of Fast

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated Oct 31, 2012 03:33 AM, CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed with nginx as a reverse proxy, and everything is working just fine. I installed APC for caching, but the issue is that its uptime is always 0: it restarts every 15 seconds or so. I have looked everywhere and can't find a solution. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google, I found that this could be related to Apache running with FCGId instead of FastCGI. Plesk/Apache is using this config file: /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php, whose content is:

        <IfModule mod_fcgid.c>
          <Files ~ (\.php)>
            SetHandler fcgid-script
            FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
            Options +ExecCGI
            allow from all
          </Files>

    Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to mention that even though the uptime stays below one minute, APC is doing a pretty good job of caching (92% hits).
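
    For context, a hedged sketch of the mechanism usually behind this symptom: under mod_fcgid every php-cgi worker carries its own APC cache, so the "uptime" shown is just the age of whichever worker answered, and workers are recycled once they hit their request limit. Raising the recycle limits keeps each cache alive longer (values below are illustrative, and on Plesk such changes belong in a custom template copy rather than the generated file):

        <IfModule mod_fcgid.c>
          FcgidMaxRequestsPerProcess 5000
          # php-cgi also exits on its own counter, so keep it in sync:
          FcgidInitialEnv PHP_FCGI_MAX_REQUESTS 5000
        </IfModule>

    An opcode cache that is genuinely shared across all PHP workers generally needs php-fpm (or mod_php) rather than a per-process CGI/FastCGI setup.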

    Read the article

  • x86_64 and memory issues

    - by Valery
    Recently I switched from 32-bit Ubuntu to the 64-bit version, and now I am experiencing some problems: every application takes about twice as much memory, and some take even more. For example, sshd on the new server:

        root    6608  0.0  0.0  67972 2912 ?     Ss 14:43 0:00 sshd: deploy [priv]
        deploy  6616  0.0  0.0  67972 1724 ?     S  14:43 0:00 sshd: deploy@pts/4
        root   20892  0.0  0.0  50916 1160 ?     Ss 15:53 0:00 /usr/sbin/sshd
        root   21170  0.0  0.0  67972 2912 ?     Ss 15:56 0:00 sshd: deploy [priv]
        deploy 21173  0.0  0.0  67972 1728 ?     S  15:56 0:00 sshd: deploy@pts/0
        root   23802  0.0  0.0  67972 2912 ?     Ss 16:08 0:00 sshd: deploy [priv]
        deploy 23804  0.0  0.0  67972 1724 ?     S  16:08 0:00 sshd: deploy@pts/1
        root   24570  0.0  0.0  67972 2908 ?     Ss 12:45 0:00 sshd: deploy [priv]
        deploy 24573  0.0  0.0  68112 1804 ?     S  12:45 0:00 sshd: deploy@pts/3
        deploy 25014  0.0  0.0   5168  852 pts/0 S+ 16:13 0:00 grep ssh

    and the same on the old server:

        root    4867  0.0  0.0  5312 1028 ?     Ss Mar23 0:00 /usr/sbin/sshd
        root   23753  0.0  0.0  8052 2556 ?     Ss 16:15 0:00 sshd: deploy [priv]
        deploy 23755  0.0  0.0  8052 1524 ?     S  16:15 0:00 sshd: deploy@pts/0
        deploy 23770  0.0  0.0  3004  748 pts/0 D+ 16:15 0:00 grep ssh

    The same problem occurs with postfix, nginx and some other applications.

    Read the article

  • fcgiwrap listening to a unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to give FastCGI access to it. I want the FastCGI protocol to run over a unix socket file. To start the fcgiwrap daemon I run (as a daemontools service):

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    The problem is that my web server runs as the user www-data, not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git and read-only for everyone else. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of the unix socket it creates, which is quite annoying. Moreover, if I manage to make the socket file exist before running fcgiwrap (which is quite difficult, given that I did not find any shell command that creates a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I have found is to start the server like this:

        rm -f fastcgi.sock  # Ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks
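
    One commonly suggested alternative, sketched here as a daemontools run script and assuming spawn-fcgi is installed: let spawn-fcgi create the socket, since it has options for the socket's owner, group and mode, and have it exec fcgiwrap on that socket:

        #!/bin/sh
        exec spawn-fcgi -n \
            -s "$PWD/fastcgi.sock" -M 0660 -U git -G www-data \
            -u git -g git \
            -- /usr/sbin/fcgiwrap

    Here -U/-G/-M control the socket's user, group and permissions, -u/-g control the user the wrapped process runs as, and -n keeps it in the foreground for daemontools; the path to the fcgiwrap binary may differ per distribution.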

    Read the article

  • HAProxy is caching the forwarding?

    - by shadow_of__soul
    I'm trying to set up a server structure for an application I'm building in Node.js with socket.io. My setup is: an HAProxy frontend that forwards to apache2 as the default backend (or nginx; it's Apache in this local test), and to the node.js app if the URL contains socket.io in the request AND a given domain name. I have something like:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        frontend all 0.0.0.0:80
            timeout client 5000
            default_backend www_backend
            acl is_soio url_dom(host) -i socket.io   #if the request contains socket.io
            acl is_chat hdr_dom(host) -i chaturl     #if the request comes from chaturl.com
            use_backend chat_backend if is_chat is_soio

        backend www_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout server 5000
            timeout connect 4000
            server server1 localhost:6060 weight 1 maxconn 1024 check   #forwards to apache2

        backend chat_backend
            balance roundrobin
            option forwardfor   # This sets X-Forwarded-For
            timeout queue 50000
            timeout server 50000
            timeout connect 50000
            server server1 localhost:5558 weight 1 maxconn 1024 check   #forward to node.js app

    The problem comes when I make a request to something like www.chaturl.com/index.html: it loads perfectly, but it fails to load the socket.io files (www.chaturl.com/socket.io/socket.io.js). Why does it go to Apache when it should go to the node.js app that serves those files? The weird thing is that if I access the socket.io file directly and refresh a few times, it loads, so I suppose HAProxy is "caching" the forwarding decision for the client when the first request reaches the Apache server. Any suggestions on how this can be solved, or what I can try or look into?
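
    Two things worth sketching, both hedged since they depend on the HAProxy version in use: url_dom(host) is not a match on the request path, so the socket.io ACL may simply never match, and 1.4-era keep-alive setups only evaluate routing on the first request of a connection unless told otherwise, which looks exactly like a "cached" forwarding decision:

        frontend all 0.0.0.0:80
            option http-server-close              # re-evaluate routing per request, not per connection
            acl is_chat hdr_dom(host) -i chaturl
            acl is_soio path_beg -i /socket.io    # match the URL path, not the Host header
            use_backend chat_backend if is_chat is_soio
            default_backend www_backend

    path_beg and option http-server-close are standard HAProxy keywords; the rest of the frontend is carried over from the question.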

    Read the article

  • Postfix mail forwarder

    - by Andrew
    Hello, I just bought a dedicated server and I'm trying to set up a web server on it. The server runs Ubuntu 10.04. I installed ftp, nginx, php, mysql and bind, and now I have to install a mail server. For the mail server I'm using Postfix, because it's the recommended one on Ubuntu. I installed Postfix with apt-get install postfix, but PHP's mail() function wasn't working. After a little debugging I found a way to solve this: I created an empty file /etc/postfix/main.cf and it worked. I do have an MX record like this:

        mail         5M IN A    xxx.xxx.xxx.xxx
        example.com. 5M IN MX 1 mail.example.com.

    After that I wanted to forward all e-mails to my Gmail address. I googled and found Virtual Domain Host Forwarding in the official docs, so I added these lines to main.cf:

        virtual_alias_domains = example.com
        virtual_alias_maps = hash:/etc/postfix/virtual

    I created the map file and placed this line in it:

        @example.com [email protected]

    Then I ran in the terminal:

        postmap /etc/postfix/virtual
        postfix reload

    The result: I can send e-mail from PHP with the mail() function, but when I send an email to [email protected], it isn't forwarded to my Gmail. How do I solve this? -Andrew I also tried this, but it didn't work: http://rackerhacker.com/2006/12/26/postfix-virtual-mailboxes-forwarding-externally/ Update: it works now! But I don't know what the problem was. I just installed "Mail Server" from Tasksel and after that it worked fine. Can anyone tell me what Tasksel installed or changed?

    Read the article

  • Unexplained cache RAM drops on Linux machine

    - by FunkyChicken
    I run a 64-bit CentOS 5.7 machine with 24 GB of RAM, running kernel 2.6.18-274.12.1.el5. This machine runs only Nginx, php-fpm and Xcache as extra applications. For about 3 weeks the memory behavior of this machine has changed and I cannot explain why. There are no crons running which flush anything like this. There are also no large numbers of files being deleted or changed during these drops. The 'cached' memory gets dropped about every few hours, but it's never a set gap between flushes, which indicates to me that some bottleneck gets reached instead. It also always seems to happen when total memory usage gets to about 18 GB, but again, not always exactly 18 GB. This is a graph of my memory usage: as you can see in the graph, the 'buffers' always stay more or less the same; it is mainly the 'cache' that gets dropped. Running vmstat -m, I have captured the memory usage just before and just after a memory drop. The output is here: http://pastebin.com/diff.php?i=hJqZqztm - 'old version' being before, 'new version' being after a drop. About 3 weeks ago my server crashed during a heavy DDoS attack; after I rebooted the machine this odd behavior started. I have checked a bunch of logs, restarted the machine again, and cannot find any indication of what changed. During these 'cache' memory drops, my inode usage drops at the same time. Does anyone have any idea what might be causing this behavior? Clearly my RAM isn't full, so I am curious why this could be happening.

    Read the article

  • High load average due to high system cpu load (%sys)

    - by Nick
    We have a server hosting a high-traffic website. Recently we moved from a 2 x 4-core server (8 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 5.x, to a 2 x 4-core server (16 cores in /proc/cpuinfo), 32 GB RAM, running CentOS 6.3. The server runs nginx as a proxy, a MySQL server and sphinx-search. Traffic is high, but the MySQL and sphinx-search databases are relatively small, and usually everything works blazing fast. Today the server experienced a load average of 100++. Looking at top and sar, we noticed that system CPU time (%sys) is very high - 50 to 70%. Disk utilization was less than 1%. We tried rebooting, but the problem persisted after the reboot. At any moment the server had at least 3-4 GB of free RAM. The only message shown by dmesg was "possible SYN flooding on port 80. Sending cookies.". Here is a snippet of sar:

        11:00:01   CPU   %user   %nice   %system   %iowait   %steal   %idle
        11:10:01   all   21.60   0.00    66.38     0.03      0.00     11.99

    We know this is a traffic issue, but we do not know how to proceed and where to look for a solution. Is there a way to find out where exactly those "66.38%" are being spent? Any suggestions would be appreciated.
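
    A couple of stock tools for attributing system CPU time, sketched under the assumption that they are installed (both ship in the standard CentOS 6 repositories):

        mpstat -P ALL 5            # is %sys spread across cores or pinned to the ones handling NIC interrupts?
        perf top                   # live view of the kernel functions burning the cycles
        netstat -s | grep -i syn   # SYN/backlog counters, given the "possible SYN flooding" dmesg message

    If perf top shows most of the time in network, conntrack or softirq paths while the dmesg warning keeps appearing, that points at connection-handling overhead (or an outright SYN flood) rather than at nginx, MySQL or Sphinx themselves.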

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high traffic website running with Drupal and Apache: five web servers behind a Varnish server doing the load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl:

        director balancer round-robin {
          { .backend = web1; }
          { .backend = web2; }
          { .backend = web3; }
          { .backend = web4; }
          { .backend = web5; }
        }

    Now I'm working on a new Django project that will be a new section of this site running on example.com/new-section. After checking the documentation I found I can do something like this:

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newbackend;
          } else {
            set req.backend = default;
          }
        }

    That is, using a different backend for a subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) with my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this?

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.director = newdirector;
          } else {
            set req.director = balancer;
          }
        }

    All suggestions welcome. Thanks!
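
    A sketch of how this is usually written in Varnish 2.x/3.x VCL, where a director is selected exactly like a backend (there is no separate req.director variable); the Django backend names and addresses are placeholders:

        backend django1 { .host = "10.0.0.21"; .port = "8000"; }
        backend django2 { .host = "10.0.0.22"; .port = "8000"; }

        director newdirector round-robin {
          { .backend = django1; }
          { .backend = django2; }
        }

        sub vcl_recv {
          if (req.url ~ "^/new-section/") {
            set req.backend = newdirector;
          } else {
            set req.backend = balancer;
          }
        }

    Assigning a director to req.backend is the documented way to combine per-URL routing with load balancing in these Varnish versions.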

    Read the article

  • What am I doing wrong in my config for MySql?

    - by Knight Hawk3
    When I load my my.cnf with the config at the bottom, MySQL fails to start and prints no errors. I am running Arch Linux (updated) with the latest MySQL (5.5) and the latest nginx (well, the latest in the repository; not sure how to check - I only installed it today). I will give you any info you ask for. Thanks for helping!

        # The following options will be passed to all MySQL clients
        [client]
        #password = your_password
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        # Here follows entries for some specific programs

        # The MySQL server
        [mysqld]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        skip-locking
        key_buffer = 16K
        max_allowed_packet = 1M
        table_cache = 4
        sort_buffer_size = 64K
        read_buffer_size = 256K
        read_rnd_buffer_size = 256K
        net_buffer_length = 2K
        thread_stack = 64K

        # Don't listen on a TCP/IP port at all. This can be a security enhancement,
        # if all processes that need to connect to mysqld run on the same host.
        # All interaction with mysqld must be made via Unix sockets or named pipes.
        # Note that using this option without enabling named pipes on Windows
        # (using the "enable-named-pipe" option) will render mysqld useless!
        #
        #skip-networking
        server-id = 1

        # Uncomment the following if you want to log updates
        #log-bin=mysql-bin

        # Uncomment the following if you are NOT using BDB tables
        skip-bdb

        # Uncomment the following if you are using InnoDB tables
        #innodb_data_home_dir = /var/lib/mysql/
        #innodb_data_file_path = ibdata1:10M:autoextend
        #innodb_log_group_home_dir = /var/lib/mysql/
        #innodb_log_arch_dir = /var/lib/mysql/
        # You can set .._buffer_pool_size up to 50 - 80 %
        # of RAM but beware of setting memory usage too high
        #innodb_buffer_pool_size = 16M
        #innodb_additional_mem_pool_size = 2M
        # Set .._log_file_size to 25 % of buffer pool size
        #innodb_log_file_size = 5M
        #innodb_log_buffer_size = 8M
        #innodb_flush_log_at_trx_commit = 1
        #innodb_lock_wait_timeout = 50
        skip-innodb

        [mysqldump]
        quick
        max_allowed_packet = 16M

        [mysql]
        no-auto-rehash
        # Remove the next comment character if you are not familiar with SQL
        #safe-updates

        [isamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [myisamchk]
        key_buffer = 1M
        sort_buffer_size = 1M

        [mysqlhotcopy]
        interactive-timeout

    So what is my silly error?
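
    A hedged sketch of how to surface the actual error, plus the two options in this config most often blamed for silent 5.5 startup failures (both date from 5.0-era templates; verify the exact substitutions against the 5.5 documentation):

        # run the daemon in the foreground so option errors are printed
        mysqld --user=mysql
        # or check the error log afterwards
        tail -n 50 /var/lib/mysql/*.err

        # in my.cnf:
        #   skip-locking  ->  skip-external-locking   (the old name was removed)
        #   skip-bdb      ->  delete it               (the BDB engine no longer exists in 5.5)

    MySQL 5.5 aborts on unknown options, and when it is started through an init script the complaint often lands only in the .err file rather than on screen, which matches the "fails to start and prints no errors" symptom.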

    Read the article

  • Designing a software based load balancer

    - by Kishore pandey
    Hello to all Server Fault users. I am new to this website but have constantly been using its sister site, Stack Overflow. To begin with, I would like to design a load balancer for the organization I am working for. As I am very new to the whole idea of load balancing and networks, I am finding it very difficult to start my project. I did a lot of research on existing load balancers and found some (HAProxy, nginx) that could solve my problems, but I am still in a dilemma over whether they can meet the following requirements:

        The client and server in my architecture are distributed.
        The load balancer should take care of the firewall.
        The LB server should balance the load among all servers present in the WWW cloud.
        The LB server should have some sort of configuration file with which it is possible to configure the servers.
        Heartbeat: a way to check whether any server is down; if a server is down, the request should be passed to some other server.
        Various load balancing algorithms for the incoming requests.
        Easy error handling.
        It should be reasonably possible to prioritize incoming requests.

    Is there any load balancer solution already available on the market that satisfies these requirements? If not, is there any base code available from which I could develop my own load balancer? And if not, where should I start from scratch? I am practically new to everything. Any help from a load balancer expert is very much appreciated. Thanks a ton in advance. Cheers and regards, Kishore

    Read the article

  • AWS VPC ELB vs. Custom Load Balancing

    - by CP510
    So I'm wondering if this is a good idea. I have an Amazon AWS VPC set up with public and private subnets, so I already have the Internet Gateway and NAT. I was going to set up all my web servers (Apache2 instances) and DB servers in the private subnet and use a load balancer/reverse proxy to pick up requests and send them to the cluster of servers in the private subnet. My question, then: are Amazon's ELBs a good fit for this, or is it better to set up my own custom instance to handle the public requests and forward them to the private subnet using nginx or Pound? I like the second option just for the sake of having an instance I can log into and check, as well as for taking advantage of caching and fail2ban DDoS prevention, and possibly using failsafes to redirect traffic. But I have no experience with ELBs, so I thought I'd ask your opinions. Also, if you have an opinion on this as well: would the second option allow me to have only 1 public IP address and route SSH connections to the respective instances via port numbers? Thanks in advance!

    Read the article
