Search Results

Search found 2761 results on 111 pages for 'mod fastcgi'.


  • OS X Apache giving 503 error for anything in /api directory

    - by WilliamMayor
    I have a locally hosted website that uses Smarty templates, I'm trying to get started on building an API for the site. I've used virtualhost.sh to create a local virtual host for this and other sites. I've discovered that if I put a directory called api at the root of any of these virtual hosts I will get a 503 error when I try to access anything inside. I am using mod-rewrite but so far only to append a .php extension when needed. Here are the error logs for a request: [Thu Feb 09 13:42:37 2012] [error] proxy: HTTP: disabled connection for (localhost) [Thu Feb 09 13:49:06 2012] [error] (61)Connection refused: proxy: HTTP: attempt to connect to [fe80::1]:8080 (localhost) failed [Thu Feb 09 13:49:06 2012] [error] ap_proxy_connect_backend disabling worker for (localhost) The middle line gave me a clue to look in my hosts file because why would a request go to [fe80::1]:8080? I commented out that line and tried again, this time the error was in connecting to the standard 127.0.0.1 localhost. I have concluded that perhaps there is some config file somewhere picking up the underlying request of localhost/api and pointing it somewhere other than my virtual host. At this point my ability to fix the problem fails me. Can anyone help?
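    Both log lines come from mod_proxy, so something in the Apache configuration (a global include or another vhost) is evidently proxying /api to a backend on port 8080 that is not running. A rough way to hunt that down and, if needed, exclude /api from proxying inside the affected virtual host is sketched below; the grep paths and the port are assumptions based on the error log, not known facts about this setup.

        # find whatever is proxying /api (config locations vary per Apache install)
        grep -RiE "proxypass|rewriterule.*\[p" /etc/apache2/ /private/etc/apache2/ 2>/dev/null

        # inside the affected <VirtualHost>, an exclusion must appear before the general rule
        ProxyPass /api !
        # ...any existing ProxyPass /something http://localhost:8080/ lines stay below it

    If the grep turns up a bundled include adding proxy directives, commenting it out for this host may be cleaner than the exclusion.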

    Read the article

  • htaccess rewrite rules in Nginx: setting the rewrite path

    - by ct2k7
    I have an htaccess file I'm trying to convert into an nginx config file. Here's my htaccess file. RewriteEngine on RewriteCond %{SCRIPT_FILENAME} !-f RewriteCond %{SCRIPT_FILENAME} !-d RewriteRule !\.(jpg|css|js|gif|png)$ public/ [L] RewriteRule !\.(jpg|css|js|gif|png)$ public/index.php?url=$1 And the rules I have in my nginx config file: location / { if ($request_uri !~ "-f"){ rewrite !\.(jpg|css|js|gif|png)$ public/ break; } rewrite !\.(jpg|css|js|gif|png)$ public/index.php?url=$1; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { # Move to the @missing part when the file doesn't exist try_files $uri @missing; # Fix for server variables that behave differently under nginx/$ fastcgi_split_path_info ^(.+\.php)(/.+)$; # Include the standard fastcgi_params file included with nginx include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_index index.php; # Pass to upstream PHP-FPM; This must match whater you name you$ #fastcgi_pass phpfpm; fastcgi_pass 127.0.0.1:9000; } location @missing { rewrite ^(.*)$ public/index.php?url=$1 break; } However, when I hit /, I get a 403 Forbidden, but I can get to /public/index.php, thus the rewrite isn't working. Any ideas on what I'm doing wrong?
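    For comparison, the usual nginx idiom for "send anything that isn't a real file to the front controller" avoids if/rewrite entirely and uses try_files. A minimal sketch, assuming public/index.php under the site root is the front controller (as in the rules above); note the fallback passes the full URI, so the PHP side may need to strip the leading slash compared to the Apache $1 capture.

        location / {
            # serve real files and directories first, otherwise hand the URI to the front controller
            try_files $uri $uri/ /public/index.php?url=$uri&$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_index index.php;
            fastcgi_pass 127.0.0.1:9000;
        }

    The 403 on / often just means nginx tried to produce a directory listing because nothing fell through to PHP; an explicit try_files fallback sidesteps that.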

    Read the article

  • Apache Reverse proxy for intranet and other integrated application on intranet

    - by user1433448
    I'm trying to configure a reverse proxy (SSL) with Apache 2.2 on Debian Squeeze, but I have some problems, especially with absolute paths and with HTTPS. I'll try to detail what I have done and what I'm trying to configure. I have a Debian Squeeze server with Apache 2.2 + mod_proxy_html, set up with: # apt-get install libapache2-mod-proxy-html libxml2-dev # a2enmod proxy # a2enmod proxy_http # a2enmod proxy_html # a2enmod headers After that I configured a virtual host with: reverse_proxy_ssl.conf I'm trying to allow access to our intranet from the internet through a reverse proxy (an Apache server located in the DMZ). With this configuration domain.com/intranet works correctly and we can access the intranet, but we have one problem: from domain.com/intranet we need to use another internal application that is called from the intranet with an absolute path ( https://192.168.10.25/application/ ), and from the internet it appears to try to access it via the internal IP, so this link is incorrect from the external side. We only need to access, from the intranet, multiple internal applications that are on another server, and we would like to restrict access from the internet to a minimum. All the applications that are on the same server as the intranet are working. The second problem is with HTTPS and the reverse proxy: our firewall reports some errors with packets (not valid packets), although HTTPS seems to work. What can I do to solve these problems (the absolute path and the SSL problem)? Thanks
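    For the absolute-URL problem, mod_proxy_html exists precisely to rewrite internal addresses embedded in the proxied HTML, so the second application can be exposed under its own public path and the hard-coded links mapped onto it. A rough sketch; only 192.168.10.25 is taken from the question, everything else (the intranet backend placeholder, the public paths) is illustrative:

        # inside the SSL virtual host on the DMZ proxy
        SSLProxyEngine On
        ProxyRequests Off

        ProxyPass        /intranet/    http://INTRANET_SERVER/intranet/
        ProxyPassReverse /intranet/    http://INTRANET_SERVER/intranet/

        # publish the second application under a public path of its own
        ProxyPass        /application/ https://192.168.10.25/application/
        ProxyPassReverse /application/ https://192.168.10.25/application/

        <Location /intranet/>
            ProxyHTMLEnable On
            ProxyHTMLExtended On
            # rewrite hard-coded internal links in the returned HTML
            ProxyHTMLURLMap https://192.168.10.25/application/ /application/
        </Location>

    ProxyPassReverse only fixes redirect headers; links written into the HTML itself are what ProxyHTMLURLMap is for.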

    Read the article

  • php5-mysqlnd on debian wheezy/sid?

    - by Joseph
    I am trying to install php5-mysqlnd on a fresh install of Wheezy (/etc/debian_version refers to it as wheezy/sid) and I'm having a problem: root@debian:/var/www/lottery1# apt-get install php5-mysqlnd Reading package lists... Done Building dependency tree Reading state information... Done php5-mysqlnd is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y Setting up php5-mysqlnd (5.4.0-3) ... ucfr: Attempt from package php5-mysqlnd to take /etc/php5/mods-available/mysql.ini away from package php5-mysql ucfr: Aborting. dpkg: error processing php5-mysqlnd (--configure): subprocess installed post-installation script returned error exit status 4 Processing triggers for libapache2-mod-php5 ... configured to not write apport reports Reloading web server config: apache2. Errors were encountered while processing: php5-mysqlnd E: Sub-process /usr/bin/dpkg returned an error code (1) It seems there is some sort of conflict with the php5-mysql package, but I still get this error even after removing (with --purge) the php5-mysql package. Any thoughts? I'm trying to run a web tool that makes heavy use of mysqli_result::fetch_all(). Thanks!
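    The ucfr message means ucf still has /etc/php5/mods-available/mysql.ini registered to php5-mysql, and that registration can survive a plain remove. The sequence below is a hedged sketch of clearing it (the file path is taken from the error message; double-check what you purge before running it):

        # make sure the old package and its ucf registration are really gone
        apt-get purge php5-mysql
        ucfr --purge php5-mysql /etc/php5/mods-available/mysql.ini
        ucf --purge /etc/php5/mods-available/mysql.ini

        # then let dpkg finish configuring the new package
        dpkg --configure -a
        apt-get install --reinstall php5-mysqlnd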

    Read the article

  • Debian apache2 restart fault after some updates

    - by Ripeed
    Can anyone give me some advice on this, please: I ran an update on my Debian server through Webmin. After updating apache2 and some other packages, it showed that the update failed. After that I can't start apache2. I have to run netstat -ltnp | grep ':80', then kill the PID with kill -9 1047, and only then can I start apache2. When I started it for the first time after the update, some websites using FastCGI wouldn't work; I had to change them in ISPConfig 3 to mod-PHP and now they work. NOW - I can't restart Apache without killing the PID. In the ISPConfig log I see: Unable to open logs (98)Address already in use: make_sock: could not bind to address [::]:80 (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80 no listening sockets available, shutting down In the log of one website I see: [emerg] (13)Permission denied: mod_fcgid: can't lock process table in pid 19264 Do you think it would be a solution to update everything with apt-get update and apt-get upgrade to complete all updates? I'm a little scared that if I do that, more errors will occur. If I look at the Apache log I see the error: Debian Python version mismatch, expected '2.6.5+', found '2.6.6' But that was already there before this problem started. Thanks A LOT for any help.
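    Rather than kill -9, it usually helps to find out which process still holds port 80 and stop it through its own init script, so the parent and all children release the socket together. A hedged sketch (PIDs and paths will differ on your box):

        # who owns :80 right now?
        netstat -ltnp | grep ':80'

        # stop Apache via its init script so parent and children go down together
        /etc/init.d/apache2 stop
        apache2ctl graceful-stop    # if an old parent is still hanging around

        # verify nothing is left on :80, then start cleanly
        netstat -ltnp | grep ':80'
        /etc/init.d/apache2 start

        # the mod_fcgid "can't lock process table" error is often a permissions issue
        # on its shared-memory/socket directory (on Debian typically):
        ls -ld /var/lib/apache2/fcgid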

    Read the article

  • Nginx Not Passing URL Parameters

    - by jmccartie
    Messing around with Nginx ... for some reason, it looks like none of my URL parameters are being passed. My homepage loads fine, but a URL like "http://mysite.com/more.php?id=101" throws errors, saying that the ID is an undefined index. I'm assuming this is something basic I'm missing in a conf file. Some info: conf.d/virtual.conf server { listen 80; server_name dev.mysite.com; index index.php; root /var/www/dev.mysite.com_html; location / { root /var/www/dev.mysite.com_html; } location ~ \.php(.*)$ { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME /var/www/wap/dev.mysite.com_html/$fastcgi_script_name; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } Error log: 2009/06/22 11:44:21 [notice] 16319#0: start worker process 16322 2009/06/22 11:44:28 [error] 16320#0: *1 FastCGI sent in stderr: "PHP Notice: Undefined index: id in /var/www/dev.mysite.com_html/more.php on line 10 Thanks in advance.
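    QUERY_STRING has to reach PHP as a FastCGI parameter, so if $_GET comes up empty it is worth confirming that fastcgi_params really provides it and that SCRIPT_FILENAME points at the right tree (the /var/www/wap/... prefix above differs from the declared root, which may or may not be intentional). A hedged variant of the PHP location:

        location ~ \.php$ {
            include /etc/nginx/fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # harmless if fastcgi_params already sets it; required if it does not
            fastcgi_param QUERY_STRING $query_string;
        }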

    Read the article

  • What permission(s) does an application pool identity require to manage other application pools?

    - by Mr Shoubs
    I have a web site (used to manage various parts of our software) that needs the permissions required to start/stop other application pools. I've created a user and set the app pool identity to custom, however the web app still can't start/stop the app pools. I get the following Error: System.UnauthorizedAccessException: Filename: redirection.config Error: Cannot read configuration file due to insufficient permissions at Microsoft.Web.Administration.Interop.AppHostWritableAdminManager.GetAdminSection(String bstrSectionName, String bstrSectionPath) at Microsoft.Web.Administration.Configuration.GetSectionInternal(ConfigurationSection section, String sectionPath, String locationPath) at Microsoft.Web.Administration.ServerManager.get_ApplicationPoolsSection() at Microsoft.Web.Administration.ServerManager.get_ApplicationPools() Discussion here suggests setting the application pool to local system or administrator; this does work, but I don't want to do this for security reasons (external support will need access to this site). I did give the user higher permissions (as suggested here), starting by making it part of the local administrators group, but initially this didn't work, and giving the user read/write/mod permission on C:\Windows\System32\inetsrv\config also didn't work. I must have done something wrong, as local administrator now works; however, this still isn't what I want. So can anyone suggest the permissions I need to add to this user, and how can I apply them? An answer to my problem (but a different question) is here, but to clarify, I think I need to give an individual user "IIS Runtime Operation Permissions"; does anyone know how to do this, if indeed these are the permissions I require?

    Read the article

  • Installing Trac on Windows under Apache 2.2?

    - by Warren P
    Trac is a python-powered bug-tracking and project-management app. According to Trac's wiki, there are several options for installing Trac: a standalone server (tracd), or under a dedicated webserver using one of these options: FastCGI - not available on Windows. mod_wsgi - no version of mod_wsgi available for Apache 2.2.22 and Python 2.7.3-amd64 that actually runs on my system! mod_python - no longer recommended, as mod_python is not actively maintained anymore. CGI - should not be used, as the performance is far from optimal. That leaves me with zero ways to run Trac on Windows. Apache 2.2.22 with mod_wsgi loading crashes the Apache 2.2 service on startup without any error logs. Disabling the line in the Apache configuration that loads mod_wsgi restores sanity. I just want an installation of Trac on Windows with authentication enabled. I am unable to get authentication to work using basic tracd like this: tracd -p 8000 --basic-auth="c:\tmp,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder And I am unable to get mod_wsgi installed. I'm going to keep trying to figure out a combination that works; I suspect I should have installed 32-bit Python instead of 64-bit Python to start with. Was it a mistake to install 64-bit Python 2.7.3? I tried again with all 32-bit components, and still can't get mod_wsgi to work with Apache 2.2.22. I'm going to try to compile mod_wsgi myself with Visual C++ Express 2010, but it seems to me that it ought to be easier than this to get Trac running on Windows, with authentication. Is there a way to run Trac on Windows, under Apache, with authentication? The last "Trac on windows" article died in 2008, leaving only this internet archive link for "Trac on windows" setup.
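    For the standalone tracd route, --basic-auth expects the project environment's base directory name (not its full path) as the first field, and the password file is a normal htpasswd-style file. A hedged sketch of what typically works, reusing the paths from the question:

        :: create the password file (htpasswd ships with Apache; any htpasswd generator will do)
        htpasswd -c c:\tmp\Passwords.md5.txt someuser

        :: first field is the project directory name as served by tracd ("RootFolder"), not "c:\tmp"
        tracd -p 8000 --basic-auth="RootFolder,c:\tmp\Passwords.md5.txt,mycompany" c:\tmp\RootFolder

    If that authenticates, the tracd route at least unblocks things while the mod_wsgi build problem is sorted out separately.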

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However I am having a lot of trouble running it in nginx. The following is the nginx configuration I am using: server { root /home/test/public; index index.php; access_log /home/test/logs/access.log; error_log /home/test/logs/error.log; location / { # First attempt to serve request as file, then # as directory, then fall back to index.html try_files $uri $uri/ index.php; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } # pass the PHP scripts to FastCGI server listening on unix socket # location ~ \.php($|/) { fastcgi_pass unix:/tmp/phpfpm.sock; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; include fastcgi_params; } location ~ /\.ht { deny all; } } I am able to get the homepage but am having problems with the inner pages. The inner pages display an "Access denied". Possibly the rewrite is not working; in effect I think it's querying and trying to execute the PHP files directly instead of going through the concrete dispatcher. I am totally lost here. Thank you in advance for your help.
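    concrete5 routes non-file URLs through /index.php/<path>, so the try_files fallback needs a leading slash (the original fallback lacks one) and the PATH_INFO split has to stay intact. A hedged variant of the blocks above, keeping the same socket:

        location / {
            # note the leading slash and the appended path for the dispatcher
            try_files $uri $uri/ /index.php$uri$is_args$args;
        }

        location ~ \.php($|/) {
            fastcgi_pass unix:/tmp/phpfpm.sock;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            include fastcgi_params;
        }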

    Read the article

  • Where to get grub files without using grub-install

    - by Jacky
    I am in a particular situation. I have a MacBook Pro with no internal CD drive and both MacOS X (minimal setup) and Linux (my main system) is installed. During a cross-upgrade to Ubuntu 12.04 I messed up grub, so that my /boot/grub directory is basically empty. This means I can't boot Linux on the laptop anymore but only get into grub rescue. Normally this is no issue as you'd just boot from a rescue CD or USB stick, but unfortunately with a MacBook Pro this is not possible (I have reFIT installed and it attempts to boot, but it fails and the manual says that Apple's EFI firmware is not able to handle this situation). From MacOS X, however, I still have write access to the Linux partition. I've now been trying to figure out how to populate the /boot/grub folder with the necessary files, to no avail so far. The ISO image of Ubuntu 12.04 contains an EFI folder which is not what I am looking for, instead I need the normal.mod files for the grub version of Ubuntu 12.04. I do not have any other machine to set up a virtual machine of Ubuntu 12.04 to extract this from after a grub-install, so I am asking for ideas here how to solve this mess. P.S.: I installed the Linux previously when I still had a working internal CD drive. This is gone now.
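    Since the Linux partition is still writable from OS X, one workable approach is to pull the module files straight out of Ubuntu's grub package, copy them into /boot/grub by hand, and finish from the grub rescue prompt. A rough sketch under the assumption of a BIOS/legacy (i386-pc) grub install; package names, mount points and partition numbers are illustrative:

        # on any machine, or from the mounted 12.04 ISO's pool/ directory,
        # unpack the package that ships /usr/lib/grub/i386-pc
        dpkg-deb -x grub-pc-bin_*.deb extracted/
        cp extracted/usr/lib/grub/i386-pc/* /mnt/linux/boot/grub/

        # then at the grub rescue> prompt on the MacBook:
        #   set prefix=(hd0,gptX)/boot/grub
        #   set root=(hd0,gptX)
        #   insmod normal
        #   normal
        # and once booted, run grub-install /dev/sda && update-grub to make it stick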

    Read the article

  • installing lots of perl modules

    - by Colin Pickard
    Hi, I've been landed with the job of documenting how to install a very complicated application onto a clean server. Part of the application requires a lot of perl scripts, each of which seems to require lots of different perl modules. I don't know much about perl, and I only know one way to install the required modules. This means my documentation now looks like this: Type each of these commands and accept all the defaults: sudo perl -MCPAN -e 'install JSON' sudo perl -MCPAN -e 'install Date::Simple' sudo perl -MCPAN -e 'install Log::Log4perl' sudo perl -MCPAN -e 'install Email::Simple' (.... continues for 2 more pages... ) Is there any way I can do all this in one line, like I can with aptitude, i.e. Type the following command and go get a coffee: sudo aptitude install openssh-server libapache2-mod-perl2 build-essential ... Thank you (on behalf of the long-suffering people who will be reading my document) EDIT: The best way to do this is to use the packaged versions. For the modules which were not packaged for Ubuntu 10.10 I ended up with a little perl script which I found here: #!/usr/bin/perl -w use CPANPLUS; use strict; CPANPLUS::Backend->new( conf => { prereqs => 1 } )->install( modules => [ qw( Date::Simple File::Slurp LWP::Simple MIME::Base64 MIME::Parser MIME::QuotedPrint ) ] ); This means I can put a nice one-liner in my document: sudo perl installmodules.pl
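    If some modules aren't packaged, cpanminus also collapses this into a single non-interactive command and resolves dependencies itself, which may read better in the document than the CPANPLUS script. A sketch (module list abbreviated; bootstrap cpanm first if it isn't packaged for your release):

        # one-time bootstrap, then a single line in the document
        sudo apt-get install cpanminus   # or: curl -L https://cpanmin.us | sudo perl - App::cpanminus
        sudo cpanm --notest JSON Date::Simple Log::Log4perl Email::Simple \
                   File::Slurp LWP::Simple MIME::Base64 MIME::Parser MIME::QuotedPrint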

    Read the article

  • Can't set screen brightness in Gentoo system

    - by Real Yang
    My system: Linux gentoo 3.10.7-gentoo-r1 VGA compatible controller: NVIDIA Corporation GT216M [GeForce GT 240M] (rev a2) output of xbacklight: No outputs have backlight property output of xrandr: xrandr: Failed to get size of gamma for output default Screen 0: minimum 640 x 480, current 1280 x 720, maximum 1280 x 768 default connected 1280x720+0+0 0mm x 0mm 1280x720 0.0* 1024x768 61.0 800x600 61.0 640x480 60.0 1280x768 0.0 output of ls /proc/acpi: button/ event When I'm in kernel 3.8.13, I can change my brightness using xbacklight. I compiled 3.10.7-r1 using genkernel all. Before the upgrade I did get a notice about "compatibility issues for Nvidia users" from emerge, but I still don't know the details. Is there any way for me to set the brightness? Then I found an ebuild, app-laptop/nvidiabl-0.81, and tried to emerge nvidiabl, but I got this message: Your kernel does not support FB_BACKLIGHT. To enable it you can enable any frame buffer with backlight control or nouveau. Note that you cannot use FB_NVIDIA with nvidia's proprietary driver Please check to make sure these options are set correctly. Failure to do so may cause unexpected problems. Once you have satisfied these options, please try merging this package again. ERROR: app-laptop/nvidiabl-0.81::gentoo failed (pretend phase): Incorrect kernel configuration options Call stack: ebuild.sh, line 93: Called pkg_pretend nvidiabl-0.81.ebuild, line 31: Called linux-mod_pkg_setup linux-mod.eclass, line 559: Called linux-info_pkg_setup linux-info.eclass, line 911: Called check_extra_config linux-info.eclass, line 805: Called die The specific snippet of code: die "Incorrect kernel configuration options" [SOLVED] I entered menuconfig again and checked Device Drivers -> Graphics support -> Support for frame buffer devices, and found this: <*> nVidia Framebuffer Support [*] Support for backlight control (NEW) What can I say. Recompiling...

    Read the article

  • Either nginx+php-fpm is badly configured, or nginx+php-fpm cannot handle a high query load?

    - by The Wolf
    I have wordpress installed in my server configured(hopefully with nginx+php-fpm+mariaDB). I am trying to import using wordpress importer a 1.5MB xml file. Everytime I try to upload it using the importer, it got cut of... meaning just blank screen result.. Here is my error log: actually I just posted 2 of the errors [error] 858#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: xx.xxx.xx.xx, server: xxx.com, request: "GET xxxx.html HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" [error] 858#0: *13 connect() failed (111: Connection refused) while connecting to upstream, client: xxx.x.xx.xx, server: xxx.com, request: "GET xxxx.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.com" I don't know what is the reason why it can't process the wordpress export .xml. I already increased max_file_upload & etc., but nothing happens. Hope somebody can help me. Here are my conf: nginx.conf user nginx; worker_processes 8; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; server_tokens off; keepalive_timeout 65; fastcgi_read_timeout 500; #gzip on; client_max_body_size 2M; php-fpm.conf ;;;;;;;;;;;;;;;;;;;;; ; FPM Configuration ; ;;;;;;;;;;;;;;;;;;;;; ; All relative paths in this configuration file are relative to PHP's install ; prefix. ; Include one or more files. If glob(3) exists, it is used to include a bunch of ; files from a glob(3) pattern. This directive can be used everywhere in the ; file. include=/etc/php-fpm.d/*.conf ;;;;;;;;;;;;;;;;;; ; Global Options ; ;;;;;;;;;;;;;;;;;; [global] ; Pid file ; Default Value: none pid = /var/run/php-fpm/php-fpm.pid ; Error log file ; Default Value: /var/log/php-fpm.log error_log = /var/log/php-fpm/error.log ; Log level ; Possible Values: alert, error, warning, notice, debug ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. ; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf [root@host etc]# vim php-fpm.conf [root@host etc]# vim php-fpm.conf ; Default Value: notice ;log_level = notice ; If this number of child processes exit with SIGSEGV or SIGBUS within the time ; interval set by emergency_restart_interval then FPM will restart. A value ; of '0' means 'Off'. 
; Default Value: 0 ;emergency_restart_threshold = 0 ; Interval of time used by emergency_restart_interval to determine when ; a graceful restart will be initiated. This can be useful to work around ; accidental corruptions in an accelerator's shared memory. ; Available Units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;emergency_restart_interval = 0 ; Time limit for child processes to wait for a reaction on signals from master. ; Available units: s(econds), m(inutes), h(ours), or d(ays) ; Default Unit: seconds ; Default Value: 0 ;process_control_timeout = 0 ; Send FPM to background. Set to 'no' to keep FPM in foreground for debugging. ; Default Value: yes daemonize = no ;;;;;;;;;;;;;;;;;;;; ; Pool Definitions ; ;;;;;;;;;;;;;;;;;;;; ; See /etc/php-fpm.d/*.conf ps aux [root@host etc]# ps aux USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.1 2900 1380 ? Ss Jun02 0:00 init root 2 0.0 0.0 0 0 ? S Jun02 0:00 [kthreadd/9308] root 3 0.0 0.0 0 0 ? S Jun02 0:00 [khelper/9308] root 124 0.0 0.0 2464 576 ? S<s Jun02 0:00 /sbin/udevd -d root 460 0.0 0.1 35976 1308 ? Sl Jun02 0:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5 root 474 0.0 0.0 8940 1028 ? Ss Jun02 0:00 /usr/sbin/sshd root 481 0.0 0.0 3264 876 ? Ss Jun02 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid root 491 0.0 0.1 6268 1432 ? S Jun02 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/host.busilak.com. mysql 584 0.1 6.8 679072 71456 ? Sl Jun02 0:04 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --use root 586 0.0 0.3 12008 3820 ? Ss Jun02 0:01 sshd: root@pts/0 root 629 0.0 0.0 9140 756 ? Ss Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 630 0.0 0.0 9140 520 ? S Jun02 0:00 /usr/sbin/saslauthd -m /var/run/saslauthd -a pam -n 2 root 645 0.0 0.1 12788 1928 ? Ss Jun02 0:01 sendmail: accepting connections smmsp 653 0.0 0.1 12576 1728 ? Ss Jun02 0:00 sendmail: Queue runner@01:00:00 for /var/spool/clientmqueue root 691 0.0 0.1 7148 1184 ? Ss Jun02 0:00 crond root 698 0.0 0.1 6272 1688 pts/0 Ss Jun02 0:00 -bash root 1006 0.0 0.0 7828 924 ? Ss 00:30 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nginx 1007 0.0 0.1 8156 1724 ? S 00:30 0:00 nginx: worker process nginx 1008 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1009 0.0 0.1 8020 1356 ? S 00:30 0:00 nginx: worker process nginx 1011 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1012 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1013 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1014 0.0 0.1 8024 1360 ? S 00:30 0:00 nginx: worker process nginx 1015 0.0 0.1 8024 1344 ? S 00:30 0:00 nginx: worker process root 1030 0.0 0.2 25396 2904 ? Ss 00:30 0:00 php-fpm: master process (/etc/php-fpm.conf) apache 1031 0.0 1.9 40700 20624 ? S 00:30 0:00 php-fpm: pool www apache 1032 0.0 2.0 41924 21888 ? S 00:30 0:01 php-fpm: pool www apache 1033 0.0 1.9 41212 20848 ? S 00:30 0:01 php-fpm: pool www apache 1034 0.0 1.9 40956 20792 ? S 00:30 0:01 php-fpm: pool www apache 1035 0.0 2.0 41560 21556 ? S 00:30 0:02 php-fpm: pool www apache 1040 0.0 1.8 39292 19120 ? 
S 00:30 0:00 php-fpm: pool www root 1125 0.0 0.0 6080 1040 pts/0 R+ 01:04 0:00 ps aux netstat -l [root@host etc]# netstat -l Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost.localdomain:smtp *:* LISTEN tcp 0 0 localhost.locald:cslistener *:* LISTEN tcp 0 0 *:mysql *:* LISTEN tcp 0 0 *:http *:* LISTEN tcp 0 0 *:ssh *:* LISTEN Active UNIX domain sockets (only servers) Proto RefCnt Flags Type State I-Node Path unix 2 [ ACC ] STREAM LISTENING 60575947 /var/run/saslauthd/mux unix 2 [ ACC ] STREAM LISTENING 60574168 @/com/ubuntu/upstart unix 2 [ ACC ] STREAM LISTENING 60575873 /var/lib/mysql/mysql.sock Hope somebody can help me to figure out what is the problem.
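    "Connection refused" on 127.0.0.1:9000 means nginx could not reach php-fpm at that moment, which during an import-sized request is often a pool whose workers died or hit a limit rather than a wrong address. A hedged set of PHP/FPM settings to check for a 1.5 MB WordPress import (values are illustrative starting points, not a tuned config):

        ; php.ini used by FPM
        upload_max_filesize = 8M
        post_max_size = 8M
        max_execution_time = 300
        memory_limit = 256M

        ; pool config (e.g. /etc/php-fpm.d/www.conf): confirm it listens where nginx expects
        listen = 127.0.0.1:9000
        pm.max_children = 10
        pm.max_requests = 500

    The posted nginx.conf already sets client_max_body_size 2M and fastcgi_read_timeout 500; a little more body-size headroom doesn't hurt, but the more telling evidence is usually /var/log/php-fpm/error.log at the same timestamps as the recv() errors.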

    Read the article

  • Email encoding on IIS7

    - by Ivanhoe123
    All emails sent from the server are displaying Cyrillic letters as weird characters, for example: Можно. Regular alphabet letters are properly rendered. I searched all across the web but was not able to find any solutions. Here is some information about the system: Dedicated server with Windows 2008 and IIS7. Applications are in PHP (run as FastCGI). If it's of any importance, SmarterMail is installed on the server. The emails are sent using PHP's mail() function through a Drupal website. Encoding on that site is set up properly and there are no display issues on the front end. Where is the problem? How can I get Cyrillic letters to be properly encoded? Any help is greatly appreciated. Thanks! UPDATE Here are the email headers: Received: from SERVERNAME (mail.domain.com [12.123.123.123]) by mail.domain.com with SMTP; Fri, 16 Nov 2012 00:00:00 +0100 From: [email protected] To: [email protected] Subject: Email subject Date: Fri, 16 Nov 2012 00:00:00 +0100 MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable X-Mailer: Drupal Sender: [email protected] Return-Path: [email protected] Message-ID: f98b801988c642ef911ef46f7cace92b@com X-SmarterMail-Spam: SPF_None, ISpamAssassin 8 [raw: 5], DK_None, DKIM_None, Custom Rules [] X-SmarterMail-TotalSpamWeight: 8
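    The headers above already declare UTF-8 with quoted-printable, so one way to narrow it down is to send a bare test message straight from PHP with explicit headers and compare: if that arrives intact, the mangling is more likely happening in the Drupal mail layer or in SmarterMail than in IIS/PHP itself. A minimal hedged sketch (addresses are placeholders):

        <?php
        $to      = '[email protected]';
        $subject = mb_encode_mimeheader('Тестовая тема', 'UTF-8', 'B');   // encode a non-ASCII subject
        $body    = "Можно проверить кодировку этим письмом.";

        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/plain; charset=UTF-8\r\n";
        $headers .= "Content-Transfer-Encoding: 8bit\r\n";
        $headers .= "From: [email protected]\r\n";

        mail($to, $subject, $body, $headers);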

    Read the article

  • nginx with 2 Symfony2 web applications, one IP, no domain

    - by Krzysztof Koch
    I am having an irritating problem with nginx. I set up one application in /usr/share/nginx/www/firstapp and another in /usr/share/nginx/www/secondapp. In my default conf I set up the / root location to serve the first app: when I type 9.9.9.9 in the browser it shows me the first app, but when I go to 9.9.9.9/makeup, it does not show me the second app. Why does the first app display fine while the second app does not? Please help me. Sorry for the quality; here is the config: server { listen 80; server_name localhost; root /usr/share/nginx/www/firstapp/web; access_log /var/log/nginx/$host.access.log; error_log /var/log/nginx/error.log error; # strip app.php/ prefix if it is present rewrite ^/app\.php/?(.*)$ /$1 permanent; location / { root /usr/share/nginx/www/firstapp/web/; index app.php; try_files $uri @rewriteapp; } location /makeup/ { alias /usr/share/nginx/www/seccondapp/web/; index app.php; try_files $uri @rewriteapp; } location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ ^/(app|app_dev)\.php(/|$) { #fastcgi_pass 127.0.0.1:9000; fastcgi_pass unix:/var/lib/php5-fpm/www.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; #fastcgi_param SERVER_PORT 80; }

    Read the article

  • Moving Zend Framework 2 from apache to nginx

    - by Aleksander
    I would like to move a site that uses Zend Framework 2 from Apache to Nginx. The problem is that the site has 6 modules, and Apache handles them with aliases defined in httpd-vhosts.conf, #httpd-vhosts.conf <VirtualHost _default_:443> ServerName localhost:443 Alias /develop/cpanel "C:/webapps/develop/mil_catele_cp/public" Alias /develop/docs/tech "C:/webapps/develop/mil_catele_tech_docs/public" Alias /develop/docs "C:/webapps/develop/mil_catele_docs/public" Alias /develop/auth "C:/webapps/develop/mil_catele_auth/public" Alias /develop "C:/webapps/develop/mil_web_dicom_viewer/public" DocumentRoot "C:/webapps/mil_catele_homepage" </VirtualHost> In httpd.conf DocumentRoot is set to C:/webapps. Sites are available at, for example, localhost/develop/cpanel. The framework handles further routing. In Nginx I was able to make only one site available by specifying root C:/webapps/develop/mil_catele_tech_docs/public; in the server block. It works only because the docs module doesn't depend on auth like the others, and the site was at localhost/. In my next attempt: root C:/webapps; location /develop/auth { root C:/webapps/develop/mil_catele_auth/public; try_files $uri $uri/ /develop/mil_catele_auth/public/index.php$is_args$args; } Now when I go to localhost/develop/cpanel it gets to the correct index.php but can't find any resources (css, js files). I have no idea why the resource paths in the browser's GET requests changed to https://localhost/css/bootstrap.css from https://localhost/develop/auth/css/bootstrap.css as they were on Apache. This root directive seems not to be working. Nginx handles PHP using FastCGI: location ~ \.(php|phtml)?$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param APPLICATION_ENV production; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } I googled the whole day and found nothing useful. Can someone help me make this configuration work like it did on Apache?
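    One pattern that tends to map Apache's Alias + front controller onto nginx is a prefixed location with alias, a try_files fallback that keeps the public prefix, and a nested PHP block that uses $request_filename (so SCRIPT_FILENAME follows the alias rather than the server root). A hedged sketch for one module; the other five would repeat the same shape:

        location ^~ /develop/auth {
            alias "C:/webapps/develop/mil_catele_auth/public";
            index index.php;
            # unknown paths fall back to this module's front controller, keeping the prefix
            try_files $uri $uri/ /develop/auth/index.php$is_args$args;

            location ~ \.(php|phtml)$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param APPLICATION_ENV production;
                # $request_filename already reflects the aliased path
                fastcgi_param SCRIPT_FILENAME $request_filename;
            }
        }

    The css/js paths landing at /css/... rather than /develop/auth/css/... usually means the application computed its base path as /, so it is also worth checking what SCRIPT_NAME / base path ZF2 sees behind this setup.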

    Read the article

  • How to serve pages through multiple frameworks/template engines efficiently

    - by Leftium
    I would like to render a file that has both PHP tags and Web2Py tags mixed together. To do this, I would like the web server to pass the file through Web2Py, then PHP. I found a method to call PHP from Web2py via Python (based on this method for running PHP on top of django), but this method loses the benefits of any server optimizations from mod_php or FastCGI like caching and multi-threaded operation. A new process is created for each PHP request, which is very slow. Is there a better way to efficiently render pages with both Web2Py(Python) and PHP tags in the same file? Note I am not looking for methods of serving PHP-only and Web2Py-only files from the same server/domain. I prefer solutions for Apache2 or Cherokee. I'm open to using other web servers, though. Background info: I prefer to develop in Web2Py, but we have this pre-existing system written in PHP. I would like to augment the PHP system with some of Web2Py's features like auth authentication/user management and the T() internationalization object. Also it would make it much easier to port the PHP project to Web2Py if it could be done piecemeal. Since the PHP project consists of many files, it would greatly help if they did not need modification.

    Read the article

  • Can I make TCP/IP sessions last less than 60 seconds?

    - by par
    Our server is overloaded with TCP/IP sessions; we have 1200 - 1500 of them. Most of them are hanging in the TIME_WAIT state. It turns out that a connection in TIME_WAIT state occupies a socket until a 60-second timeout has elapsed. The problem is that the server becomes unresponsive and many clients are not getting served. I have made a simple test: download an XML file from the server with Internet Explorer 8.0. The download finishes in a fraction of a second, but then I see that the TCP/IP connection hangs in the TIME_WAIT state for 60 seconds. Is there any way to get rid of the TIME_WAIT delay, or make it shorter, to free the socket for new connections? I understand why a TCP/IP connection enters the TIME_WAIT state, but I don't understand why Internet Explorer does not close the connection after the XML file download is over. The details: our server runs a web service written in Perl (mod_perl). The service provides weather data to clients. The client is a Flash application (actually a Flash ActiveX control embedded in a Windows application). OS: Ubuntu. The Apache "Keep Alive" option is set to 0.
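    TIME_WAIT length is fixed by the TCP stack rather than by Apache, so the practical levers on Linux are socket reuse and connection reuse (keep-alive), not shortening the wait itself. A hedged sketch of the usual sysctl knobs (test before rolling out; values are illustrative):

        # /etc/sysctl.conf -- apply with: sysctl -p
        # allow reusing sockets in TIME_WAIT for new outbound connections
        net.ipv4.tcp_tw_reuse = 1
        # shorten FIN_WAIT2 hold time (does not change TIME_WAIT, but frees related resources sooner)
        net.ipv4.tcp_fin_timeout = 30
        # more room for ephemeral ports and pending connections
        net.ipv4.ip_local_port_range = 1024 65535
        net.core.somaxconn = 1024

    Enabling Apache KeepAlive with a small KeepAliveTimeout also lets each client reuse one connection for several requests instead of leaving a fresh TIME_WAIT socket behind per fetch, which attacks the 1200-1500 figure more directly.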

    Read the article

  • Disconnect from PHP after output generated

    - by Oli
    I have a LEMP stack. Nginx sitting in front of PHP-FPM. Because some of the sites are heavy and there's OPCode caching, PHP is set up so that there are only 5 child processes running. The aim being that each child can deal with any request in less than half-a-second and then move onto the next request. One problem I've found is that if it's a big chunk of HTML that's getting sent out, and the user has a slow connection, that PHP thread stays occupied until they've finished downloading. Because of my current setup, I have a pretty unforgiving timeout inside PHP where the script is killed after 20 seconds. This is to make sure everybody gets a turn but on a slow connection, this can mean the user gets cut off with a 504 Gateway timeout. I was wondering if there was some sort of buffer solution that I could implement within or just behind Nginx that sent the request through and then... well... buffered the content into its own cache and feed that onto the client as and when they could download it. The aim being that the underlying PHP thread can be freed up. What I'm asking for doesn't have to be PHP-specific. Anything that deals with FastCGI, or even any Nginx-upstream might have a similar issue to this.
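    nginx buffers FastCGI responses by default, and the buffer directives decide how much of the PHP output it will absorb before the backend is released; once the whole response fits in memory or the temp file, the PHP worker is freed while nginx trickles the bytes to the slow client. A hedged sketch of the knobs involved (sizes and the socket path are illustrative):

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php-fpm.sock;   # or whatever the pool listens on

            # buffering is on by default; these control how much nginx will hold
            fastcgi_buffer_size 32k;            # first chunk (headers)
            fastcgi_buffers 16 32k;             # in-memory buffers per connection
            fastcgi_max_temp_file_size 10m;     # spill bigger responses to disk instead of stalling PHP
            fastcgi_temp_file_write_size 64k;
        }

    With buffering in place, the 20-second PHP limit only needs to cover generation time, not the client's download time.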

    Read the article

  • Installing Apache MPM Worker on Centos 5.5

    - by mrmartinblue
    I have a CentOS 5.5 server and am trying to switch from MPM prefork to MPM worker. I have the standard yum httpd packages installed currently, and from my reading I did the following: Uncomment the httpd.worker line in the /etc/sysconfig/httpd file. I also made sure that the httpd.worker file exists in the /usr/sbin/ directory. I also made sure that the httpd service is stopped before making the above change. Ensured PHP was disabled for Apache. I'm fine with this and will use FastCGI to handle PHP files once I get the MPM worker up and running. Restart the httpd service; everything starts fine. Do a # httpd -V. The console tells me it's still using prefork. If I do a # vi /etc/init.d/httpd the httpd.worker line is still commented out. I've tried changing this as well, with no difference. Any suggestions? Things to look at? My application requires the worker MPM, so the only choice I can think of is to go with Ubuntu or another flavor that has the dedicated apache2-mpm-worker package. Is there something similar in the yum repos somewhere? Thanks in advance!
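    On CentOS the init script reads HTTPD= from /etc/sysconfig/httpd, and a plain `httpd -V` keeps reporting prefork because it runs the prefork binary regardless of which one the service uses; check the worker binary instead. A hedged sketch:

        # /etc/sysconfig/httpd
        HTTPD=/usr/sbin/httpd.worker

        # then
        service httpd stop && service httpd start    # a full restart re-reads the sysconfig file
        /usr/sbin/httpd.worker -V | grep -i mpm      # should say "worker"
        ps aux | grep httpd.worker                   # confirm the running binary

    Also note that the stock php package drags in mod_php, which only runs under prefork; with PHP handled via FastCGI as planned, removing or simply not loading mod_php keeps the worker binary usable.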

    Read the article

  • Windows authentication to SQL Server via IIS and PHP

    - by Jeff
    We're running a PHP 5.4 application on Server 2008 R2. We would like to connect to a SQL Server 2008 database, on a separate server, using Windows authentication (must be Windows authentication--the DB admins won't let us connect any other way). I have downloaded the SQL Server drivers for PHP and installed them. IIS is configured for Windows authentication, and anonymous authentication has been disabled. $_SERVER['AUTH_USER'] reports our currently logged on Windows account. In php.ini, we have set fastcgi.impersonate = 1. When we setup a connection using the following code from Microsoft: $serverName = "sqlserver\sqlserver"; $connectionInfo = array( "Database"=>"some_db"); /* Connect using Windows Authentication. */ $conn = sqlsrv_connect( $serverName, $connectionInfo); if( $conn === false ) { echo "Unable to connect.</br>"; die( print_r( sqlsrv_errors(), true)); } We are presented with the following error message: Unable to connect. Array ( [0] => Array ( [0] => 28000 [SQLSTATE] => 28000 [1] => 18456 [code] => 18456 [2] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. [message] => [Microsoft][SQL Server Native Client 11.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. ) Is it possible to connect to SQL Server 2008 via PHP using Windows authentication? Are there any additional required settings we need to make on IIS, SQL Server, or any other component (like a domain controller)?
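    The ANONYMOUS LOGON failure is the classic double-hop symptom: IIS can impersonate the browser user locally, but those credentials cannot be forwarded to a second machine (SQL Server) without Kerberos delegation. A hedged outline of what the domain/DB admins would need to configure; every SPN and account name below is a placeholder:

        :: register SPNs for the SQL Server service account (run as a domain admin)
        setspn -S MSSQLSvc/sqlserver.yourdomain.local:1433 YOURDOMAIN\sqlServiceAccount
        setspn -S MSSQLSvc/sqlserver:1433 YOURDOMAIN\sqlServiceAccount

        :: register HTTP SPNs for the account the app pool runs as
        setspn -S HTTP/webserver.yourdomain.local YOURDOMAIN\appPoolAccount
        setspn -S HTTP/webserver YOURDOMAIN\appPoolAccount

        :: then, in Active Directory Users and Computers, enable constrained delegation
        :: ("Trust this user for delegation to specified services only, Kerberos only")
        :: on the app pool account, delegating to the MSSQLSvc SPNs above.

    If delegation is off the table, the usual fallback is running the app pool as one dedicated domain account and letting SQL Server authorize that account rather than each end user.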

    Read the article

  • Nginx + php-fpm - recv() error

    - by Ilya Biryukov
    I get the following error in the nginx log: [error] 17734#0: *6643 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: [cut], server: [cut], request: "GET /venues HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "[cut]" I have a dedicated box with 8 GB RAM and a quad-core chip. Good server. Nginx, php-fpm & MySQL, all latest versions, running under Ubuntu 10.04. I only get this when I stress test the server with siege. If I increase the number of concurrent connections to 100, I can get up to 20% of all requests to fail. Furthermore, I don't get this on pages that have no MySQL queries, and only a few failures on pages with a moderate number of queries. But I'm not sure if that has anything to do with it. I have a feeling this is something to do with PHP, but I can't figure it out. Any suggestions on where to even start looking? Update: the PHP error log is silent. No record of anything going wrong.
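    A reset from 127.0.0.1:9000 under siege load typically means php-fpm dropped or recycled the connection (workers crashing, hitting a limit, or the listen backlog overflowing), and the php-fpm log is usually more talkative than PHP's own error log here. A hedged set of pool settings to experiment with (paths and values are starting points, not a tuned config):

        ; /etc/php5/fpm/pool.d/www.conf or equivalent
        pm = dynamic
        pm.max_children = 50          ; an 8 GB box can afford more concurrent workers
        pm.start_servers = 10
        pm.min_spare_servers = 5
        pm.max_spare_servers = 15
        pm.max_requests = 500         ; recycle workers to contain leaks/crashes
        listen.backlog = 1024         ; queue bursts instead of resetting them
        request_terminate_timeout = 30s

        ; global section: make sure FPM is logging somewhere you are actually reading
        ; error_log = /var/log/php5-fpm.log
        ; log_level = notice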

    Read the article

  • setting up phpmyadmin with nginx within ubuntu 11.04

    - by Patrick
    I have nginx and php5-fpm running on Ubuntu 11.04. I have installed phpMyAdmin but I'm having trouble accessing it. I would like to access it via http://localhost/phpmyadmin. I've used all the default locations for the nginx, php5, and phpmyadmin installs. The blog guide I'm following directs me to use the block below, but I'm not sure what to change to get it to point where I want it to. server { listen 80; server_name php.example.com; // <-I know i need to edit this, but not sure to what. access_log /var/log/nginx/localhost.access.log; root /usr/share/phpmyadmin; index index.php; location / { try_files $uri $uri/ @phpmyadmin; } location @phpmyadmin { fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin/index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_NAME /index.php; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; include fastcgi_params; } }
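    To serve phpMyAdmin from a path on localhost rather than from its own server_name, a common pattern is an aliased location with a nested PHP block added to the existing localhost server instead of a new one. A hedged sketch using the question's default paths and the 127.0.0.1:9000 backend:

        server {
            listen 80;
            server_name localhost;

            location /phpmyadmin {
                root /usr/share/;            # /usr/share/phpmyadmin/... resolves under this root
                index index.php;

                location ~ ^/phpmyadmin/(.+\.php)$ {
                    root /usr/share/;
                    include /etc/nginx/fastcgi_params;
                    fastcgi_pass 127.0.0.1:9000;
                    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                }

                location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
                    root /usr/share/;        # static assets served directly
                }
            }
        }

    If an existing site already owns server_name localhost, only the /phpmyadmin block needs to be merged into it.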

    Read the article

  • Intermittent extrememly long response times when downloading documents

    - by pap
    I have a Java web application running on Tomcat 7 with an Apache httpd 2.2 front end using mod_jk/AJP. One part of the application serves files (up to 4 MB in size). Normally this all runs very smoothly with stable, low response times. However, in rare instances (<0.1% of downloads), the download time will go beyond 1 minute. After activating the ThreadStuckValve in Tomcat, I can see that the long responses seem to be stuck at org.apache.tomcat.jni.Socket.sendbb(Native method), i.e. network I/O. At most, these long-running downloads take 5 minutes, which I strongly suspect is because of the default 300-second timeout in Apache 2.2 (http://httpd.apache.org/docs/2.2/mod/core.html, "TimeOut directive"). To me, this looks like network problems. The Apache timeout (if that is what is kicking in at the 5-minute mark) indicates that ACK packets are not being transmitted correctly. My questions are: what could be causing this? A closed browser at the receiving end whose socket was not signaled as closed properly? Packet loss or some other network failure in transit? Where would I start troubleshooting this? We're running Tomcat and Apache on Windows Server 2008 R2 in a VMware virtualized server.
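    If the goal is to stop a stalled client from pinning an AJP connection for the full five minutes, mod_jk's worker timeouts (together with the AJP connector's connectionTimeout on the Tomcat side) are the usual place to intervene. A hedged workers.properties sketch; the worker name ajp13 and every value are assumptions:

        # workers.properties
        worker.ajp13.type=ajp13
        worker.ajp13.host=localhost
        worker.ajp13.port=8009
        # drop connections whose socket has gone quiet or whose peer stopped answering
        worker.ajp13.socket_timeout=60
        worker.ajp13.socket_keepalive=true
        # max time (ms) to wait for the next reply packet from Tomcat
        worker.ajp13.reply_timeout=120000
        worker.ajp13.connection_pool_timeout=60

    These settings bound how long mod_jk keeps a troubled AJP connection around; on the Tomcat side, connectionTimeout on the AJP <Connector> plays the matching role, and Apache's own TimeOut (the 300 seconds observed) remains the final backstop for the client-facing socket.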

    Read the article

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a django site with apache mod-wsgi with nginx as the front-end to reverse proxy into apache. In my Apache ports.conf file: NameVirtualHost 192.168.0.1:7000 Listen 192.168.0.1:7000 <VirtualHost 192.168.0.1:7000> DocumentRoot /var/apps/example/ ServerName example.com WSGIDaemonProcess example WSGIProcessGroup example Alias /m/ /var/apps/example/forum/skins/ Alias /upfiles/ /var/apps/example/forum/upfiles/ <Directory /var/apps/example/forum/skins> Order deny,allow Allow from all </Directory> WSGIScriptAlias / /var/apps/example/django.wsgi </VirtualHost> In my nginx config: server { listen 80; server_name example.com; location / { include /usr/local/nginx/conf/proxy.conf; proxy_pass http://192.168.0.1:7000; proxy_redirect default; root /var/apps/example/forum/skins/; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } After restarting both apache and nginx, nothing works, example.com simply hangs or serves index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online to no avail.
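    Ending up with the default index.html from /var/www usually means the request never hit this server block (another default site caught it) or the Host header reaching Apache did not match its NameVirtualHost entry. A hedged version of the proxy config with the headers spelled out, in case proxy.conf does not already set them:

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://192.168.0.1:7000;
                proxy_redirect default;
                # make sure Apache's name-based vhost sees the right Host
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }

            # serve Django static/media straight from nginx instead of proxying it
            location /m/ {
                alias /var/apps/example/forum/skins/;
            }
        }

    It is also worth curl-ing http://192.168.0.1:7000/ directly from the nginx host to confirm Apache/mod_wsgi answers on that port before blaming the proxy layer.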

    Read the article
