Search Results

Search found 14709 results on 589 pages for 'root permission'.

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server, and here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs and there's nothing in there (http://i.imgur.com/iRq3ksb.png), but I saw the fpm error "unable to read what child say: Bad file descriptor" in /var/log/php-fpm/error.log. Side note: both nginx and php-fpm have been configured to run under a local account called www-data, and the folders referenced below exist on the server.

    nginx.conf (global nginx configuration):

        user www-data;
        worker_processes 6;
        worker_rlimit_nofile 100000;
        error_log /var/log/nginx/error.log crit;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
            use epoll;
            multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            # cache information about FDs; frequently accessed files can boost performance
            open_file_cache max=200000 inactive=20s;
            open_file_cache_valid 30s;
            open_file_cache_min_uses 2;
            open_file_cache_errors on;

            # to boost IO on HDD we can disable access logs
            access_log off;

            # copies data between one FD and another from within the kernel;
            # faster than read() + write()
            sendfile on;

            # send headers in one piece; better than sending them one by one
            tcp_nopush on;

            # don't buffer data sent; good for small data bursts in real time
            tcp_nodelay on;

            # server will close the connection after this time
            keepalive_timeout 60;

            # number of requests a client can make over keep-alive -- for testing
            keepalive_requests 100000;

            # allow the server to close the connection on a non-responding client; this frees up memory
            reset_timedout_connection on;

            # request timed out -- default 60
            client_body_timeout 60;

            # if the client stops responding, free up memory -- default 60
            send_timeout 60;

            # reduce the data that needs to be sent over the network
            gzip on;
            gzip_min_length 10240;
            gzip_proxied expired no-cache no-store private auth;
            gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
            gzip_disable "MSIE [1-6]\.";

            # Load vHosts
            include /etc/nginx/conf.d/*.conf;
        }

    conf.d/www.domain.com.conf (my vhost entry):

        ## Nginx php-fpm Upstream
        upstream wwwdomaincom {
            server unix:/var/run/php-fcgi-www-data.sock;
        }

        ## Global Config
        client_max_body_size 10M;
        server_names_hash_bucket_size 64;

        ## Web Server Config
        server {
            ## Server Info
            listen 80;
            server_name domain.com *.domain.com;
            root /home/www-data/public_html;
            index index.html index.php;

            ## Error log
            error_log /home/www-data/logs/nginx-errors.log;

            ## DocumentRoot setup
            location / {
                try_files $uri $uri/ @handler;
                expires 30d;
            }

            ## These locations would be hidden by .htaccess normally
            #location /app/ { deny all; }

            ## Disable .htaccess and other hidden files
            location /. { return 404; }

            ## Magento uses a common front handler
            location @handler {
                rewrite / /index.php;
            }

            ## Forward paths like /js/index.php/x.js to relevant handler
            location ~ .php/ {
                rewrite ^(.*.php)/ $1 last;
            }

            ## Execute PHP scripts
            location ~ \.php$ {
                try_files $uri =404;
                expires off;
                fastcgi_read_timeout 900;
                fastcgi_pass wwwdomaincom;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            ## GZip Compression
            gzip on;
            gzip_comp_level 8;
            gzip_min_length 1000;
            gzip_proxied any;
            gzip_types text/plain application/xml text/css text/js application/x-javascript;
        }

    My php-fpm pool config lives in /etc/php-fpm.d/www-data.conf. I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
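
    Since the 502 points at the FastCGI handoff over unix:/var/run/php-fcgi-www-data.sock, a minimal pool definition matching that upstream might look like the sketch below. This is an illustration only: the socket path and user come from the configs above, while the pool name and pm values are placeholder assumptions.

        ; /etc/php-fpm.d/www-data.conf -- hypothetical minimal pool
        [www-data]
        user = www-data
        group = www-data

        ; must match the "server unix:..." line in the nginx upstream
        listen = /var/run/php-fcgi-www-data.sock
        ; the nginx worker user (www-data) needs read/write on the socket;
        ; a root-owned or 0600 socket produces exactly this kind of 502
        listen.owner = www-data
        listen.group = www-data
        listen.mode = 0660

        pm = dynamic
        pm.max_children = 10
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3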

  • Xen 4.1 host (dom0) with blktap disks ("tap:aio:") not connecting

    - by Manwe
    Problem using blktap with xen-4.1, running the Ubuntu Precise stock kernel as dom0. I get:

        [   5.580106] XENBUS: Waiting for devices to initialise: 295s...290s...
        [ 300.580288] XENBUS: Timeout connecting to device: device/vbd/51713 (local state 3, remote state 1)

    And some syslog lines:

        May 17 13:07:30 localhost logger: /etc/xen/scripts/blktap: add XENBUS_PATH=backend/tap/10/51713
        May 17 13:07:31 localhost logger: /etc/xen/scripts/blktap: Writing backend/tap/10/51713/hotplug-status connected to xenstore.

    This happens with tap:aio: disk lines; file:/ works:

        disk = [ 'tap:aio:/data/root.img,xvda1,w', ]

    The problem exists with both lucid and precise domU kernels, and both guests work under an Ubuntu hardy dom0 (64-bit, 2.6.24-28-xen, xen-3.3). The failing host:

        3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        Distributor ID: Ubuntu
        Description:    Ubuntu 12.04 LTS
        Release:        12.04
        Codename:       precise
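
    One thing worth ruling out first -- a hedged guess, not a confirmed diagnosis -- is whether the Precise dom0 kernel ships the blktap driver at all; mainline-derived Ubuntu kernels often don't, and without it tap:aio: devices can never connect:

        # does the running dom0 kernel know about blktap?
        lsmod | grep -i blktap
        modprobe blktap && echo "blktap module available"

        # is the userspace side running for the tap device?
        ps aux | grep [t]apdisk

        # if the module simply isn't there, the loop-backed syntax the poster
        # already found to work avoids blktap entirely:
        #   disk = [ 'file:/data/root.img,xvda1,w', ]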

  • Stupid question: prevent a file from being changed under Linux

    - by Josh
    Ok, stupid question. I have forgotten what this is called, and without remembering the name, the search function on the site and on Google is failing me. What's the command under Linux to mark a file as "locked", to prevent any changes from being made to it? I'm not talking about chmod. There's a property that can be set (again, the name escapes me at the moment) which prevents even processes running as root from changing a file. What is this called, and how do I set it?
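
    This sounds like the immutable attribute on ext2/3/4 filesystems, set with chattr -- a quick illustration:

        # mark the file immutable: it can't be modified, deleted, renamed,
        # or hard-linked, even by root, until the attribute is cleared
        chattr +i /path/to/file

        # verify -- an 'i' shows up in the attribute list
        lsattr /path/to/file

        # undo (requires root)
        chattr -i /path/to/file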

  • mod_rewrite directory path to deeper directory

    - by DA.
    I don't usually work with LAMP and am a bit stumped getting a site working locally. The site is set up to be used via localhost:

        1) http://localhost/mysite

    However, the way the site files physically sit on the server, the root is located here:

        2) /var/www/mysite/trunk/site

    I'm trying to figure out a way where I could type #1 but have Apache actually look for the files in #2, so that all of the asset/page links in the web application work. Is mod_rewrite the solution? If so, I'm stumped on the syntax. I have this, but it won't work (due, I assume, to it causing an infinite loop):

        RewriteRule ^mysite/ mysite/trunk/site

    I have a hunch I need to sprinkle on some regex?
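
    For a fixed path-to-path mapping like this, mod_alias is usually the simpler tool; mod_rewrite works too if the rule is guarded against re-matching its own output. A sketch, assuming DocumentRoot is /var/www and Apache 2.4-style access directives:

        # mod_alias: map the URL prefix straight to the deeper directory
        Alias /mysite /var/www/mysite/trunk/site
        <Directory /var/www/mysite/trunk/site>
            Require all granted    # on Apache 2.2: Order allow,deny / Allow from all
        </Directory>

        # mod_rewrite equivalent, with a loop guard
        RewriteEngine On
        RewriteCond %{REQUEST_URI} !^/mysite/trunk/site
        RewriteRule ^/?mysite/(.*)$ /mysite/trunk/site/$1 [L]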

  • How to resolve all externally unresolved DNS queries?

    - by red eyes dev
    I am using PowerDNS on a Linux box (Debian 6). I would like to set up the PowerDNS server to resolve all externally unresolved DNS queries to a given, internal host. Is this possible? How is it done? I think it's necessary to use pdns-recursor, but my configuration file doesn't work! I use MySQL for the backend. If I add google.com manually it works, but if I delete the entry I get "server failed"; the root DNS servers (or the ISP's DNS) don't answer me.
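
    One way this is commonly done is with the recursor's Lua hook that fires on NXDOMAIN, answering with a fixed internal address. The sketch below follows the pdns-recursor 4.x Lua API; treat the exact hook and method names as assumptions to verify against the version actually packaged for your system:

        # /etc/powerdns/recursor.conf
        lua-dns-script=/etc/powerdns/redirect.lua

        -- /etc/powerdns/redirect.lua
        -- answer otherwise-unresolvable A queries with an internal host
        function nxdomain(dq)
            if dq.qtype == pdns.A then
                dq.rcode = 0
                dq:addAnswer(pdns.A, "192.0.2.10")  -- hypothetical internal host
                return true
            end
            return false
        end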

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server upon which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP solution many times for local testing purposes, but never for actual production use. I was wondering what the best practices are for setting the server up, specifically with reference to accessing the web root directory. A couple of the options I have seen:

        1. Set up a single user account on the server for us both to use, and use a virtual host pointing to somewhere in the home directory, e.g. /home/webdev/www.
        2. Set each of us up with a user account, and grant permissions in some way to /var/www (what would be the best way -- set up a new group?).

    I want to get this right when I first set it up, as there won't be any going back for a while once our first site is up and running. Appreciate any guidance in advance.
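
    For option 2, the usual pattern is a shared group plus a setgid web root, so that files either user creates stay accessible to the other -- a sketch, with the group and user names as placeholders:

        # one group for both developers
        groupadd webdev
        usermod -aG webdev jagot
        usermod -aG webdev friend

        # hand the web root to the group; the setgid bit (2xxx) makes new
        # files and directories inherit the webdev group automatically
        chown -R root:webdev /var/www
        chmod -R 2775 /var/www

        # optionally add "umask 002" to each user's shell profile so new
        # files are group-writable by default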

  • Cloning a Linux system from sdX to cciss

    - by churnd
    I have an HP ML 310 server running CentOS Linux 5.5. I'm buying a RAID card (LSI 9260-8i) to set up a mirrored OS drive. Right now, the boot drive is set up with GRUB installed on the MBR of /dev/sda and a 100MB /boot partition on /dev/sda1; the rest is configured in LVM, with a 20GB VG for the root partition and an ~80GB VG for home. The new disks will also be slightly larger. What is the best way to clone the boot drive to the new CCISS device?
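
    A rough offline-clone recipe, hedged because the target device name depends on the controller driver (HP Smart Array controllers appear as /dev/cciss/c0d0; an LSI MegaRAID card like the 9260-8i typically shows up as a plain /dev/sdX instead -- check /proc/partitions):

        # boot a rescue/live CD with both disks visible, then:
        TARGET=/dev/cciss/c0d0    # assumption -- substitute the real device

        # raw-copy MBR, /boot and the LVM PV; a larger target disk is fine,
        # the extra space just stays unallocated until you grow the PV
        dd if=/dev/sda of=$TARGET bs=1M conv=noerror,sync

        # afterwards, from a chroot into the cloned system:
        #  - fix /etc/fstab and /boot/grub/device.map if device names changed
        #  - rebuild the initrd so it includes the new controller's driver
        mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
        #  - reinstall GRUB on the new disk's MBR
        grub-install $TARGET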

  • apache running but site not accessible

    - by Shyam
    I'm pretty new to server administration, so I am not able to get to the root of the problem. I am running Apache2 with mod_php on a 1GB Rackspace Cloud Server (Ubuntu 9.10). My site goes down often, and I have to restart apache2 to get it working again. I checked the error.log file: there were no signs of any error messages. I even searched for words like [error] / error / warn / [warn], but no results. The site goes down even while Apache is still running; when the site was down, I checked the status (/etc/init.d/apache2 status) and it gave:

        * Apache is running (pid 433).

    Any suggestions where I should look for the problem? Thanks a lot.
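
    When Apache reports itself running but stops answering, the usual suspects on a 1GB box are a saturated worker pool (MaxClients) or memory pressure. Some standard checks for the next time it hangs -- nothing here is Ubuntu-9.10-specific, but the log paths may vary slightly:

        # is anything still accepting on port 80, and how slow is it?
        netstat -plnt | grep ':80'
        time curl -sI http://localhost/ | head -1

        # are all workers busy? a count near your MaxClients setting
        # suggests requests are queueing behind stuck children
        ps aux | grep [a]pache2 | wc -l

        # did the kernel's OOM killer reap processes?
        grep -i 'out of memory' /var/log/syslog /var/log/kern.log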

  • HP ProCurve switch: port filtered

    - by user117140
    My HP ProCurve switch is blocking port 22 and I don't know how to unblock it. Please let me know. From the server, you can see port 22 is filtered:

        [root@server ~]# nmap -p22,80,443 10.247.172.70

        Starting Nmap 4.11 ( http://www.insecure.org/nmap/ ) at 2012-04-16 14:12 IST
        mass_dns: warning: Unable to determine any DNS servers. Reverse DNS is disabled.
        Try using --system-dns or specify valid servers with --dns_servers
        Interesting ports on 10.247.172.70:
        PORT    STATE    SERVICE
        22/tcp  filtered ssh    ------------------> see
        80/tcp  filtered http
        443/tcp filtered https

    This is blocked on a Cisco switch, but I don't have any clue how it is done here. I know that a VLAN is configured on the switch:

        vlan 54
           ip ospf 10.247.172.65 area 0.0.0.10
           vrrp vrid 54
              owner
              virtual-ip-address 10.247.172.65 255.255.255.192
              priority 255
              enable
              exit
           exit

    Please let me know how to unblock SSH (port 22) access on this switch.
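
    Note that "filtered" in nmap only means the probes were dropped somewhere along the path -- it could just as easily be a host firewall on 10.247.172.70 as an ACL on the switch. A quick triage, with the ProCurve commands hedged since CLI syntax varies across models and firmware:

        # on the target host: is iptables itself dropping port 22?
        iptables -nvL | grep -E 'dpt:(22|ssh)'

        # on the ProCurve (verify against your firmware's command reference):
        show running-config
        show access-list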

  • CentOS: How to prevent a user from executing an application installed in a specific directory

    - by slayernoah
    I have an application installed in /etc/mydir. I have executed the following to remove users' ability to execute this program:

        chown root:group1 /etc/mydir -R
        chmod 700 /etc/mydir -R

    I created a new user and logged in as this user; the new user was not added to group1. However, I was able to execute the program just by typing the program name. How can I stop users being able to run it using chmod and chown? Please let me know. PS: the new users cannot cd into /etc/mydir, but they can still execute the program by name.
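
    If /etc/mydir really is mode 700, a user who can still launch "the program name" is almost certainly running a different copy found via $PATH -- worth confirming before fighting the permissions further:

        # as the new user: which file does the shell actually execute?
        type -a programname    # "programname" is a placeholder

        # any other copies installed elsewhere on the system?
        find / -name programname -type f 2>/dev/null

        # and double-check the permissions actually applied
        ls -ld /etc/mydir
        ls -l /etc/mydir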

  • Corrupt mysql system tables

    - by psynnott
    I am having issues with the columns_priv table in the mysql system database; I cannot add new users currently. I have tried repairing it using

        mysqlcheck --auto-repair --all-databases --password

    but I get the following output:

        mysql.columns_priv
        Error : Incorrect file format 'columns_priv'
        error : Corrupt

    Is there any other way to repair this table, or how do I go about replacing it with a blank table? What would I lose by doing that? Thank you.

    Edit (additional info): mysqld is currently using 100% CPU constantly. Looking at SHOW PROCESSLIST, I get:

        mysql> show processlist;
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        | Id  | User             | Host      | db    | Command | Time | State          | Info                                                                                          |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        |   5 | debian-sys-maint | localhost | mysql | Query   | 1589 | Opening tables | ALTER TABLE tables_priv MODIFY Column_priv set('Select','Insert','Update','References') COLL |
        | 752 | root             | localhost | NULL  | Query   |    0 | NULL           | show processlist                                                                              |
        +-----+------------------+-----------+-------+---------+------+----------------+-----------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)
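
    The grant tables are MyISAM, so there are two standard escalations beyond mysqlcheck; both are sketched here, and both assume the raw files are backed up first (e.g. cp -a /var/lib/mysql/mysql /root/mysql-backup):

        -- the debian-sys-maint ALTER has been stuck in "Opening tables"
        -- for 1589s against the broken table; kill it first
        KILL 5;

        -- rebuild the table from its .frm definition
        REPAIR TABLE mysql.columns_priv USE_FRM;

    If that fails: columns_priv is empty on most installations (it only holds column-level grants), so dropping it and letting mysql_upgrade / mysql_fix_privilege_tables recreate it usually loses nothing -- unless per-column privileges have actually been granted.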

  • Which is better for multi-use auth, MySQL, PostgreSQL, or LDAP?

    - by Fearless
    I want to set up an Oracle Linux 6 server that gives users secure IMAP email (with Dovecot), Jabber IM, FTP (with vsftpd), and CalDAV. However, I want each user logon to authenticate against all services (e.g. Joe Smith signs up once for a username and password that he can use for email, FTP, and his calendar). My question is: which database service would be best suited for that application? Also, is there a way to link the database with the preexisting server shell logins (e.g. so I can read the root account's logcheck emails on a different device)?

  • "killed" message from cron.daily, but not when run from command line

    - by Dan Stahlke
    On Fedora 17, I put a file into /etc/cron.daily with the following contents:

        cd /
        su dstahlke /home/dstahlke/bin/anacron-daily.sh
        exit 0

    For some reason, I get a mail every day that just says:

        /etc/cron.daily/dstahlke-daily: ...killed.

    I tried with and without the exit 0 line above (I noticed that some system scripts have it and others don't; I'm not sure of its purpose). Running /etc/cron.daily/dstahlke-daily from the command line as root produces no "...killed" message. Other than the message, everything seems to work fine. Putting set -x in the above script, as well as in the /home/dstahlke/bin/anacron-daily.sh script, shows that the "...killed" message happens just after the latter script terminates (or perhaps just after the su command finishes). What causes the "...killed" message? Or is there a more acceptable way to have anacron run a user script daily? I figured that putting this in /etc/cron.daily would help the system coordinate all of the daily tasks, rather than potentially running my task concurrently with the system tasks.
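
    A hedged guess at the mechanism: on Fedora, su opens a full PAM session, and when that session is torn down anything still attached to it can be signalled -- the "...killed" text is just cron's run-parts reporting it. Two common ways around it, sketched:

        # 1) drop privileges without the PAM session machinery; runuser is
        #    the Fedora/RHEL tool intended for exactly this use in scripts
        runuser dstahlke -c /home/dstahlke/bin/anacron-daily.sh

        # 2) skip /etc/cron.daily entirely and schedule from the user's own
        #    crontab ("crontab -e" as dstahlke):
        #    @daily /home/dstahlke/bin/anacron-daily.sh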

  • Storing bundled AMIs on Amazon EC2

    - by Industrial
    Hi everybody, I am totally new to configuring servers and working with EC2, so please bear with me. After a lot of hair pulling, I managed to get a server with Ubuntu up and running, with memcached and some other goodies that would make a great package for me. I thought, however, that when storing it as an AMI with this tool, I would have memcached available the next time I launched an instance based upon that image. What can I do to make sure that my configuration is saved properly to an instance? Question number two: can I somehow make a command run automatically on server creation, like initiating memcached with "memcached -d -m 1700 -u root", or even a batch of them?
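
    For the second question, EC2 passes "user data" to every instance at launch, and Ubuntu images execute it on first boot when it starts with #! (handled by cloud-init on newer images). A minimal sketch using the asker's memcached parameters:

        #!/bin/bash
        # user-data script: runs once, as root, on the instance's first boot
        apt-get update
        apt-get install -y memcached

        # start memcached the way the poster wants (running as root is
        # generally discouraged; memcached normally drops to its own user)
        memcached -d -m 1700 -u root

    For the first question: anything installed and enabled as a normal boot service (e.g. update-rc.d memcached defaults) before bundling should come back automatically on instances launched from that AMI, provided the bundling tool captured the whole filesystem.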

  • How to access a MySQL server remotely?

    - by ÉricP
    Hi, I'm trying to access my remote MySQL server from my own computer. I uncommented:

        bind-address = 80.10.65.45

    I added 80.10.65.45 as a host in the privileges table:

        root  80.10.65.45  yes  ALL PRIVILEGES  yes

    I'm using Sequel Pro on Mac OS X to connect via SSH. Here is the debug log:

        debug1: Authentication succeeded (password).
        debug1: Local connections to LOCALHOST:58517 forwarded to remote address 127.0.0.1:3306
        debug1: Local forwarding listening on ::1 port 58517.
        debug1: channel 0: new [port listener]
        debug1: Local forwarding listening on 127.0.0.1 port 58517.
        debug1: channel 1: new [port listener]
        debug1: Entering interactive session.
        debug1: Connection to port 58517 forwarding to 127.0.0.1 port 3306 requested.
        debug1: channel 2: new [direct-tcpip]
        channel 2: open failed: connect failed: Connection refused
        debug1: channel 2: free: direct-tcpip: listening port 58517 for 127.0.0.1 port 3306, connect from 127.0.0.1 port 58519, nchannels 3
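
    The log actually shows the SSH tunnel working: sshd on the server tried to reach 127.0.0.1:3306 and was refused. That is exactly what happens when mysqld is bound only to 80.10.65.45 -- it no longer listens on loopback, which is where tunnelled connections arrive. A sketch of the my.cnf side of the fix:

        # /etc/mysql/my.cnf
        # since Sequel Pro connects through the SSH tunnel, MySQL only needs
        # to listen on loopback; restart mysqld after changing this
        bind-address = 127.0.0.1

        # through the tunnel the connection appears to come from localhost,
        # so the usual 'root'@'localhost' grant applies; the 80.10.65.45
        # privilege row only matters for direct, non-tunnelled connections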

  • Same command on multiple servers

    - by w00t
    Hello everyone. I'm just wondering if there are any fellow sysadmins with the need to execute one command on multiple servers. If so, what technique are you using? I have grown tired of ssh-ing to 3-5 servers and executing the same thing over and over again, so I'm thinking of making my life easier. Also, I think I should create keys so I don't have to enter passwords anymore (though I'm using root). After 2 years of doing this, I have developed a kind of laziness. I googled it, and I know about cssh, pssh, tentakel (this one seems cool), and the more pro genre: Puppet (which I've only just heard of and haven't invested the time to read the docs). BTW, I'm using XP+PuTTY, so if there is any PuTTY-cool-thingy available, that's welcome too. If not, I can always ssh to one server and from there start my rest-of-the-servers conquest :) *evil* Hit me up. Thanks.
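
    The minimal version needs nothing beyond OpenSSH itself: one keypair pushed everywhere, then a shell loop; pssh wraps the same idea with parallelism. A sketch, with the host names as placeholders:

        # one-time setup: create a key and push it to every box
        ssh-keygen -t rsa
        for h in web1 web2 db1; do ssh-copy-id root@$h; done

        # any command then fans out with a loop...
        for h in web1 web2 db1; do ssh root@$h 'uptime'; done

        # ...or in parallel with pssh (hosts.txt: one hostname per line)
        pssh -h hosts.txt -l root -i 'uptime'

    From PuTTY on XP, the same loop can run from any one of the servers, or plink.exe can fire a single command per host from the Windows side.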

  • Help on using mod_rewrite to serve I18N static site

    - by Guandalino
    My static site www.example.com is translated into different languages, with files organized in this hierarchy:

        /
        /de
            index.html
            seite-1.html
        /en
            index.html
            page-1.html
        /it
            index.html
            pagina-1.html

    The root contains no files, just one subdirectory for each language the site is translated into, while the subdirectories contain pages translated (both content and file name) into the language corresponding to the subdirectory name (de, en, it, etc.). The question is: how do I configure mod_rewrite so that when a client visits www.example.com, it is taken to the correct version of the site, falling back to the English version if the required locale is not supported (i.e. the Accept-Language header doesn't exist or specifies a language for which the site is not available, e.g. fr)? Thanks for any pointers; I'm here to provide further details or feedback! Best regards
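
    A sketch of the usual mod_rewrite pattern for this: match the leading language tag of Accept-Language against the directories that actually exist, and let everything else fall through to /en/. Real Accept-Language headers can be messy (q-values, region tags), so this only inspects the first two letters; mod_negotiation is the heavier-weight alternative.

        RewriteEngine On

        # bare root -> the visitor's language, if it's one we have
        # (note: an uppercase header tag would need lowercasing first)
        RewriteCond %{HTTP:Accept-Language} ^(de|en|it) [NC]
        RewriteRule ^/?$ /%1/ [R=302,L]

        # anything else -> English
        RewriteRule ^/?$ /en/ [R=302,L]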

  • Is it possible to limit output bandwidth between eth0 and lo?

    - by mmcbro
    I'm trying to limit the bandwidth between my eth0 output (nginx proxy) and my loopback interface (Apache) by filtering on destination port:

        incoming packet -> eth0 -> 0.0.0.0:80 nginx -> tc qdisc class / iptables mangle port 2525 -> 127.0.0.1:2525 Apache

    I don't know if it's even possible; I'm just experimenting. My rules are the following:

        tc qdisc add dev eth0 root handle 1:0 htb
        tc class add dev eth0 parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps prio 0
        tc filter add dev eth0 parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10
        iptables -A OUTPUT -t mangle -p tcp --dport 2525 -j MARK --set-mark 10

    I also tried with the FORWARD chain, but it's still the same.
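
    The likely catch: packets from nginx to 127.0.0.1:2525 are routed over the loopback interface and never cross eth0, so an eth0 qdisc cannot see them. Attaching the same discipline to lo should at least let the filter match -- a sketch, with the caveat that shaping loopback traffic is unusual and worth treating as an experiment:

        # mirror the setup on the loopback device
        tc qdisc add dev lo root handle 1:0 htb
        tc class add dev lo parent 1:0 classid 1:10 htb rate 2mbps ceil 2mbps prio 0
        tc filter add dev lo parent 1:0 prio 0 protocol ip handle 10 fw flowid 1:10

        # the existing OUTPUT mangle rule already marks the right packets;
        # confirm it's actually seeing traffic:
        iptables -t mangle -L OUTPUT -nv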

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to use Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, it goes to a different backend; this works. Now, for most of the sites the default Varnish logic will work just fine. However, for some specific sites I wish to use custom VCL code. I can test for host name and include config files for the specific domains, but this only works inside the individual methods (recv etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
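
    Top-level includes do compile in VCL as long as the included file contains only complete sub definitions -- bare statements outside a sub are what trigger the compilation error. That allows one file per domain by giving the subs domain-specific names and dispatching from the central methods. A sketch (Varnish 3-era syntax assumed):

        # default.vcl
        include "example_com.vcl";

        sub vcl_recv {
            if (req.http.host ~ "(^|\.)example\.com$") {
                call example_com_recv;
            }
        }

        # example_com.vcl -- everything for this domain in one file
        sub example_com_recv {
            # custom per-domain request logic here
        }

    The per-method dispatch lines still live in default.vcl, but all of a domain's actual logic (its recv, fetch, etc. subs) can sit together in its own file.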

  • Dynamic Subdomains

    - by crash
    On my new site I want to have dynamic subdomains. I'm trying to make it so that the subdomains use the same web root as the main domain, all under a single CodeIgniter installation. For example, subdomain.example.com would lead to example.com/subdomain, which is actually example.com/index.php/subdomain. I've already got the DNS and virtual hosts set up, but I'm getting caught up on the .htaccess. The effect of the linked .htaccess is that navigating to any subdomain gets caught in an infinite loop (error log after one request). It's the same effect for www., which should just resolve to the main domain.
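
    Since the original .htaccess isn't shown, here is a hypothetical loop-safe version of the idea: capture the subdomain from the Host header, leave www and the bare domain alone, and refuse to rewrite anything already routed into index.php (that guard is what breaks the infinite loop):

        RewriteEngine On

        # skip the bare domain and www entirely
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        # capture the subdomain into %1
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        # loop guard: never rewrite a request that's already in index.php
        RewriteCond %{REQUEST_URI} !^/index\.php/
        RewriteRule ^(.*)$ /index.php/%1/$1 [L]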

  • Have apache choose a php version based on the extension in the url, but with a single file on the filesystem

    - by Somejan
    I want to configure a local Apache server to serve PHP files with different PHP versions. In my document root I have phpinfo.php. Now, if I go to http://localhost/phpinfo.php4, I want to see the phpinfo.php file processed with PHP 4; if I go to http://localhost/phpinfo.php5, I want to see the same file processed with PHP 5. Note: both PHP 4 and 5 are already installed side by side; I have no problem configuring Apache to treat files that have a .php4 or .php5 extension on the filesystem with the correct PHP version. What I want is for Apache to do the following:

        1. If the url-path ends in .php5, serve the file which has a .php extension on the filesystem using the application/x-httpd-php5 handler.
        2. If the url-path ends in .php4, serve the same file with the .php extension on the filesystem using the application/x-httpd-php4 handler.
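
    mod_rewrite's handler flag does precisely this URL-to-handler mapping. A sketch -- hedged in that H= requires a reasonably modern Apache, and that the handler names must match whatever the two PHP SAPIs registered themselves as on this particular install:

        RewriteEngine On

        # /foo.php5 -> serve foo.php through the PHP 5 handler
        RewriteRule ^(.+)\.php5$ $1.php [H=application/x-httpd-php5,L]

        # /foo.php4 -> serve foo.php through the PHP 4 handler
        RewriteRule ^(.+)\.php4$ $1.php [H=application/x-httpd-php4,L]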

  • How can I whitelist a user-agent in nginx?

    - by djb
    I'm trying to figure out how to whitelist a user agent in my nginx conf. All other agents should be asked for a password. In my naivety, I tried to put the following in before deny all:

        if ($http_user_agent ~* SpecialAgent) {
            allow;
        }

    but I'm told the "allow" directive is not allowed here (!). How can I make it work? A chunk of my config file:

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic "Restricted";
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;

            allow 123.456.789.123;
            deny all;
            satisfy any;

            #other stuff...
        }

    Thanks for any help.
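
    One pattern that sidesteps if/allow entirely: let a map turn the user agent into the auth_basic realm string, using the special value "off" for the whitelisted agent. This leans on auth_basic accepting variables, which is true in newer nginx releases but worth verifying against the version in use:

        # in the http {} block
        map $http_user_agent $auth_realm {
            default          "Restricted";
            "~*SpecialAgent" "off";          # whitelisted UA bypasses auth
        }

        server {
            server_name site.com;
            root /var/www/site;

            auth_basic           $auth_realm;
            auth_basic_user_file /usr/local/nginx/conf/htpasswd;
        }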

  • MySQL stops working after being started

    - by user115343
    I am new to this whole web server thing. I used Centmin Mod to install nginx + MariaDB to set up a small WordPress blog. On the first day everything was OK; there was a nice "hello world" on my box's IP. But today I found that MySQL had stopped working, so I immediately started it again, only for it to stop again after a few minutes! I followed this tutorial, but it still stops after some period. Here is my log:

        [root@rylai ~]# tail -f /var/log/mysqld.log
        120326 16:19:05 [Note] Plugin 'PBXT_STATISTICS' is disabled.
        120326 16:19:05 [Note] Plugin 'InnoDB' is disabled.
        120326 16:19:06 [Note] Event Scheduler: Loaded 0 events
        120326 16:19:06 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.2.10-MariaDB-mariadb107'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  (MariaDB - http://mariadb.com/)
        120326 16:20:36 mysqld_safe Number of processes running now: 0
        120326 16:20:36 mysqld_safe mysqld restarted
        120326 16:20:39 [Note] Plugin 'ARCHIVE' is disabled.
        120326 16:20:39 [Note] Plugin 'FEDERATED' is disabled.
        120326 16:20:40 mysqld_safe mysqld from pid file /var/lib/mysql/rylai.pid ended

    I only access MySQL from the CLI; I didn't install any panel yet.
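
    The "Number of processes running now: 0" line means mysqld died and mysqld_safe restarted it. On a small VPS the most common culprit is the kernel's OOM killer picking off mysqld under memory pressure, so that's worth checking before touching any MySQL settings:

        # did the kernel kill mysqld for memory?
        dmesg | grep -i -E 'out of memory|oom|killed process'
        grep -i oom /var/log/messages

        # how much headroom does the box actually have?
        free -m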

  • Email is not sent when the script is run by cron

    - by Adam Blok
    I wrote a simple backup bash script, and at the end of it an email is sent to me saying that the backup is ready. Everything works perfectly when I run the script from a terminal (as root), but when the script is run by cron, the email is not sent :-/

        #!/bin/sh
        filename=$(date +%d-%m-%Y)
        backup_dir="/mnt/backup/"
        email_from_name="BACKUP"
        email_to="my@email"
        email_subject="Backup is ready"
        email_body_file="/tmp/backup-email-body.txt"

        tar czf "$backup_dir$filename.tgz" "/home/www"

        echo "Subject: $email_subject" > $email_body_file
        ls $backup_dir -sh >> $email_body_file

        sendmail -F $email_from_name -t $email_to < $email_body_file
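
    Cron jobs run with a minimal PATH (often just /usr/bin:/bin), and sendmail traditionally lives in /usr/sbin, so the last line is the most likely casualty when the script runs under cron. Two small, hedged fixes:

        # either call sendmail by its full path...
        /usr/sbin/sendmail -F "$email_from_name" -t "$email_to" < "$email_body_file"

        # ...or widen PATH near the top of the script:
        PATH=/usr/sbin:/usr/bin:/bin

    Redirecting the cron job's output to a file (>> /tmp/backup.log 2>&1) would confirm it, since a "sendmail: command not found" error never shows up in an interactive test.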

  • Why can my Mac not connect to my iPhone via ssh?

    - by martin08
    I can't always ssh to my iPhone from my Mac. They're both on the same wifi network, but sometimes the connection is established and sometimes it fails. From my Mac:

        $ ssh [email protected]
        ssh: connect to host 192.168.0.102 port 22: Operation timed out

        $ ping 192.168.0.102
        PING 192.168.0.102 (192.168.0.102): 56 data bytes
        ping: sendto: No route to host
        ping: sendto: Host is down
        ping: sendto: Host is down

    I enabled SSH on the phone and am sure it can load web pages. So what might be a reason why they cannot connect? Thanks
