Search Results

Search found 8501 results on 341 pages for 'status'.


  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measures, since the server and the clients are located on a test LAN, not connected to the rest of the network, which itself uses non-routable IPs behind a NAT firewall with no incoming FTP access. Some people recommend vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (i.e. logging in as "ftp" or "anonymous" and typing any string as the password) to upload a file:

        # yum install vsftpd
        # mkdir /var/ftp/pub/upload
        # cat vsftpd.conf
        listen=YES
        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        #directory was created by root, hence owned by root.root
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #chroot_local_user=NO
        #chroot_list_enable=YES
        #chroot_list_file=/etc/vsftpd.chroot_list
        chown_uploads=YES

    When I log on from a client, here's what I get:

        500 OOPS: cannot change directory:/var/ftp/pub/incoming

    I also tried "# chmod 777 /var/ftp/incoming/", but I get the same error. Does someone know how to configure vsftpd with minimum security? Thank you.

    Edit: SELinux is disabled, and here are the file permissions:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted
        SETLOCALDEFS=0
        # sestatus
        SELinux status: disabled
        # getenforce
        Disabled
        # grep ftp /etc/passwd
        ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
        # ll /var/
        drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp
        # ll /var/ftp/
        drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming
        drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub

    Edit: latest vsftpd.conf:

        listen=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        anonymous_enable=YES
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #500 OOPS: bad bool value in config file for: chown_uploads
        chown_uploads=YES
        chown_username=ftp

    Edit: with a trailing space removed from "chown_uploads", error 500 is solved, but anonymous login still doesn't work:

        client> ./ftp server
        Connected to server.
        220 (vsFTPd 2.0.5)
        Name (server:root): ftp
        331 Please specify the password.
        Password:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
        Login failed.
        ftp> bye

    With user "ftp" listed in /etc/passwd with the home directory set to "/var/ftp", access rights on /var/ftp set to "drwxr-xr-x" and on /var/ftp/incoming to "drwxrwxrwx"... could it be due to PAM, maybe? I don't find any FTP log file in /var/log to investigate.

    Edit: Here's a working configuration that lets ftp/anonymous connect and upload files to /var/ftp:

        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
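
    One detail visible in the listings above: anon_root points at /var/ftp/pub/incoming, but the ll output only shows /var/ftp/incoming and /var/ftp/pub, so the configured directory may simply not exist, which would produce exactly this "cannot change directory" OOPS. A quick sketch to verify and fix (paths from the post; the exact permissions are an assumption to test):

        # does the configured anon_root actually exist?
        ls -ld /var/ftp/pub/incoming
        # if not, create it; a root-owned, non-writable anon_root with a
        # writable subdirectory is the usual vsftpd layout
        mkdir -p /var/ftp/pub/incoming
        chown ftp:ftp /var/ftp/pub/incoming
        chmod 755 /var/ftp/pub
        chmod 777 /var/ftp/pub/incoming   # wide open; fine only on an isolated test LAN

    That layout also matches the working configuration quoted at the end, which drops anon_root entirely and lands anonymous users in the non-writable /var/ftp home.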


  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for the reasons outlined in this question: CPU0 is swamped with eth1 interrupts. But I can't: I get an Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to a HOWTO on the matter. (A formal reference on /proc/irq/*/ would be nice as well.)

    Gory details (note that this is a VM inside an Ubuntu-based Xen XCP host):

        $ uname -a
        Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux
        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description:    Ubuntu 11.04
        Release:        11.04
        Codename:       natty
        $ sudo cat /proc/irq/*/smp_affinity
        01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40
        20 20 20 20 20 20 10 10 10 10 10 10 08 08 08 08 08 08
        04 04 04 04 04 04 02 02 02 02 02 02 01 01 01 01 01 01

    (Each value is printed on its own line; they are grouped here for brevity.)

    Update. The error details:

        $ N=$(grep -c processor /proc/cpuinfo)
        $ echo $N
        8
        $ printf %x $((2**N-1))
        ff
        $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity
        ff
        tee: /proc/irq/288/smp_affinity: Input/output error
        tee: /proc/irq/289/smp_affinity: Input/output error
        ...
        tee: /proc/irq/335/smp_affinity: Input/output error

    (The same Input/output error is reported for every IRQ from 288 through 335.)

    Update. irqbalance is running:

        $ sudo service irqbalance status
        irqbalance start/running, process 560
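
    For what it's worth, on a Xen domU the guest's "IRQs" are event channels delivered through the hypervisor, and affinity writes the kernel won't honor fail with exactly this EIO, so this may be a platform limitation rather than a misconfiguration. Two things worth trying before concluding that, sketched below: stop irqbalance (it rewrites the masks behind your back) and probe each IRQ individually rather than with a single glob:

        sudo service irqbalance stop
        # probe each IRQ separately and report which masks, if any, are accepted
        for f in /proc/irq/*/smp_affinity; do
            if echo ff | sudo tee "$f" >/dev/null 2>&1; then
                echo "accepted: $f"
            else
                echo "refused:  $f"
            fi
        done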


  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, each of which makes a fairly simple (inexpensive) query to a PostgreSQL database and returns data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and those respond perfectly fine, so I think I can rule out any wrongdoing on Nginx's part.

    Here is my God configuration:

        # God configuration
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        God.watch do |w|
          w.name     = "app_name"
          w.interval = 30.seconds # default
          w.start    = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
          # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
          w.stop     = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.restart  = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.start_grace   = 10.seconds
          w.restart_grace = 10.seconds
          w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"

          # User under which to run the process
          w.uid = 'web'
          w.gid = 'web'

          # Cleanup the pid file (this is needed for processes running as a daemon)
          w.behavior(:clean_pid_file)

          # Conditions under which to start the process
          w.start_if do |start|
            start.condition(:process_running) do |c|
              c.interval = 5.seconds
              c.running  = false
            end
          end

          # Conditions under which to restart the process
          w.restart_if do |restart|
            restart.condition(:memory_usage) do |c|
              c.above = 150.megabytes
              c.times = [3, 5] # 3 out of 5 intervals
            end
            restart.condition(:cpu_usage) do |c|
              c.above = 50.percent
              c.times = 5
            end
          end

          w.lifecycle do |on|
            on.condition(:flapping) do |c|
              c.to_state     = [:start, :restart]
              c.times        = 5
              c.within       = 5.minute
              c.transition   = :unmonitored
              c.retry_in     = 10.minutes
              c.retry_times  = 5
              c.retry_within = 2.hours
            end
          end
        end

    Here is my Unicorn configuration:

        # Unicorn configuration file
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        worker_processes 8
        preload_app true
        pid "#{APP_ROOT}/tmp/unicorn.pid"
        listen 8001

        stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
        stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"

        before_fork do |server, worker|
          old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # someone else did our job for us
            end
          end
        end

    I have checked the God status logs, but it appears CPU and memory usage are never out of bounds. I also have something in place to kill high-memory workers, which can be found on the GitHub blog page here. When running tail -f on the Unicorn logs I do see some requests, but they're few and far between, whereas I was seeing around 60-100 a second before this trouble arrived. The log also shows workers being reaped and started as expected.

    So my question is: how would I go about debugging this? What are the next steps I should be taking? I'm extremely baffled that the server will sometimes respond quickly, but at other times is very slow for long periods (which may or may not be peak traffic times). Any advice is much appreciated.
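
    One way to narrow this down is to make Nginx log per-request timing, so slow Unicorn workers can be told apart from slow clients or network; a sketch (the log format name and path are illustrative):

        log_format timed '$remote_addr "$request" $status '
                         'req=$request_time up=$upstream_response_time';
        access_log /var/log/nginx/timed.log timed;

    $request_time includes time spent talking to the client, while $upstream_response_time is what Unicorn itself took; if the latter is the large one, the problem is behind Nginx (workers, the database, preload_app interactions) rather than in front of it.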


  • I can't send email from my server to gmail addresses

    - by brianegge
    I'm using postfix, and have set up SPF, DKIM, and DomainKeys. I can get my email to go to Yahoo, but not Gmail. Here are the headers from an email sent to Yahoo; Yahoo reports the email as DomainKey-verified:

        X-Apparently-To: brianegge at yahoo.com via 68.142.206.167; Sat, 20 Mar 2010 05:29:19 -0700
        Return-Path: <domains at theeggeadventure.com>
        X-YahooFilteredBulk: 67.207.137.114
        X-YMailISG: x7_Rl9EWLDuugoqPcORhih0FeQMOaIIpz4qfuu9ttx1xbo3uKI2kz.CLUy2cJ1BTtHAwuJtrsGRsveHIx.Dx95avNGlPPGWy_cSpnEwWLXGxBciO.YgtSQxdURQiWLCLvbHej0QPjQIHFjAFjdeGhJd2Y8NgTW1wcExq45Sb7LMlOGvtGMjSQuc8QazwXUxpZrQbIxgSQUTmzQO1x30xaZ2Us6TQTab7Wpya6OgAX.emKOM3phfS5kfhYj9FLQ.qi32sFNWnAoFdVK596OTP2F63PAJOVM5qPsM2jIAbJylIBmnj94LO7hOVr3KOS6XLtCPRn2Oe
        X-Originating-IP: [67.207.137.114]
        Authentication-Results: mta1055.mail.mud.yahoo.com from=theeggeadventure.com;
          domainkeys=pass (ok); from=theeggeadventure.com; dkim=pass (ok)
        Received: from 127.0.0.1 (EHLO mail.theeggeadventure.com) (67.207.137.114)
          by mta1055.mail.mud.yahoo.com with SMTP; Sat, 20 Mar 2010 05:29:19 -0700
        Received: by mail.theeggeadventure.com (Postfix, from userid 1003)
          id BB5B01C16A4; Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        DomainKey-Signature: a=rsa-sha1; s=2010; d=theeggeadventure.com; c=simple; q=dns;
          b=JHbK9VhqyQTfpQFqaXxJrKpEG9h9H0IZ0LdWoBooJEA7hv3SYWmFUtyE247EuwoaG
          gzApKJ1DuRhwESZ7PswrbzuaUL8poAUO8LmMvZ+OqnDolgNSJUYWu0FcO+fe3H4m9ZD
          grkj0xMpHw+uFjXV4plKO+sa8olJXJAmP+9cMEo=
        X-DKIM: Sendmail DKIM Filter v2.8.2 mail.theeggeadventure.com BB5B01C16A4
        DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=theeggeadventure.com; s=2010;
          t=1269088156; bh=bUlMldcnzFCmCmNT8qjpRl6fiY1YyjiZiC9jhCXASOw=;
          h=Subject:To:Message-Id:Date:From;
          b=EVNolTlh4Gch5/HIrrHaRQvcApl7wkB42gB44NsPcLZD2QrhuOvnhanhnEB4UbV0e
          A+3dAOjhX7LKzgGrn11jXNTiEjNX1vQDsX3HyG0fNra73aWiGTzr1nHJfnuEJ7Ph0j
          5tp0HRL5jjikD1XJcvmsYzTpT22mxuz60HXYRB1s=
        Subject: cron
        To: <brianegge at yahoo.com>
        X-Mailer: mail (GNU Mailutils 1.2)
        Message-Id: <[email protected]>
        Date: Sat, 20 Mar 2010 12:29:16 +0000 (UTC)
        From: [email protected] (domains)
        Content-Length: 818

    When I send to Gmail, I see the following in my server log, but the message doesn't even reach my spam folder:

        Mar 20 12:59:12 Everest postfix/pickup[27802]: C81C61C16A4: uid=1000 from=<egge>
        Mar 20 12:59:12 Everest postfix/cleanup[27847]: C81C61C16A4: message-id=<[email protected]>
        Mar 20 12:59:13 Everest postfix/qmgr[27801]: C81C61C16A4: from=<[email protected]>, size=2784, nrcpt=1 (queue active)
        Mar 20 12:59:14 Everest postfix/smtp[27849]: C81C61C16A4: to=<brianegge at gmail.com>,
            relay=gmail-smtp-in.l.google.com[209.85.223.24]:25, delay=2.1, delays=0.39/0.28/0.13/1.3,
            dsn=2.0.0, status=sent (250 2.0.0 OK 1269089954 32si4566750iwn.51)
        Mar 20 12:59:14 Everest postfix/qmgr[27801]: C81C61C16A4: removed

    I've sent email to test services, and they report that everything verifies OK. I've also checked all the RBL lists, and I'm not on any of them.
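
    Since the postfix log above shows status=sent with a 250 from gmail-smtp-in, Google accepted the message and then discarded or filtered it afterwards, so the next thing to check is the records Gmail evaluates. A sketch, using the domain, DKIM selector, and IP visible in the headers above:

        dig +short TXT theeggeadventure.com                   # SPF record
        dig +short TXT 2010._domainkey.theeggeadventure.com   # DKIM public key (selector "2010")
        dig +short -x 67.207.137.114                          # reverse DNS of the sending IP

    If all three look sane, sending a test message to a Gmail account you control and inspecting "Show original" there (assuming you can create one) shows Gmail's own SPF/DKIM verdicts for the message.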


  • Veewee, Vagrant, Puppet, Erlang and RabbitMQ

    - by Tobias
    I am kinda stuck with a problem I have been trying to wrap my head around for days now. Here is what I am doing: using Veewee, I create a VirtualBox image and then create a Vagrant box from it (see here, here). Finally I run puppet from Vagrant to install RabbitMQ (see here). Veewee, Vagrant and VirtualBox all run on Mac OS X 10.7.4; the Vagrant box itself is CentOS 6.2. This worked fine for quite some time, until I recreated the VirtualBox image a couple of days ago. During installation of the rabbitmq-plugins in my puppet run, I now get the following error:

        /Stage[main]/Rabbitmq/Exec[rabbitmq-plugins]/returns: erlexec: HOME must be set

    My RabbitMQ puppet configuration can be found in my GitHub repo for the project, but here is the most important part:

        $version = "2.8.7"
        $url = "http://www.rabbitmq.com/releases/rabbitmq-server/v${version}/rabbitmq-server-${version}-1.noarch.rpm"

        package{"erlang":
          ensure => "present",
        }

        package{"rabbitmq-server":
          provider => "rpm",
          source   => $url,
          require  => Package["erlang"]
        }

        exec{"rabbitmq-plugins":
          path    => "/usr/bin:/usr/sbin:/bin",
          command => "rabbitmq-plugins enable rabbitmq_management",
          require => Package["rabbitmq-server"]
        }

    My additional repositories, e.g. EPEL, are defined in veewee's postinstall.sh, right at the top of the file. Finally, this is what I get when I do '/etc/init.d/rabbitmq-server status':

        [{pid,2834},
         {running_applications,[{rabbit,"RabbitMQ","2.8.7"},
                                {ssl,"Erlang/OTP SSL application","4.1.6"},
                                {public_key,"Public key infrastructure","0.13"},
                                {crypto,"CRYPTO version 2","2.0.4"},
                                {mnesia,"MNESIA CXC 138 12","4.5"},
                                {os_mon,"CPO CXC 138 46","2.2.7"},
                                {sasl,"SASL CXC 138 11","2.1.10"},
                                {stdlib,"ERTS CXC 138 10","1.17.5"},
                                {kernel,"ERTS CXC 138 10","2.14.5"}]},
         {os,{unix,linux}},
         {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit] [rq:1] [async-threads:30] [kernel-poll:true]\n"},
         {memory,[{total,24993120},
                  {processes,10328496},
                  {processes_used,10321296},
                  {system,14664624},
                  {atom,1175905},
                  {atom_used,1143841},
                  {binary,17192},
                  {code,11416020},
                  {ets,766168}]},
         {vm_memory_high_watermark,0.4},
         {vm_memory_limit,205851852},
         {disk_free_limit,1000000000},
         {disk_free,7089795072},
         {file_descriptors,[{total_limit,924},
                            {total_used,4},
                            {sockets_limit,829},
                            {sockets_used,2}]},
         {processes,[{limit,1048576},{used,131}]},
         {run_queue,0},
         {uptime,6}]

    Sources on the web suggest that I have to set HOME. Of course I logged into the box to check whether HOME was set: for user vagrant it was '/home/vagrant' and for root it was '/root'. As always, any hints/ideas/suggestions/assumptions are more than welcome. Thanks a lot! Cheers, Tobi
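
    A commonly suggested workaround for the erlexec HOME error is to hand the exec an explicit environment, since Puppet does not give commands a login environment even when the shell would have one. A sketch (hedged: it assumes the exec runs as root, which is Puppet's default):

        exec{"rabbitmq-plugins":
          path        => "/usr/bin:/usr/sbin:/bin",
          environment => ["HOME=/root"],  # assumption: erlexec only needs HOME present
          command     => "rabbitmq-plugins enable rabbitmq_management",
          require     => Package["rabbitmq-server"]
        }

    This would also explain why the same manifest worked before the image rebuild: whether an exec inherits HOME depends on how the puppet run is launched, which Veewee/Vagrant changes can alter.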


  • WD1000FYPS hard drive is marked 0 MB in 3ware (and no SMART)

    - by osgx
    After a reboot, my SATA 1TB WD1000FYPS (previously it showed "Drive error") is marked as 0 MB in the 3ware web GUI. The complete message:

        Available Drives (Controller ID 0)
        Port 1  WDC WD1000FYPS-01ZKB0  0.00 MB  NOT SUPPORTED  [Remove Drive]

    SMART gives me only the device model and ATA protocol version 1 (not 7-8, as it should be for SATA). What does this mean? Just before the reboot, when it was marked only with "Device Error", SMART reported:

        Device Model:     WDC WD1000FYPS-01ZKB0
        Serial Number:    WD-WCASJ1130***
        Firmware Version: 02.01B01
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Sun Mar  7 18:47:35 2010 MSK
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME           FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate      0x000f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time             0x0003 188   186   021    Pre-fail Always  -           7591
          4 Start_Stop_Count         0x0032 100   100   000    Old_age  Always  -           229
          5 Reallocated_Sector_Ct    0x0033 199   199   140    Pre-fail Always  -           3
          7 Seek_Error_Rate          0x000e 193   193   000    Old_age  Always  -           125
          9 Power_On_Hours           0x0032 078   078   000    Old_age  Always  -           16615
         10 Spin_Retry_Count         0x0012 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count  0x0012 100   253   000    Old_age  Always  -           0
         12 Power_Cycle_Count        0x0032 100   100   000    Old_age  Always  -           77
        192 Power-Off_Retract_Count  0x0032 198   198   000    Old_age  Always  -           1564
        193 Load_Cycle_Count         0x0032 146   146   000    Old_age  Always  -           164824
        194 Temperature_Celsius      0x0022 117   100   000    Old_age  Always  -           35
        196 Reallocated_Event_Count  0x0032 199   199   000    Old_age  Always  -           1
        197 Current_Pending_Sector   0x0012 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable    0x0010 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count     0x003e 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate    0x0008 200   200   000    Old_age  Offline -           0

    What could be wrong with it? Can it be restored?

    PS: The new SMART output is:

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1000FYPS-01ZKB0
        Serial Number:    [No Information Found]
        Firmware Version: [No Information Found]
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   1
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Mon Mar  8 00:29:44 2010 MSK
        SMART is only available in ATA Version 3 Revision 3 or greater.
        We will try to proceed in spite of this.
        SMART support is: Ambiguous - ATA IDENTIFY DEVICE words 82-83 don't show if SMART supported.
        Checking for SMART support by trying SMART ENABLE command.
        Command failed, ata.status=(0x00), ata.command=(0x51), ata.flags=(0x01)
        Error SMART Enable failed: Input/output error
        SMART ENABLE failed - this establishes that this device lacks SMART functionality.
        A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    PPS: There was rapid growth in attribute 192 (Power-Off_Retract_Count) before it died. The drive was used in a RAID together with several drives from the same factory packaging box (serial numbers close together), all placed identically. "Rapid" means almost linear growth from 300 to 1700 in 6-7 hours. The maximal temperature was 41°C. (Thanks to munin's SMART monitoring.)
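
    If smartctl was being pointed at the block device the controller exports, it may be worth querying the drive through the 3ware controller instead; a sketch (the port number 1 comes from the GUI output above, the device node depends on which 3ware driver is loaded):

        smartctl -a -d 3ware,1 /dev/twe0   # 6/7/8xxx-series controllers
        smartctl -a -d 3ware,1 /dev/twa0   # 9xxx-series controllers

    If even that returns only a model string, ATA version 1 and no serial number, the drive's own electronics are no longer answering IDENTIFY DEVICE properly, which points at a dead controller board on the disk rather than anything the 3ware card or cabling can explain.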


  • iptables (NAT/PAT) setup for SSH & Samba

    - by IanVaughan
    I need to access a Linux box via SSH & Samba that is hidden/connected behind another one. Setup:

        A           switch         B              C
        |----|      |---|      |--------|     |----|
        |eth0|------|   |------|eth0    |     |    |
        |----|      |---|      |    eth1|-----|eth1|
                               |--------|     |----|

    E.g., SSH/Samba from A to C. How does one go about this? I was thinking that it cannot be done via IP alone, or can it? Could B say "hi on eth0, if you're looking for 192.168.0.2, it's here on eth1"? Is this NAT? This is a large private network, so what if another PC has that IP? More likely it would be PAT: A would say "hi 192.168.109.15:1234", and B would say "hi on eth0, traffic for port 1234 goes on here, eth1". How could that be done? And would the SSH/Samba daemons see the correct packet header info and work?

    IP info:

        A - eth0 - 192.168.109.2
        B - eth0 - B1 = 192.168.109.15
                   B2 = 172.24.40.130
          - eth1 - 192.168.0.1
        C - eth1 - 192.168.0.2

    A, B & C are RHEL (RedHat), but Windows computers can be connected to the switch. I configured the 192.168.0.* IPs; they are changeable.

    Update after response from Eddie. A few problems (and machine B's IP is different!). From A, "ssh 172.24.40.130" works OK (I can get to B2), but "ssh 172.24.40.130 -p 2022 -vv" times out with:

        OpenSSH_4.3p2, OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 172.24.40.130 [172.24.40.130] port 2022.
        ...wait ages...
        debug1: connect to address 172.24.40.130 port 2022: Connection timed out
        ssh: connect to host 172.24.40.130 port 2022: Connection timed out

    From B2:

        $ service iptables status
        Table: filter
        Chain INPUT (policy ACCEPT)
        num  target  prot opt source     destination
        Chain FORWARD (policy ACCEPT)
        num  target  prot opt source     destination
        1    ACCEPT  tcp  --  0.0.0.0/0  192.168.0.2  tcp dpt:22
        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source     destination

        Table: nat
        Chain PREROUTING (policy ACCEPT)
        num  target  prot opt source     destination
        1    DNAT    tcp  --  0.0.0.0/0  0.0.0.0/0    tcp dpt:2022 to:192.168.0.2:22
        Chain POSTROUTING (policy ACCEPT)
        num  target  prot opt source     destination
        Chain OUTPUT (policy ACCEPT)
        num  target  prot opt source     destination

    And ssh from B2 to C works fine ("$ ssh 192.168.0.2"). Route info:

        $ route
        Kernel IP routing table
        Destination   Gateway      Genmask        Flags Metric Ref Use Iface
        192.168.0.0   *            255.255.255.0  U     0      0   0   eth1
        172.24.40.0   *            255.255.255.0  U     0      0   0   eth0
        169.254.0.0   *            255.255.0.0    U     0      0   0   eth1
        default       172.24.40.1  0.0.0.0        UG    0      0   0   eth0

        $ ip route
        192.168.0.0/24 dev eth1 proto kernel scope link src 192.168.0.1
        172.24.40.0/24 dev eth0 proto kernel scope link src 172.24.40.130
        169.254.0.0/16 dev eth1 scope link
        default via 172.24.40.1 dev eth0

    So I just don't know why the port forward doesn't work from A to B2.
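
    One thing the outputs above don't show: a DNAT rule only rewrites the destination, so C's replies still need a way back to A. A sketch of the pieces that are commonly missing on B (IPs taken from the post):

        # allow the kernel to forward packets between eth0 and eth1
        echo 1 > /proc/sys/net/ipv4/ip_forward

        # also rewrite the source, so C answers B (192.168.0.1) instead of
        # trying to reach A's 192.168.109.x/172.24.40.x networks it has no route to
        iptables -t nat -A POSTROUTING -d 192.168.0.2 -p tcp --dport 22 \
                 -j SNAT --to-source 192.168.0.1

    Alternatively, give C a default gateway of 192.168.0.1 and skip the SNAT; either way, sshd and smbd on C see an ordinary TCP connection and work normally.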


  • missing files after reassembly of RAID-5

    - by Kris_R
    I had to open my file server's housing on Sunday to replace a faulty fan. What I didn't see was that one of the SATA cables was no longer properly connected. The first thing I did after a reboot was a check of the RAID status, and it showed immediately that one drive was missing. Up to this moment the device was not used (however, it was mounted, so I'm not 100% sure the system did nothing). I stopped md0 and re-plugged the cable:

        mdadm --stop /dev/md0
        poweroff

    After another reboot I checked the removed drive:

        mdadm --examine /dev/sdd1
        ...
        Checksum : 3276bc1d - correct
        Events : 315782
        Layout : left-symmetric
        Chunk Size : 32K

        Number   Major   Minor   RaidDevice State
        this  0     8       49       0      active sync   /dev/sdd1
         0    0     8       49       0      active sync   /dev/sdd1
         1    1     8       65       1      active sync   /dev/sde1
         2    2     8       33       2      active sync   /dev/sdc1
         3    3     8       17       3      active sync   /dev/sdb1

    I was a bit surprised that it was shown as active (even though mdadm had earlier said this device was removed from the array) and that its checksum was OK. I reassembled the RAID with:

        mdadm --assemble /dev/md0 --scan

    The command mdadm --detail /dev/md0 showed that all drives were running and the array was in the "clean" state. I mounted md0, and then came the hiccup: I wanted to work on one of the last files I had been using before all this, and it was not there. In another place I was in fact missing all files from the directory I had been working in. As far as I can see, most files older than a few days are intact, but some newer ones are missing.

    Now the big question: what would be your advice? Is there a way to get this data back? I thought about removing the drive that mdadm had earlier labeled as removed and rebuilding the array with another empty HDD. I've found that after reassembly the "broken" drive has another label (changed from sdd to sdb). Can this influence the rebuild process? If yes, how do I reassemble the array properly? I'm sure the SATA cables are still connected to the controller in the same order.

    P.S. Please, no advice like "restore from backup". I do backups on Sunday nights and this happened in the late afternoon, so a backup is not really an option for me.

    P.P.S. I asked this question on Unix & Linux, but no answer came up during the last two days. I'm getting quite anxious. Sorry for duplicating, if any of you read the other forum.
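
    Before rebuilding anything, it helps to compare the Events counter across all members: the dropped disk froze at the moment the cable came loose, and if the array was assembled around that stale view, anything written after that point would appear to vanish, which matches the symptoms. A sketch (device names as currently enumerated; --readonly needs a reasonably recent mdadm):

        # compare event counters; the odd one out is the stale member
        for d in /dev/sd[b-e]1; do
            echo -n "$d: "; mdadm --examine "$d" | grep Events
        done

        # inspect without letting md write anything while you decide
        mdadm --stop /dev/md0
        mdadm --assemble --readonly /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    If one member lags far behind, assembling explicitly from the three up-to-date members (leaving the stale one out) and then re-adding the stale disk so it resyncs is the usual recovery path.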


  • APC (PHP Cache) Uptime 0 minutes, not caching

    - by Jussi
    My goal is to implement APC as an opcode cache for a Drupal 6 production site. I have so far tested APC with several PHP files, with and without including other PHP files via include_once. I also tried tweaking the apc.ini values for shm_size, apc.include_once_override and apc.stat, restarting Apache every time. The result: apc.php shows no changes in any values (except, of course, the changed apc.ini values are shown as they should be). Every time I refresh the apc.php test page, the start time resets to the current time, showing an uptime of 0 minutes.

    The apc.php test page shows:

        General Cache Information
        APC Version      3.1.9
        PHP Version      5.2.10
        APC Host         xxxx.xx.xx
        Server Software  Apache/2.2.3 (CentOS)
        Shared Memory    1 Segment(s) with 128.0 MBytes (mmap memory, pthread mutex Locks locking)
        Start Time       2011/07/26 11:53:56
        Uptime           0 minutes
        File Upload Support 1

        Cached Files     0 ( 0.0 Bytes)
        Hits             1
        Misses           1
        Request Rate (hits, misses)  2.00 cache requests/second
        Hit Rate         1.00 cache requests/second
        Miss Rate        1.00 cache requests/second
        Insert Rate      0.00 cache requests/second
        Cache full count 0

        Cached Variables 0 ( 0.0 Bytes)
        Hits             0
        Misses           0
        Request Rate (hits, misses)  0.00 cache requests/second
        Hit Rate         0.00 cache requests/second
        Miss Rate        0.00 cache requests/second
        Insert Rate      0.00 cache requests/second
        Cache full count 0

        apc.cache_by_default       1
        apc.canonicalize           1
        apc.coredump_unmap         0
        apc.enable_cli             0
        apc.enabled                1
        apc.file_md5               0
        apc.file_update_protection 2
        apc.filters
        apc.gc_ttl                 3600
        apc.include_once_override  0
        apc.lazy_classes           0
        apc.lazy_functions         0
        apc.max_file_size          16
        apc.mmap_file_mask         /tmp/apcphp5.095eRm
        apc.num_files_hint         1024
        apc.preload_path
        apc.report_autofilter      0
        apc.rfc1867                0
        apc.rfc1867_freq           0
        apc.rfc1867_name           APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix         upload_
        apc.rfc1867_ttl            3600
        apc.serializer             default
        apc.shm_segments           1
        apc.shm_size               128M
        apc.slam_defense           0
        apc.stat                   0
        apc.stat_ctime             0
        apc.ttl                    7200
        apc.use_request_time       1
        apc.user_entries_hint      4096
        apc.user_ttl               7200
        apc.write_lock             1

        Host Status Diagrams:
        Free: 128.0 MBytes (100.0%)    Hits:   1 (50.0%)
        Used: 20.3 KBytes   (0.0%)     Misses: 1 (50.0%)
        Detailed Memory Usage and Fragmentation:
        Fragmentation: 0%

    phpinfo() shows:

        Server API            CGI/FastCGI
        APC:
        Version               3.1.9
        APC Debugging         Enabled
        MMAP Support          Enabled
        MMAP File Mask        /tmp/apcphp5.JkKDk7
        Locking type          pthread mutex Locks
        Serialization Support php
        Revision              $Revision: 308812 $
        Build Date            Jul 21 2011 14:31:12

    I followed these steps to find out whether suexec settings would prevent caching: http://www.litespeedtech.com/support/forum/showthread.php?t=4189

        [root@host /]# ps -ef|grep lsphp
        root 20402 17833 0 11:21 pts/0 00:00:00 grep lsphp
        [root@host /]# ps -waux
        root 17833 0.0 0.1 5004 1484 pts/0 S 10:39 0:00 bash

    ...which indicates that there is no lsphp running on the host. I also read the following article and its comments, concluding that in my case the problem is not suexec, as the user apache is the httpd process owner: http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/

    Also, the suexec command is not recognized when logged in as root on the host, and I'm fairly confident there is no cPanel running on the host whose settings might be resetting the running cache process at some interval.

    This leaves me with few clues about where to head next. I tried to set (with chown and chgrp) apache as the owner of apc.php and some test PHP files, resulting in a 500 server error. Is there a way to check whether file permissions are preventing APC from staying resident? I'm tremendously grateful for any suggestions or help.
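
    The "Server API: CGI/FastCGI" line above is the likely culprit: under plain CGI (and under FastCGI setups that recycle a process per request), every request gets a fresh PHP process with its own shared-memory segment, so the APC cache is born and dies with each hit, which is exactly an uptime of 0. The two different apc.mmap_file_mask values shown above (apcphp5.095eRm vs apcphp5.JkKDk7) fit that picture, since each process maps its own segment. A quick check (a sketch):

        # long-lived PHP workers mean the cache can survive between requests;
        # none at all means every request pays the full compile cost
        ps aux | egrep '[p]hp-cgi|[p]hp-fpm'

    If nothing persistent shows up, the fix is structural rather than an apc.ini tweak: run PHP as mod_php or under a persistent FastCGI/PHP-FPM pool so one set of workers shares one cache.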


  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces. As soon as a VM has four network interfaces, it gets stuck on startup at the starting SeaBIOS page with this message:

        Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)

    So far I've verified this with two VMs, a Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces?

    Edit: Additionally, the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 \
          -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait \
          -mon chardev=monitor,mode=readline -rtc base=utc -boot c \
          -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw \
          -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
          -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
          -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
          -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 \
          -net tap,fd=97,vlan=0,name=hostnet0 \
          -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 \
          -net tap,fd=98,vlan=1,name=hostnet1 \
          -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 \
          -net tap,fd=99,vlan=2,name=hostnet2 \
          -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 \
          -net tap,fd=101,vlan=3,name=hostnet3 \
          -chardev pty,id=serial0 -device isa-serial,chardev=serial0 \
          -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus \
          -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):

        14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    I've also tried disabling AppArmor, but that doesn't seem to make a difference.
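
    One variable worth eliminating is the emulated NIC model: the command line above uses four rtl8139 devices. A sketch of switching a guest interface to virtio in the libvirt domain XML (edit via "virsh edit <domain>"), which changes both the emulated device and how PCI addresses get assigned:

        <interface type='network'>
          <source network='default'/>
          <model type='virtio'/>
        </interface>

    If SeaBIOS still hangs with four virtio NICs, the NIC model is exonerated and the PCI address assignment (the addr=0x3 through 0x7 sequence above, interleaved with the balloon device at 0x4) becomes the more interesting suspect.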


  • Emails from Google Apps to custom SMTP server delayed by 1 hour consistently

    - by vimalk
    Outgoing mail from Google Apps/Gmail to our own custom SMTP server is delayed by 1 hour, consistently. mxtoolbox.com diagnostics of our custom SMTP server look OK, and the server receives email from other sources (Yahoo, Hotmail, etc.) on time. Looking at the SMTP logs shows the delay occurring at a Google intermediate SMTP server:

        Received: by qwi2 with SMTP id 2so1989393qwi.3
                for <[email protected]>; Thu, 27 Jan 2011 03:54:23 -0800 (PST)
        MIME-Version: 1.0
        Received: by 10.224.19.203 with SMTP id c11mr1587082qab.170.1296125657457;
                Thu, 27 Jan 2011 02:54:17 -0800 (PST)

    This setup had been working fine for a year, though our custom email server was missing a reverse DNS entry and SPF records. Thinking this could be the cause of the issue, we added those entries a week ago, but the issue still persists. Here are more details:

        - We use Google Apps to host our primary domain email (say: mydomain.com).
        - The custom SMTP server (say: s1.mydomain.com) hosts our subdomain (say: sub.mydomain.com).

    This is how the email log looks for a message from [email protected] to [email protected]:

        Return-Path: [email protected]
        Received: from localhost.localdomain (LHLO s1.mydomain.com) (127.0.0.1) by
          s1.mydomain.com with LMTP; Thu, 27 Jan 2011 17:24:28 +0530 (IST)
        Received: from localhost (localhost.localdomain [127.0.0.1]) by s1.mydomain.com
          (Postfix) with ESMTP id 605116A6565 for <[email protected]>;
          Thu, 27 Jan 2011 17:24:28 +0530 (IST)
        X-Virus-Scanned: amavisd-new at sub.mydomain.com
        X-Spam-Flag: NO
        X-Spam-Score: 2.984
        X-Spam-Level: **
        X-Spam-Status: No, score=2.984 tagged_above=-10 required=6.6
          tests=[AWL=-0.337, BAYES_50=0.001, DNS_FROM_OPENWHOIS=1.13,
          FH_DATE_PAST_20XX=3.188, HTML_MESSAGE=0.001, HTML_OBFUSCATE_05_10=0.001,
          RCVD_IN_DNSWL_LOW=-1] autolearn=no
        Received: from s1.mydomain.com ([127.0.0.1]) by localhost (s1.mydomain.com
          [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id RBjF7Wwr44mP for
          <[email protected]>; Thu, 27 Jan 2011 17:24:24 +0530 (IST)
        Received: from mail-qw0-f44.google.com (mail-qw0-f44.google.com [209.85.216.44])
          by s1.mydomain.com (Postfix) with ESMTP id BB5DE6A6512 for
          <[email protected]>; Thu, 27 Jan 2011 17:24:23 +0530 (IST)
        Received: by qwi2 with SMTP id 2so1989393qwi.3 for <[email protected]>;
          Thu, 27 Jan 2011 03:54:23 -0800 (PST)
        MIME-Version: 1.0
        Received: by 10.224.19.203 with SMTP id c11mr1587082qab.170.1296125657457;
          Thu, 27 Jan 2011 02:54:17 -0800 (PST)
        Received: by 10.220.117.17 with HTTP; Thu, 27 Jan 2011 02:54:17 -0800 (PST)
        Date: Thu, 27 Jan 2011 16:24:17 +0530
        Message-ID: <[email protected]>
        Subject: test : 16:24
        From: X <[email protected]>
        To: [email protected]
        Content-Type: multipart/alternative; boundary=0015175cba2865a5fe049ad1c5cd

    We appreciate any help that could solve this issue :)
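
    The two Received headers quoted at the top already bracket the stall: Google's internal hop stamps 02:54:17 -0800 and the next Google hop stamps 03:54:23 -0800, so the hour is lost entirely inside Google's infrastructure before the message ever reaches s1. To line up every hop's timestamp for a given message, something like this works (a sketch, assuming procmail's formail is installed and the message is saved as msg.eml):

        formail -c -x Received: < msg.eml

    Reading the output bottom-up gives the chronological path; wherever two adjacent stamps jump by an hour is the hop to blame, which is useful evidence if this gets escalated to Google Apps support.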


  • How to change mouse pointer icon in Xfce Debian 7 Wheezy?

    - by kadaj
    I copied the cursor theme (oxy-neon, or Oxygen Neon) to /usr/share/icons, and from Applications Menu > Settings > Mouse I am able to see the new theme. I clicked on it, but the pointer doesn't change. However, the text-typing ('I') beam, busy icon, hand icon, and window-resize icons did change; the pointer icon remains the same black Adwaita one. I removed the Adwaita folder from the icons folder, and the mouse pointer still doesn't change. Is the pointer theme specified elsewhere? I have no such setting under my home directory. I tried logging out, restarting, and restarting xfwm4, but nothing works. I also found that the pointer icon does change while the pointer is inside Firefox, but it's not consistent: it keeps changing as I click menu items. Very weird. Any idea how to fix this?

    This is the output of running gsettings list-recursively org.gnome.desktop.interface:

        ~$ gsettings list-recursively org.gnome.desktop.interface
        org.gnome.desktop.interface automatic-mnemonics true
        org.gnome.desktop.interface buttons-have-icons false
        org.gnome.desktop.interface can-change-accels false
        org.gnome.desktop.interface clock-format '24h'
        org.gnome.desktop.interface clock-show-date false
        org.gnome.desktop.interface clock-show-seconds false
        org.gnome.desktop.interface cursor-blink true
        org.gnome.desktop.interface cursor-blink-time 1200
        org.gnome.desktop.interface cursor-blink-timeout 10
        org.gnome.desktop.interface cursor-size 24
        org.gnome.desktop.interface cursor-theme 'Adwaita'
        org.gnome.desktop.interface document-font-name 'Sans 11'
        org.gnome.desktop.interface enable-animations true
        org.gnome.desktop.interface font-name 'Cantarell 11'
        org.gnome.desktop.interface gtk-color-palette 'black:white:gray50:red:purple:blue:light blue:green:yellow:orange:lavender:brown:goldenrod4:dodger blue:pink:light green:gray10:gray30:gray75:gray90'
        org.gnome.desktop.interface gtk-color-scheme ''
        org.gnome.desktop.interface gtk-im-module ''
        org.gnome.desktop.interface gtk-im-preedit-style 'callback'
        org.gnome.desktop.interface gtk-im-status-style 'callback'
        org.gnome.desktop.interface gtk-key-theme 'Default'
        org.gnome.desktop.interface gtk-theme 'Adwaita'
        org.gnome.desktop.interface gtk-timeout-initial 200
        org.gnome.desktop.interface gtk-timeout-repeat 20
        org.gnome.desktop.interface icon-theme 'gnome'
        org.gnome.desktop.interface menubar-accel 'F10'
        org.gnome.desktop.interface menubar-detachable false
        org.gnome.desktop.interface menus-have-icons false
        org.gnome.desktop.interface menus-have-tearoff false
        org.gnome.desktop.interface monospace-font-name 'Monospace 11'
        org.gnome.desktop.interface show-input-method-menu true
        org.gnome.desktop.interface show-unicode-menu true
        org.gnome.desktop.interface text-scaling-factor 1.0
        org.gnome.desktop.interface toolbar-detachable false
        org.gnome.desktop.interface toolbar-icons-size 'large'
        org.gnome.desktop.interface toolbar-style 'both-horiz'
        org.gnome.desktop.interface toolkit-accessibility false
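
    A common fix on Debian/Xfce setups (hedged: it assumes the theme directory under /usr/share/icons really is named oxy-neon) is to pin the default cursor theme in ~/.icons, which X clients consult before the system-wide default:

        mkdir -p ~/.icons/default
        cat > ~/.icons/default/index.theme <<'EOF'
        [Icon Theme]
        Inherits=oxy-neon
        EOF

    On Debian the system-wide default can also be switched with "update-alternatives --config x-cursor-theme". Either way, applications pick up the change after a re-login; the inconsistent pointer inside Firefox fits this picture, since GTK applications and the root window can disagree about the cursor theme until the default is pinned.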


  • Guests can't access KVM host server by name although nslookup and dig return correct record

    - by user190196
    So I have a KVM host that also runs an Apache server with some yum repos. The VM guests are connected to the default virtual network, which is configured to offer DHCP and forwarding with NAT on virbr0 (192.168.122.1). The guests can successfully access the yum repos on the host by IP address, so for example curl 192.168.122.1/repo1 returns the content without problems. But I'd like the guests to be able to reach the web server on the host by name rather than by IP address.

    I added the desired name record to the host's /etc/hosts file, and libvirt's dnsmasq service seems to be serving it to the guests correctly, since nslookup and dig successfully resolve the name on the guests:

        [root@localhost ~]# nslookup repo
        Server:     192.168.122.1
        Address:    192.168.122.1#53

        Name:   repo
        Address: 192.168.122.1

        [root@localhost ~]# dig repo

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6 <<>> repo
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 55938
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;repo.                  IN  A

        ;; ANSWER SECTION:
        repo.           0       IN  A   192.168.122.1

        ;; Query time: 0 msec
        ;; SERVER: 192.168.122.1#53(192.168.122.1)
        ;; WHEN: Tue Sep 17 02:10:46 2013
        ;; MSG SIZE  rcvd: 38

    But curl/ping/etc. still fail:

        [root@localhost ~]# curl repo
        curl: (6) Couldn't resolve host 'repo'

    while a request by IP address works:

        [root@localhost ~]# curl 192.168.122.1
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
        <html>
        <head>
        <title>Index of /</title>
        [...]

    Same with ping:

        [root@localhost ~]# ping repo
        ping: unknown host repo
        [root@localhost ~]# ping 192.168.122.1
        PING 192.168.122.1 (192.168.122.1) 56(84) bytes of data.
        64 bytes from 192.168.122.1: icmp_seq=1 ttl=64 time=0.110 ms
        64 bytes from 192.168.122.1: icmp_seq=2 ttl=64 time=0.146 ms
        64 bytes from 192.168.122.1: icmp_seq=3 ttl=64 time=0.191 ms
        ^C
        --- 192.168.122.1 ping statistics ---
        3 packets transmitted, 3 received, 0% packet loss, time 2298ms
        rtt min/avg/max/mdev = 0.110/0.149/0.191/0.033 ms

    I tried adding "repo 192.168.122.1" to the guests' /etc/hosts files, but still no dice. I also tried changing the guests' /etc/nsswitch.conf with both:

        hosts: files dns

    and

        hosts: dns files

    I've read the relevant libvirt documentation, and I'm not sure where else to learn more about this and be able to move forward with it.
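
    Worth knowing here: nslookup and dig talk straight to the DNS server, while curl and ping resolve through glibc's getaddrinfo(), which follows /etc/nsswitch.conf, so the two paths can disagree. A quick way to test the same path curl uses (a sketch):

        getent hosts repo     # resolves via nsswitch, exactly like curl/ping do
        getent ahosts repo    # shows the full getaddrinfo() result set

    If getent fails while dig succeeds, the problem is on the glibc side (nsswitch order, a stale nscd cache, or /etc/resolv.conf not actually pointing at 192.168.122.1) rather than in dnsmasq itself.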


  • Determining the health of a Cisco switch port?

    - by ewwhite
    I've been chasing a packet-loss and network-stability issue for a handful of end users on an internal network for the past few days. These issues surfaced recently; however, the location was struck by lightning six weeks ago. I was seeing 5-10% packet loss between a stack of four Cisco 2960s and several PCs and phones on the other side of a 77-meter run. The PCs were run inline with the phones over a trunked link. We were seeing dropped calls and interruptions in client-server applications and Microsoft Exchange connectivity.

    I tried the usual troubleshooting steps remotely, having a local technician do the following during breaks in user and production activity:

        - change cables between the wall jack and device
        - change patch cables between the patch panel and switch port(s)
        - try different switch ports within the 2960 stack
        - change end-user devices with known-good equipment (new phones, different PCs)
        - clear switch port interface counters and monitor incrementing errors closely (Pastebin output of sh int)
        - pore over the device logs and Observium RRD graphs (no link up/down issues from the switch side)
        - change power strips on the end-user side
        - test cable runs from the Cisco 2960 using "test cable-diagnostics tdr int Gi4/0/9" (clean)
        - test cable runs with a Tripp-Lite cable tester (clean)
        - run diagnostics on the switch stack members (clean)

    In the end, it took three changes of switch ports to find a stable solution. The only logical conclusion is that a few Cisco 2960 switch ports are bad or flaky: not dead, but not consistent in behavior either. I'm not used to seeing individual ports die in this manner. What else can I test or check to determine whether these devices are bad? Is it common for single ports to have problems, rather than a contiguous bank of ports?

    BTW, "show cable-diagnostics tdr int Gi4/0/14" is very cool:

        Interface Speed Local pair Pair length        Remote pair Pair status
        --------- ----- ---------- ------------------ ----------- -----------
        Gi4/0/14  1000M Pair A     79 +/- 0 meters    Pair B      Normal
                        Pair B     75 +/- 0 meters    Pair A      Normal
                        Pair C     77 +/- 0 meters    Pair D      Normal
                        Pair D     79 +/- 0 meters    Pair C      Normal
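
    For the port-level post-mortem, the per-port error counters are the most telling evidence; a sketch of the usual commands on a 2960, using the interface from the post:

        show interfaces Gi4/0/9 counters errors
        show controllers ethernet-controller Gi4/0/9

    Rising CRC/FCS or alignment errors on a run that TDR and an external tester both call clean are consistent with a PHY damaged by the lightning strike; surge damage also routinely takes out scattered individual ports rather than a contiguous bank, since the transient follows whichever pairs happened to carry it.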


  • Mint Linux: DVD drive keeps randomly being accessed, unsure how to find culprit

    - by juicebox
    I have a workstation with Mint Linux 12. The DVD drive on the machine keeps randomly "activating": it makes noise, the light turns on, and it seems to check whether a disk is in it. At first I thought I was being hacked and someone/something was checking whether I had media in the DVD-ROM drive; I ruled that out with netstat and rkhunter. I checked my logs, and the only thing I can find that might help point out the problem are these repeated chunks in syslog:

        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551422] ata2.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 0 pio 16392 in
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551424]          res 51/40:01:00:00:00/00:00:00:00:00/a0 Emask 0x10 (ATA bus error)
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551427] ata2.00: status: { DRDY ERR }
        Mar 24 17:47:31 rich-MINT kernel: [ 9846.551433] ata2.00: hard resetting link
        Mar 24 17:47:32 rich-MINT kernel: [ 9846.868012] ata2.01: hard resetting link
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.344054] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.344067] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.376118] ata2.00: configured for PIO0
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.393047] ata2.01: configured for UDMA/133
        Mar 24 17:47:32 rich-MINT kernel: [ 9847.397046] ata2: EH complete

    and again:

        Mar 24 17:55:28 rich-MINT kernel: [10323.633268] sr 1:0:0:0: ioctl_internal_command return code = 8000002
        Mar 24 17:55:28 rich-MINT kernel: [10323.633270]    : Sense Key : Aborted Command [current] [descriptor]
        Mar 24 17:55:28 rich-MINT kernel: [10323.633275]    : Add. Sense: No additional sense information
        Mar 24 17:55:11 rich-MINT kernel: [10306.640009] ata2.00: link is slow to respond, please be patient (ready=0)
        Mar 24 17:55:16 rich-MINT kernel: [10310.840009] ata2.00: SRST failed (errno=-16)
        Mar 24 17:55:16 rich-MINT kernel: [10310.840016] ata2.00: hard resetting link
        Mar 24 17:55:16 rich-MINT kernel: [10311.160013] ata2.01: hard resetting link
        Mar 24 17:55:16 rich-MINT kernel: [10311.636061] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
        Mar 24 17:55:16 rich-MINT kernel: [10311.636075] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        Mar 24 17:55:16 rich-MINT kernel: [10311.668122] ata2.00: configured for PIO0
        Mar 24 17:55:16 rich-MINT kernel: [10311.684854] ata2.01: configured for UDMA/133
        Mar 24 17:55:17 rich-MINT kernel: [10312.105473] ata2: EH complete

    (Copied from Pastebin - http://pastebin.com/YNDrnyzH)

    If any Linux masters could take a quick look at these log outputs and help me understand what is going on, it would be much appreciated.
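
    Two quick checks can separate "something is polling the drive" from "the drive or its link is flapping on its own" (the ata2 hard-reset lines above suggest the latter). A sketch, assuming the drive is /dev/sr0 and the udisks1 tooling that Mint 12 ships:

        sudo lsof /dev/sr0                   # who has the drive open right now?
        udisks --inhibit-polling /dev/sr0    # suspend the desktop's periodic media polling

    If the syslog bursts continue even with polling inhibited and nothing holding the device open, the ATA bus errors point at the drive or its SATA cable rather than at any process.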


  • Bind9 virtual subdomains

    - by Steffan
    I am trying to set up virtual subdomains using Bind9, following this tutorial: http://groups.drupal.org/node/16862, which I've completed. Basically that means setting up the zone and modifying the resolv.conf and named.conf.local files. I've gotten everything to work: from my server I am able to ping mydomain.com and test.mydomain.com, and when I do a dig I get the following:

        ; <<>> DiG 9.7.0-P1 <<>> test.mydomain.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32606
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1

        ;; QUESTION SECTION:
        ;test.mydomain.com.         IN  A

        ;; ANSWER SECTION:
        test.mydomain.com.  86400   IN  A   174.###.###.#

        ;; AUTHORITY SECTION:
        mydomain.com.       86400   IN  NS  mydomain.com.

        ;; ADDITIONAL SECTION:
        mydomain.com.       86400   IN  A   174.###.###.#

        ;; Query time: 0 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Wed Jan 19 21:06:01 2011
        ;; MSG SIZE  rcvd: 86

    So it looks like everything is working. However, when I try test.mydomain.com in the browser, expecting it to default (for now) to mydomain.com, it does not work, and I get a "server not found" page in Firefox. I did read elsewhere that you also need to set up a *.mydomain.com alias in your virtual hosts file, but that didn't fix anything. Any other information I could provide to help troubleshoot, or any troubleshooting suggestions? I am using Ubuntu 10.04 with a typical LAMP setup. The only other things installed on the server are Bind9 and an FTP client.
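
    For reference, the usual pairing is a wildcard record in the zone plus a ServerAlias in Apache; a sketch with a placeholder IP, since the real one is redacted above:

        ; in the zone file for mydomain.com
        *.mydomain.com.   IN  A   174.0.0.1

        # in the Apache virtual host
        ServerName  mydomain.com
        ServerAlias *.mydomain.com

    One thing the dig output can mask: it was run on the server itself, against 127.0.0.1 (see the SERVER line). The browser only benefits if the client machine's resolver also points at this BIND server, or if the records are published in the domain's public DNS; otherwise Firefox will report "server not found" no matter how correct the local zone is.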


  • Issues getting a Cisco WLC 5508 to find AIR-LAP1142N

    - by user95917
    Hoping someone can help me with a problem here. I'm attempting to set up a test wireless network (the gear is on loan from Cisco). Here's what I've got/done:

        - 5508 controller: service port IP set to 10.74.5.2/24; management IP set to 10.74.6.2/24
          with a default gateway of 10.74.6.1; virtual IP set to 1.1.1.1. Copper SFP in slot 7,
          known-good CAT5 going from there to port 1/0/47 on the switch. Green lights on both ends.
        - 2960-S switch: VLAN 1 = 10.74.6.1/24. DHCP pool 10.74.6.0/24, default router 10.74.6.1,
          excluded addresses 10.74.6.1 and 10.74.6.2. Port 1/0/4 is set to "switchport mode access"
          and "no shut"; port 1/0/47 is set to "switchport mode trunk" and "no shut". 1/0/4 has a
          known-good CAT5 cable going from there to the AP.

    When I do a "sh cdp nei" from the switch, I can see the AP and controller listed. When I configure my PC's NIC as 10.74.5.5 and plug a cable from the NIC to the SP port on the controller, I can get to the device via the GUI. In there, the only errors/info that show up in the trap are:

        Link Up: Slot: 0 Port: 7
        Controller time base status - Controller is out of sync with the central timebase.

    I've manually set the time, but apparently that's not quite the problem (or at least not the entire problem). When I plug the AP in, I see on the switch console that it grants it power and I see it connect, but the controller won't see it for some reason. From what I've read, you shouldn't have to do anything to the AP, as it's managed by the controller, but I'm not sure what setting I'm missing for it to work. The AP light on top continually cycles green, red, yellow. When I first start it up, it blinks green for 20 or so seconds, then goes solid green for another 20 seconds or so, then flashes blue, green, red for a while, but always ends up going back to the standard green, red, yellow. Does anyone see any obvious issues with my setup, or have any suggestions as to why I might be having this problem? Thanks for your help!
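
    Since the AP and the WLC management interface share 10.74.6.0/24, Layer-2 broadcast discovery should work, but a cheap way to remove doubt is to advertise the controller explicitly via DHCP option 43 in the switch's pool: sub-option 0xf1, length 0x04, then the management IP 10.74.6.2 in hex. A sketch (the pool name is illustrative):

        ip dhcp pool AP-POOL
           network 10.74.6.0 255.255.255.0
           default-router 10.74.6.1
           option 43 hex f104.0a4a.0602

    If a console cable is handy, the AP's own boot log usually states exactly why a join is rejected, and a clock/certificate mismatch is a plausible candidate here given the "out of sync with the central timebase" trap above, since lightweight APs validate the controller's certificate against the WLC clock.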


  • Error installing scipy on Mountain Lion with Xcode 4.5.1

    - by Xster
    Environment: Mountain Lion 10.8.2, Xcode 4.5.1 command line tools, Python 2.7.3, virtualenv 1.8.2 and numpy 1.6.2.

    When installing scipy in a fresh virtualenv with:

        pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"

    the build fails with:

        llvm-gcc: scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c
        In file included from /System/Library/Frameworks/vecLib.framework/Headers/vecLib.h:43,
                         from /System/Library/Frameworks/Accelerate.framework/Headers/Accelerate.h:20,
                         from scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c:2:
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:51:23: error: immintrin.h: No such file or directory
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vceilf’:
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:53: error: incompatible types in return
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vfloorf’:
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:54: error: incompatible types in return
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vintf’:
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: ‘_MM_FROUND_TRUNC’ undeclared (first use in this function)
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: (Each undeclared identifier is reported only once
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: for each function it appears in.)
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:55: error: incompatible types in return
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h: In function ‘vnintf’:
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: ‘_MM_FROUND_NINT’ undeclared (first use in this function)
        /System/Library/Frameworks/vecLib.framework/Headers/vfp.h:56: error: incompatible types in return

    (The same block of errors is printed a second time, then the build aborts:)

        error: Command "/usr/bin/llvm-gcc -fno-strict-aliasing -Os -w -pipe -march=core2 -msse4
        -fwrapv -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
        -Iscipy/sparse/linalg/eigen/arpack/ARPACK/SRC
        -I/Users/xiao/.virtualenv/lib/python2.7/site-packages/numpy/core/include
        -c scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.c
        -o build/temp.macosx-10.4-x86_64-2.7/scipy/sparse/linalg/eigen/arpack/ARPACK/FWRAPPERS/veclib_cabi_c.o"
        failed with exit status 1

    Is it supposed to be looking for headers from my system frameworks? Is the development version of scipy no longer good for the latest version of Mountain Lion/Xcode?
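
    A workaround that circulated for exactly this failure (hedged: it assumes clang from the same Xcode 4.5.1 tools plus a working gfortran are installed) is to build with clang instead of llvm-gcc, since Apple's clang ships the immintrin.h intrinsics header that llvm-gcc lacks:

        export CC=clang
        export CXX=clang++
        export FFLAGS=-ff2c
        pip install -e "git+https://github.com/scipy/scipy#egg=scipy-dev"

    The FFLAGS value is part of the commonly quoted recipe for scipy on Lion/Mountain Lion rather than something this build log demands, so treat it as an assumption to drop if the Fortran side compiles without it.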


  • Trailing dots in url result in empty 404 page on IIS

    - by Peter Hahndorf
    I have an ASP.NET site on IIS 8, though IIS 7.5 behaves exactly the same. When I enter a URL like:

        mysite.com/foo/bar..

    I get an error page with a '500 Internal Server Error' status code, even though I have custom error pages set up for 500 and 404 and I don't see anything wrong with my custom error page. In my web.config system.web node I have the following:

        <customErrors mode="On">
          <error statusCode="404" redirect="/404.aspx" />
        </customErrors>

    If I remove that section, I get a 404.0 response back, but the page itself is blank. In web.config system.webServer I have:

        <httpErrors errorMode="DetailedLocalOnly">
          <remove statusCode="404" subStatusCode="-1" />
          <error statusCode="404" prefixLanguageFilePath="" path="404.html" responseMode="File" />
        </httpErrors>

    But whether that is there or not, I get the same blank 404.0 page rather than my expected custom error page, or at least an internal IIS message.

    So, first of all: why is the ASP.NET handler picking up a request for '..'? (It also happens with one or more trailing dots.) If I remove the following handler from applicationHost.config:

        <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG"
             type="System.Web.Handlers.TransferRequestHandler"
             preCondition="integratedMode,runtimeVersionv4.0" responseBufferLimit="0" />

    I get my expected custom 404 page, but of course removing that handler breaks routing in ASP.NET, among other things. Looking at the failure trace, I see the request ending up in the Windows Authentication module; Windows Authentication is disabled for the site, so why is that module even in the request pipeline?

    For now my fix is to use the URL Rewrite module with the following rule:

        <rewrite>
          <rules>
            <rule name="Trailing Dots" stopProcessing="true">
              <match url="\.+$" />
              <action type="Rewrite" url="/404.html" appendQueryString="false" />
            </rule>
          </rules>
        </rewrite>

    This works okay, but I wonder why IIS/ASP.NET behaves this way?
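
    One more setting worth a try here (hedged: it changes how ASP.NET validates URLs that don't map cleanly to the file system, which is exactly the trailing-dot case) is the relaxed URL-to-filesystem mapping available since .NET 4.0:

        <system.web>
          <httpRuntime relaxedUrlToFileSystemMapping="true" />
        </system.web>

    With it enabled, a trailing-dot request can flow through the managed pipeline like any other not-found URL and reach the configured customErrors page, rather than dying inside the extensionless-URL handler.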


  • Why did I fail to build vsftpd?

    - by hugemeow
    Running make fails with the following output. The main point is "/lib/libcap.so.1: could not read symbols: File in wrong format", which is confusing...

        gcc -c readwrite.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c opts.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ssl.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sslslave.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ptracesandbox.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c ftppolicy.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sysutil.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -c sysdeputil.c -O2 -Wall -W -Wshadow -idirafter dummyinc
        gcc -o vsftpd main.o utility.o prelogin.o ftpcmdio.o postlogin.o privsock.o tunables.o
            ftpdataio.o secbuf.o ls.o postprivparent.o logging.o str.o netstr.o sysstr.o strlist.o
            banner.o filestr.o parseconf.o secutil.o ascii.o oneprocess.o twoprocess.o privops.o
            standalone.o hash.o tcpwrap.o ipaddrparse.o access.o features.o readwrite.o opts.o
            ssl.o sslslave.o ptracesandbox.o ftppolicy.o sysutil.o sysdeputil.o -Wl,-s `./vsf_findlibs.sh`
        /lib/libcap.so.1: could not read symbols: File in wrong format
        collect2: ld returned 1 exit status
        make: *** [vsftpd] Error 1

        [mirror@hugemeow vsftpd]$ ls /lib/libc*
        libc-2.5.so    libcap.so.1.10  libcidn.so.1     libcom_err.so.2.1  libcrypto.so.0.9.8e  libcrypt.so.1
        libcap.so.1    libcidn-2.5.so  libcom_err.so.2  libcrypt-2.5.so    libcrypto.so.6       libc.so.6
        [mirror@hugemeow vsftpd]$ ls -l /lib/libcap.so.1
        lrwxrwxrwx 1 root root 14 Mar 20  2012 /lib/libcap.so.1 -> libcap.so.1.10
        [mirror@hugemeow vsftpd]$ ls -lhL /lib/libcap.so.1
        -rwxr-xr-x 1 root root 12K Mar 15  2007 /lib/libcap.so.1

    This may have something to do with it being a 64-bit system, so I made this modification to vsf_findlibs.sh:

        # Look for libcap (capabilities)
        if locate_library /lib64/libcap.so.1; then
          echo "/lib/libcap.so.1";
        elif locate_library /lib64/libcap.so.2; then
          echo "/lib/libcap.so.2";
        else
          # locate_library /usr/lib/libcap.so && echo "-lcap";
          # locate_library /lib/libcap.so && echo "-lcap";
          locate_library /lib64/libcap.so.1 && echo "-lcap";
        fi

    but make fails with the same error. Why?
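
    Note one detail in the edited script: the locate_library tests were changed to check /lib64, but the echo statements still emit the 32-bit /lib paths, so the linker keeps being handed the wrong-format library. A consistent version would be (a sketch):

        # Look for libcap (capabilities)
        if locate_library /lib64/libcap.so.1; then
          echo "/lib64/libcap.so.1";
        elif locate_library /lib64/libcap.so.2; then
          echo "/lib64/libcap.so.2";
        else
          locate_library /usr/lib64/libcap.so && echo "-lcap";
        fi

    The mirror-image possibility is that gcc here produces 32-bit objects (the box clearly has 32-bit libs in /lib), in which case the original script was right and no /lib64 exists at all; running "file /lib/libcap.so.1.10" and "file sysutil.o" will show which side is in the wrong format.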

    Read the article

  • D-Link DIR-300 slows down / loses network

    - by basic6
    Hi there, there are 2 buildings (A and B). In bldg A is an open WLAN (which I'm allowed to use, btw). In bldg B is a computer that I want to connect to that network. So I flashed an old D-Link DIR-300 AP with DD-WRT, mounted it to the wall (bldg B) near a window, attached a 13 dBi directional antenna (pointing to bldg A) and configured it as AP client in that wireless network. Then there's another AP, connected to the D-Link AP, acting as a standard access point, which the computer is connected to. That's basically working so far, but: Every now and then the connection is lost. Not the connection between the computer and the D-Link (I can access the DD-WRT admin page normally) or the connection between the D-Link and the WLAN (in Status - Wireless it says it's still connected to the network), but when I want to access a web page (which only works if I'm connected to the wireless network from bldg A), my Firefox keeps "Looking for" (name resolution) without finding anything. When I reset the D-Link (power off, power on) in this situation, after some moments everything's working fine again (Internet access). I've no idea why this is happening, but usually it happens at most every few weeks (mostly when nobody was using the computer, so no traffic). Compared to the connection speed when I connect directly to the WLAN in bldg A (laptop), the speed in bldg B is rather slow, and I have the impression this difference has become worse over the last few days. A few minutes ago, I got 582 KB/s down and 911 KB/s up in bldg A (directly/laptop) and 84 KB/s down and 9 KB/s up in bldg B. The speed in bldg B used to be way higher (I remember 200 KB/s up) while the actual network speed in bldg A was lower than it is now (close to those 200). I'm aware that the wireless hop between those buildings should slow things down, but I'm wondering why the difference has become so extreme. Thanks for any tips... Update: I currently want to upload a large file (1.5 GB) via FTP (FileZilla). Since that caused the D-Link to disconnect (as described in the first post), I took my laptop to bldg A, connected directly to the original WLAN (bypassing my D-Link) and tried the same upload. Guess what - same issue: at some point the connection is dead (at this point I would have reset my D-Link if I were connected to it). Just like the D-Link, my laptop is still connected, but not even name resolution is working ("Looking for..." in Firefox). After reconnecting, it's working again. Maybe my D-Link isn't the problem at all...
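
    Since the symptom is name resolution hanging while both links still report as associated, it may help to separate DNS failure from raw IP connectivity the next time it happens. A quick diagnostic sketch, assuming standard tools on the computer in bldg B (8.8.8.8 is just an arbitrary public IP):

        # does raw IP traffic still get through?
        ping -c 3 8.8.8.8
        # does name resolution specifically fail?
        nslookup example.com
        # ping succeeding while nslookup times out would point at DNS rather than the radio link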

    Read the article

  • The server rejected the session-establishment request: WCF hosted on IIS

    - by Dave Hanna
    Background: I'm working on a project where we have about a dozen distinct WCF services implemented in an IIS application, communicating over net.tcp on the default port (808), using the Microsoft Net.Tcp Port Sharing Service. I recently added a self-test method to the base class of each of these services so that I could remotely hit the service and get back a status string verifying that it was in operation. We implement this app in a ladder of environments - Development, QA, UAT, and finally production. My problem: My test program, which instantiates a connection to each service in turn and invokes the self-test method, works fine on all the environments below production. We recently moved the app to production, and I'm getting a weird error that I can't explain: on the first of the services that I hit, I get back an exception: "The server at [URL] rejected the session-establishment request". All the other services respond fine. I initially thought there was something wrong with the particular service that was failing, but I tried rearranging the list of services into a different order, and it SEEMS to always be the first service that I hit that fails. (I say SEEMS because I think once, in the early iterations of testing, I saw it happen on the second service that it hit. But I haven't been able to reproduce that.) I've looked at application startup delays, and that doesn't seem to be the problem, because I can come back and run the test again as soon as it finishes - a delay of only a minute or two - and get the same error. Also, in the lower-level environments, there is a startup delay of probably 30 seconds to a minute, but the result still comes back as expected. I've tried accessing the services over http from INetManager, and I get intermittent failures on all the services - a particular service will return a yellow screen of death on one invocation, then come up with the expected link to the WSDL on the next one seconds later. I'm completely at a loss to explain this behavior, or how to resolve it. I've googled the error message and not found anything helpful. It may be a configuration issue - the production servers are newly provisioned VMs, and we may not have the config exactly right (whereas all the lower-level environments have been running this and other similar apps for some time), but I have no idea what to look for. I've looked at the properties of the app pool that the app is running on and compared it to the lower-level environments without finding any differences. If somebody can point me in the right direction, you would have my undying gratitude.
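
    Since only the first net.tcp session after a quiet period is rejected, one configuration worth ruling out is the Net.Tcp Port Sharing Service's own connection limits, which live in SMSvcHost.exe.config rather than in the application's web.config. A sketch of the relevant section (the values shown are illustrative assumptions, not recommendations):

        <!-- SMSvcHost.exe.config, alongside SMSvcHost.exe in the .NET Framework directory -->
        <configuration>
          <system.serviceModel.activation>
            <net.tcp listenBacklog="20" maxPendingConnections="200" maxPendingAccepts="4" />
          </system.serviceModel.activation>
        </configuration>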

    Read the article

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it, but when rebooting the server I get an error message (below), after which I have to enter the root password and disable the /etc/fstab entry. So the mount of the LUKS partition does not persist across reboots. I have this setup on RHEL6 and I'm wondering what I could be missing: I want the LV to be mounted on reboot. Later I would want to use the UUID instead of the device name. Error message on reboot: "Give root password for maintenance (or type Control-D to continue):" Here are the steps from the beginning: [root@rhel6 ~]# pvcreate /dev/sdb Physical volume "/dev/sdb" successfully created [root@rhel6 ~]# vgcreate vg01 /dev/sdb Volume group "vg01" successfully created [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01 Logical volume "lvol1" created [root@rhel6 ~]# lvdisplay --- Logical volume --- LV Name /dev/vg01/lvol1 VG Name vg01 LV UUID nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91 LV Write Access read/write LV Status available # open 0 LV Size 500.00 MiB Current LE 125 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0 [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1 WARNING! ======== This will overwrite data on /dev/vg01/lvol1 irrevocably. Are you sure? (Type uppercase yes): YES Enter LUKS passphrase: Verify passphrase: [root@rhel6 ~]# mkdir /house [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house Enter passphrase for /dev/vg01/lvol1: [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house mke2fs 1.41.12 (17-May-2010) Filesystem label= OS type: Linux Block size=1024 (log=0) Fragment size=1024 (log=0) Stride=0 blocks, Stripe width=0 blocks 127512 inodes, 509952 blocks 25497 blocks (5.00%) reserved for the super user First data block=1 Maximum filesystem blocks=67633152 63 block groups 8192 blocks per group, 8192 fragments per group 2024 inodes per group Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 Writing inode tables: done Creating journal (8192 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 21 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house PS: HERE I have successfully mounted: [root@rhel6 ~]# ls /house/ lost+found [root@rhel6 ~]# vim /etc/fstab -> as follows /dev/mapper/house /house ext4 defaults 1 2 [root@rhel6 ~]# vim /etc/crypttab -> entry as follows house /dev/vg01/lvol1 password [root@rhel6 ~]# mount -o remount /house [root@rhel6 ~]# ls /house/ lost+found [root@rhel6 ~]# umount /house/ [root@rhel6 ~]# mount -a -> SUCCESSFUL AGAIN [root@rhel6 ~]# ls /house/ lost+found Please let me know if I am missing anything here. Thanks in advance.
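
    One likely culprit is the third field of that crypttab entry: it is read as the path to a key file, so the literal word "password" makes the boot-time unlock fail before the fstab mount is attempted, which is exactly when the maintenance prompt appears. A sketch of the two usual fixes (the key-file path below is hypothetical):

        # /etc/crypttab - prompt for the passphrase on the console at boot:
        house /dev/vg01/lvol1 none

        # or unlock non-interactively with a key file:
        # cryptsetup luksAddKey /dev/vg01/lvol1 /etc/keys/house.key
        # house /dev/vg01/lvol1 /etc/keys/house.key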

    Read the article

  • Access denied to mysql caused by invalid server hostname bind address

    - by Mark
    I cannot log in to MySQL from the terminal. [root@fst mysql]# mysql -h localhost -u admin -p Enter password: ERROR 1045 (28000): Access denied for user 'admin'@'localhost' (using password: YES) I am sure I have the correct password. MySQL is also running when I check its status, and the mysql database is present in the directory /var/lib/mysql/: the files host.frm, host.MYI and host.MYD are there. By the way, this is related to my previous question, MySQL server quit without updating PID file. Initially the problem arose when the root directory was full. To be able to log in to DirectAdmin and start MySQL, I added a soft link from /var/lib/mysql/ to /home/mysql, since my database used up most of the root directory. The root directory has 50Gb and /home has 1.5Gb. Somehow /var/lib/mysql/ibdata1 got corrupted, so I moved it to another location. Now I can start the MySQL server, but I cannot log in to it. Below are the contents of the mysql logs. 121212 20:44:10 mysqld_safe mysqld from pid file /var/lib/mysql/fst.srv.net.pid ended 121212 20:44:10 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 121212 20:44:10 [Note] Plugin 'FEDERATED' is disabled. 121212 20:44:10 InnoDB: The InnoDB memory heap is disabled 121212 20:44:10 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121212 20:44:10 InnoDB: Compressed tables use zlib 1.2.3 121212 20:44:10 InnoDB: Using Linux native AIO 121212 20:44:10 InnoDB: Initializing buffer pool, size = 128.0M 121212 20:44:10 InnoDB: Completed initialization of buffer pool 121212 20:44:10 InnoDB: highest supported file format is Barracuda. 121212 20:44:11 InnoDB: Waiting for the background threads to start 121212 20:44:12 InnoDB: 1.1.8 started; log sequence number 1595675 121212 20:44:12 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306 121212 20:44:12 [Note] - '0.0.0.0' resolves to '0.0.0.0'; 121212 20:44:12 [Note] Server socket created on IP: '0.0.0.0'. 121212 20:44:12 [Note] Event Scheduler: Loaded 0 events 121212 20:44:12 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.5.27-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL) I guess there is something wrong with the bind address. How should I fix the problem?
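
    Nothing in that log actually implicates the bind address; 0.0.0.0 just means the server listens on all interfaces. Given that ibdata1 was moved aside, it is worth confirming that the 'admin'@'localhost' account still exists with the expected password hash. A diagnostic sketch using standard mysqld options (stop the server first, and restart it normally afterwards, since this disables all authentication):

        service mysqld stop
        mysqld_safe --skip-grant-tables --skip-networking &
        mysql -e "SELECT user, host, password FROM mysql.user;"
        # if the row is missing or wrong, repair it, e.g.:
        # mysql -e "UPDATE mysql.user SET Password=PASSWORD('newpass') WHERE User='admin' AND Host='localhost'; FLUSH PRIVILEGES;"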

    Read the article

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with phpMyAdmin on my nginx install. When I enter <ServerIP>/phpmyadmin and log in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff> Nginx config file: user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 2; #gzip on; include /etc/nginx/conf.d/*.conf; } Default.conf: server { listen 80; server_name _; #charset koi8-r; #access_log /var/log/nginx/log/host.access.log main; location / { root /usr/share/nginx/html; index index.php index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root /usr/share/nginx/html; try_files $uri =404; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } location /phpmyadmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpmyadmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; } location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } } (Any general tips on tidying up those config files are welcome too)
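
    When phpMyAdmin is served from a sub-path via the root directive trick above, it may not be able to reconstruct its own base URL, so the post-login redirect drops the /phpmyadmin prefix. One workaround worth trying, assuming a phpMyAdmin 3.x install (the directive was removed in 4.0), is to pin the absolute URI in config.inc.php:

        /* config.inc.php - force generated redirects to include the sub-path */
        $cfg['PmaAbsoluteUri'] = 'http://<ServerIP>/phpmyadmin/';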

    Read the article
