Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • Linux boot on a software RAID1 array?

    - by azera
    I am trying to convert my single-disk boot to a RAID1 boot. So far: I successfully created the RAID1 array in degraded mode with the new drive alone and copied all the data onto it; I can mount that RAID1 array and see its files; I already have a working RAID5 on the same box (although I don't boot from it); and I have installed GRUB on both drives. When GRUB boots, it loads the kernel fine, but during the kernel boot it fails to find the root block device. The kernel tells me:

        1 - detected that root device is an md device
        2 - determining root devices
        3 - mounting root
        4 - mounting /dev/md125 on /newroot failed: input/output error.
            Please enter another root device: ...

    At this point, if I enter /dev/sda3 (my "old" root device, not yet converted to RAID), everything boots fine without the RAID. The /dev/md125 device is indeed created, but apparently only after the error happens — it appears once mdadm is loaded later in the boot. Somehow the boot environment can't (or doesn't) assemble the RAID array before it needs to mount it, and I don't know how to solve that. My config files (taken from the system once it has booted with sda3 as the root device):

        $ cat /etc/mdadm.conf
        ARRAY /dev/md/md0-r5 metadata=0.90 UUID=1a118934:c831bdb3:64188b84:66721085
        ARRAY /dev/md125 metadata=0.90 UUID=48ec4190:a80d4dde:64188b84:66721085

        $ cat /proc/mdstat
        Personalities : [raid1] [raid6] [raid5] [raid4] [raid0] [raid10]
        md125 : active raid1 sdc3[1]
              477853312 blocks [2/1] [_U]
        md127 : active raid5 sdd[0] sdf[3] sdb[2] sde[1]
              4395415488 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
        unused devices: <none>

        $ cat /boot/grub/menu.lst
        default 0
        timeout 8
        splashimage=(hd0,0)/boot/grub/splash.xpm.gz
        title Gentoo Linux 2.6.31-r10
        root (hd0,0)
        #kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/sda3
        kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/md125 md=125,/dev/sdc3,/dev/sda3
        initrd /boot/initramfs-genkernel-x86_64-2.6.31-gentoo-r10

        # blkid
        /dev/sda1: UUID="89fee223-b845-4e0a-8a0b-e6cf695d5bcf" TYPE="ext2"
        /dev/sda2: UUID="a72296a8-d7d4-447f-a34b-ee920fd1a767" TYPE="swap"
        /dev/sda3: UUID="97eb0a6a-c385-4a9d-bf74-c0bab1fa4dc1" TYPE="ext3"
        /dev/sdb: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdc1: UUID="d36537fd-19a0-b8a3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdd: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sde: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/md127: UUID="13a41589-4cf1-4c04-91ca-37484182c783" TYPE="ext4"
        /dev/sdf: UUID="1a118934-c831-bdb3-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/sdc2: UUID="a1916397-1b48-45d7-9f98-73aa521e882f" TYPE="swap"
        /dev/sdc3: UUID="48ec4190-a80d-4dde-6418-8b8466721085" TYPE="linux_raid_member"
        /dev/md125: UUID="c947ed64-1d4d-4d1d-b4d2-24669fff916e" SEC_TYPE="ext2" TYPE="ext3"

        # mdadm -E
        mdadm: No devices to examine

        # fdisk -l
        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe975e9fc
        Device Boot  Start  End    Blocks     Id  System
        /dev/sda1    1      5      40131      83  Linux
        /dev/sda2    6      1311   10490445   82  Linux swap / Solaris
        /dev/sda3    1312   60801  477853425  83  Linux

        Disk /dev/sdc: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xe975e9fc
        Device Boot  Start  End    Blocks     Id  System
        /dev/sdc1    1      5      40131      83  Linux
        /dev/sdc2    6      1311   10490445   82  Linux swap / Solaris
        /dev/sdc3    1312   60801  477853425  83  Linux

        Disk /dev/md125: 489.3 GB, 489321791488 bytes
        2 heads, 4 sectors/track, 119463328 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk identifier: 0x00000000
        Disk /dev/md125 doesn't contain a valid partition table
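
    One common fix on Gentoo/genkernel setups like this is to rebuild the initramfs with mdadm support so the array is assembled before the root mount is attempted. A minimal sketch, assuming genkernel is in use as above (flag and parameter names should be checked against your genkernel version):

        # rebuild the initramfs, embedding mdadm and your mdadm.conf
        genkernel --mdadm --mdadm-config=/etc/mdadm.conf initramfs

        # then boot that initramfs with the array as the real root, e.g. in menu.lst:
        # kernel /boot/kernel-genkernel-x86_64-2.6.31-gentoo-r10 root=/dev/ram0 real_root=/dev/md125 domdadm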


  • Postfix cannot get my domain name?

    - by Kossel
    Hi, I'm trying to set up webmin + postfix + dovecot + roundcube. For the moment I want things to be as simple as possible, so I'm using Linux users as email accounts. I can send and receive within the same domain: [email protected] can send to and receive from [email protected]. I tested SMTP/IMAP with Outlook and it reports no problem. If I send a mail from Gmail, it is rejected with:

        Technical details of temporary failure: The recipient server did not accept our requests to connect.

    When I log in with Roundcube, the email address displayed in the right corner is something like user1@com, and I get this error message in the logs:

        [11-Nov-2012 07:39:03 +0400]: IMAP Error: Login failed for user1 from 187.150.xx.xx.
        Could not connect to com:143: php_network_getaddresses: getaddrinfo failed:
        Name or service not known in /var/www/webmail/program/include/rcube_imap.php
        on line 191 (POST /webmail/?_task=login&_action=login)

    It says "Could not connect to com:143" — it looks like it cannot read the domain name. I used http://mxtoolbox.com/ to check the MX record, and it can find the server mail.mydomain.com. I'm quite sure the problem is in postfix or my server configs, but I have been looking through every config file and cannot find the answer. Any suggestion is appreciated. Here are some of my configs (I don't want to make this question too long; I can provide any other information needed):

        # postfix main.cf
        #myorigin = /etc/mailname
        smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-cert-snakeoil.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key
        smtpd_use_tls = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_type = dovecot
        smtpd_sasl_path = private/auth
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        mydomain = mydomain.com
        myhostname = mail.mydomain.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = $mydomain, $myhostname
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_command = procmail -a "$EXTENSION"
        mailbox_size_limit = 0
        recipient_delimiter = +
        virtual_alias_domains = mydomain.com
        smtpd_recipient_restrictions = permit_mynetworks reject_unauth_destination permit_sasl_authenticated
        myorigin = $mydomain

        # roundcube conf
        // ----------------------------------
        // IMAP
        // ----------------------------------
        $rcmail_config['default_host'] = '%d';
        $rcmail_config['default_port'] = 143;
        $rcmail_config['imap_auth_type'] = null;
        $rcmail_config['imap_delimiter'] = null;
        $rcmail_config['imap_ns_personal'] = null;
        $rcmail_config['imap_ns_other'] = null;
        $rcmail_config['imap_ns_shared'] = null;
        $rcmail_config['imap_force_caps'] = false;
        $rcmail_config['imap_force_lsub'] = false;
        $rcmail_config['imap_force_ns'] = false;
        $rcmail_config['imap_timeout'] = 0;
        $rcmail_config['imap_auth_cid'] = null;
        $rcmail_config['imap_auth_pw'] = null;
        $rcmail_config['imap_cache'] = null;
        $rcmail_config['messages_cache'] = false;
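
    Given that Roundcube is trying to connect to "com:143", the suspect is the '%d' placeholder in default_host, which expands from the HTTP request rather than from Postfix. Since Dovecot runs on the same box, a hedged check is to bypass the placeholder entirely (the localhost value is an assumption about this layout):

        # confirm Dovecot answers locally
        telnet localhost 143

        # if it does, hard-code the IMAP host in Roundcube's config instead of '%d':
        # $rcmail_config['default_host'] = 'localhost';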


  • Linux HA cluster w/Xen, Heartbeat, Pacemaker: domU does not fail over to secondary node

    - by Kendall
    I am having the following problem with an OpenSuSE + Heartbeat + Pacemaker + Xen HA cluster: when the node a Xen domU is running on is "dead", the Xen domU that was running on it is not restarted on the second node. The cluster is set up with two nodes, each running OpenSuSE 11.3, Heartbeat 3.0, and Pacemaker 1.0 in CRM mode. For storage I am using a LUN on an iSCSI SAN device; the LUN is formatted with OCFS2 and managed with LVM. The Xen domU has two logical volumes, one for root and the other for swap. I am using IPMI cards as STONITH devices and a dedicated ethernet link for heartbeat communications. The ha.cf file is as follows:

        keepalive 1
        deadtime 10
        warntime 5
        udpport 694
        ucast eth1
        auto_failback off
        node dhcp-166
        node stage
        use_logd yes
        crm yes

    My resources look as follows:

        shocrm(live)configure# show
        node $id="5c1aa924-bba4-4f95-a367-6c9a58ac4a38" dhcp-166
        node $id="cebc92eb-af24-4833-aaf0-672adf80b58e" stage
        primitive Xen-Util ocf:heartbeat:Xen \
            meta target-role="Started" \
            operations $id="Xen-Util-operations" \
            op start interval="0" timeout="60" start-delay="0" \
            op stop interval="0" timeout="120" \
            params xmfile="/etc/xen/vm/xen-util"
        primitive my-stonith stonith:external/ipmi \
            params hostname="dhcp-166" ipaddr="192.168.3.106" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        primitive my-stonith2 stonith:external/ipmi \
            params hostname="stage" ipaddr="192.168.3.105" userid="ADMIN" passwd="xxx" \
            op monitor interval="2m" timeout="60s"
        property $id="cib-bootstrap-options" \
            dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
            cluster-infrastructure="Heartbeat"

    The Xen domU config file is as follows:

        name = "xen-util"
        bootloader = "/usr/lib/xen/boot/domUloader.py"
        #bootargs = "xvda1:/vmlinuz-xen,/initrd-xen"
        bootargs = "--entry=xvda1:/boot/vmlinuz-xen,/boot/initrd-xen"
        memory = 4096
        disk = [ 'phy:vg_xen/xen-util-root,xvda1,w',
                 'phy:vg_xen/xen-util-swap,xvda2,w', ]
        root = "/dev/xvda1"
        vif = [ 'mac=00:16:3e:42:42:06' ]
        #vfb = [ 'type=vnc,vncunused=0,vnclisten=192.168.3.172' ]
        extra = ""

    Say domU "Xen-Util" is running on node "stage"; if "stage" goes down, "Xen-Util" does not restart on node "dhcp-166". It does seem to try: an "xm list" will show it for a few seconds, and if you run "xm console xen-util" it gives a message like "copying /boot/kernel.gz from xvda1 to /var/lib/xen/tmp/kernel.a53gs for booting". However, it never gets past that, eventually gives up, and no longer appears in "xm list". When node "stage" comes back online after being power cycled, it detects that "Xen-Util" isn't running and starts it (on stage). I've tried starting "Xen-Util" on node "dhcp-166" without the cluster running, and it works fine — no problems — so I know it works in that respect. Any ideas? Thanks!
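
    Since the domU briefly appears in "xm list" and then vanishes, one hedged line of investigation is whether Pacemaker recorded failed start operations and gave up on the resource. A few diagnostic commands (resource and node names taken from the config above):

        crm_mon -1rf                                   # one-shot view incl. failed ops and fail counts
        crm resource failcount Xen-Util show dhcp-166  # has the start failcount maxed out?
        crm resource cleanup Xen-Util                  # clear recorded failures and let it retry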


  • How to make Firefox use TCP for DNS

    - by miniBill
    I want to use TCP for DNS, to bypass my ISP's slow and broken DNS servers. I'm not using (and don't want to use) a proxy. Note: I want DNS over TCP because over UDP, no matter what server I set, I get answers from my ISP's DNS. Notice that I will fiercely downvote whoever suggests: programs that tunnel TCP over DNS; the about:config setting that makes DNS go over the proxy (again, I'm not using a proxy); or "use another DNS" (I've already set Google as my DNS, but I get intercepted). Example of what I mean by intercepted:

        $ dig @8.8.8.8 thepiratebay.se

        ; <<>> DiG 9.8.1 <<>> @8.8.8.8 thepiratebay.se
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24385
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;thepiratebay.se.      IN   A
        ;; ANSWER SECTION:
        thepiratebay.se.  28800  IN   A   83.224.65.41
        ;; Query time: 50 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Sun Sep 16 22:51:06 2012
        ;; MSG SIZE  rcvd: 49

        $ dig +tcp @8.8.8.8 thepiratebay.se

        ; <<>> DiG 9.8.1 <<>> +tcp @8.8.8.8 thepiratebay.se
        ; (1 server found)
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15131
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;thepiratebay.se.      IN   A
        ;; ANSWER SECTION:
        thepiratebay.se.  436   IN   A   194.71.107.15
        ;; Query time: 61 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Sun Sep 16 22:51:10 2012
        ;; MSG SIZE  rcvd: 49

    (Note that the UDP answer even comes back with the "aa" flag set — Google's resolvers are not authoritative for that zone, so the reply is clearly forged in transit.) If it matters, I'm using Firefox 14 on Gentoo Linux.
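
    Firefox itself has no switch to force plain DNS over TCP (it hands lookups to the OS resolver), so one workaround is a tiny local resolver that forwards upstream over TCP. A sketch using Unbound — option names should be verified against your Unbound version:

        # /etc/unbound/unbound.conf (fragment)
        server:
            interface: 127.0.0.1
            tcp-upstream: yes        # send all upstream queries over TCP
        forward-zone:
            name: "."
            forward-addr: 8.8.8.8

        # then point the system at it:
        # echo "nameserver 127.0.0.1" > /etc/resolv.conf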


  • Allow anonymous upload for Vsftpd?

    - by user15318
    I need a basic FTP server on Linux (CentOS 5.5) without any security measures, since the server and the clients are on an isolated test LAN that is not connected to the rest of the network (which itself uses non-routable IPs behind a NAT firewall with no incoming FTP access). Some people recommend vsftpd over PureFTPd or ProFTPd. No matter what I try, I can't get it to allow an anonymous user (i.e., logging in as "ftp" or "anonymous" with any string as the password) to upload a file:

        # yum install vsftpd
        # mkdir /var/ftp/pub/upload
        # cat vsftpd.conf
        listen=YES
        anonymous_enable=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        #directory was created by root, hence owned by root.root
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #chroot_local_user=NO
        #chroot_list_enable=YES
        #chroot_list_file=/etc/vsftpd.chroot_list
        chown_uploads=YES

    When I log on from a client, here's what I get:

        500 OOPS: cannot change directory:/var/ftp/pub/incoming

    I also tried "# chmod 777 /var/ftp/incoming/" but get the same error. Does someone know how to configure vsftpd with minimum security? Thank you.

    Edit: SELinux is disabled, and here are the file permissions:

        # cat /etc/sysconfig/selinux
        SELINUX=disabled
        SELINUXTYPE=targeted
        SETLOCALDEFS=0
        # sestatus
        SELinux status: disabled
        # getenforce
        Disabled
        # grep ftp /etc/passwd
        ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
        # ll /var/
        drwxr-xr-x 4 root root 4096 Mar 14 10:53 ftp
        # ll /var/ftp/
        drwxrwxrwx 2 ftp ftp 4096 Mar 14 10:53 incoming
        drwxr-xr-x 3 ftp ftp 4096 Mar 14 11:29 pub

    Edit: latest vsftpd.conf:

        listen=YES
        local_enable=YES
        write_enable=YES
        xferlog_file=YES
        #anonymous users are restricted (chrooted) to anon_root
        anonymous_enable=YES
        anon_root=/var/ftp/pub/incoming
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
        #500 OOPS: bad bool value in config file for: chown_uploads
        chown_uploads=YES
        chown_username=ftp

    Edit: with the trailing space removed from "chown_uploads", error 500 is solved, but anonymous still doesn't work:

        client> ./ftp server
        Connected to server.
        220 (vsFTPd 2.0.5)
        Name (server:root): ftp
        331 Please specify the password.
        Password:
        500 OOPS: cannot change directory:/var/ftp/pub/incoming
        Login failed.
        ftp> bye

    With user "ftp" listed in /etc/passwd with home directory /var/ftp, access rights on /var/ftp set to "drwxr-xr-x" and on /var/ftp/incoming to "drwxrwxrwx"... could it be PAM, maybe? I don't find any FTP log file in /var/log to investigate.

    Edit: Here's a working configuration that lets ftp/anonymous connect and upload files to /var/ftp:

        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        anon_mkdir_write_enable=YES
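
    With the final working config above, a quick smoke test of anonymous upload from any client (server name, password string, and file name are all placeholders):

        echo hello > probe.txt
        curl -T probe.txt ftp://anonymous:test@server/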


  • OS X: Finder error -36 when using SMB shares on a Samba server bound to AD

    - by Frenchie
    We're looking at deploying SMB home directories on Debian (5.0.3) for our Mac clients rather than purchasing four new Xserves. We've got our test servers built and functioning properly. Windows clients behave perfectly, but we've run into an issue with OS X (10.6.x and 10.5.x). We're going this route instead of Windows file servers because of a whole set of other issues that arise that way. Specifically: when mounting an SMB share with unix extensions switched on and the remote server bound to AD, the Finder cannot save files on the share — it touches the file and then bombs out with a -36 I/O error. Folder creation is fine, copying files in the Terminal behaves fine, and the problem seems to be limited to the Finder.

    The issue arises (I think) because the remote UID/GID is passed across when unix extensions are in use. OS X uses its own winbind idmap (odsam) to work out the effective UID/GID from AD users and groups, while we're using a rid map on the server. Consequently, there is a mismatch in ownership, which the Finder chooses to honour. OS X appears to handle this by using the remote uid and gid at the file-permission level (see below) and then setting an OS X ACL granting the local uid/gid the appropriate permissions on the file. I think the Finder touches the file (which the kernel allows because of the ACL), then checks the filesystem perms and drops out with the I/O error.

    On a client:

        fc-003353-d:homes2 root# ls -led test/
        drwx------+ 2 135978 100513 16384 Feb 3 15:14 test/
        0: user:jfrench allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit
        1: group:ARTS\domain users allow
        2: group:everyone allow
        3: group:owner allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
        4: group:group allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit
        5: group:everyone allow list,add_file,search,delete,add_subdirectory,delete_child,readattr,writeattr,readextattr,writeextattr,readsecurity,writesecurity,chown,file_inherit,directory_inherit,only_inherit

    We've tried the following without any luck:

    - setting the Linux-side file owner to match the OS X GID/UID
    - adding ACLs on the Linux filesystem which grant the OS X GID/UID perms
    - disabling extended attributes
    - setting streams=no in /etc/nsmb.conf on the client

    We're currently running a workaround, which is to turn off unix extensions, forcing the Macs to mount the share as the local user with u=rwx perms. This works for most things but causes a few apps that expect certain perms to break in subtle ways. Worst case, we'll keep running this way, but we would like to have unix extensions on. Regards. The relevant SMB config is below:

        [global]
        workgroup = ARTS
        realm = *snip*
        security = ADS
        password server = *snip*
        unix extensions = yes
        panic action = /usr/share/panic-action %d
        idmap backend = rid:ARTS=100000-10000000
        idmap uid = 100000-10000000
        idmap gid = 100000-10000000
        winbind enum users = Yes
        winbind enum groups = Yes
        veto files = /lost+found/aquota.*/
        hide files = /desktop.ini/$RECYCLE.BIN/.*/AppData/Library/
        ea support = yes
        store dos attributes = yes
        map system = no
        map archive = no
        map readonly = no
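
    To confirm the UID/GID-mismatch theory, it may help to compare what each side resolves for the same AD account; a hedged check (user name taken from the listing above):

        # on the Debian/Samba side: what the rid idmap yields
        wbinfo -n 'ARTS\jfrench'   # name -> SID
        wbinfo -i 'ARTS\jfrench'   # SID -> uid/gid via winbind

        # on the OS X client: what its own (odsam) mapping yields
        id jfrench

    If the two uid/gid pairs differ — as the 135978/100513 owner in the ls -led output suggests — that is the mismatch the Finder is honouring.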


  • Monitoring slow nginx/unicorn requests

    - by injekt
    I'm currently using Nginx to proxy requests to a Unicorn server running a Sinatra application. The application only has a couple of routes defined, which make fairly simple (non-costly) queries to a PostgreSQL database and return data in JSON format; these services are monitored by God. I'm currently experiencing extremely slow response times from this application server. I have another two Unicorn servers being proxied via Nginx, and those respond perfectly fine, so I think I can rule out any wrongdoing by Nginx. Here is my God configuration:

        # God configuration
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        God.watch do |w|
          w.name = "app_name"
          w.interval = 30.seconds # default
          w.start = "cd #{APP_ROOT} && unicorn -c #{APP_ROOT}/config/unicorn.rb -D"
          # -QUIT = graceful shutdown, waits for workers to finish their current request before finishing
          w.stop = "kill -QUIT `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.restart = "kill -USR2 `cat #{APP_ROOT}/tmp/unicorn.pid`"
          w.start_grace = 10.seconds
          w.restart_grace = 10.seconds
          w.pid_file = "#{APP_ROOT}/tmp/unicorn.pid"

          # User under which to run the process
          w.uid = 'web'
          w.gid = 'web'

          # Cleanup the pid file (this is needed for processes running as a daemon)
          w.behavior(:clean_pid_file)

          # Conditions under which to start the process
          w.start_if do |start|
            start.condition(:process_running) do |c|
              c.interval = 5.seconds
              c.running = false
            end
          end

          # Conditions under which to restart the process
          w.restart_if do |restart|
            restart.condition(:memory_usage) do |c|
              c.above = 150.megabytes
              c.times = [3, 5] # 3 out of 5 intervals
            end
            restart.condition(:cpu_usage) do |c|
              c.above = 50.percent
              c.times = 5
            end
          end

          w.lifecycle do |on|
            on.condition(:flapping) do |c|
              c.to_state = [:start, :restart]
              c.times = 5
              c.within = 5.minute
              c.transition = :unmonitored
              c.retry_in = 10.minutes
              c.retry_times = 5
              c.retry_within = 2.hours
            end
          end
        end

    Here is my Unicorn configuration:

        # Unicorn configuration file
        APP_ROOT = File.expand_path '../', File.dirname(__FILE__)

        worker_processes 8
        preload_app true
        pid "#{APP_ROOT}/tmp/unicorn.pid"
        listen 8001

        stderr_path "#{APP_ROOT}/log/unicorn.stderr.log"
        stdout_path "#{APP_ROOT}/log/unicorn.stdout.log"

        before_fork do |server, worker|
          old_pid = "#{APP_ROOT}/tmp/unicorn.pid.oldbin"
          if File.exists?(old_pid) && server.pid != old_pid
            begin
              Process.kill("QUIT", File.read(old_pid).to_i)
            rescue Errno::ENOENT, Errno::ESRCH
              # someone else did our job for us
            end
          end
        end

    I have checked the God status logs, and CPU and memory usage never appear to be out of bounds. I also have something in place to kill high-memory workers, as described on the GitHub blog. When running tail -f on the Unicorn logs I do see some requests, but they're few and far between, where I was seeing around 60-100 a second before this trouble arrived. The log also shows workers being reaped and started as expected. So my question is: how would I go about debugging this? What are the next steps I should take? I'm extremely baffled that the server sometimes responds quickly but at other times is very slow for long periods (which may or may not be peak traffic times). Any advice is much appreciated.
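
    A hedged first step is to establish whether requests are queueing in Unicorn's listen socket or stalling inside a worker (port and process names taken from the configs above):

        # Recv-Q on the listener shows connections waiting for a free worker
        ss -lnt | grep :8001

        # attach to one busy worker and watch what it blocks on
        WORKER=$(pgrep -f 'unicorn worker' | head -1)
        strace -f -p "$WORKER" -e trace=network

    If workers sit blocked on the PostgreSQL socket, the database (locks, slow queries) is the place to look rather than Unicorn itself.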


  • Which package should I choose if I want to install virtualenv for Python?

    - by hugemeow
    pip search returns so many matches that I am confused about which package to install. Should I only install virtualenv, or would it be better to also install virtualenv-commands etc.? I really don't know exactly what virtualenv-commands is...

        mirror0@lab:~$ pip search virtualenv
        virtualenvwrapper - Enhancements to virtualenv
        virtualenv - Virtual Python Environment builder
        veh - virtualenv for hg
        pyutilib.virtualenv - PyUtilib utility for building custom virtualenv bootstrap scripts.
        envbuilder - A package for automatic generation of virtualenvs
        virtstrap-core - A bootstrapping mechanism for virtualenv+pip and shell scripts
        tox - virtualenv-based automation of test activities
        virtualenvwrapper-win - Port of Doug Hellmann's virtualenvwrapper to Windows batch scripts
        everyapp.bootstrap - Enhanced virtualenv bootstrap script creation.
        orb - pip/virtualenv shell script wrapper
        monupco-virtualenv-python - monupco.com registration agent for stand-alone Python virtualenv applications
        virtualenvwrapper-powershell - Enhancements to virtualenv (for Windows). A clone of Doug Hellmann's virtualenvwrapper
        RVirtualEnv - relocatable python virtual environment
        virtualenv-clone - script to clone virtualenvs.
        virtualenvcontext - switch virtualenvs with a python context manager
        lessrb - Wrapper for ruby less so that it's in a virtualenv.
        carton - make self-extracting virtualenvs
        virtualenv5 - Virtual Python 3 Environment builder
        clever-alexis - Clever redhead girl that builds and packs Python project with Virtualenv into rpm, deb, etc.
        kforgeinstall - Virtualenv bootstrap script for KForge
        pypyenv - Install PyPy in virtualenv
        virtualenv-distribute - Virtual Python Environment builder
        virtualenvwrapper.project - virtualenvwrapper plugin to manage a project work directory
        virtualenv-commands - Additional commands for virtualenv.
        rjm.recipe.venv - zc.buildout recipe to turn the entire buildout tree into a virtualenv
        virtualenvwrapper.bitbucket - virtualenvwrapper plugin to manage a project work directory based on a BitBucket repository
        tg_bootstrap - Bootstrap a TurboGears app in a VirtualEnv
        django-env - Automaticly manages virtualenv for django project
        virtual-node - Install node.js into your virtualenv
        django-environment - A plugin for virtualenvwrapper that makes setting up and creating new Django environments easier.
        vip - vip is a simple library that makes your python aware of existing virtualenv underneath.
        virtualenvwrapper.django - virtualenvwrapper plugin to create a Django project work directory
        terrarium - Package and ship relocatable python virtualenvs
        venv_dependencies - Easy to install any dependencies in a virtualenviroment(without making symlinks by hand and etc...)
        virtualenv-sh - Convenient shell interface to virtualenv
        virtualenvwrapper.github - Plugin for virtualenvwrapper to automatically create projects based on github repositories.
        virtualenvwrapper.configvar - Plugin for virtualenvwrapper to automatically export config vars found in your project level .env file.
        virtualenvwrapper-emacs-desktop - virtualenvwrapper plugin to control emacs desktop mode
        bootstrapper - Bootstrap Python projects with virtualenv and pip.
        virtualenv3 - Obsolete fork of virtualenv
        isotoma.depends.zope2_13_8 - Running zope in a virtualenv
        virtual-less - Install lessc into your virtualenv
        virtualenvwrapper.tmpenv - Temporary virtualenvs are automatically deleted when deactivated
        isotoma.plone.heroku - Tooling for running Plone on heroku in a virtualenv
        gae-virtualenv - Using virtualenv with zipimport on Google App Engine
        pinvenv - VirtualEnv plugins for pin
        isotoma.depends.plone4_1 - Running plone in a virtualenv
        virtualenv-tools - A set of tools for virtualenv
        virtualenvwrapper.npm - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any npm installed globaly when the venv is activated
        d51.django.virtualenv.test_runner - Simple package for running isolated Django tests from within virtualenv
        difio-virtualenv-python - Difio registration agent for stand-alone Python virtualenv applications
        VirtualEnvManager - A package to manage various virtual environments.
        virtualenvwrapper.gem - Plugin for virtualenvwrapper to automatically encapsulate inside the virtual environment any gems installed when the venv is activated
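
    For the common case, only the virtualenv package itself is needed; virtualenvwrapper, virtualenv-commands, and the rest are optional conveniences built on top of it. A minimal sketch:

        pip install virtualenv
        virtualenv myenv              # create an isolated environment
        . myenv/bin/activate          # enter it; python/pip now use myenv
        deactivate                    # leave it again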


  • PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14 + SuExec => Connection reset by peer: mod_fcgid: error reading data from FastCGI server

    - by Zigzag
    I'm trying to use PHP 5.3.2 + Fcgid 2.3.5 + Apache 2.2.14, but I always get the error "Connection reset by peer: mod_fcgid: error reading data from FastCGI server", and Apache returns error 500 each time I try to execute a PHP page. I compiled Apache with these options:

        ./configure --with-mpm=worker --enable-userdir=shared --enable-actions=shared \
          --enable-alias=shared --enable-auth=shared --enable-so --enable-deflate \
          --enable-cache=shared --enable-disk-cache=shared --enable-info=shared \
          --enable-rewrite=shared --enable-suexec=shared --with-suexec-caller=www-data \
          --with-suexec-userdir=site --with-suexec-logfile=/usr/local/apache2/logs/suexec.log \
          --with-suexec-docroot=/home

    Then PHP:

        ./configure --with-config-file-path=/usr/local/apache2/php \
          --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql --with-zlib \
          --enable-exif --with-gd --enable-cgi

    Then fcgid:

        APXS=/usr/local/apache2/bin/apxs ./configure.apxs

    The vhost is:

        <Directory /home/website_panel/site/>
            FCGIWrapper /home/website_panel/cgi/php .php
            ...
            ErrorLog /home/website_panel/logs/error.log
        </Directory>

        cat /home/website_panel/logs/error.log
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:41 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:41 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:42 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:42 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php
        [Sun Mar 07 22:19:43 2010] [warn] [client xx.xx.xx.xx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
        [Sun Mar 07 22:19:43 2010] [error] [client xx.xx.xx.xx] Premature end of script headers: test.php

    The suexec log:

        root:/usr/local/apache2# cat /var/log/apache2/suexec.log
        [2010-03-07 22:11:05]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:15]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:11:23]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:41]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:42]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php
        [2010-03-07 22:19:43]: uid: (1001/website_panel) gid: (1001/website_panel) cmd: php

        root:/usr/local/apache2# cat logs/error_log
        [Sun Mar 07 22:18:47 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/local/apache2/bin/suexec)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Memory Allocated 0 bytes (each conf takes 32 bytes)
        [Sun Mar 07 22:18:47 2010] [notice] mod_bw : Version 0.7 - Initialized [0 Confs]
        [Sun Mar 07 22:18:47 2010] [notice] Apache/2.2.14 (Unix) mod_fcgid/2.3.5 configured -- resuming normal operations

        root:/usr/local/apache2# /home/website_panel/cgi/php -v
        PHP 5.3.2 (cli) (built: Mar 7 2010 16:01:49)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.3.0, Copyright (c) 1998-2010 Zend Technologies

    If someone has an idea, I want to hear it ^^ Thanks!
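
    Since suexec.log shows the wrapper being invoked, two hedged next steps are to run the wrapper exactly as suexec would, and to raise mod_fcgid's timeouts in case the PHP child is simply slow to produce headers (directive names per mod_fcgid 2.3):

        # does the wrapper start and print a version as the target user?
        sudo -u website_panel /home/website_panel/cgi/php -v

        # Apache config sketch: give the FastCGI child more time
        # FcgidConnectTimeout 60
        # FcgidIOTimeout 120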


  • Cannot exclude a path from basic auth when using a front controller script

    - by Adam Monsen
    I have a small PHP/Apache2 web application in which I'd like to do two seemingly incompatible things:

    1. Route all requests through a single PHP script (a "front controller", if you will)
    2. Secure everything except API calls with HTTP basic authentication

    I can satisfy either requirement just fine in isolation; it's when I try to do both at once that I'm blocked. For no good reason, I'm trying to accomplish these requirements solely with Apache configuration. Here are the requirements stated as an example. A GET request for this URL:

        http://basic/api/listcars?max=10

    should be sent through front.php without requiring basic auth. front.php will receive /api/listcars?max=10 and do whatever it needs to with it. Here's what I think should work. In /etc/hosts I added 127.0.0.1 basic, and I am using this Apache config:

        <Location />
            AuthType Basic
            AuthName "Home Secure"
            AuthUserFile /etc/apache2/passwords
            require valid-user
        </Location>

        <VirtualHost *:80>
            ServerName basic
            DocumentRoot /var/www/basic
            <Directory /var/www/basic>
                <IfModule mod_rewrite.c>
                    RewriteEngine On
                    RewriteCond %{SCRIPT_FILENAME} !-f
                    RewriteCond %{SCRIPT_FILENAME} !-d
                    RewriteRule ^(.*)$ /front.php/$1 [QSA,L]
                </IfModule>
            </Directory>
            <Location /api>
                Order deny,allow
                Allow from all
                Satisfy any
            </Location>
        </VirtualHost>

    But I still always get an HTTP 401: Authorization Required response. I can make it work by changing <Location /api> into <Location ~ /api>, but that lets more past basic auth than I want. I also tried changing the <Directory /var/www/basic> section into <Location />, but that doesn't work either (and it results in some strange values for PATH_TRANSLATED being passed to the script). I searched around and found many examples of selective exclusion from basic auth, but none that also incorporate a front controller. I could certainly handle basic auth in the front controller itself, but if Apache can do it instead, I can keep all authentication logic out of my PHP code. A friend suggested splitting this into two vhosts, which I know also works (this used to be two separate vhosts, actually). I'm using Apache 2.2.22 / PHP 5.3.10 on Ubuntu 12.04.
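
    One detail worth checking: after the RewriteRule fires, the URI Apache matches against <Location> can be /front.php/api/..., so <Location /api> never applies. A hedged variant keys the exception off the original request URI instead (untested sketch; the env variable name is arbitrary):

        SetEnvIf Request_URI ^/api no_auth=1
        <Location />
            AuthType Basic
            AuthName "Home Secure"
            AuthUserFile /etc/apache2/passwords
            Require valid-user
            Order allow,deny
            Allow from env=no_auth
            Satisfy any
        </Location>

    With Satisfy any, a request passes if either the Allow rule matches (API calls) or basic auth succeeds (everything else).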


  • gmail dkim=neutral (no signature)

    - by Bretticus
    After much testing and retracing of my steps, I still cannot get Google Mail to validate my DKIM signatures. My mail server is Debian 5.0 with Exim:

        Exim version 4.72 #1 built 31-Jul-2010 08:12:17
        Copyright (c) University of Cambridge, 1995 - 2007
        Berkeley DB: Berkeley DB 4.8.24: (August 14, 2009)
        Support for: crypteq iconv() IPv6 PAM Perl Expand_dlfunc GnuTLS move_frozen_messages Content_Scanning DKIM Old_Demime
        Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz dnsdb dsearch ldap ldapdn ldapm mysql nis nis0 passwd pgsql sqlite
        Authenticators: cram_md5 cyrus_sasl dovecot plaintext spa
        Routers: accept dnslookup ipliteral iplookup manualroute queryprogram redirect
        Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
        Fixed never_users: 0
        Size of off_t: 8
        GnuTLS compile-time version: 2.4.2
        GnuTLS runtime version: 2.4.2
        Configuration file is /var/lib/exim4/config.autogenerated

    My remote smtp transport configuration:

        remote_smtp:
          debug_print = "T: remote_smtp for $local_part@$domain"
          driver = smtp
          helo_data = mailer.mydomain.com
          dkim_domain = mydomain.com
          dkim_selector = mailer
          dkim_private_key = /etc/exim4/dkim/mailer.mydomain.com.key
          dkim_canon = relaxed
          .ifdef REMOTE_SMTP_HOSTS_AVOID_TLS
          hosts_avoid_tls = REMOTE_SMTP_HOSTS_AVOID_TLS
          .endif
          .ifdef REMOTE_SMTP_HEADERS_REWRITE
          headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
          .endif
          .ifdef REMOTE_SMTP_RETURN_PATH
          return_path = REMOTE_SMTP_RETURN_PATH
          .endif
          .ifdef REMOTE_SMTP_HELO_FROM_DNS
          helo_data=REMOTE_SMTP_HELO_DATA
          .endif

    The path to my private key is correct, and I see a DKIM header on messages as they arrive in my Gmail account:

        DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=mydomain.com;
          s=mailer; h=Content-Type:MIME-Version:Message-ID:Date:Subject:Reply-To:To:From;
          bh=nKgQAFyGv<snip>tg=; b=m84lyYvX6<snip>RBBqmW52m1ce2g=;

    However, Gmail's headers always report dkim=neutral (no signature):

        dkim=neutral (no signature) [email protected]

    My DNS results:

        dig +short txt mailer._domainkey.mydomain.com
        mailer._domainkey.mydomain.com descriptive text "v=DKIM1\; k=rsa\; t=y\; p=LS0tLS1CRUdJ<snip>M0RRRUJBUVV"
        "BQTRHTkFEQ0J<snip>GdLamdaaG"
        "JwaFZkai93b3<snip>laSCtCYmdsYlBrWkdqeVExN3gxN"
        "mpQTzF6OWJDN3hoY21LNFhaR0NjeENMR0FmOWI4Z<snip>tLQo="

    Note that the base64 public key is 364 characters long, so I had to break the key up using bind9:

        $ORIGIN _domainkey.mydomain.com.
        mailer TXT ("v=DKIM1; k=rsa; t=y; p=LS0tLS1CRUdJTiBQVUJM<snip>U0liM0RRRUJBUVV"
                    "BQTRHTkFEQ0JpUUtCZ1<snip>15MGdLamdaaG"
                    "JwaFZkai93b3lDK21MR<snip>YlBrWkdqeVExN3gxN"
                    "mpQTzF6OWJDN3hoY21L<snip>Ci0tLS0tRU5E"
                    "IFBVQkxJQyBLRVktLS0tLQo=")

    Can anyone point me in the right direction? I would really appreciate it.
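
    Two things stand out as worth checking. First, t=y puts the selector in test mode, though verifiers should still validate it. Second — and more likely the cause — a p= value beginning with "LS0tLS1CRUdJ" base64-decodes to "-----BEGI...", which suggests the record contains the entire PEM file (BEGIN/END headers included) rather than just the base64 key body between them, making the key unparsable. A hedged way to verify, using opendkim-tools:

        # does the published record parse, and does it match the private key?
        opendkim-testkey -d mydomain.com -s mailer \
            -k /etc/exim4/dkim/mailer.mydomain.com.key -vvv

        # inspect the raw record as resolvers reassemble it
        dig +short txt mailer._domainkey.mydomain.com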


  • Error with Postgres and Rails during bundle install on Ubuntu 12.10

    - by jason328
    I'm trying to install Postgres support on Ubuntu. When running bundle install in the terminal, I receive the message below at the end of the output. How do I get pg to install properly? libpq-dev is installed as well.

        Using coffee-rails (3.2.2)
        Using diff-lcs (1.1.3)
        Using jquery-rails (2.0.2)
        Installing pg (0.12.2) with native extensions
        Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.
        /usr/bin/ruby1.9.1 extconf.rb
        checking for pg_config... yes
        Using config values from /usr/bin/pg_config
        checking for libpq-fe.h... yes
        checking for libpq/libpq-fs.h... yes
        checking for PQconnectdb() in -lpq... yes
        checking for PQconnectionUsedPassword()... yes
        checking for PQisthreadsafe()... yes
        checking for PQprepare()... yes
        checking for PQexecParams()... yes
        checking for PQescapeString()... yes
        checking for PQescapeStringConn()... yes
        checking for PQgetCancel()... yes
        checking for lo_create()... yes
        checking for pg_encoding_to_char()... yes
        checking for PQsetClientEncoding()... yes
        checking for rb_encdb_alias()... yes
        checking for rb_enc_alias()... no
        checking for struct pgNotify.extra in libpq-fe.h... yes
        checking for unistd.h... yes
        checking for ruby/st.h... yes
        creating extconf.h
        creating Makefile
        make
        compiling compat.c
        compiling pg.c
        pg.c: In function ‘pgconn_wait_for_notify’:
        pg.c:2117:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations]
        pg.c: In function ‘pgconn_block’:
        pg.c:2592:3: error: format not a string literal and no format arguments [-Werror=format-security]
        pg.c:2598:3: warning: ‘rb_thread_select’ is deprecated (declared at /usr/include/ruby-1.9.1/ruby/intern.h:379) [-Wdeprecated-declarations]
        pg.c:2607:4: error: format not a string literal and no format arguments [-Werror=format-security]
        pg.c: In function ‘pgconn_locreate’:
        pg.c:2866:11: warning: variable ‘lo_oid’ set but not used [-Wunused-but-set-variable]
        pg.c: In function ‘find_or_create_johab’:
        pg.c:3947:3: warning: implicit declaration of function ‘rb_encdb_alias’ [-Wimplicit-function-declaration]
        cc1: some warnings being treated as errors
        make: *** [pg.o] Error 1

        Gem files will remain installed in /home/jason/.bundler/tmp/10083/gems/pg-0.12.2 for inspection.
        Results logged to /home/jason/.bundler/tmp/10083/gems/pg-0.12.2/ext/gem_make.out

        An error occurred while installing pg (0.12.2), and Bundler cannot continue.
        Make sure that `gem install pg -v '0.12.2'` succeeds before bundling.
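
    The fatal lines are -Werror=format-security errors raised by Ubuntu 12.10's hardened GCC against pg 0.12.2's C extension; later pg releases cleaned up those call sites. Two hedged ways out (the version number is a suggestion, and how flags reach extconf can vary):

        # option 1: move to a newer pg that compiles cleanly, then match the Gemfile
        gem install pg -v 0.14.1

        # option 2: keep 0.12.2 and demote the error back to a warning
        CFLAGS="-Wno-error=format-security" gem install pg -v '0.12.2'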


  • ActiveMQ Pure Master / Slave - Out of sync

    - by pico
    What I have: one master broker and one slave broker, both running ActiveMQ 5.4.0. What I use: waitForSlave on the master side and a failover URI on the slave side (in the master connector URI). What I want to do: wait some delay (say 5 seconds), in case of tiny network failures between master and slave, before starting the slave's transport connectors. So I put this in the slave config:

        <broker xmlns="http://activemq.apache.org/schema/core"
                brokerName="slave"
                dataDirectory="${activemq.base}/data"
                useJmx="true"
                persistent="true"
                populateJMSXUserID="true"
                masterConnectorURI="failover://(tcp://master:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000"
                shutdownOnMasterFailure="false"
                advisorySupport="false">

    It seems to work, but after a network hang between master and slave, the slave reconnects successfully and then the master logs a lot of:

        2010-10-18 17:08:44,421 | ERROR | Slave Failed | org.apache.activemq.broker.ft.MasterBroker | ActiveMQ Task
        java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1040-634226732611718750-0:0
            at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
            at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
            at org.apache.activemq.broker.TransportConnection.processRemoveConsumer(TransportConnection.java:561)
            at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:76)
            at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
            at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
            at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
            at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
            at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
            at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
            at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)

    On the slave side everything is fine. So after that, I tried stopping the master to see if the slave is capable of becoming master after these "network hangs". The master took a long time to shut down (10 seconds), and then some error messages appeared in the slave logs:

        2010-10-18 17:09:32,915 | WARN | Async error occurred: java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3 | org.apache.activemq.broker.TransportConnection.Service | VMTransport: vm://slave#5
        java.lang.IllegalStateException: Cannot lookup a connection that had not been registered: ID:master-1049-634226732657812500-0:3
            at org.apache.activemq.broker.MapTransportConnectionStateRegister.lookupConnectionState(MapTransportConnectionStateRegister.java:93)
            at org.apache.activemq.broker.TransportConnection.lookupConnectionState(TransportConnection.java:1412)
            at org.apache.activemq.broker.TransportConnection.processRemoveSession(TransportConnection.java:600)
            at org.apache.activemq.command.RemoveInfo.visit(RemoveInfo.java:74)
            at org.apache.activemq.broker.TransportConnection.service(TransportConnection.java:309)
            at org.apache.activemq.broker.TransportConnection$1.onCommand(TransportConnection.java:185)
            at org.apache.activemq.transport.ResponseCorrelator.onCommand(ResponseCorrelator.java:116)
            at org.apache.activemq.transport.TransportFilter.onCommand(TransportFilter.java:69)
            at org.apache.activemq.transport.vm.VMTransport.iterate(VMTransport.java:218)
            at org.apache.activemq.thread.DedicatedTaskRunner.runTask(DedicatedTaskRunner.java:98)
            at org.apache.activemq.thread.DedicatedTaskRunner$1.run(DedicatedTaskRunner.java:36)

    Is there any way to keep my Kaha stores (they are individual stores) synchronised? The main problem is that the slave never becomes master after a master failure; it stays blocked on this message:

        2010-10-18 17:09:33,681 | WARN | Transport (master/172.21.60.61:61616) failed to tcp://master:61616 , attempting to automatically reconnect due to: java.net.SocketException: Software caused connection abort: socket write error | org.apache.activemq.transport.failover.FailoverTransport | ActiveMQ Transport: tcp://master/172.21.60.61:61616

    I'm totally stuck with these sync problems; any help is welcome! Regards
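
    One hedged way to see what each broker believes its role is during these tests is JMX, since useJmx="true" is already set in the config above (1099 is the usual default JMX port; adjust if yours differs):

        jconsole service:jmx:rmi:///jndi/rmi://master:1099/jmxrmi
        # inspect the Broker MBean: its Slave attribute shows whether the
        # slave still considers itself a slave after the network hang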


  • NTPD seems to delete all network interfaces

    - by Aurelin
    We have a couple of virtual interfaces configured on eth0 on a CentOS box, and every now and then they go down, seemingly out of the blue. After going through the log files, I found that ntpd apparently deletes all the eth0 interfaces, and that dhclient automatically brings eth0 back up. The virtual interfaces, however, stay down, which makes several of our websites inaccessible. Can someone explain why ntpd deletes interfaces? Can/should that be turned off, or can/should I configure dhclient to bring the virtual interfaces back up automatically, too?

    EDIT: The log entries I should have posted in the first place:

        Nov 12 13:10:28 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x6a825e97)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPOFFER from 96.126.108.78
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPREQUEST on eth0 to 255.255.255.255 port 67 (xid=0x24554092)
        Nov 12 13:10:42 raptor dhclient[20048]: DHCPACK from 96.126.108.78 (xid=0x24554092)
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #31 eth0, 50.116.50.97#123, interface stats: received=3255, sent=3256, dropped=0, active_time=1559394 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #32 eth0:0, 50.116.53.56#123, interface stats: received=3, sent=0, dropped=0, active_time=1559391 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #33 eth0:1, 66.175.211.192#123, interface stats: received=2, sent=0, dropped=0, active_time=1559389 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #34 eth0:2, 50.116.53.95#123, interface stats: received=3, sent=0, dropped=0, active_time=1559387 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #35 eth0:3, 97.107.132.32#123, interface stats: received=2, sent=0, dropped=0, active_time=1559385 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #36 eth0:4, 50.116.56.201#123, interface stats: received=2, sent=0, dropped=0, active_time=1559383 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #37 eth0:5, 66.175.212.121#123, interface stats: received=2, sent=0, dropped=0, active_time=1559381 secs
        Nov 12 13:10:42 raptor ntpd[2109]: Deleting interface #38 eth0:6, 66.175.215.137#123, interface stats: received=2, sent=0, dropped=0, active_time=1559379 secs
        Nov 12 13:10:44 raptor NET[1573]: /sbin/dhclient-script : updated /etc/resolv.conf
        Nov 12 13:10:44 raptor dhclient[20048]: bound to 50.116.50.97 -- renewal in 32692 seconds.
        Nov 12 13:10:45 raptor ntpd[2109]: Listening on interface #39 eth0, 50.116.50.97#123 Enabled

    The eth0 config:

        DEVICE="eth0"
        ONBOOT="yes"
        BOOTPROTO="dhcp"
        IPV6INIT="no"
        IPADDR=50.116.50.97
        NETMASK=255.255.255.0
        GATEWAY=50.116.50.1

    And the virtual interfaces (I've posted only the first one; they all look much the same):

        # Configuration for eth0:0
        DEVICE=eth0:0
        BOOTPROTO=none
        # This line ensures that the interface will be brought up during boot.
        ONBOOT=yes
        # eth0:0
        IPADDR=50.116.53.56
        NETMASK=255.255.255.0
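
    The ntpd lines are actually normal: ntpd tears down and re-creates its sockets whenever dhclient bounces eth0, so the thing to fix is bringing the aliases back up after each lease event. A hedged dhclient exit hook (CentOS typically reads /etc/dhclient-exit-hooks; adjust the path and alias list to match your system):

        #!/bin/sh
        # /etc/dhclient-exit-hooks -- re-raise alias interfaces after a renewal
        case "$reason" in
          BOUND|RENEW|REBIND|REBOOT)
            for alias in eth0:0 eth0:1 eth0:2 eth0:3 eth0:4 eth0:5 eth0:6; do
              ifup "$alias" >/dev/null 2>&1 || true
            done
            ;;
        esac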


  • TPROXY Not working with HAProxy, Ubuntu 14.04

    - by Nyxynyx
    I'm trying to use HAProxy as a fully transparent proxy with TPROXY on Ubuntu 14.04. HAProxy is set up on the first server with eth1 111.111.250.250 and eth0 10.111.128.134. The single balanced server has eth1 and eth0 as well; eth1 is the public-facing interface, while eth0 is for the private network both servers are in.

    Problem: I can connect to the balanced server's port 1234 directly (via eth1), but I cannot reach it via the HAProxy port 1234 (which redirects to 1234 via eth0). Am I missing something in this configuration?

    On the HAProxy server, the current kernel is:

        Linux extremehash-lb2 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

    The kernel appears to have TPROXY support:

        # grep TPROXY /boot/config-3.13.0-24-generic
        CONFIG_NETFILTER_XT_TARGET_TPROXY=m

    HAProxy was compiled with TPROXY support:

        haproxy -vv
        HA-Proxy version 1.5.3 2014/07/25
        Copyright 2000-2014 Willy Tarreau <[email protected]>
        Build options :
          TARGET = linux26
          CPU = x86_64
          CC = gcc
          CFLAGS = -g -fno-strict-aliasing
          OPTIONS = USE_LINUX_TPROXY=1 USE_LIBCRYPT=1 USE_STATIC_PCRE=1
        Default settings :
          maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
        Encrypted password support via crypt(3): yes
        Built without zlib support (USE_ZLIB not set)
        Compression algorithms supported : identity
        Built without OpenSSL support (USE_OPENSSL not set)
        Built with PCRE version : 8.31 2012-07-06
        PCRE library supports JIT : no (USE_PCRE_JIT not set)
        Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
        Available polling systems :
          epoll : pref=300, test result OK
          poll : pref=200, test result OK
          select : pref=150, test result OK
        Total: 3 (3 usable), will use epoll.

    In /etc/haproxy/haproxy.cfg, I've configured the port with the following options:

        listen test1235 :1234
            mode tcp
            option tcplog
            balance leastconn
            source 0.0.0.0 usesrc clientip
            server balanced1 10.111.163.76:1234 check inter 5s rise 2 fall 4 weight 4

    On the balanced server, in /etc/network/interfaces I've set the gateway for eth0 to the HAProxy box 10.111.128.134 and restarted networking:

        auto eth0 eth1
        iface eth0 inet static
            address 111.111.250.250
            netmask 255.255.224.0
            gateway 111.131.224.1
            dns-nameservers 8.8.4.4 8.8.8.8 209.244.0.3
        iface eth1 inet static
            address 10.111.163.76
            netmask 255.255.0.0
            gateway 10.111.128.134

    ip route gives:

        default via 111.111.224.1 dev eth0
        10.111.0.0/16 dev eth1 proto kernel scope link src 10.111.163.76
        111.111.224.0/19 dev eth0 proto kernel scope link src 111.111.250.250
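
    One piece the setup above doesn't show is the routing glue TPROXY needs on the HAProxy box: without a socket-match rule and a policy route, return packets addressed to the spoofed client IPs never reach HAProxy. The standard sketch from the HAProxy transparent-proxy documentation (mark and table numbers are arbitrary):

        # on the HAProxy machine
        iptables -t mangle -N DIVERT
        iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
        iptables -t mangle -A DIVERT -j MARK --set-mark 1
        iptables -t mangle -A DIVERT -j ACCEPT
        ip rule add fwmark 1 lookup 100
        ip route add local 0.0.0.0/0 dev lo table 100

        # forwarding must also be enabled
        echo 1 > /proc/sys/net/ipv4/ip_forward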


  • Apache Solr Admin on Tomcat Deployed in WebApps Directory

    - by KM01
    I am trying to get Apache Solr working on RedHat 6 and Tomcat 6 (using these instructions), but I get this error when browsing to the admin section at http://localhost:8080/solr-example/admin:

        HTTP Status 404 - missing core name in path
        type Status report
        message missing core name in path
        description The requested resource (missing core name in path) is not available.

    http://localhost:8080/solr-example loads fine, with a link to "Solr Admin." My setup is as follows:

        tomcat6: /etc/tomcat6
        Solr: /app/solr/example

    I have a solr-example.xml in /etc/tomcat6/Catalina/localhost/, which reads:

        <?xml version="1.0" encoding="utf-8"?>
        <Context docBase="/app/solr/example/apache-solr-3.4.0.war" debug="0" crossContext="true">
            <Environment name="solr/home" type="java.lang.String" value="/app/solr/example" override="true"/>
        </Context>

    I don't see anything in the logs (/var/log/tomcat6); the only entries in catalina.out concern the starting and stopping of tomcat6. My questions are:

    1. What else do I need to do to get "Solr Admin" to work under Tomcat?
    2. Where are these "cores" supposed to be specified? I see an entry in /app/solr/example/solr/solr.xml:

        <solr persistent="false">
          <!-- adminPath: RequestHandler path to manage cores.
               If 'null' (or absent), cores will not be manageable via request handler -->
          <cores adminPath="/admin/cores" defaultCoreName="collection1">
            <core name="collection1" instanceDir="." />
          </cores>
        </solr>

    3. How do I go about ensuring that logging works correctly? I can't find any log that mentions the 404 above.

    Update in response to @quanta's comment: I downloaded the former (apache-solr-3.4.0.tgz). dataDir was not set; it is now set to:

        <dataDir>${solr.data.dir:../solr/data}</dataDir>

    JAVA_OPTS:

        /usr/lib/jvm/java/bin/java -classpath :/usr/share/tomcat6/bin/bootstrap.jar:/usr/share/tomcat6/bin/tomcat-juli.jar:/usr/share/java/commons-daemon.jar -Dcatalina.base=/usr/share/tomcat6 -Dcatalina.home=/usr/share/tomcat6 -Djava.endorsed.dirs= -Djava.io.tmpdir=/var/cache/tomcat6/temp -Djava.util.logging.config.file=/usr/share/tomcat6/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager org.apache.catalina.startup.Bootstrap start

    catalina.out contains no indication of the above error.
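
    "Missing core name in path" usually means the admin UI was reached without a core name while Solr is running in multicore mode. Two hedged checks against the layout above (core name taken from solr.xml):

        # which cores did Solr actually register?
        curl 'http://localhost:8080/solr-example/admin/cores?action=STATUS'

        # in multicore mode the per-core admin UI lives under the core name:
        # http://localhost:8080/solr-example/collection1/admin/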


  • Ubuntu 12.04: apt-get "failed to fetch"; apt is trying to fetch via old static IP

    - by gabe
    Sample error:

        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/universe/i18n/Translation-en
          Unable to connect to 192.168.1.70:8118:

    This was working just fine until I changed the server's IP this morning. The server had been set to a static IP of 192.168.1.70 for years; it is now 10.0.1.70, yet 192.168.1.70 is the address apt-get is still trying to use. I use Privoxy and Tor, hence the 8118 port. I was forced to change the IP because of router issues. (Long and involved story; I didn't really want to change it, because I knew something like this would happen.) The Tor/Privoxy setup has Privoxy pointing at Tor via 127.0.0.1:9050, and curl etc. pointed at Privoxy via $HOME/.bashrc. Typically you would set Privoxy's listen address to 127.0.0.1, but to make it available to the rest of the LAN you set it to the server's LAN IP, which I did long ago; that worked fine until this morning. I have changed all instances of 192.168.1.70 to 10.0.1.70 in both /etc/privoxy/config and $HOME/.bashrc.

    What makes this really strange is that curl works fine: I curl icanhazip.com and, voilà, I get a new IP every 10 minutes or so; I curl CNN.com and get the short-but-sweet "permanently moved to www.cnn.com" message I expect. Firefox works fine. Ping works fine. I've tested all of this via remote desktop over the LAN. So the connection appears fine for everything except apt. I've also rebooted, hoping that would clear 192.168.1.70 from apt. The internet connection and DNS aren't an issue for these programs, and they are, as far as I can tell, using Privoxy/Tor just fine. So somewhere that I have not been able to find, apt has stored the IP 192.168.1.70, and 192.168.1.70 is no longer valid. Thanks for the help.
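
    apt never reads $HOME/.bashrc; it takes its proxy from its own configuration (or from http_proxy in the invoking shell). So the old address is almost certainly sitting in an apt config fragment. A hedged hunt-and-fix (the file name 01proxy is arbitrary):

        grep -r "192.168.1.70" /etc/apt/ /etc/environment 2>/dev/null
        apt-config dump | grep -i proxy

        # then point apt at the new address, e.g. in /etc/apt/apt.conf.d/01proxy:
        # Acquire::http::Proxy "http://10.0.1.70:8118/";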


  • Nginx and PHP-FPM running out of connections

    - by E3pO
    I keep running into errors like these:

        [02-Jun-2012 01:52:04] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 19 idle, and 49 total children
        [02-Jun-2012 01:52:05] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 19 idle, and 50 total children
        [02-Jun-2012 01:52:06] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 32 children, there are 19 idle, and 51 total children
        [02-Jun-2012 03:10:51] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 18 idle, and 91 total children

    I changed my php-fpm settings to these (it was at 100; I got a "max_children reached" warning and raised it to 150):

        pm.max_children = 150
        pm.start_servers = 75
        pm.min_spare_servers = 20
        pm.max_spare_servers = 150

    Resulting in:

        [02-Jun-2012 01:39:19] WARNING: [pool www] server reached pm.max_children setting (150), consider raising it

    I've just launched a new website that is getting a considerable amount of traffic. The traffic is legitimate, and users get 504 gateway timeouts when the limit is reached. I have limited connections to my server with iptables, and I'm running fail2ban and keeping track of the nginx access logs. The traffic is all legitimate; I'm just running out of room for users. I'm currently running on a dual-core box with 64-bit Ubuntu.

        free
                     total       used       free     shared    buffers     cached
        Mem:       6114284    5726984     387300          0     141612    4985384
        -/+ buffers/cache:     599988    5514296
        Swap:       524284       5804     518480

    My php.ini has:

        max_input_time = 60

    My nginx config is:

        worker_processes 4;
        pid /var/run/nginx.pid;
        events {
            worker_connections 19000;
            # multi_accept on;
        }
        worker_rlimit_nofile 20000; # each connection needs a filehandle (or 2 if you are proxying)
        client_max_body_size 30M;
        client_body_timeout 10;
        client_header_timeout 10;
        keepalive_timeout 5 5;
        send_timeout 10;

        location ~ \.php$ {
            try_files $uri /er/error.php;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_connect_timeout 60;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_max_temp_file_size 0;
            fastcgi_intercept_errors on;
            fastcgi_pass unix:/tmp/php5-fpm.sock;
            fastcgi_index index.php;
            include fastcgi_params;
        }

    What can I do to stop running out of connections? Why does this keep occurring? I'm monitoring my traffic in Google Analytics real-time, and when the user count gets above about 120 my php-fpm.log fills with these warnings.
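
    A common way to size pm.max_children is from measured per-worker memory rather than guesswork; a hedged sketch (the process name may be php5-fpm or php-fpm depending on the package):

        # average resident memory per FPM worker, in MB
        ps -C php5-fpm -o rss= | awk '{s+=$1; n++} END {printf "%.0f MB avg\n", s/n/1024}'

        # max_children ~= (RAM you can devote to PHP) / (average worker size)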


  • Understanding vhost settings

    - by Matt
    OK, so I have a server on which I want to put a few applications, and I'm having vhost configuration problems. Here is what I have; I'd like some direction on what I'm doing wrong. The first file is /etc/apache2/ports.conf:

        NameVirtualHost 184.106.111.142:80
        Listen 80
        <IfModule mod_ssl.c>
            Listen 443
        </IfModule>
        <IfModule mod_gnutls.c>
            Listen 443
        </IfModule>

    Then I have /etc/apache2/sites-available/somesite.com:

        <VirtualHost 184.106.111.142:80>
            ServerAdmin [email protected]
            ServerName somesite.com
            ServerAlias www.somesite.com
            DocumentRoot /srv/www/somesite.com/
            ErrorLog /srv/www/somesite.com/logs/error.log
            CustomLog /srv/www/somesite.com/logs/access.log combined
            <Directory "/srv/www/somesite.com/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    When I visit somesite.com, everything works great. But then I add another vhost, say anothersite.com, as /etc/apache2/sites-available/anothersite.com:

        <VirtualHost 184.106.111.142:80>
            ServerAdmin [email protected]
            ServerName anothersite.com
            ServerAlias www.anothersite.com
            DocumentRoot /srv/www/anothersite.com/
            ErrorLog /srv/www/anothersite.com/logs/error.log
            CustomLog /srv/www/anothersite.com/logs/access.log combined
            <Directory "/srv/www/anothersite.com/">
                AllowOverride all
                Options -MultiViews
            </Directory>
        </VirtualHost>

    and run the following commands:

        >> sudo a2ensite anothersite.com
        Enabling site anothersite.com.
        Run '/etc/init.d/apache2 reload' to activate new configuration!
        >> /etc/init.d/apache2 reload
        * Reloading web server config apache2 ...done.

    After that, both anothersite.com and somesite.com are down. What is going on with the vhosts? Could it be the NameVirtualHost declaration with the IP, or is my understanding of vhost settings just not clear? What I don't understand is why both sites suddenly stop working entirely. I would highly appreciate some clarity. (By the way, anothersite.com and somesite.com are the only names I changed, to make this more readable.)
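
    Two hedged checks usually pin this down quickly. Note in particular that Apache typically refuses to (re)start if an ErrorLog directory doesn't exist, which would take both sites down at once:

        apache2ctl configtest   # syntax OK?
        apache2ctl -S           # how were the vhosts parsed, and which is the default?

        # the new vhost's log directory must exist before the reload
        ls -ld /srv/www/anothersite.com/logs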

    Read the article

  • Trac problem: AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    - by Janosch
    This problem has already been described multiple times on different mailing lists, but no solution has been published yet. My original setup is as follows (in the meantime I have a simpler one on Windows 7):

    - Ubuntu server with Apache 2.2 and Python 2.7
    - virtual Python environment created with virtualenv
    - Babel, Genshi and Trac installed, in this order, using pip inside the virtual environment

    Trac seems to run fine with tracd, but when I visit it through Apache, I get the following error on an official Trac error page:

      AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule"

    The stack trace looks like this:

      Traceback (most recent call last):
        File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 473, in _dispatch_request
          dispatcher.dispatch(req)
        File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/web/main.py", line 154, in dispatch
          chosen_handler = self.default_handler
        File "/srv/trac/python-environment/lib/python2.5/site-packages/Trac-0.13dev_r10668-py2.5.egg/trac/config.py", line 691, in __get__
          self.section, self.name))
      AttributeError: Cannot find an implementation of the "IRequestHandler" interface named "WikiModule". Please update the option trac.default_handler in trac.ini.

    I have already tried a lot to get down to the root of the problem; to me it looks as if all native Trac components refuse to load. When one explicitly imports these components in the WSGI handler, some of them start to work somehow. Since I suspected the virtual environment, I dropped it, manually copied all dependencies (Babel, Genshi, Trac, ...) into one directory, and added this directory to sys.path in the WSGI handler. I get exactly the same error. Since this setup is now independent of the environment, one can easily try it out on any other machine (Windows or Linux) running Apache 2 and Python 2.7. On my Windows 7 machine I got exactly the same problem. I zipped together the whole bundle; one can download it from http://www.xterity.de/tmp/trac-installation.zip .

    In the Apache configuration (Windows 7 machine) I use the following settings:

      Alias /trac/chrome/common "D:/workspace/trac-installation/trac-resources/common"
      Alias /trac/chrome/site "D:/workspace/trac-installation/trac-resources/site"
      WSGIScriptAlias /trac "D:/workspace/trac-installation/apache/handler.wsgi"
      <Directory "D:/workspace/trac-installation/trac-resources">
          Order allow,deny
          Allow from all
      </Directory>
      <Directory "D:/workspace/trac-installation">
          Order allow,deny
          Allow from all
      </Directory>
      <Location "/trac">
          Order allow,deny
          Allow from all
      </Location>

    And my handler.wsgi looks like this:

      import os
      import sys

      sys.path.append('D:/workspace/trac-installation/dependencies/')
      os.environ['TRAC_ENV'] = 'D:/workspace/trac-installation/trac-environments/Esp004'
      os.environ['PYTHON_EGG_CACHE'] = 'D:/workspace/trac-installation/eggs'

      import trac.web.main
      application = trac.web.main.dispatch_request

    Has anybody got an idea what the problem could be, or how to find out where it comes from?
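
    One detail worth checking: the stack trace references lib/python2.5/site-packages even though the install targeted Python 2.7, and Trac components can silently fail to load when mod_wsgi runs a different interpreter than the one the eggs were built for. A small diagnostic addition to handler.wsgi, a sketch only, could confirm which interpreter and which Trac distribution are actually in use:

      import sys
      import pkg_resources

      # Written to the Apache error log at import time; compare the output
      # against the python you installed Trac into.
      sys.stderr.write("wsgi python: %s\n" % sys.version)
      try:
          sys.stderr.write("trac from: %s\n" % pkg_resources.get_distribution("Trac").location)
      except Exception, e:  # Python 2 syntax, matching the handler above
          sys.stderr.write("trac not importable: %r\n" % e)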

    Read the article

  • Persistent Issues on small business network using Cisco 871W and Catalyst Express 500

    - by Ben Campbell
    Being the most qualified (read: still not qualified) to solve our persistent network issues, I've turned to Server Fault for guidance. I've done some searching, read the related documentation on cisco.com, and tried a bit of troubleshooting. Here is the setup:

    - 100 Mb synchronous connection from a business internet provider (tested multiple times at 100 Mb at the source)
    - Cisco 871W wireless access point & router is where the WAN connection starts (this serves all our wireless); the only wired connection on the 871W is the Catalyst switch below
    - Cisco Catalyst Express 500 (24TT) is where all the wired connections terminate
    - about 20 Windows workstations and servers (AD/web servers only), plus some services in EC2 including mail and other web servers/apps
    - I've been TOLD the internal cabling should be gigabit-ready

    Here are the problems:

    - generally slow download rates from the internet to the desktops/laptops
    - frequent "page cannot be displayed" errors in browsers; sometimes 3 or 4 reloads are necessary, and often CSS or other content requiring the browser to connect to a different server won't load
    - slow speed within the LAN when copying files from workstation to workstation; I would expect extremely fast transfers workstation to workstation and server to workstation on a network this simple

    Several things I need to admit: I'm not primarily a network guy; funding is relatively low, so I need to be the one who finds the solution; and while I understand most of the terminology and technology, implementation is where I fail due to lack of experience.

    Getting to the point: I'm wondering whether experienced network admins think our small network should be sufficiently served by the current hardware if configured properly, or whether we should purchase new equipment and start fresh. If starting fresh is the plan, what that new equipment should be is likely a different question entirely. If I haven't provided enough information, I will happily do some troubleshooting and update with the results; I have experience using Wireshark and some other tools. Please let me know what you think would be most helpful, and thanks in advance.

    EDIT: I forgot to add that the Cisco appliance will not finish loading the SDM Express console. It hangs every time at "populating modules... DHCP", eventually crashes, and closes. I've rebooted the hardware and this still happens.
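
    One way to split the problem in half before buying anything, a sketch assuming iperf is installed on two wired workstations (the address below is a placeholder):

      # On workstation A:
      iperf -s
      # On workstation B:
      iperf -c 192.168.1.10 -t 30
      # Roughly 940 Mbit/s suggests the cabling/switch path really is gigabit;
      # ~94 Mbit/s means a link negotiated at 100 Mb (port config, cable, or NIC);
      # far less than either points at a duplex mismatch or a failing port.

    Running the same test from a workstation to a host on the 871W's LAN side, and then speed-testing the WAN separately, narrows down which hop is eating the throughput.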

    Read the article

  • Cannot connect through SCP, but SSH connection works

    - by Joe Cabezas
    I am trying to connect to my server to transfer files using scp:

      $ scp -v -r -P <port> <user>@<host>:~/dir/ dir/

    This is the output:

      OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Reading configuration data /Users/joe/.ssh/config
      debug1: Reading configuration data /etc/ssh_config
      debug1: Connecting to <host> [<host>] port <port>.
      debug1: Connection established.
      debug1: identity file /Users/joe/.ssh/identity type -1
      debug1: identity file /Users/joe/.ssh/id_rsa type -1
      debug1: identity file /Users/joe/.ssh/id_dsa type -1
      ssh_exchange_identification: Connection closed by remote host

    But connecting via SSH works fine:

      $ ssh <user>@<host> -p <port>
      <user>@<host>'s password:
      <user>@<host>:~$

    What can be wrong here? My /etc/ssh/sshd_config file on the host is:

      # Package generated configuration file
      # See the sshd_config(5) manpage for details

      # What ports, IPs and protocols we listen for
      Port <port>
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes
      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768
      # Logging
      SyslogFacility AUTH
      LogLevel INFO
      # Authentication:
      LoginGraceTime 120
      PermitRootLogin yes
      StrictModes yes
      RSAAuthentication yes
      PubkeyAuthentication no
      #AuthorizedKeysFile %h/.ssh/authorized_keys
      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes
      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no
      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no
      # Change to no to disable tunnelled clear text passwords
      #PasswordAuthentication yes
      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes
      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes
      X11Forwarding yes
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no
      #MaxStartups 10:30:60
      #Banner /etc/issue.net
      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*
      Subsystem sftp /usr/lib/openssh/sftp-server
      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM yes
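
    Since sshd never even sends its version banner here, two server-side checks seem worth a look; both are guesses rather than a diagnosis, and assume a Debian-style host:

      # 1. TCP wrappers can drop a connection before the banner exchange
      #    (a tool like denyhosts may have added an entry since the last ssh):
      grep -i sshd /etc/hosts.deny /etc/hosts.allow
      # 2. sshd's own log usually states why it closed the connection:
      tail -f /var/log/auth.log    # retry the scp from the client while watching

    It may also be worth trying sftp to the same host; the Subsystem line above is already configured for it, and it exercises a different code path than scp's remote command execution.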

    Read the article

  • getfacl command and Linux file permissions - getting 403 error when accessing Wordpress

    - by tommytwoeyes
    I'm configuring WordPress for a friend, and I just screwed up the WordPress directory permissions (I suspect) using setfacl. WebFaction doesn't allow sudo or let me change the directory group ownership using chown. Now it appears that something I did is causing the entire application to give me 403 errors when I try to access it. The current directory listing looks like this (I temporarily set the whole thing to 777 to try to recover access):

      drwxrwsr-x+  6 myusername myusername  4096 Mar  2 07:07 ./
      drwxr-xr-x   3 root       root        4096 Feb 25 19:48 ../
      -rwxrwxr-x+  1 myusername myusername   286 Mar  2 06:33 gzip.php
      -rwxrwxr-x+  1 myusername myusername  4831 Mar  4 20:02 .htaccess
      -rwxrwxr-x+  1 myusername myusername   397 Feb 25 19:49 index.php
      -rw-rw-r--+  1 myusername myusername 15606 Feb 25 19:49 license.txt
      -rw-rw-r--+  1 myusername myusername  9200 Feb 25 19:49 readme.html
      drwxrwsr-x+  6 myusername myusername  4096 Feb 25 19:49 .svn/
      -rwxrwxr-x+  1 myusername myusername  4337 Feb 25 19:49 wp-activate.php
      drwxr-xr-x+ 10 myusername myusername  4096 Mar  4 20:03 wp-admin/
      -rwxrwxr-x+  1 myusername myusername 40283 Feb 25 19:49 wp-app.php
      -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-atom.php
      -rwxrwxr-x+  1 myusername myusername   274 Feb 25 19:49 wp-blog-header.php
      -rwxrwxr-x+  1 myusername myusername  3931 Feb 25 19:49 wp-comments-post.php
      -rwxrwxr-x+  1 myusername myusername   244 Feb 25 19:49 wp-commentsrss2.php
      -rwxrwxr-x+  1 myusername myusername  3485 Feb 25 20:15 wp-config.php
      drwxr-xr-x+  6 myusername myusername  4096 Feb 26 08:52 wp-content/
      -rwxrwxr-x+  1 myusername myusername  1255 Feb 25 19:49 wp-cron.php
      -rwxrwxr-x+  1 myusername myusername   246 Feb 25 19:49 wp-feed.php
      drwxrwxr-x+  9 myusername myusername  4096 Feb 25 19:49 wp-includes/
      -rwxrwxr-x+  1 myusername myusername  1997 Feb 25 19:49 wp-links-opml.php
      -rwxrwxr-x+  1 myusername myusername  2453 Feb 25 19:49 wp-load.php
      -rwxrwxr-x+  1 myusername myusername 27787 Feb 25 19:49 wp-login.php
      -rwxrwxr-x+  1 myusername myusername  7774 Feb 25 19:49 wp-mail.php
      -rwxrwxr-x+  1 myusername myusername   494 Feb 25 19:49 wp-pass.php
      -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rdf.php
      -rwxrwxr-x+  1 myusername myusername   334 Feb 25 19:49 wp-register.php
      -rwxrwxr-x+  1 myusername myusername   226 Feb 25 19:49 wp-rss2.php
      -rwxrwxr-x+  1 myusername myusername   224 Feb 25 19:49 wp-rss.php
      -rwxrwxr-x+  1 myusername myusername  9655 Feb 25 19:49 wp-settings.php
      -rwxrwxr-x+  1 myusername myusername 18644 Feb 25 19:49 wp-signup.php
      -rwxrwxr-x+  1 myusername myusername  3702 Feb 25 19:49 wp-trackback.php
      -rwxrwxr-x+  1 myusername myusername  3210 Feb 25 19:49 xmlrpc.php

    The getfacl output looks like this:

      # file: .
      # owner: myusername
      # group: myusername
      user::rwx
      group::r-x
      group:apache:rw-
      mask::rwx
      other::r-x

    I simply wanted to change the ownership to myusername:apache and the file permissions to 755. I have no idea how to fix the permissions now. Any help would be really appreciated! Thanks, Tom
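
    A sketch of one recovery path, assuming no ACL entries are actually needed (setfacl -b strips them, and chmod still works even where chown doesn't):

      setfacl -R -b .                        # remove all ACL entries, recursively
      find . -type d -exec chmod 755 {} +    # directories: rwxr-xr-x
      find . -type f -exec chmod 644 {} +    # files: rw-r--r--

    The "+" at the end of each mode string in the listing above is the marker that an ACL is in effect; once setfacl -b runs, it should disappear and the plain mode bits are what the web server sees.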

    Read the article

  • Rails 3 shows 404 error instead of index.html (nginx + unicorn)

    - by Miko
    I have an index.html in public/ that should load by default, but instead I get a 404 error when I try to access http://example.com/:

      The page you were looking for doesn't exist. You may have mistyped the address or the page may have moved.

    This has something to do with nginx and Unicorn, which I am using to serve Rails 3. When I take Unicorn out of the nginx configuration file, the problem goes away and index.html loads just fine. Here is my nginx configuration file:

      upstream unicorn {
          server unix:/tmp/.sock fail_timeout=0;
      }

      server {
          server_name example.com;
          root /www/example.com/current/public;
          index index.html;
          keepalive_timeout 5;

          location / {
              try_files $uri @unicorn;
          }

          location @unicorn {
              proxy_pass http://unicorn;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $host;
              proxy_redirect off;
          }
      }

    My config/routes.rb is pretty much empty:

      Advertise::Application.routes.draw do |map|
        resources :users
      end

    The index.html file is located at public/index.html, and it loads fine if I request it directly: http://example.com/index.html

    To reiterate: when I remove all references to Unicorn from the nginx conf, index.html loads without any problems. I have a hard time understanding why this occurs, because nginx should be trying to load that file on its own by default.

    Here is the error stack from production.log:

      Started GET "/" for 68.107.80.21 at 2010-08-08 12:06:29 -0700
        Processing by HomeController#index as HTML
      Completed in 1ms

      ActionView::MissingTemplate (Missing template home/index with {:handlers=>[:erb, :rjs, :builder, :rhtml, :rxml, :haml], :formats=>[:html], :locale=>[:en, :en]} in view paths "/www/example.com/releases/20100808170224/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/paperclip/app/views", "/www/example.com/releases/20100808170224/vendor/plugins/haml/app/views"):
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/paths.rb:14:in `find'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/lookup_context.rb:79:in `find'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/base.rb:186:in `find_template'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:45:in `_determine_template'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/actionpack-3.0.0.beta4/lib/action_view/render/rendering.rb:23:in `render'
        /usr/local/rvm/gems/ruby-1.9.2-rc2/gems/haml-3.0.15/lib/haml/helpers/action_view_mods.rb:13:in `render_with_haml'
        etc...

    The nginx error log for this virtual host comes up empty:

      2010/08/08 12:40:22 [info] 3118#0: *1 client 68.107.80.21 closed keepalive connection

    My guess is Unicorn is intercepting the request to index.html before nginx gets to process it.
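
    One plausible explanation, not a confirmed one: for a request to "/", try_files $uri checks for a file literally named "/" under the root, finds none, and falls through to @unicorn without ever consulting the index directive. Adding an explicit directory-index step to try_files is a common fix; a sketch:

      location / {
          # check the exact file, then an index.html inside the directory,
          # and only then hand the request off to the Rails app
          try_files $uri $uri/index.html @unicorn;
      }

    The production.log excerpt supports this reading: Rails is receiving GET "/" and routing it to a HomeController with no template, which is where the 404 page comes from.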

    Read the article

  • IPSEC site-to-site Openswan to Cisco ASA

    - by Jim
    I received a list of commands that were run on the right side of the VPN tunnel, which is where the Cisco ASA resides. On my side I have a Linux-based firewall running Debian with Openswan installed. I am having an issue getting to Phase 2 of the VPN negotiation. Here is the Cisco information I was sent ({my_public_ip} = left side of the connection):

      tunnel-group {my_public_ip} type ipsec-l2l
      tunnel-group {my_public_ip} ipsec-attributes
      pre-shared-key fakefake
      crypto map vpn1 1 match add customer-ipsec
      crypto map vpn1 1 set peer {my_public_ip}
      crypto map vpn1 1 set transform-set aes-256-sha
      crypto map vpn1 interface outside
      static (outside,inside) 10.2.1.200 {my_public_ip} netmask 255.255.255.255
      crypto ipsec transform-set aes-256-sha esp-aes-256 esp-sha-hmac
      crypto ipsec security-association lifetime seconds 28800
      crypto ipsec security-association lifetime kilobytes 4608000
      crypto map vpn1 1 match address customer-ipsec
      crypto map vpn1 1 set peer {my_public_ip}
      crypto map vpn1 1 set transform-set aes-256-sha
      crypto map vpn1 interface outside
      crypto isakmp enable outside
      crypto isakmp policy 1
      authentication pre-share
      encryption aes-256
      hash sha
      group 2
      lifetime 86400

    My side's ipsec.conf:

      config setup
          klipsdebug=none
          plutodebug=none
          protostack=netkey
          #nat_traversal=yes

      conn cisco    # name of VPN connection
          type=tunnel
          authby=secret
          # left side (my side)
          left={myPublicIP}
          leftsubnet=172.16.250.0/24    # subnet on the left side to expose to the right side
          leftnexthop=%defaultroute
          # right security gateway (ASA side)
          right={CiscoASA_publicIP}
          rightsubnet=10.2.1.0/24
          rightnexthop=%defaultroute
          # crypto settings
          keyexchange=ike
          ikelifetime=86400s
          auth=esp
          pfs=no
          compress=no
          auto=start

    ipsec.secrets file:

      {CiscoASA_publicIP} {myPublicIP}: PSK "fakefake"

    When I start ipsec from the left/my side, I don't receive any errors; however, when I run the ipsec auto --status command:

      000 "cisco": 172.16.250.0/24==={left_public_ip}<{left_public_ip}>[+S=C]---{left_public_ip_gateway}...{left_public_ip_gateway}--{right_public_ip}<{right_public_ip}>[+S=C]===10.2.1.0/24; prospective erouted; eroute owner: #0
      000 "cisco": myip=unset; hisip=unset;
      000 "cisco": ike_life: 86400s; ipsec_life: 28800s; rekey_margin: 540s; rekey_fuzz: 100%; keyingtries: 0
      000 "cisco": policy: PSK+ENCRYPT+TUNNEL+UP+IKEv2ALLOW+SAREFTRACK+lKOD+rKOD; prio: 24,24; interface: eth0;
      000 "cisco": newest ISAKMP SA: #0; newest IPsec SA: #0;
      000
      000 #2: "cisco":500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 10s; nodpd; idle; import:admin initiate
      000 #2: pending Phase 2 for "cisco" replacing #0

    I'm new to setting up site-to-site IPsec tunnels, so I am unsure what this status information means; all I know is it sits at "pending Phase 2" and I can't ping the other side. Another question: if I do a route -n, should I see anything relating to this connection? Also, I read a few articles where configs contained interface="ipsec0=eth0"; is this an interface that I have to create on the Debian firewall on my side? I appreciate you taking the time to look at this.
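
    For what it's worth, STATE_MAIN_I1 (sent MI1, expecting MR1) with EVENT_RETRANSMIT means Phase 1 never completes: Openswan sends the first main-mode packet and the ASA never answers, so the "pending Phase 2" line is just queued work behind that. Two hedged suggestions, both assumptions to verify rather than a known fix:

      # 1. Pin the Phase 1/2 proposals to what the ASA's policy specifies
      #    (aes-256 / sha / group 2), in case the defaults don't overlap:
      conn cisco
          ike=aes256-sha1;modp1024
          phase2alg=aes256-sha1
          ...

      # 2. Confirm UDP 500 (and 4500) actually reach the ASA and come back:
      tcpdump -n -i eth0 udp port 500 or udp port 4500

    On the route -n question: with protostack=netkey there is no ipsec0 interface and usually no separate route entry; ipsec0 only exists under the older KLIPS stack, so the interface="ipsec0=eth0" examples don't apply to this setup.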

    Read the article

< Previous Page | 368 369 370 371 372 373 374 375 376 377 378 379  | Next Page >