Search Results

Search found 2301 results on 93 pages for 'calico cat'.

Page 72/93 | < Previous Page | 68 69 70 71 72 73 74 75 76 77 78 79  | Next Page >

  • Failing to load rootfs: Ubuntu 10 + grub2 + rootfs ext4 w/ RAID1

    - by James
    I am having problems booting a new Ubuntu 10 (server) install. My primary HD (/dev/sda) is laid out as follows: Device Boot Start End Blocks Id System /dev/sda1 * 1 18 144553+ 83 Linux <-- /BOOT /dev/sda2 19 182401 1464991447+ 5 Extended /dev/sda5 19 2207 17583111 fd Linux raid autodetect /dev/sda6 2208 11934 78132096 fd Linux raid autodetect <-- / (ROOTFS) /dev/sda7 11935 182401 1369276146 fd Linux raid autodetect The rootfs is part of a RAID1 (software) array (currently degraded): # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md2 : active raid1 sda6[1] 78132032 blocks [2/1] [_U] The UUIDs for the partitions are as follows: # blkid /dev/sda1 /dev/sda1: UUID="b25dd301-41b9-4f4d-9b0a-0e31713dd74c" TYPE="ext2" # blkid /dev/sda6 /dev/sda6: UUID="af7b9ede-fa53-c0c1-74be-31ec752c5cd5" TYPE="linux_raid_member" # blkid /dev/md2 /dev/md2: UUID="a0602d42-6855-482f-870c-6f6ecdcdae3f" TYPE="ext4" Finally, I have my grub2 menuentry setup as follows: ### BEGIN /etc/grub.d/10_linux ### menuentry 'Ubuntu, with Linux 2.6.32-25-server' --class ubuntu --class gnu-linux --class gnu --class os { insmod ext2 insmod raid insmod mdraid set root='(hd0,1)' search --no-floppy --fs-uuid --set b25dd301-41b9-4f4d-9b0a-0e31713dd74c linux /vmlinuz-2.6.32-25-server root=UUID=a0602d42-6855-482f-870c-6f6ecdcdae3f ro nosplash noplymouth initrd /initrd.img-2.6.32-25-server } When I attempt to boot, grub loads OK, however I eventually get the following error message: Gave up waiting for root device. ALERT /dev/disk/by-uuid/a0602d42-6855-482f-870c-6f6ecdcdae3f does not exist. Dropping to a shell! If from the grub bootloader I open a grub command line, I can ls (hd0,) and it lists the correct partitions with the UUIDs as shown above - sda6 shows 'a0602d42-6855-482f-870c-6f6ecdcdae3f' (the RAID UUID). If I ls (md2)/ it properly lists all the files on the RAID1 filesystem (ext4) so it doesn't appear to be an issue accessing the raid device. Does anyone have any suggestions as to what the problem might be? I can't figure this one out.
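
    One guess worth ruling out (it is only a guess from the symptoms, not a confirmed diagnosis): grub and the kernel both see the array, but the initramfs may not be assembling md2 at boot, either because /etc/mdadm/mdadm.conf inside the initrd lacks an ARRAY line for it, or because Ubuntu refuses to auto-start a degraded array. From the installed system (or a chroot off a rescue disk), something along these lines would rebuild the initrd with the array definition in place:

        mdadm --detail --scan                              # should print an ARRAY line for /dev/md2
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # only if md2 is missing from the file
        update-initramfs -u                                # regenerate the initrd
        # if the degraded state is the blocker, the kernel option bootdegraded=true
        # on the linux line is the usual workaround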

    Read the article

  • NFS: Server says "authenticated mount request", but client sees "access denied"

    - by zigdon
    I have two machine, an NFS server (RHEL) and a client (Debian). The server has NFS set up, exporting a particular directory: server:~$ sudo /usr/sbin/rpcinfo -p localhost program vers proto port 100000 2 tcp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 910 status 100024 1 tcp 913 status 100021 1 udp 53391 nlockmgr 100021 3 udp 53391 nlockmgr 100021 4 udp 53391 nlockmgr 100021 1 tcp 32774 nlockmgr 100021 3 tcp 32774 nlockmgr 100021 4 tcp 32774 nlockmgr 100007 2 udp 830 ypbind 100007 1 udp 830 ypbind 100007 2 tcp 833 ypbind 100007 1 tcp 833 ypbind 100011 1 udp 999 rquotad 100011 2 udp 999 rquotad 100011 1 tcp 1002 rquotad 100011 2 tcp 1002 rquotad 100003 2 udp 2049 nfs 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100003 2 tcp 2049 nfs 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100005 1 udp 1013 mountd 100005 1 tcp 1016 mountd 100005 2 udp 1013 mountd 100005 2 tcp 1016 mountd 100005 3 udp 1013 mountd 100005 3 tcp 1016 mountd server$ cat /etc/exports /dir *.my.domain.com(ro) client$ grep dir /etc/fstab server.my.domain.com:/dir /dir nfs tcp,soft,bg,noauto,ro 0 0 All seems well, but when I try to mount, I see the following: client$ sudo mount /dir mount.nfs: access denied by server while mounting server.my.domain.com:/dir And on the server I see: server$ tail /var/log/messages Mar 15 13:46:23 server mountd[413]: authenticated mount request from client.my.domain.com:723 for /dir (/dir) What am I missing here? How should I be debugging this?
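
    Since the export is restricted to *.my.domain.com, mountd has to be able to resolve the client's address back to a name matching that wildcard, and that is a common place for "authenticated ... but denied" mismatches to hide. A quick look (a sketch; hostnames are the ones from the question):

        # on the server: what is exported, and with which options?
        /usr/sbin/exportfs -v
        # does the client's IP reverse-resolve to something under my.domain.com?
        host 192.0.2.10                      # substitute the client's real address
        # on the client: is the export list visible at all?
        showmount -e server.my.domain.com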

    Read the article

  • FreeBSD 8 and Samba 3.3 Samba seems to crash/not start?

    - by scraft3613
    I am both new to FreeBSD and Samba, and with that in mind ... I installed Samba 3.3. from Ports. I have this in my rc.conf: #samba nmbd_enable="YES" smbd_enable="YES" winbindd_enable="YES" I can see an active PID: prod1# cat /var/run/smbd.pid 24426 But it seems like smbd isn't running: prod1# ps -auwxx | egrep '[sn]mbd' root 24513 0.0 0.3 21108 4672 ?? Ss Sat02PM 0:00.71 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf If I restart samba with /usr/local/etc/rc.d/samba restart it runs: prod1# ps -auwxx | egrep '[sn]mbd' root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf root 30196 0.0 0.6 35520 10952 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf root 30198 0.0 0.6 35520 10880 ?? S 2:49PM 0:00.00 /usr/local/sbin/smbd -D -s /usr/local/etc/smb.conf Until I do: prod1# smbclient -L prod1 Connection to prod1 failed (Error NT_STATUS_CONNECTION_REFUSED) prod1# prod1# ps -auwxx | egrep '[sn]mbd' root 30188 0.0 0.3 21080 4700 ?? Ss 2:49PM 0:00.00 /usr/local/sbin/nmbd -D -s /usr/local/etc/smb.conf What should I be checking to find out what's going on?

    Read the article

  • What is proper relationship between /etc/hosts and DNS A records for a Linux server?

    - by MountainX
    I have an Ubuntu server. It is going to be a web server with a URI of www.example.com. I have a DNS A record pointing www.example.com to the server's IP address. Let's say I pick "trinity" as the hostname for this server. I want to set up the DNS records correctly. I need reverse DNS to www.example.com, so a CNAME for www.example.com doesn't seem appropriate. Here's my question: Is it considered best practice to set up two DNS records (which in my case would likely be two A records), one for www.example.com and one for trinity.example.com, both pointing to this server's IP address? (Or, even if it is not accepted as a best practice, is it a good idea?) If so, would the following be a proper /etc/hosts file? $ cat /etc/hosts 127.0.1.1 trinity.local trinity 99.100.101.102 trinity.example.com trinity www.example.com This server is a Linode and Linode's docs seem to imply that the above approach is best (if I am reading them correctly). Here's the relevant section. I bolded the line that seems to apply here. Update /etc/hosts Next, edit your /etc/hosts file to resemble the following example, replacing "plato" with your chosen hostname, "example.com" with your system's domain name, and "12.34.56.78" with your system's IP address. As with the hostname, the domain name part of your FQDN does not necesarily need to have any relationship to websites or other services hosted on the server (although it may if you wish). As an example, you might host "www.something.com" on your server, but the system's FQDN might be "mars.somethingelse.com." File:/etc/hosts 127.0.0.1 localhost.localdomain localhost 12.34.56.78 plato.example.com plato The value you assign as your system's FQDN should have an "A" record in DNS pointing to your Linode's IP address. For more information on configuring DNS, please see our guide on configuring DNS with the Linode Manager.
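
    If both A records are created, the result can be sanity-checked from any machine; a small check along these lines (names and address are the ones used above):

        dig +short www.example.com A          # expect 99.100.101.102
        dig +short trinity.example.com A      # expect 99.100.101.102
        dig +short -x 99.100.101.102          # the single PTR record, i.e. which name reverse DNS returns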

    Read the article

  • Pages load fine in the browser, but a 404 Not Found is reported for the GET on all pages except the index

    - by user885983
    I believe this question is more suited to serverfault (please correct me if not). This issue appears very similar to this question (except there are no 301 Moved Permanently for any pages). The domain is yorkshirebadges.co.uk. For example, loading yorkshirebadges.co.uk or yorkshirebadges.co.uk/index.php reports no 404s during network inspection. But every other page (/contact.php, /products.php) report a not found. Mod_rewrite is being used on the site, I checked this out but didn't see any obvious errors. It's included below for reference: RewriteEngine on RewriteRule ^store/material/([^/\.]+)/price/?([^/\.]+)?$ products.php?prodType=$1&price=$2 RewriteRule ^store/price/?([^/\.]+)?$ products.php?price=$1; RewriteRule ^store/material/?([^/\.]+)?$ products.php?prodType=$1 RewriteRule ^store/([^/\.]+)/?$ products.php?prodCat=$1 RewriteRule ^store/([^/\.]+)/price/([^/\.]+)$ products.php?prodCat=$1&price=$2 RewriteRule ^store/Type/?([^/\.]+) products.php?prodType=$1 RewriteRule ^store/([^/\.]+)/?([^/\.]+)?$ view-product-details.php?cat=$1&prodName=$2 RewriteRule ^store/([^/\.]+)/material/?([^/\.]+)?$ products.php?prodCat=$1&prodType=$2 RewriteRule analytics http://www.google.com/analytics <IfModule mod_suphp.c> suPHP_ConfigPath /home/yorkshir <Files php.ini> order allow,deny deny from all </Files> </IfModule> Chrome Network Inspection (and firebug on firefox) report 404s on all pages except the index, the server is apache2. Really scratching my head on this one!

    Read the article

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have /dev/md127 RAID5 array that consisted of four drives. I managed to hot remove them from the array and currently /dev/md127 does not have any drives: cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdd1[0] sda1[1] 304052032 blocks super 1.2 [2/2] [UU] md1 : active raid0 sda5[1] sdd5[0] 16770048 blocks super 1.2 512k chunks md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____] unused devices: <none> and mdadm --detail /dev/md127 /dev/md127: Version : 1.2 Creation Time : Thu Sep 6 10:39:57 2012 Raid Level : raid5 Array Size : 8790402048 (8383.18 GiB 9001.37 GB) Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB) Raid Devices : 4 Total Devices : 0 Persistence : Superblock is persistent Update Time : Fri Sep 7 17:19:47 2012 State : clean, FAILED Active Devices : 0 Working Devices : 0 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 2 0 0 2 removed 3 0 0 3 removed I've tried to do mdadm --stop /dev/md127 but: mdadm --stop /dev/md127 mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group? I made sure that it's unmounted, umount -l /dev/md127 and confirmed that it indeed is unmounted: umount /dev/md127 umount: /dev/md127: not mounted I've tried to zero superblock of each drive and I get (for each drive): mdadm --zero-superblock /dev/sde1 mdadm: Unrecognised md component device - /dev/sde1 Here's output of lsof|grep md127: lsof|grep md127 md127_rai 276 root cwd DIR 9,0 4096 2 / md127_rai 276 root rtd DIR 9,0 4096 2 / md127_rai 276 root txt unknown /proc/276/exe What else can I do? LVM is not even installed so it can't be a factor.
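
    When --stop reports it cannot get exclusive access even though nothing is mounted, something else in the kernel usually still holds the device open; a few places to look before anything more drastic (purely diagnostic):

        ls /sys/block/md127/holders/           # any device (dm, another md) stacked on top of the array
        cat /sys/block/md127/md/array_state    # the state the kernel currently reports
        fuser -vm /dev/md127                   # any process with the block device open
        dmsetup ls                             # device-mapper targets, in case something was layered on it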

    Read the article

  • Can't mount Linux usb disk. It just creates a /dev/sg device but no /dev/sd

    - by MTilsted
    I have a Corsair R60 ssd disk which is a disk with both sata and usb connectors. But the usb thing seems to be a bit non-standard, or maybe it's just my fedora linux. When I insert the disk using a usb cable to a running Fedora 14 linux system, a device called /dev/sg3 is added but that is all. No new /dev/sd* device is created so I can't mount the disk. If I look at cat /proc/scsi/sg/device_strs I get ATA Hitachi HTS54321 FB2O HL-DT-ST DVDRAM GSA-T50N RP05 Seagate Desktop 0130 Corsair CSSD-R60GB2 So the disk is there (the last entry), but my linux will for some reason not see it as a usb hard disk. When I insert other usb disks they work fine. It is only this specific disk which causes problems. I have tried on 3 different computers with the same result. A hint to the problem may be that if I add the disk to a windows system (with usb) the disk is called "A fixed disk" and not a portable disk as expected. The disk works fine with linux if I connect it with the sata cable, but I would really like to have it working with usb too. (To mount it on computers without sata). Added: I did try to mount /dev/sg3 but mount says that it's not a block device (file says it's a character special device). Added output from dmesg: [ 97.454073] usb 7-1: USB disconnect, address 2 [ 105.913055] hub 2-0:1.0: unable to enumerate USB device on port 3 [ 107.048054] usb 2-3: new high speed USB device using ehci_hcd and address 5 [ 107.162900] usb 2-3: New USB device found, idVendor=1b1c, idProduct=1ab8 [ 107.162903] usb 2-3: New USB device strings: Mfr=1, Product=2, SerialNumber=5 [ 107.162906] usb 2-3: Product: CSSD-R60GB2 [ 107.162908] usb 2-3: Manufacturer: Corsair [ 107.162910] usb 2-3: SerialNumber: 10111441000000990069 [ 107.167651] scsi7 : usb-storage 2-3:1.0 [ 108.195543] scsi 7:0:0:0: Direct-Access Corsair CSSD-R60GB2 PQ: 1 ANSI: 0 [ 108.197732] scsi 7:0:0:0: Attached scsi generic sg3 type 0

    Read the article

  • Can't get a DNS alias to work on Ubuntu 10.04 with Apache 2

    - by Johnny
    I want to use a DNS alias to configure one of my domains to point to a specific directory on the server. Here is what I've done: Changed the IP address in the domain settings, and it works: $ ping www.example.com PING example.com (124.205.62.xxx): 56 data bytes 64 bytes from 124.205.62.xxx: icmp_seq=0 ttl=48 time=53.088 ms 64 bytes from 124.205.62.xxx: icmp_seq=1 ttl=48 time=52.125 ms ^C --- example.com ping statistics --- 2 packets transmitted, 2 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 52.125/52.606/53.088/0.482 ms Added sites-available and sites-enabled entries: $ ls -l /etc/apache2/sites-available/ total 16 -rw-r--r-- 1 root root 948 2010-04-14 03:27 default -rw-r--r-- 1 root root 7467 2010-04-14 03:27 default-ssl -rw-r--r-- 1 root root 365 2010-06-09 18:27 example.com $ ls -l /etc/apache2/sites-enabled/ total 0 lrwxrwxrwx 1 root root 26 2010-06-09 15:46 000-default -> ../sites-available/default lrwxrwxrwx 1 root root 33 2010-06-09 18:17 001-example.com -> ../sites-available/example.com But it doesn't work, and when I open www.example.com in the browser, it shows a 111 error: The following error was encountered: Connection to 124.205.62.48 Failed The system returned: (111) Connection refused Here is example.com's config: $ cat /etc/apache2/sites-enabled/001-example.com <virtualhost *:80> DocumentRoot "/vhosts/example.com/htdocs/" ServerName www.example.com ServerAlias example.com <Location /> Order Deny,Allow Deny from None Allow from all </Location> #Include /etc/phpmyadmin/apache.conf ErrorLog /vhosts/example.com/logs/error.log CustomLog /vhosts/example.com/logs/access.log combined Could you please tell me how to solve this?
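
    "(111) Connection refused" normally means nothing is listening on port 80 at that address, rather than a vhost misconfiguration, so it may be worth confirming the basics before touching the vhost (a sketch, assuming a stock Ubuntu Apache layout):

        sudo netstat -tlnp | grep ':80 '   # is apache2 listening at all, and on which address?
        apache2ctl -S                      # how Apache maps names to vhosts after parsing the config
        sudo apache2ctl configtest         # syntax check
        sudo service apache2 restart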

    Read the article

  • Generalized strategy for file server virtualization in Xenserver

    - by Jamie
    I'm not shopping as much as I'm looking for some guidance on good idea / bad idea strategies. I'm sure I'm not in the "best practices" budget range. Currently, I have 3 Dell PowerEdge servers running XenServer in a pool. Each node has an Ubuntu file server, serving about 6TB. One is the primary, the other two are rsync targets for backup. The 6TB is stored on their respective local storage disks as an LVM of 3x2TB virtual disks. The fileserver VM disks are also stored on the node local disks. Each node also runs a smattering of light-weight VMs for web, development, windows VMs, and stuff like that. Several of those VM's disks reside on a QNAP NAS to play with live migration. These VM's are often clients of the primary file server (like all the mail, web content, user files are stored on the file server, not on the mail, web, and samba VMs). This all works fine, and is a major step up for us. The downside is that the QNAP is a single point of failure. And the only thing the QNAP is doing is serving migratable VM images, not client data. Someday the PowerEdge local arrays will be full, and we will have to reinvent ourselves again. Is it wise to have heavyweight VMs (like the fileserver, with its 6+ TB disks) on a SAN or NAS? Would it be better to keep the VMs lightweight, have the VM images on a SAN or NAS, and use 2 or more NAS boxes acting as NFS-serving file appliances? A hybrid SAN/NAS that can serve iSCSI for images and NFS for the client VMs? It seems like live migration would be a misnomer if you have to migrate a fileserver with its entire 6+ TB disk. I recognize there are plenty of ways to skin the cat. We've already skinned it a few ways. What makes sense?

    Read the article

  • Slow manipulation of netfilter rules

    - by Ole Martin Eide
    I have a script maintaining gre tunnels and firewall rules using the "ip" and "iptables" tools. Setting up hundreds of tunnels and addresses per interface runs just fine; it takes less than 0.1 seconds per interface. However, when I get around to the firewall rules, everything slows down to 0.5 seconds per insertion. Why is it running so slowly? What can I do to improve the speed? It seems like I could try ipset instead, but I really feel there is something wrong with the kernel or something. The interesting thing is that the first 10 rules run fast, then it slows down... mybox(root) foo# iptables -V iptables v1.3.5 mybox(root) foo# uname -a Linux foo 2.6.18-164.el5 #1 SMP Tue Aug 18 15:51:48 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux mybox(root) foo# cat test.sh #!/bin/sh for n in {1..100} do /sbin/iptables -A OUTPUT -s ${n} -j ACCEPT /sbin/iptables -D OUTPUT -s ${n} -j ACCEPT done mybox(root) foo# time ./test.sh real 1m38.839s user 0m0.100s sys 1m38.724s Appreciate any help. Cheers!
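
    Besides ipset, another widely used approach on older iptables versions is to stop invoking /sbin/iptables once per rule and instead feed the whole batch to iptables-restore in a single call. A rough reworking of the test script along those lines (a sketch only, mirroring the original's bare numeric "addresses"):

        #!/bin/sh
        # build the 100 rules in iptables-save format, then load them in one invocation
        {
          echo '*filter'
          for n in $(seq 1 100); do
            echo "-A OUTPUT -s ${n} -j ACCEPT"
          done
          echo 'COMMIT'
        } > /tmp/rules.test
        # --noflush appends to the existing ruleset instead of replacing it
        /sbin/iptables-restore --noflush < /tmp/rules.test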

    Read the article

  • How to calibrate ASUS k52f battery on Ubuntu?

    - by cutalion
    I'm not sure if the problem is in software or my battery is dying, I'll move my question to another forum if it's not a SO question I have a problem with the battery on my laptop - ASUS K52F. It shows incorrect information about capacity. When I unplug the charger it can work some time, but then it will power off without any warnings. Sometimes it will power off right after I unplug the charger. Here is some info I could get: > uname -a Linux alligator 3.5.0-18-generic #29-Ubuntu SMP Fri Oct 19 10:26:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux > acpi -i Battery 0: Charging, 99%, 18:25:15 until charged Battery 0: design capacity 5235 mAh, last full capacity 69964 mAh = 100% > cat /sys/class/power_supply/BAT0/uevent POWER_SUPPLY_NAME=BAT0 POWER_SUPPLY_STATUS=Charging POWER_SUPPLY_PRESENT=1 POWER_SUPPLY_TECHNOLOGY=Li-ion POWER_SUPPLY_CYCLE_COUNT=0 POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000 POWER_SUPPLY_VOLTAGE_NOW=9246000 POWER_SUPPLY_POWER_NOW=176000 POWER_SUPPLY_ENERGY_FULL_DESIGN=**48400000** POWER_SUPPLY_ENERGY_FULL=**646822000** POWER_SUPPLY_ENERGY_NOW=**643588000** POWER_SUPPLY_MODEL_NAME=K52F-44 POWER_SUPPLY_MANUFACTURER=ASUSTek POWER_SUPPLY_SERIAL_NUMBER= I noticed, that POWER_SUPPLY_ENERGY_NOW and POWER_SUPPLY_ENERGY_FULL are greater than POWER_SUPPLY_ENERGY_FULL_DESIGN. I don't think it's ok :) I can run any additional commands.

    Read the article

  • Package pinning in Debian lenny

    - by bronto
    I need your advice as I don't know if I hit a bug, or I am misunderstanding something. On a Debian Lenny, I am trying to prevent the installation of two particular packages, when they are requested as dependencies fromother packages. I am using the same syntax I successfully used in Squeeze, but with no success at all. On squeeze, the following works as expected: # cat /etc/apt/preferences.d/local-no-pike.pref Package: pike7.6-core Pin: version * Pin-Priority: -1000 If I try to install pike7.6, which depends on pike7.6-core, apt and aptitude refuse to do so. On Lenny, the only difference is that there is no support for "fragments" in /etc/apt/preferences.d, and all preferences must be in the /etc/apt/preferences file. But it's not working. E.g., if the file contains: Package: grub-common Pin: version * Pin-Priority: -1000 apt doesn't stop me from installing grub, which depends on grub-common. I used strace to see if the file is being read, and it is. I was suggested to use some Debug:: options, but they didn't help to pinpoint the problem either. I have google'd a lot with some combinations of "lenny" "prevent" "package" "installation" "pinning" and the like, but nothing nice came out. And of course I read man apt_preferences. What am I missing here?
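
    Whatever the cause turns out to be, apt-cache policy shows whether a pin is actually being applied to a package, which at least separates "the preferences file is not being read" from "the resolver is overriding it" (a sketch, using the package from the example):

        apt-cache policy grub-common    # the -1000 priority should appear here if the pin is honoured
        apt-get -s install grub         # simulate only: would grub-common still be pulled in?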

    Read the article

  • How do I connect over SSH without being asked for the password every time? I already followed some answers here but it doesn't work

    - by MEM
    MAC OS X Lion 10.7.3 1) On the host, I've created an authorized_keys file inside the .ssh folder by doing: touch authorized_keys 2) I've copied my public ssh key into the host's .ssh folder by doing: scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub 3) I've placed its contents inside the authorized_keys file by doing: cat mykey.pub >> authorized_keys 4) Then I've removed the mykey.pub file: rm mykey.pub 5) In my terminal, locally, inside my ~/.ssh folder I ran: ssh-add mykey (notice that it is without the pub extension); 6) I've closed and opened the terminal again. When I first connected to this host, it was added to the *known_hosts* file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect by doing: ssh [email protected] it requests a password! What am I missing here? UPDATE: I've done EVEN TWO MORE THINGS here: 7) Set your key to be the default identity - if the config file doesn't exist, create it: touch ~/.ssh/config and place the following line inside: IdentityFile ~/.ssh/yourkeyname *id_rsa is normally your default key. You should switch to your key. This tells outgoing ssh connections to use this as the default identity.* 8) Add a bash process to your ssh-agent: ssh-agent bash ssh-add ~/.ssh/yourkeyname Lisinge's answer helped but it's not definitive. If we restart our machine, we get prompted for the password again!!! How can we debug this? What can we do here? How can we check where this process is failing? UPDATE 2: If I use: ssh -v -i <keyfile> [email protected] I get, among other things: OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011 Warning: Identity file yourkeyname not accessible: No such file or directory. What does this message refer to? Is the identity file not accessible on the localhost, or is it not accessible on the remote host? Please advise
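
    Two things that cause exactly this symptom often enough to be worth ruling out first (a checklist, not a diagnosis): sshd silently ignores authorized_keys when the remote home directory, ~/.ssh or the file itself is writable by group/other, and ssh -v on the client shows which identity is actually being offered. Note also that the "Identity file yourkeyname not accessible" warning refers to the file on the local machine, i.e. the path given to -i was not found locally.

        # on the host (remote side): the permissions sshd insists on
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

        # on the client: watch which key is offered and whether the server accepts it
        ssh -v -i ~/.ssh/mykey [email protected]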

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VM's. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this: <devices> <emulator>/usr/bin/kvm</emulator> <disk type='file' device='disk'> <source file='/vserver/machine1/disk0.qcow2'/> <target dev='hda' bus='ide'/> </disk> <disk type='file' device='disk'> <source file='/vserver/machine1/disk1.qcow2'/> <target dev='hdb' bus='ide'/> </disk> I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one with same style as above and re-organising my mount structure (The virtual sizes of the current qcow2 files are unfortunately limited.) My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using smth. like gparted) - but only as a last resort. Hopefully it's something very simple I'm missing?
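
    One detail that may be the whole story here (an assumption about the workflow, not a certainty): libvirt does not pick up hand edits to /etc/libvirt/qemu/machine1.xml on a daemon reload; the definition has to be re-registered with virsh define, or changed through virsh edit, and the guest fully shut down and started again. A sketch of that sequence, with a made-up name for the third image:

        # create the new backing file (name and size are only examples)
        qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G

        # either edit through libvirt so it re-parses the XML itself...
        virsh edit machine1
        # ...or keep editing the file by hand and then re-register it
        virsh define /etc/libvirt/qemu/machine1.xml

        # a reboot inside the guest is not enough; stop and start the domain
        virsh shutdown machine1
        virsh start machine1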

    Read the article

  • Configure static IPv6 on Ubuntu

    - by Charles Offenbacher
    I'm trying to configure IPv6 on a dedicated Ubuntu server. My provider gave me a "/64" (whatever that is - I'm still confused) of IPv6 addresses. However, when I try to use them, I can't ping anything. What do I do? :( # ping6 ipv6.google.com PING ipv6.google.com(vx-in-x63.1e100.net) 56 data bytes From fe80::219:d1ff:fefb:42d8 icmp_seq=1 Destination unreachable: Address unreachable From fe80::219:d1ff:fefb:42d8 icmp_seq=2 Destination unreachable: Address unreachable From fe80::219:d1ff:fefb:42d8 icmp_seq=3 Destination unreachable: Address unreachable --- ipv6.google.com ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2014ms # tracepath6 ipv6.google.com 1?: [LOCALHOST] 0.025ms pmtu 1500 1: fe80::219:d1ff:fefb:42d8%eth0 2000.022ms !H Resume: pmtu 1500 # cat /etc/network/interfaces # The loopback network interface auto lo iface lo inet loopback # The primary network interface auto eth0 iface eth0 inet static address 64.***.***.*** netmask 255.255.255.248 gateway 64.***.***.*** iface eth0 inet6 static pre-up modprobe ipv6 address 2607:F878:1:***::1 netmask 64 gateway 2607:F878:1:***(same as address)::1 # ifconfig eth0 Link encap:Ethernet HWaddr 00:19:d1:fb:42:d8 inet addr:64.***.***.*** Bcast:64.***.***.*** Mask:255.255.255.248 inet6 addr: fe80::219:d1ff:fefb:42d8/64 Scope:Link inet6 addr: 2607:f878:1:***::1/64 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:52451 errors:0 dropped:0 overruns:0 frame:0 TX packets:39729 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:6817761 (6.8 MB) TX bytes:6153835 (6.1 MB) Interrupt:41 Base address:0xc000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:166 errors:0 dropped:0 overruns:0 frame:0 TX packets:166 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:31714 (31.7 KB) TX bytes:31714 (31.7 KB)
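
    The !H from tracepath6 suggests the gateway is never reached at all, so it may help to see what routes exist and whether the gateway ever answers neighbour discovery (a diagnostic sketch; note that the gateway configured above is the same address as the host's own, which looks suspicious in itself):

        ip -6 route show                   # is there a default route, and via which gateway?
        ip -6 neigh show dev eth0          # has the gateway answered neighbour solicitation?
        ping6 -I eth0 2607:F878:1:***::1   # substitute the gateway your provider actually assigned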

    Read the article

  • CentOS 6.5 x64 + Ajenti + nginx + php-fpm + mysql setup unable to load the PHP page

    - by Francis
    I'm using Ajenti to set up a PHP web server. I can load the PHP info page, but when I try to load a PHP application copied from another server, I get a blank page with only this entry appearing in the access log. I have no clue what is wrong or how to troubleshoot further; please advise. x.x.x.x - - [28/May/2014:10:08:37 -0400] "GET /index.php HTTP/1.1" 200 31 "-" "Mozilla/5.0 (Windows NT 6.1; rv:29.0) Gecko/20100101 Firefox/29.0" 2.6.32-431.1.2.0.1.el6.x86_64 cat /etc/redhat-release CentOS release 6.5 (Final) The vhost config: AUTOMATICALLY GENERATED - DO NO EDIT! server { listen *:80; server_name test.com www.test.com; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; root /www; index index.html index.htm index.php; location ~ \.php$ { alias /www; fastcgi_index index.php; include fcgi.conf; fastcgi_pass unix:/var/run/php-fcgi-php-fcgi-0.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } }

    Read the article

  • clocksource tsc unstable

    - by amorfis
    Ok, now I have real server fault ;) After some time from booting (about one minute) my server hangs. All I can do is hard reset. Then after restart in /var/log/kern.log I can find: Jul 29 22:38:57 leonidas kernel: [ 90.729598] longhaul: Failed to set requested frequency! Jul 29 22:38:57 leonidas kernel: [ 90.731252] longhaul: Enabling "Ignore Revision ID" option. Jul 29 22:38:57 leonidas kernel: [ 91.201461] longhaul: Failed to set requested frequency! Jul 29 22:38:57 leonidas kernel: [ 91.201482] longhaul: Disabling ACPI C3 support. Jul 29 22:38:57 leonidas kernel: [ 91.204230] longhaul: Disabling "Ignore Revision ID" option. Jul 29 22:38:58 leonidas kernel: [ 91.416133] longhaul: Failed to set requested frequency! Jul 29 22:38:58 leonidas kernel: [ 91.416152] longhaul: Enabling "Ignore Revision ID" option. Jul 29 22:38:58 leonidas kernel: [ 91.960048] Clocksource tsc unstable (delta = -105611479 ns) I found some resources on the net, and it said to change clocksource, or disable ACPI. I tried disabling ACPI but it didn't help (but I noticed there was longer time before hanging). I can't change clock to hpet, because my system doesn't have such one. Output of cat /sys/devices/system/clocksource/clocksource0/available_clocksource: acpi_pm jiffies tsc My system is ubuntu server on VIA Epia hardware.
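
    With no hpet available, one workaround sometimes suggested for longhaul/VIA boards (offered as a guess, not a root-cause fix) is to force the acpi_pm clocksource and see whether the hangs stop:

        # at runtime, before the machine has a chance to hang
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource

        # or persistently, by adding this to the kernel command line in the boot loader config:
        #   clocksource=acpi_pm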

    Read the article

  • Glassfish v3 fails on startup: "Cannot allocate memory"

    - by Shisoft
    It is clear in this Question Fail to start Glassfish 3.1: java.io.IOException: error=12, Cannot allocate memory But in my case,I have a 512M memory Ubuntu 10.04 vps.It seems that I don't need to change any configure.But when start the server,I got this exception VM failed to start: java.io.IOException: Cannot run program "/usr/lib/jvm/java-6-sun-1.6.0.22/bin/java" (in directory "/home/glassfish/glassfish/domains/domain1/config"): java.io.IOException: error=12, Cannot allocate memory So,I set <jvm-options>-Xmx512</jvm-options> to <jvm-options>-Xmx400</jvm-options> The exception remains.What did I do something wrong? result of free -m total used free shared buffers cached Mem: 512 43 468 0 0 0 -/+ buffers/cache: 43 468 Swap: 0 0 0 result of cat /proc/user_beancounters Version: 2.5 uid resource held maxheld barrier limit failcnt 146049: kmemsize 2670652 5385253 51200000 51200000 0 lockedpages 0 8 2048 2048 0 privvmpages 11134 134522 131200 262200 4 shmpages 648 1352 128000 128000 0 dummy 0 0 0 0 0 numproc 12 73 500 500 0 physpages 6519 28162 0 200000000 0 vmguarpages 0 0 512000 512000 0 oomguarpages 6527 28169 512000 512000 0 numtcpsock 4 14 4096 4096 0 numflock 0 5 2048 2048 0 numpty 1 2 32 32 0 numsiginfo 0 3 1024 1024 0 tcpsndbuf 159600 265744 20480000 20480000 0 tcprcvbuf 65536 3590352 20480000 20480000 0 othersockbuf 44232 90640 20480000 20480000 0 dgramrcvbuf 0 12848 10240000 10240000 0 numothersock 22 31 2048 2048 0 dcachesize 0 0 10240000 10240000 0 numfile 1002 1474 50000 50000 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 dummy 0 0 0 0 0 numiptent 24 24 2048 2048 0 Thanks
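
    Two observations from the output already pasted above, offered as leads rather than answers: the beancounters line for privvmpages shows a failcnt of 4, meaning the container has hit its memory limit at least once, and -Xmx takes a unit suffix, so -Xmx512 and -Xmx400 request 512 and 400 bytes rather than megabytes. Checking and correcting the setting might look like this (path taken from the error message):

        grep -n 'Xmx' /home/glassfish/glassfish/domains/domain1/config/domain.xml
        # a value with an explicit suffix, sized for a 512 MB container, e.g.:
        #   <jvm-options>-Xmx256m</jvm-options>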

    Read the article

  • Configuring postfix with Gmail

    - by MultiformeIngegno
    This is what I did.. sudo apt-get install postfix This is my /etc/postfix/main.cf: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=no smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache myhostname = tsXXX561.server.topcloud.it alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = relayhost = [smtp.gmail.com]:587 mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = loopback-only default_transport = smtp relay_transport = smtp inet_protocols = all # SASL Settings smtp_use_tls=yes smtp_sasl_auth_enable = yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_security_options = noanonymous smtp_sasl_tls_security_options = noanonymous smtp_tls_CAfile = /etc/postfix/cacert.pem Then I created the file /etc/mailname with my hostname as content: tsXXX561.server.topcloud.it Then I created the file /etc/postfix/sasl_passwd: [smtp.gmail.com]:587 [email protected]:gmail_password Then sudo postmap /etc/postfix/sasl/passwd sudo cat /etc/ssl/certs/Thawte_Premium_Server_CA.pem | sudo tee -a /etc/postfix/cacert.pem service postfix restart Still sends nothing... I'm on Ubuntu Server 12.04.
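
    When postfix "sends nothing", the queue and the mail log normally say why; a first pass could be (standard Ubuntu paths assumed; note that the question ran postmap on /etc/postfix/sasl/passwd while the file was created as /etc/postfix/sasl_passwd):

        mailq                                   # anything deferred, and the reason given
        sudo tail -n 50 /var/log/mail.log       # the actual conversation with smtp.gmail.com
        sudo postmap /etc/postfix/sasl_passwd   # regenerate the map at the path main.cf points to
        sudo postfix reload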

    Read the article

  • FreeNAS pool configuration - RAID1 + other drives

    - by trnelson
    Simple questions, really. I found this answer with a similar setup, but not sure it answers my question. If it does, I'm curious why since the answer seems a bit unsure: ZFS Hard Drive Configuration in FreeNAS I'm building a server which will be used primarily for backup, plus some media streaming, possibly with Plex. I seem to understand most everything I need, but I'm still a bit confused on how pools work, and how to configure them for my scenario. I will have 2x 2TB WD Red drives, which I plan on using in a mirrored set up (RAID1). This would be for backup, and I'd also like to do offsite backup to my CrashPlan account from this array. I also have a few other drives: 1.5TB, 320GB, 250GB. I'm not sure exactly what to do with them yet, but looking for options. FreeNAS OS will be running from a 16GB USB Flash drive. Would it be wise to use the 1.5TB as a backup-backup, essentially as a mirror or perhaps for snapshots of the 2TB RAID1? I'm still learning about snapshots. Should the 2TB mirrored drives be in their own pool? Should the other drives be set up in their own pools as well, or should they be JBOD in a single pool? They may or may not get much use since the 2TB array is plenty for me. Does a dataset basically mimic the idea of a partition or a network share? In other words, I would map \SERVER\Share to X: on my laptop? Let's say I wanted to use the 250GB drive as an encrypted drive to store all of my cat pictures. Would it have to be in its own pool? If I use jails apps, should they go in the backup RAID1, or in another place? Thank you!

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks(/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the second sdb partition. I ran mdadm --fail and mdadm --remove for all partitions from the failed disk on the arrays that use them, although all such commands failed with mdadm: set device faulty failed for /dev/sdb2: No such device mdadm: hot remove failed for /dev/sdb2: No such device or address Then I hot swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays got rebuilt properly except one, because in /dev/md2, the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare to the partition, and its status remains degraded. Here's what mdadm --detail /dev/md2 shows: [root@ldmohanr ~]# mdadm --detail /dev/md2 /dev/md2: Version : 1.1 Creation Time : Tue Dec 27 22:55:14 2011 Raid Level : raid1 Array Size : 52427708 (50.00 GiB 53.69 GB) Used Dev Size : 52427708 (50.00 GiB 53.69 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Fri Nov 23 14:59:56 2012 State : active, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Name : ldmohanr.net:2 (local to host ldmohanr.net) UUID : 4483f95d:e485207a:b43c9af2:c37c6df1 Events : 5912611 Number Major Minor RaidDevice State 0 8 2 0 active sync /dev/sda2 1 0 0 1 removed 2 8 18 - spare /dev/sdb2 To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that no longer refers to device number 1. I need help with removing device number 1 with 'removed' status and making /dev/sdb2 active.
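
    mdadm accepts the keywords failed and detached in place of a device name for exactly this case, where the member's device node no longer exists; a possible sequence (a sketch, to be checked against the man page of the installed mdadm version):

        mdadm /dev/md2 --remove detached    # drop members whose device node has disappeared
        mdadm /dev/md2 --remove failed      # and/or anything still marked faulty
        mdadm --detail /dev/md2             # slot 1 should now be free so the spare sdb2 can rebuild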

    Read the article

  • Can't access server sound card when vnc'd into ubuntu server

    - by Corey Kennedy
    I've set up my Ubuntu 10 server with xfce, nxserver, and now tightvncserver so that I can control it remotely from my Windows 7 laptop. NX is working fine for remote access, but when I run (for example) exaile, no sound will be sent through the server's sound card. I installed tightvncserver and connected, but ran into the same problem. Exaile opens, sound isn't muted, I can see that sound cards are installed (via cat /proc/asound/cards), but I can't seem to get the remote sessions to access the server's sound card. Also, just to confirm that the sound card was working, I hooked up a monitor/keyboard to the server and opened a local xfce session. That worked fine. While I had the local session running, I was also able to open a remote session with NXClient and start exaile - which then successfully piped sound to the local card. After disconnecting the monitor/keyboard and moving the box back to its normal spot, though, I was not able to play sound via either an NX or VNC session. Does anyone have any suggestions? Surely it's possible to configure my remote sessions to pipe sound to the server's sound card, right? Or at least get xfce up and running without a monitor or keyboard but with access to the sound card so I can VNC into it? Thanks!

    Read the article

  • Why is my cron daemon is being killed every few minutes?

    - by user113215
    As of about a week ago, my cron daemon refuses to stay running. I'm using Debian 6 x64 on an OpenVZ virtual machine. Running something like pgrep cron shows that the daemon isn't running. I start the service with service cron start or /etc/init.d/cron start and it launches, but it disappears from the running process list after a few minutes (varying anywhere between 1 - 30 minutes before the process is killed again). Using strace -f service cron start, I can see that the process is being killed for some reason: nanosleep({60, 0}, <unfinished ...> +++ killed by SIGKILL +++ There's nothing relevant in /var/log/syslog, /var/log/messages, /var/log/auth.log, or /var/log/kern.log to explain why the the process is dying. The system has at least 800 MB of free memory, and cat /proc/loadavg returns 0.22 0.13 0.04 so resources shouldn't be the issue. With cron running, free -m reports: total used free shared buffers cached Mem: 1024 211 812 0 0 0 -/+ buffers/cache: 211 812 Swap: 0 0 0 I also tried removing and reinstalling the cron package using apt-get. Update: I initially thought the problem was a resource issues. I erased my entire VPS and started from a fresh Debian image. There is now nothing else running on the system, but even from a clean install my cron daemon is still being killed at random. What else should I check? How do I find out what's killing my crond?
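
    On OpenVZ a process can be killed from the hardware node without anything appearing in the container's own logs, so two places that may still show a trace (diagnostic sketch):

        cat /proc/user_beancounters    # a non-zero failcnt in the last column means a container limit was hit
        dmesg | tail -n 50             # any oom-killer or "killed" traces visible from inside the container

    If both of those are clean, the provider's logs on the host node are probably the next place to ask about.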

    Read the article
