Search Results

Search found 440 results on 18 pages for 'md oppenheimer'.

Page 11/18 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Kernel panic when bringing up DRBD resource

    - by sc.
    I'm trying to set up two machines synchronizing with DRBD. The storage is set up as follows: PV - LVM - DRBD - CLVM - GFS2. DRBD is set up in dual-primary mode. The first server is set up and running fine in primary mode, and the drives on the first server have data on them. I've set up the second server and I'm trying to bring up the DRBD resources. I created all the base LVs to match the first server. After initializing the resources with drbdadm create-md storage, I bring up the resources by issuing drbdadm up storage. After issuing that command, I get a kernel panic and the server reboots in 30 seconds. Here's a screen capture. My configuration is as follows:

        OS: CentOS 6

        uname -a
        Linux host.structuralcomponents.net 2.6.32-279.5.2.el6.x86_64 #1 SMP Fri Aug 24 01:07:11 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

        rpm -qa | grep drbd
        kmod-drbd84-8.4.1-2.el6.elrepo.x86_64
        drbd84-utils-8.4.1-2.el6.elrepo.x86_64

        cat /etc/drbd.d/global_common.conf
        global {
            usage-count yes;
            # minor-count dialog-refresh disable-ip-verification
        }
        common {
            handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
            }
            startup {
                # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
                become-primary-on both;
                wfc-timeout 30;
                degr-wfc-timeout 10;
                outdated-wfc-timeout 10;
            }
            options {
                # cpu-mask on-no-data-accessible
            }
            disk {
                # size max-bio-bvecs on-io-error fencing disk-barrier disk-flushes
                # disk-drain md-flushes resync-rate resync-after al-extents
                # c-plan-ahead c-delay-target c-fill-target c-max-rate
                # c-min-rate disk-timeout
            }
            net {
                # protocol timeout max-epoch-size max-buffers unplug-watermark
                # connect-int ping-int sndbuf-size rcvbuf-size ko-count
                # allow-two-primaries cram-hmac-alg shared-secret after-sb-0pri
                # after-sb-1pri after-sb-2pri always-asbp rr-conflict
                # ping-timeout data-integrity-alg tcp-cork on-congestion
                # congestion-fill congestion-extents csums-alg verify-alg
                # use-rle
                protocol C;
                allow-two-primaries yes;
                after-sb-0pri discard-zero-changes;
                after-sb-1pri discard-secondary;
                after-sb-2pri disconnect;
            }
        }

        cat /etc/drbd.d/storage.res
        resource storage {
            device /dev/drbd0;
            meta-disk internal;
            on host.structuralcomponents.net {
                address 10.10.1.120:7788;
                disk /dev/vg_storage/lv_storage;
            }
            on host2.structuralcomponents.net {
                address 10.10.1.121:7788;
                disk /dev/vg_storage/lv_storage;
            }
        }

    /var/log/messages is not logging anything about the crash. I've been trying to find a cause of this but have come up with nothing. Can anyone help me out? Thanks.
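
    A minimal bring-up and verification sequence for a dual-primary resource like the one above, offered only as a sketch (the resource name storage comes from the question; everything else is generic). It also checks that the loaded kernel module and the installed userland tools agree, since a mismatched kmod-drbd / drbd-utils pair is a common source of crashes on bring-up:

        # Confirm the running module and the userland tools report the same version
        modinfo drbd | grep ^version
        cat /proc/drbd            # shows the module version once it is loaded
        drbdadm --version

        # Initialize metadata and bring the resource up on the new node
        drbdadm create-md storage
        drbdadm up storage

        # Check state explicitly instead of assuming success
        drbdadm cstate storage    # connection state
        drbdadm dstate storage    # disk state
        cat /proc/drbd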

    Read the article

  • Can't re-mount existing RAID10 on Ubuntu

    - by Zoran
    I saw similar questions, but didn't find a solution to my problem. After a power cut, one of my RAID10 arrays (4 disks) appears to be malfunctioning. I can make the array active, but I cannot mount it. I always get the same error:

        mount: you must specify the filesystem type

    So, here is what I get when I type mdadm --detail /dev/md0:

        /dev/md0:
            Version : 00.90.03
            Creation Time : Tue Sep 1 11:00:40 2009
            Raid Level : raid10
            Array Size : 1465148928 (1397.27 GiB 1500.31 GB)
            Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
            Raid Devices : 4
            Total Devices : 3
            Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Mon Jun 11 09:54:27 2012
            State : clean, degraded
            Active Devices : 3
            Working Devices : 3
            Failed Devices : 0
            Spare Devices : 0
            Layout : near=2, far=1
            Chunk Size : 64K
            UUID : 1a02e789:c34377a1:2e29483d:f114274d
            Events : 0.166

            Number Major Minor RaidDevice State
            0      8     16    0          active sync /dev/sdb
            1      0     0     1          removed
            2      8     48    2          active sync /dev/sdd
            3      8     64    3          active sync /dev/sde

    In /etc/mdadm/mdadm.conf I have the defaults:

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions
        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes
        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>
        # instruct the monitoring daemon where to send mail alerts
        MAILADDR root
        # definitions of existing MD arrays
        ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d
        ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d
        # This file was auto-generated...

    So, my question is: how can I mount the md0 array (md1 mounted without problems) in order to preserve the existing data?
    One more thing: the fdisk -l command gives the following result:

        Disk /dev/sdb: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x660a6799
        Device Boot Start End Blocks Id System
        /dev/sdb1 * 1 88217 708603021 83 Linux
        /dev/sdb2 88218 91201 23968980 5 Extended
        /dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris

        Disk /dev/sdc: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x0008f8ae
        Device Boot Start End Blocks Id System
        /dev/sdc1 1 88217 708603021 83 Linux
        /dev/sdc2 88218 91201 23968980 5 Extended
        /dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris

        Disk /dev/sdd: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x4be1abdb
        Device Boot Start End Blocks Id System

        Disk /dev/sde: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xa4d5632e
        Device Boot Start End Blocks Id System

        Disk /dev/sdf: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xdacb141c
        Device Boot Start End Blocks Id System

        Disk /dev/sdg: 750.1 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xdacb141c
        Device Boot Start End Blocks Id System

        Disk /dev/md1: 750.1 GB, 750156251136 bytes
        2 heads, 4 sectors/track, 183143616 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Disk identifier: 0xdacb141c
        Device Boot Start End Blocks Id System

        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: ignoring extra data in partition table 5
        Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite)

        Disk /dev/md0: 1500.3 GB, 1500312502272 bytes
        255 heads, 63 sectors/track, 182402 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0x660a6799
        Device Boot Start End Blocks Id System
        /dev/md0p1 * 1 88217 708603021 83 Linux
        /dev/md0p2 88218 91201 23968980 5 Extended
        /dev/md0p5 ? 121767 155317 269488144 20 Unknown

    And one more thing. When using the mdadm --examine command, here is the result:

        mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd
        ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d devices=/dev/sdf
        ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde

    md0 has 3 devices which are active. Can someone instruct me how to solve this issue? If possible, I would prefer not to remove the faulty HDD. Please advise.
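
    As a hedged sketch of how a degraded-but-assembled array like this one could be probed before mounting (the device names follow the fdisk output above, but the ext3 guess and mount point are assumptions, not taken from the question):

        # Confirm the array is assembled and see what signature, if any, is on it
        cat /proc/mdstat
        blkid /dev/md0 /dev/md0p1     # reports any filesystem type found
        file -s /dev/md0p1

        # fdisk shows md0 carries a partition table, so mount the partition, read-only first
        mkdir -p /mnt/md0
        mount -t ext3 -o ro /dev/md0p1 /mnt/md0

        # While investigating, only use a non-destructive check (-n reports, never repairs)
        fsck -n /dev/md0p1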

    Read the article

  • Nvidia drivers don't work with mainline kernel

    - by dutchie
    I want to try some of the new features in the btrfs filesystem, and to do that I need to use a newer kernel than is included in Ubuntu 12.04. To do that, I have installed linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb, linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb, and linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb from the mainline kernel download here. However, on rebooting into the 3.4 kernel, my desktop is stuck at a very low resolution and I cannot increase it to the full. This did happen when I first installed, but a simple install of the nvidia-current package got everything working nicely with my GTX570 card. There were appear to be some DKMS errors when I installed the kernel, and they indicated I should look at /var/lib/dkms/nvidia-current/295.40/build/make.log: josh@sirius:~/Downloads$ sudo dpkg -i linux-*.deb Selecting previously unselected package linux-headers-3.4.0-030400. (Reading database ... 309400 files and directories currently installed.) Unpacking linux-headers-3.4.0-030400 (from linux-headers-3.4.0-030400_3.4.0-030400.201205210521_all.deb) ... Selecting previously unselected package linux-headers-3.4.0-030400-generic. Unpacking linux-headers-3.4.0-030400-generic (from linux-headers-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ... Selecting previously unselected package linux-image-3.4.0-030400-generic. Unpacking linux-image-3.4.0-030400-generic (from linux-image-3.4.0-030400-generic_3.4.0-030400.201205210521_amd64.deb) ... Done. Setting up linux-headers-3.4.0-030400 (3.4.0-030400.201205210521) ... Setting up linux-headers-3.4.0-030400-generic (3.4.0-030400.201205210521) ... Examining /etc/kernel/header_postinst.d. run-parts: executing /etc/kernel/header_postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64) Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information. Setting up linux-image-3.4.0-030400-generic (3.4.0-030400.201205210521) ... Running depmod. update-initramfs: deferring update (hook will be called later) Examining /etc/kernel/postinst.d. run-parts: executing /etc/kernel/postinst.d/dkms 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic ERROR (dkms apport): kernel package linux-headers-3.4.0-030400-generic is not supported Error! Bad return status for module build on kernel: 3.4.0-030400-generic (x86_64) Consult /var/lib/dkms/nvidia-current/295.40/build/make.log for more information. run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic update-initramfs: Generating /boot/initrd.img-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/pm-utils 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/update-notifier 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic run-parts: executing /etc/kernel/postinst.d/zz-update-grub 3.4.0-030400-generic /boot/vmlinuz-3.4.0-030400-generic Generating grub.cfg ... 
Found linux image: /boot/vmlinuz-3.4.0-030400-generic Found initrd image: /boot/initrd.img-3.4.0-030400-generic Found linux image: /boot/vmlinuz-3.2.0-24-generic Found initrd image: /boot/initrd.img-3.2.0-24-generic Found memtest86+ image: /memtest86+.bin Found Ubuntu 12.04 LTS (12.04) on /dev/sda1 Found Windows 7 (loader) on /dev/sda2 Found Windows 7 (loader) on /dev/sda3 done /var/lib/dkms/nvidia-current/295.40/build/make.log: DKMS make.log for nvidia-current-295.40 for kernel 3.4.0-030400-generic (x86_64) Thu Jun 7 00:58:39 BST 2012 NVIDIA: calling KBUILD... test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo; \ echo " ERROR: Kernel configuration is invalid."; \ echo " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo; \ /bin/false) mkdir -p /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions ; rm -f /var/lib/dkms/nvidia-current/295.40/build/.tmp_versions/* make -f scripts/Makefile.build obj=/var/lib/dkms/nvidia-current/295.40/build cc -Wp,-MD,/var/lib/dkms/nvidia-current/295.40/build/.nv.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.6/include -I/usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include -Iarch/x86/include/generated -Iinclude -include /usr/src/linux-headers-3.4.0-030400-generic/include/linux/kconfig.h -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=1024 -Wno-unused-but-set-variable -fno-omit-frame-pointer -fno-optimize-sibling-calls -pg -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -I/var/lib/dkms/nvidia-current/295.40/build -Wall -MD -Wsign-compare -Wno-cast-qual -Wno-error -D__KERNEL__ -DMODULE -DNVRM -DNV_VERSION_STRING=\"295.40\" -Wno-unused-function -Wuninitialized -mno-red-zone -mcmodel=kernel -UDEBUG -U_DEBUG -DNDEBUG -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(nv)" -D"KBUILD_MODNAME=KBUILD_STR(nvidia)" -c -o /var/lib/dkms/nvidia-current/295.40/build/.tmp_nv.o /var/lib/dkms/nvidia-current/295.40/build/nv.c In file included from include/linux/kernel.h:19:0, from include/linux/sched.h:55, from include/linux/utsname.h:35, from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:38, from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13: include/linux/bitops.h: In function ‘hweight_long’: include/linux/bitops.h:66:41: warning: signed and unsigned type in conditional expression [-Wsign-compare] In file included from /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess.h:577:0, from include/linux/poll.h:14, from /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:97, from /var/lib/dkms/nvidia-current/295.40/build/nv.c:13: /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h: In function ‘copy_from_user’: /usr/src/linux-headers-3.4.0-030400-generic/arch/x86/include/asm/uaccess_64.h:53:6: warning: comparison between signed and unsigned integer expressions [-Wsign-compare] In file included from 
/var/lib/dkms/nvidia-current/295.40/build/nv.c:13:0: /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h: At top level: /var/lib/dkms/nvidia-current/295.40/build/nv-linux.h:114:75: fatal error: asm/system.h: No such file or directory compilation terminated. make[3]: *** [/var/lib/dkms/nvidia-current/295.40/build/nv.o] Error 1 make[2]: *** [_module_/var/lib/dkms/nvidia-current/295.40/build] Error 2 NVIDIA: left KBUILD. nvidia.ko failed to build! make[1]: *** [module] Error 1 make: *** [module] Error 2
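
    For reference, a sketch of how the failing DKMS build can be re-run by hand, either to reproduce this error or to retry after installing a driver release that knows about the 3.4 headers (the version string 295.40 is the one from the log above and would change with a newer package):

        # Show which modules DKMS knows about and their build status
        dkms status

        # Retry the nvidia module build against the mainline kernel headers
        sudo dkms build   -m nvidia-current -v 295.40 -k 3.4.0-030400-generic
        sudo dkms install -m nvidia-current -v 295.40 -k 3.4.0-030400-generic

        # The full compiler output lands in the same make.log quoted above
        less /var/lib/dkms/nvidia-current/295.40/build/make.log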

    Read the article

  • Using Linux LVM, can I change the number of stripes and "rebalance" the logical volume?

    - by mss
    I created a RAID10 by adding two RAID1 md devices as physical volumes to a volume group. Unfortunately, it looks like I forgot to specify the number of stripes when I created the logical volumes (it was late):

        PV         VG     Fmt  Attr PSize   PFree
        /dev/md312 volume lvm2 a-   927.01G 291.01G
        /dev/md334 volume lvm2 a-   927.01G 927.01G

    I know that I can move all the data of a logical volume from one physical volume to another with pvmove. It also looks like lvextend supports an -i switch to change the number of stripes. Is there any way to combine these two, i.e. change the number of stripes and "rebalance" the data over the stripes based on the allocation policy? According to this mail from Ross Walker from March 2010 it isn't possible, but maybe this has changed since then.
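
    A sketch of one way to end up with a striped, "rebalanced" layout given two PVs like those above: create a new two-stripe LV and copy the data onto it. All names and sizes below are illustrative, and this assumes enough free space in the VG for a second copy of the data:

        # New LV striped across both mirrored PVs (-i 2 = two stripes, -I 64 = 64K stripe size)
        lvcreate -n data_striped -i 2 -I 64 -L 600G volume

        # Copy at the filesystem level, then swap mount points when satisfied
        mkfs.ext4 /dev/volume/data_striped
        mount /dev/volume/data_striped /mnt/new
        rsync -aHAX /srv/data/ /mnt/new/

        # For comparison, pvmove relocates extents of an existing LV onto a chosen PV,
        # but it preserves the original (unstriped) layout rather than restriping it
        pvmove -n old_lv /dev/md312 /dev/md334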

    Read the article

  • Installing nGinX Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nGinX as a reverse proxy on CentOS 5 with apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note- in the instructions, for the url to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running .configure I get the following: checking for OS + Linux 2.6.18-028stab094.3 x86_64 checking for C compiler ... found + using GNU C compiler + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51) checking for gcc -pipe switch ... found checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... not found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... found checking for Linux specific features checking for epoll ... found checking for sendfile() ... found checking for sendfile64() ... found checking for sys/prctl.h ... found checking for prctl(PR_SET_DUMPABLE) ... found checking for sched_setaffinity() ... found checking for crypt_r() ... found checking for sys/vfs.h ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for kqueue ... not found checking for crypt() ... not found checking for crypt() in libcrypt ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... found checking for O_DIRECT ... found checking for F_NOCACHE ... not found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... not found checking for dlopen() in libdl ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system endianess ... little endianess checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found checking for System V shared memory ... found checking for POSIX semaphores ... not found checking for POSIX semaphores in libpthread ... found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... not found checking for struct dirent.d_type ... found checking for PCRE library ... found checking for system md library ... not found checking for system md5 library ... 
not found checking for OpenSSL md5 crypto library ... found checking for sha1 in system md library ... not found checking for OpenSSL sha1 crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library nginx path prefix: "/usr/local/nginx" nginx binary file: "/usr/local/nginx/sbin/nginx" nginx configuration prefix: "/usr/local/nginx/conf" nginx configuration file: "/usr/local/nginx/conf/nginx.conf" nginx pid file: "/usr/local/nginx/logs/nginx.pid" nginx error log file: "/usr/local/nginx/logs/error.log" nginx http access log file: "/usr/local/nginx/logs/access.log" nginx http client request body temporary files: "client_body_temp" nginx http proxy temporary files: "proxy_temp" nginx http fastcgi temporary files: "fastcgi_temp" nginx http uwsgi temporary files: "uwsgi_temp" nginx http scgi temporary files: "scgi_temp" It says if you get errors to stop and make sure packages are installed. I didn't get errors but as you can see I got several "not founds". Are those considered errors? If so how do I resolve that. And as noted in the link, I cannot install through yum, because it wont work with plesk then. Thanks!
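
    The "not found" probes are mostly optional features being tested; the configuration summary at the end is what matters, and configure aborts with an explicit error when a mandatory dependency is missing. A sketch of the usual CentOS 5 source-build sequence, assuming the PCRE, zlib and OpenSSL development packages are wanted (package names and paths may differ on a Plesk box):

        # Headers needed for rewrite (PCRE), gzip (zlib) and SSL support
        yum install -y pcre-devel zlib-devel openssl-devel gcc make

        tar xzf nginx-1.0.10.tar.gz && cd nginx-1.0.10
        ./configure --prefix=/usr/local/nginx --with-http_ssl_module
        make
        make install

        # Quick sanity check of the reverse-proxy config before starting
        /usr/local/nginx/sbin/nginx -t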

    Read the article

  • mdadm email notification - change the default subject

    - by Shirker
    I have this entry in my crontab:

        00 */1 * * * /sbin/mdadm --monitor --scan -1 [email protected]

    It works perfectly, but I need to change the default email template. Instead of the subject "mdadm monitoring" I would like it to be "mdadm monitoring from «IP ADDRESS»" or something like that.

        [root@mail ~]# rpm -ql mdadm-3.2.5-4.el6_4.2.x86_64 | grep -v -E '(man|doc)'
        /etc/cron.d/raid-check
        /etc/rc.d/init.d/mdmonitor
        /etc/sysconfig/raid-check
        /lib/udev/rules.d/65-md-incremental.rules
        /sbin/mdadm
        /sbin/mdmon
        /usr/sbin/raid-check
        /var/run/mdadm

    Is the subject hardcoded, or is it possible to change it?
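
    The subject line is generated inside mdadm itself, but --monitor can hand each event to a script instead of (or as well as) mailing it, which allows any subject you like. A sketch only: the helper path, recipient address and use of hostname -i are assumptions, not part of the original setup.

        #!/bin/bash
        # /usr/local/bin/mdadm-alert (hypothetical path)
        # mdadm --monitor passes: event name, md device, and optionally the member device
        EVENT="$1"; DEVICE="$2"; MEMBER="$3"
        IP=$(hostname -i)
        echo "Event $EVENT on $DEVICE $MEMBER" | \
            mail -s "mdadm monitoring from $IP" root@example.com

    The crontab line would then use --program (synonym: --alert) to point at the script, e.g. /sbin/mdadm --monitor --scan -1 --program /usr/local/bin/mdadm-alert.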

    Read the article

  • "mv: cannot stat file" in for loop

    - by F.C.
    I wanted to rename a lot of files following a pattern, so I tried this for loop:

        $ for f in *; do mv \""$f"\" \""HouseMD-S06E${f#*Episode }"\"; done

    But I got this error:

        mv: cannot stat `"House MD Season 6 Episode 01 - Broken (Parts 1 & 2).avi"': No such file or directory

    So what I did was echo the mv commands to a file like this:

        $ for f in *; do echo mv \""$f"\" \""HouseMD-S06E${f#*Episode }"\">>mv.txt; done

    and then run the file with source. Any ideas why the first for loop didn't work and how I can fix it?
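
    For what it's worth, the escaped \" characters are the likely culprit: they make literal double-quote characters part of the filename mv is asked to operate on, so it stats a name that doesn't exist on disk (which is also why the echo/source variant works, since the shell re-parses the quotes on the second pass). A sketch of the same rename without the extra quoting, dry-run first:

        # Print what would happen; drop 'echo' once the output looks right
        for f in *Episode*; do
            echo mv -- "$f" "HouseMD-S06E${f#*Episode }"
        done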

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy, repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open, 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each one to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it again. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the estimated index time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11 Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try to determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes" and the Terminal window with the opensnoop result stops moving.
    I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions - what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__ I finally resolved the issue and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process id of the processes I was monitoring. I was looking at all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing, eventually the Spotlight icon would say 6 hours remaining, all mdworkers were gone, and mds was at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I still have no clue what is causing this behavior, but I did find a functional solution to the problem. Anyone with a similar problem - run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and the solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio-related files - Performer, Live, SD2 - things like that), I will edit again to let you all know. Thanks for any attempts to help, M
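
    For anyone hitting a similar wall, these are the command-line equivalents of the steps described above; mdutil ships with OS X, the volume name is the one from the post, and the PID is a placeholder:

        # Check indexing status, then force the index to be wiped and rebuilt
        mdutil -s /Volumes/ONM\ Data
        sudo mdutil -E /Volumes/ONM\ Data

        # Or switch indexing off entirely for that volume while testing
        sudo mdutil -i off /Volumes/ONM\ Data

        # Watch which files the importer is touching (1234 = PID of an mdworker process)
        sudo opensnoop -p 1234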

    Read the article

  • mdadm auto grow raid

    - by johannes
    I have a raid0/1 on lvm logical volumes. I resized the logical volumes. Now I want to resize the raid to use the complete logical volumes. This can be done with mdadm /dev/md? --grow -z newsize But somehow I can't figure out how to calculate the newsize argument. Is there a way to tell mdadm to grow to the biggest possible size? If not, how do I calculate the biggest possible size of the raid to use for the newsize argument?
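
    mdadm can work the maximum out for itself, so no manual calculation should be needed. A short sketch for a RAID-1 array; the md device name is a placeholder, and whatever sits on top of the array still has to be grown separately:

        # Grow the array to the largest size the member devices allow
        mdadm --grow /dev/md0 --size=max

        # Confirm the new size, then grow the layer above (filesystem, LVM PV, ...)
        mdadm --detail /dev/md0 | grep 'Array Size'
        # e.g. for an ext4 filesystem sitting directly on the array:
        resize2fs /dev/md0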

    Read the article

  • Large Users Profile - Windows 7 - Machine running slowly

    - by Richard
    The MD of one of our clients has a Windows 7 profile that is currently 14GB, thanks to videos, music and documents. The first thing we did was switch it from roaming to local. What I need to know is: now that the profile is local, am I wasting my time by reducing it any further? Does a large local user profile really make a difference to performance? Only the 4GB Outlook OST talks to the network frequently. Thanks in advance.... Richard

    Read the article

  • Independent SharePoint Trainer in DC ~ I conduct, teacher-led SHAREPOINT user training anywhere ~

    - by technical-trainer-pro
    Your options: interactive, hands-on VIRTUAL or CLASSROOM style training for all SharePoint users and site admin owners. I also develop customized classes tailored to the specific design of any SharePoint site - acting as the translator for those left to understand and use it on an everyday basis. Audience: users, clients, stakeholders, trainers. Areas: functionality, operations, management, user site customization, ITIL training, governance process, change management and industry- or client-specific scenarios. INDIVIDUAL RATE - $300 to join any class. GROUP RATE - $1500 for a private group of 6-10. Flexible scheduling. Contact me: [email protected]. Local to DC/MD/VA - can train hands-on anywhere.

    Read the article

  • Hyper-V Cluster with failover - NETWORKING

    - by Adam
    Hi. We are looking to set up a 3-node Hyper-V cluster with live migration and failover using:

        3 x Dell R710s with dual quad-core Xeons, 128 GB RAM and 6 NICs each
        1 x Dell MD3220i SAN

    We will be running this setup from a data center, so we are co-locating our kit. Can anybody explain how we should set up the network connections to make the system redundant? We have looked at this great article but are not sure how to get a 3-server setup working correctly and reliably: http://faultbucket.ca/2011/01/hyper-v-failover-cluster-setup/. I believe we need network connections for live migration, heartbeat, management, Hyper-V, etc. I assume that as we are running it from a DC all IPs will have to be public IPs? Our AD servers will be VMs, one on each Hyper-V server, set up not to be HA.

    Read the article

  • Install VLC 2.0.7 in CentOS 6.4?

    - by raaz
    I am keep failing in the installation process I have tried. I have started process as follows. yum install gcc dbus-glib-devel* lua-devel* libcddb wget http://download.videolan.org/pub/videolan/vlc/2.0.7/vlc-2.0.7.tar.xz tar -xf vlc-2.0.7.tar.xz && cd vlc-2.0.7 ./configure in the configure I am getting the error as follows configure: WARNING: No package 'libcddb' found: CDDB access disabled. checking for Linux DVB version 5... yes checking for DVBPSI... no checking gme/gme.h usability... no checking gme/gme.h presence... no checking for gme/gme.h... no checking for SID... no configure: WARNING: No package 'libsidplay2' found (required for sid). checking for OGG... no configure: WARNING: Library ogg >= 1.0 needed for ogg was not found checking for MUX_OGG... no configure: WARNING: Library ogg >= 1.0 needed for mux_ogg was not found checking for SHOUT... no configure: WARNING: Library shout >= 2.1 needed for shout was not found checking ebml/EbmlVersion.h usability... no checking ebml/EbmlVersion.h presence... no checking for ebml/EbmlVersion.h... no checking for LIBMODPLUG... no configure: WARNING: No package 'libmodplug' found No package 'libmodplug' found. checking mpc/mpcdec.h usability... no checking mpc/mpcdec.h presence... no checking for mpc/mpcdec.h... no checking mpcdec/mpcdec.h usability... no checking mpcdec/mpcdec.h presence... no checking for mpcdec/mpcdec.h... no checking for libcrystalhd/libcrystalhd_if.h... no checking mad.h usability... no checking mad.h presence... no checking for mad.h... no configure: error: Could not find libmad on your system: you may get it from http://www.underbit.com/products/mad/. Alternatively you can use --disable-mad to disable the mad plugin. [root@localhost vlc-2.0.7]# So I went to libmad http location and downloaded it and while doing make it gave me the errors.There are no errors at ./configure with libmad but make not going through. [root@localhost libmad-0.15.0b]# make (sed -e '1s|.*|/*|' -e '1b' -e '$s|.*| */|' -e '$b' \ -e 's/^.*/ *&/' ./COPYRIGHT; echo; \ echo "# ifdef __cplusplus"; \ echo 'extern "C" {'; \ echo "# endif"; echo; \ if [ ".-DFPM_INTEL" != "." ]; then \ echo ".-DFPM_INTEL" | sed -e 's|^\.-D|# define |'; echo; \ fi; \ sed -ne 's/^# *define *\(HAVE_.*_ASM\).*/# define \1/p' \ config.h; echo; \ sed -ne 's/^# *define *OPT_\(SPEED\|ACCURACY\).*/# define OPT_\1/p' \ config.h; echo; \ sed -ne 's/^# *define *\(SIZEOF_.*\)/# define \1/p' \ config.h; echo; \ for header in version.h fixed.h bit.h timer.h stream.h frame.h synth.h decoder.h; do \ echo; \ sed -n -f ./mad.h.sed ./$header; \ done; echo; \ echo "# ifdef __cplusplus"; \ echo '}'; \ echo "# endif") >mad.h make all-recursive make[1]: Entering directory `/home/raja/Downloads/libmad-0.15.0b' make[2]: Entering directory `/home/raja/Downloads/libmad-0.15.0b' if /bin/sh ./libtool --mode=compile gcc -DHAVE_CONFIG_H -I. -I. -I. -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF ".deps/version.Tpo" \ -c -o version.lo `test -f 'version.c' || echo './'`version.c; \ then mv -f ".deps/version.Tpo" ".deps/version.Plo"; \ else rm -f ".deps/version.Tpo"; exit 1; \ fi mkdir .libs gcc -DHAVE_CONFIG_H -I. -I. -I. 
    -DFPM_INTEL -DASO_ZEROCHECK -Wall -march=i486 -g -O -fforce-mem -fforce-addr -fthread-jumps -fcse-follow-jumps -fcse-skip-blocks -fexpensive-optimizations -fregmove -fschedule-insns2 -fstrength-reduce -MT version.lo -MD -MP -MF .deps/version.Tpo -c version.c -fPIC -DPIC -o .libs/version.lo
        cc1: error: unrecognized command line option "-fforce-mem"
        make[2]: *** [version.lo] Error 1
        make[2]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/raja/Downloads/libmad-0.15.0b'
        make: *** [all] Error 2

    How can I resolve this issue and install VLC on my CentOS box? I am using CentOS 6.4. Thank you.
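
    The -fforce-mem option was dropped from GCC years ago, so the CentOS 6 compiler rejects the stock libmad optimization flags. A sketch of the usual workaround - strip the obsolete flag from the generated Makefile (or hand configure plain CFLAGS) and rebuild; the path follows the question:

        cd ~/Downloads/libmad-0.15.0b
        ./configure
        # Remove the optimization flag the modern compiler no longer understands
        sed -i 's/-fforce-mem//g' Makefile
        make && sudo make install

        # Alternative: give configure a plain set of flags up front
        # CFLAGS="-O2 -fPIC" ./configure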

    Read the article

  • Installing gitlab on Debian 6.0.5

    - by helmus
    I am using the following directions in an attempt to install gitlab on Debian 6.0.5: https://github.com/gitlabhq/gitlabhq/blob/stable/doc/installation.md

    I am getting an error when I run the following command:

        sudo -u gitlab bundle exec rake gitlab:app:setup RAILS_ENV=production
        WARNING: #<ArgumentError: Illformed requirement ["#<Syck::DefaultKey:0x00000004b52198> 1.1.4"]>
        # -*- encoding: utf-8 -*-
        Gem::Specification.new do |s|
          s.name = %q{carrierwave}
          s.version = "0.6.2"
          s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
          s.authors = ["Jonas Nicklas"]
          ....more error....
          s.add_dependency(%q<mini_magick>, [">= 0"])
          s.add_dependency(%q<rmagick>, [">= 0"])
        end
        end
        WARNING: Invalid .gemspec format in '/usr/local/lib/ruby/gems/1.9.1/specifications/carrierwave-0.6.2.gemspec'
        Could not locate Gemfile

    Some pointers to what could cause this would be much appreciated. I have only a little experience with RoR and it seems to be related to that.
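
    The Illformed requirement / Syck::DefaultKey warnings usually come from gemspecs written by an older RubyGems, and "Could not locate Gemfile" simply means the rake task was not run from the gitlab application directory. A sketch of the usual cleanup; the gem version comes from the output above, but the /home/gitlab/gitlab path is an assumption about where the app was checked out:

        # Rewrite the offending gemspec with the current RubyGems
        sudo gem update --system
        sudo gem pristine carrierwave --version 0.6.2

        # Run bundler and rake from the directory that actually contains the Gemfile
        cd /home/gitlab/gitlab
        sudo -u gitlab -H bundle install --deployment --without development test
        sudo -u gitlab -H bundle exec rake gitlab:app:setup RAILS_ENV=production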

    Read the article

  • How to erase a SSD to restore factory performance in Linux?

    - by Andy B
    Due to big performance issues with an mdraid-1 array, I'd like to pull one of the devices (a Samsung 840 Pro) out of the array, erase it to restore factory performance, and re-add it to the array. The reason I want to do this to one of the SSDs is that the poor performance seems to be related to one specific SSD of the two (although they are the same brand, model and firmware version). But how do I erase an SSD from Linux? I should mention that hdparm indicates that both drives are frozen at this time. Maybe that is because they are part of an md array? Thanks in advance!
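
    A sketch of the ATA secure-erase procedure with hdparm, assuming the disk has first been removed from the md array and that /dev/sdX (with member partition /dev/sdX1) is the SSD in question - this wipes the drive, so the device name must be triple-checked, and the md/partition names here are placeholders:

        # Take the member out of the mirror first
        mdadm /dev/md0 --fail /dev/sdX1 --remove /dev/sdX1

        # The drive must read 'not frozen'; a suspend/resume cycle usually unfreezes it
        hdparm -I /dev/sdX | grep -i frozen

        # Set a temporary security password, then issue the erase (seconds on most SSDs)
        hdparm --user-master u --security-set-pass p /dev/sdX
        hdparm --user-master u --security-erase p /dev/sdX

        # Re-add the member and let md resync
        mdadm /dev/md0 --add /dev/sdX1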

    Read the article

  • Commercial NAS RAID1 disks moved to Software Raid system?

    - by Rolnik
    I've got a couple of commercial NAS boxes (a ReadyNAS Duo and a D-Link DNS-323) and I'm wondering whether they, or any other NAS, are suitable for having their RAIDed disks moved to a software-based NAS. To be specific, I'm a big fan of the (largely) Debian-based Ubuntu. Can the aforementioned NAS drives be migrated to Ubuntu (e.g. using the mdadm Linux command)? Secondly, is there any commercial NAS that can be migrated over? Incidentally, here is a link to somebody who succeeded in such a migration: http://www.linuxquestions.org/questions/slackware-14/moving-raid1-drives-into-computer-with-same-md-numbers-862312/ The specific scenario I'd like to prepare for is the eventual (sudden) death of one of the NAS motherboards.

    Read the article

  • Dell Powervault MD3000 - Not sharing Files between servers

    - by Kevin
    I'm a developer who has to set up a Dell Powervault MD3000 due to lack of resources. I have connected the Powervault to 2 Dell 2950 servers via the SAS cables. I performed the setup using Dell's MD Storage Manager software (4 disks, RAID 5 with hot spare). Then I added the disks using Windows 2003 disk management (Basic, not dynamic disk and formatted with NTFS). When I add files to the array from one server, they are not visible on the other server (and vice-versa). Is the error in the windows disk management configuration?

    Read the article

  • Calculate age in days/months/years in OpenOffice

    - by Sanjay
    I need to find an age in days / months / years in OpenOffice. There is DATEDIF() in Microsoft Excel; you can use it to find the difference in days/months/years between two dates.

    Age calculation: you can calculate a person's age based on their birthday and today's date. The calculation uses the DATEDIF() function. DATEDIF() is not documented in Excel 5, 7 or 97, but it is in 2000. (Makes you wonder what else Microsoft forgot to tell us!)

        Birth date :  01-Jan-60
        Years lived : 52    =DATEDIF(C8,TODAY(),"y")
        and the months : 4  =DATEDIF(C8,TODAY(),"ym")
        and the days : 30   =DATEDIF(C8,TODAY(),"md")

    One can calculate with the formula below, but it is cumbersome for working out the months. This method gives you an age which may have decimal places representing the months; if the age is 20.5, the .5 represents 6 months.

        Birth date : 01-Jan-60
        Age is : 52.41      =(TODAY()-C23)/365.25

    Read the article

  • LVM mirroring VS RAID1

    - by syrenity
    Hi. Having learned a bit about LVM mirroring, I thought about replacing the current RAID-1 scheme I'm using to gain some flexibility. The problem is that, according to what I found on the Internet, LVM is: 1) slower than RAID-1, at least in reading (as only a single volume is used for reading); 2) unreliable across power interruptions, and requires disk cache disabling to prevent data loss (http://www.joshbryan.com/blog/2008/01/02/lvm2-mirrors-vs-md-raid-1/). Also it seems, at least according to several setup guides I read (http://www.tcpdump.com/kb/os/linux/lvm-mirroring/intro.html), that one actually requires a third disk for storing the LVM mirror log. This makes the setup unusable on 2-disk installations, and reduces the number of usable mirror disks when more disks are available. Can anyone comment on the above facts, and share their experience of using LVM mirroring? Thanks.
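
    On the third-disk point: the mirror log does not have to live on a separate disk. A sketch showing the two common alternatives (volume group and LV names are made up):

        # Keep the log in memory: no extra device, at the cost of a full resync after every reboot
        lvcreate -m1 --mirrorlog core -L 100G -n lv_data vg0

        # Or mirror the log itself across the same two disks (supported on reasonably recent LVM2)
        lvcreate -m1 --mirrorlog mirrored -L 100G -n lv_data vg0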

    Read the article

  • Build a user's profile directory on creation in batch

    - by Moses
    I have a batch script that I use when I set up new Windows 7 PCs. It creates a user based on a variable, creates a folder on their desktop, then shares it:

        @echo off
        SET /p unitnumber="Enter unit number: "
        net user unit%unitnumber% password /add /expire:never
        MD "C:/Users/unit%unitnumber%/Desktop/Accounting #%unitnumber%"
        runas /user:administrator "net share "Accounting#%unitnumber%"="C:/Users/unit%unitnumber%/Desktop/Accounting#%unitnumber""

    I discovered that the share that is created gets overwritten when the newly created user first logs on, because Windows builds their profile directory at that time. Is there any way to initiate a build of a user's profile directory from the batch file just after creating it? The only thing that looks useful is the /homedir:pathname switch for the net user command, but I believe that option assumes the directory already exists. Other than that, web research hasn't been fruitful. I'd be happy to use whatever gets this done, as long as I can incorporate/launch it from the batch file. Any suggestions?

    Read the article

  • CMD Command to create folder for each file and move file into folder

    - by Tom
    I need a command that can be run from the command line to create a folder for each file (based on the file-name) in a directory and then move the file into the newly created folders. Example:

    Starting folder:

        Dog.jpg
        Cat.jpg

    The following command works great at creating a folder for each filename in the current working directory:

        for %i in (*) do md "%~ni"

    Result folder:

        \Dog\
        \Cat\
        Dog.jpg
        Cat.jpg

    I need to take this one step further and move the file into the folder. What I want to achieve is:

        \Dog\Dog.jpg
        \Cat\Cat.jpg

    Can someone help me with one command to do all of this?

    Read the article

  • How to identify RAID (5 or 6) controllers that allow dynamic resize of the array

    - by David Pfeffer
    I'm building a server with a RAID5 array, based on a hardware controller. I want to be able to later add additional disks and have the array rebalance across all of the disks, enlarging the usable size. I also want to be able to later upgrade to bigger disks (one at a time, of course) and then expand the array to fill the entire drive. These features are available in Linux software raid (md). I've also heard they're available in some hardware controllers. Currently, I own the Adaptec RAID 3805 card and the 3ware 9650se card. I'd prefer to use the Adaptec if possible, but I can't find if either of these cards offer this feature. If they don't, are there other affordable (read as: sub-$600) RAID cards available that can accomplish this?
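
    For comparison, this is what the equivalent operations look like with Linux md, which is the behavior either card would need to match. A sketch only: device names are placeholders, older mdadm versions want the --backup-file during a restripe, and the filesystem still has to be grown afterwards:

        # Add a new disk and restripe the RAID-5 across it
        mdadm --add /dev/md0 /dev/sde1
        mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.bak

        # After replacing every member with a bigger disk, let the array claim the space
        mdadm --grow /dev/md0 --size=max

        # Then grow the filesystem on top, e.g. ext4:
        resize2fs /dev/md0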

    Read the article

  • Software RAID 1 broken, how do I fix this?

    - by Edward
    I'm running CentOS 6 x86_64. There is a software RAID 1 being used on the two internal 80GB drives. I got the following e-mail sent to me:

        A DegradedArray event had been detected on md device /dev/md1.

        Faithfully yours, etc.

        P.S. The /proc/mdstat file currently contains the following:

        Personalities : [raid1]
        md0 : active raid1 sda1[0]
              511988 blocks super 1.0 [2/1] [U_]
        md1 : active raid1 sda2[0]
              8190968 blocks super 1.1 [2/1] [U_]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md4 : active raid1 sdc1[0] sdb1[1]
              1953512400 blocks super 1.2 [2/2] [UU]
        md3 : active raid1 sdd5[1] sda5[0]
              61224892 blocks super 1.1 [2/2] [UU]
              bitmap: 1/1 pages [4KB], 65536KB chunk
        md2 : active raid1 sdd3[1] sda3[0]
              8190968 blocks super 1.1 [2/2] [UU]
        unused devices: <none>

    The system appears to have booted fine and is working. The two drives' content did not change at all. I only removed and reinstalled them while I was booted on the CentOS Live DVD. How do I get the array working again?
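
    A sketch of how a missing mirror half is normally put back. The mdstat output above does not say which device dropped out of md0 and md1, so the /dev/sdd1 and /dev/sdd2 names below are guesses that should be verified with mdadm --examine first:

        # Check what the candidate partitions last belonged to
        mdadm --examine /dev/sdd1 /dev/sdd2

        # Re-attach them to the degraded arrays and watch the rebuild
        mdadm /dev/md0 --add /dev/sdd1
        mdadm /dev/md1 --add /dev/sdd2
        watch cat /proc/mdstat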

    Read the article

  • Directories will list, but are not recognized by cd

    - by mdimond
    New to the terminal and having problems out of the gate. I'm using Terminal 2.1.2 on a Mac running 10.6.8. "ls Documents" lists the contents, but when I try to change directories, which I have tried several different ways, I get the following results:

        new-host-2:~ MDimond$ cd.
        -bash: cd.: command not found
        new-host-2:~ MDimond$ cd./Users/MDimond/Documents
        -bash: cd./Users/MDimond/Documents: No such file or directory
        new-host-2:~ MDimond$ cd. /Documents
        -bash: cd.: command not found

    /usr/bin has the cd command listed; /bin does not. Any assistance would be greatly appreciated. Thanks, md
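
    For reference, cd is a shell builtin (the /usr/bin entry is a red herring), and "cd." fails simply because cd needs a space before its argument. A few forms that behave as intended:

        cd Documents                    # relative to the current directory
        cd /Users/MDimond/Documents     # absolute path
        cd ~/Documents                  # ~ expands to the home directory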

    Read the article

  • "(401)Authorization Required" when making a web service call using Axis

    - by Arun P Johny
    Hi, I'm using Apache Axis to connect to my SugarCRM instance. When I try to connect to the instance, it throws the following exception:

        Exception in thread "main" AxisFault
        faultCode: {http://xml.apache.org/axis/}HTTP
        faultSubcode:
        faultString: (401)Authorization Required
        faultActor:
        faultNode:
        faultDetail:
            {}:return code: 401
            <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
            <html><head>
            <title>401 Authorization Required</title>
            </head><body>
            <h1>Authorization Required</h1>
            <p>This server could not verify that you are authorized to access the document requested. Either you supplied the wrong credentials (e.g., bad password), or your browser doesn't understand how to supply the credentials required.</p>
            </body></html>
            {http://xml.apache.org/axis/}HttpErrorCode:401

        (401)Authorization Required
            at org.apache.axis.transport.http.HTTPSender.readFromSocket(HTTPSender.java:744)
            at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.java:144)
            at org.apache.axis.strategies.InvocationStrategy.visit(InvocationStrategy.java:32)
            at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
            at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
            at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
            at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
            at org.apache.axis.client.Call.invoke(Call.java:2767)
            at org.apache.axis.client.Call.invoke(Call.java:2443)
            at org.apache.axis.client.Call.invoke(Call.java:2366)
            at org.apache.axis.client.Call.invoke(Call.java:1812)
            at org.beanizer.sugarcrm.SugarsoapBindingStub.get_server_info(SugarsoapBindingStub.java:1115)
            at com.greytip.sugarcrm.GreytipCrm.main(GreytipCrm.java:42)

    This basically says that I do not have authorization for the resource. The same code works fine in my testing environment.

        Sugarsoap service = new SugarsoapLocator();
        SugarsoapPortType port = service.getsugarsoapPort(new java.net.URL(SUGAR_CRM_LOCATION + "/soap.php"));
        System.out.println(port.get_server_info().getVersion());
        User_auth userAuth = new User_auth();
        userAuth.setUser_name("user_name");
        MessageDigest md = MessageDigest.getInstance("MD5");
        String password = getHexString(md.digest("password".getBytes()));
        userAuth.setPassword(password);
        // userAuth.setVersion("0.1");
        Entry_value login = port.login(userAuth, "myAppName", null);
        String sessionID = login.getId();

    The code above is used to connect to the SugarCRM installation. Here, the line System.out.println(port.get_server_info().getVersion()); is throwing the exception. One difference I noticed between the test and production environments is that when I open the SOAP URL in a browser, the production site pops up an 'Authentication Required' dialog. When I give my proxy username and password in this popup, it shows the SOAP request details. The same applies to the login URL: first it asks for authentication, then it takes me to the SugarCRM login page. Is it a server security setting? If it is, how do I supply this username and password from Java in a web service call? The 'Authentication Required' popup is the same as the one that appears when you try to access the Tomcat manager through a browser. Thanks
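
    A quick way to confirm that the production server has HTTP Basic authentication in front of SugarCRM, independent of the Java code, is to replay the SOAP endpoint request from the shell; the URL and credentials below are placeholders:

        # Without credentials this should reproduce the 401
        curl -i https://crm.example.com/soap.php

        # With the same username/password the browser popup accepts, expect a 200
        curl -i -u someuser:somepass https://crm.example.com/soap.php

    If that is what is happening, the Axis client has to send those same HTTP credentials (typically by setting the username and password on the generated stub) in addition to the SugarCRM application login performed in the code above.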

    Read the article
