Search Results

Search found 777 results on 32 pages for 'volumes'.

Page 9 of 32

  • My sound stopped working today, how can I fix it?

    - by Oli
    This seems to be a problem with pulseaudio. I was logged in over VNC on my phone and started playing a video this caused X to crash (as sometimes happens). I restarted and suddenly the sound doesn't work. I have a Intel HDA/Realtek ALC889 00:1b.0 Audio device: Intel Corporation 82801JI (ICH10 Family) HD Audio Controller alsamixer is detecting this just fine. PulseAudio doesn't detect this alsa device so is using auto_null as the default sink (logs below). When I properly kill PulseAudio (tell it not to auto-start) direct ALSA communication with the sound card works just fine. speaker-test, for example, works. So the hardware and ALSA layers are fine IMO. In the logs, it seems that the card might be "busy" but I really don't know how or why it would be now (and never before). Is there an ALSA lock file somewhere that it still there because of my crash? I just ran sudo fuser /dev/snd/* and saw this: oli@bert:~$ sudo fuser /dev/snd/* /dev/snd/controlC0: 1884 /dev/snd/pcmC0D0c: 1884m /dev/snd/timer: 1884 A look at the process list (ps aux | grep 1884) tells me process 1884 is arecord -c 1 -f S16_LE -r 8000 -t raw. No idea what this is or why it's running. When I try and kill arecord (as root), it just respawns and rebinds on the hardware. I'm in a very annoying situation where I don't know what is going on and don't know how to find out. I'm open to all suggestions to get this working again. Fire away. And here's what I get when I stop PA auto-loading, kill it and then start it with -vvvv. oli@bert:~$ pulseaudio -vvvvv I: main.c: setrlimit(RLIMIT_NICE, (31, 31)) failed: Operation not permitted I: main.c: setrlimit(RLIMIT_RTPRIO, (9, 9)) failed: Operation not permitted D: core-rtclock.c: Timer slack is set to 50 us. D: core-util.c: RealtimeKit worked. I: core-util.c: Successfully gained nice level -11. I: main.c: This is PulseAudio 0.9.21-63-gd3efa-dirty D: main.c: Compilation host: x86_64-pc-linux-gnu D: main.c: Compilation CFLAGS: -g -O2 -g -Wall -O3 -Wall -W -Wextra -pipe -Wno-long-long -Winline -Wvla -Wno-overlength-strings -Wunsafe-loop-optimizations -Wundef -Wformat=2 -Wlogical-op -Wsign-compare -Wformat-security -Wmissing-include-dirs -Wformat-nonliteral -Wold-style-definition -Wpointer-arith -Winit-self -Wdeclaration-after-statement -Wfloat-equal -Wmissing-prototypes -Wstrict-prototypes -Wredundant-decls -Wmissing-declarations -Wmissing-noreturn -Wshadow -Wendif-labels -Wcast-align -Wstrict-aliasing=2 -Wwrite-strings -Wno-unused-parameter -ffast-math -Wp,-D_FORTIFY_SOURCE=2 -fno-common -fdiagnostics-show-option D: main.c: Running on host: Linux x86_64 2.6.38-rc3 #1 SMP Tue Feb 1 10:53:04 GMT 2011 D: main.c: Found 8 CPUs. I: main.c: Page size is 4096 bytes D: main.c: Compiled with Valgrind support: no D: main.c: Running in valgrind mode: no D: main.c: Running in VM: no D: main.c: Optimised build: yes D: main.c: All asserts enabled. I: main.c: Machine ID is 8310740c4729ef474fe5ecec4bbf5a6b. I: main.c: Session ID is 8310740c4729ef474fe5ecec4bbf5a6b-1297338553.571075-1050119523. I: main.c: Using runtime directory /home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-runtime. I: main.c: Using state directory /home/oli/.pulse. I: main.c: Using modules directory /usr/lib/pulse-0.9.21/modules. I: main.c: Running in system mode: no I: main.c: Fresh high-resolution timers available! Enjoy ol' chap! I: cpu-x86.c: CPU flags: CMOV MMX SSE SSE2 SSE3 SSSE3 SSE4_1 SSE4_2 I: svolume_mmx.c: Initialising MMX optimized functions. I: remap_mmx.c: Initialising MMX optimized remappers. 
I: svolume_sse.c: Initialising SSE2 optimized functions. I: remap_sse.c: Initialising SSE2 optimized remappers. I: sconv_sse.c: Initialising SSE2 optimized conversions. D: memblock.c: Using shared memory pool with 1024 slots of size 64.0 KiB each, total size is 64.0 MiB, maximum usable slot size is 65472 D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes.tdb' I: module-device-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-device-volumes'. I: module.c: Loaded "module-device-restore" (index: #0; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes.tdb' I: module-stream-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-stream-volumes'. I: module.c: Loaded "module-stream-restore" (index: #1; argument: ""). D: database-tdb.c: Opened TDB database '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database.tdb' I: module-card-restore.c: Sucessfully opened database file '/home/oli/.pulse/8310740c4729ef474fe5ecec4bbf5a6b-card-database'. I: module.c: Loaded "module-card-restore" (index: #2; argument: ""). I: module.c: Loaded "module-augment-properties" (index: #3; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards. I: module.c: Loaded "module-udev-detect" (index: #4; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-bluetooth-discover.so': success D: dbus-util.c: Successfully connected to D-Bus system bus ba7c9a1f90b3d49d930bca2100000015 as :1.62 D: bluetooth-util.c: dbus: interface=org.freedesktop.DBus, path=/org/freedesktop/DBus, member=NameAcquired D: bluetooth-util.c: Bluetooth daemon is apparently not available. I: module.c: Loaded "module-bluetooth-discover" (index: #5; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-esound-protocol-unix.so': success I: module.c: Loaded "module-esound-protocol-unix" (index: #6; argument: ""). I: module.c: Loaded "module-native-protocol-unix" (index: #7; argument: ""). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-gconf.so': success I: module.c: Loaded "module-gconf" (index: #8; argument: ""). I: module-default-device-restore.c: Saved default sink 'auto_null' not existant, not restoring default sink setting. I: module-default-device-restore.c: Saved default source 'auto_null.monitor' not existant, not restoring default source setting. I: module.c: Loaded "module-default-device-restore" (index: #9; argument: ""). I: module.c: Loaded "module-rescue-streams" (index: #10; argument: ""). D: module-always-sink.c: Autoloading null-sink as no other sinks detected. I: sink.c: Created sink 0 "auto_null" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: sink.c: device.description = "Dummy Output" I: sink.c: device.class = "abstract" I: sink.c: device.icon_name = "audio-card" D: core-subscribe.c: Dropped redundant event due to change event. 
I: source.c: Created source 0 "auto_null.monitor" with sample spec s16le 6ch 44100Hz and channel map front-left,front-left-of-center,front-center,front-right,front-right-of-center,rear-center I: source.c: device.description = "Monitor of Dummy Output" I: source.c: device.class = "monitor" I: source.c: device.icon_name = "audio-input-microphone" D: module-null-sink.c: Thread starting up I: module.c: Loaded "module-null-sink" (index: #11; argument: "sink_name=auto_null sink_properties='device.description="Dummy Output"'"). I: module.c: Loaded "module-always-sink" (index: #12; argument: ""). I: module.c: Loaded "module-intended-roles" (index: #13; argument: ""). D: module-suspend-on-idle.c: Sink auto_null becomes idle, timeout in 5 seconds. I: module.c: Loaded "module-suspend-on-idle" (index: #14; argument: ""). I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session1" D: module-console-kit.c: Added new session /org/freedesktop/ConsoleKit/Session1 I: module.c: Loaded "module-console-kit" (index: #15; argument: ""). I: module.c: Loaded "module-position-event-sounds" (index: #16; argument: ""). D: dbus-util.c: Successfully connected to D-Bus session bus efbffc6788fad56cfd64d40c00000018 as :1.182 D: main.c: Got org.pulseaudio.Server! I: main.c: Daemon startup complete. I: client.c: Created 1 "Native client (UNIX socket client)" I: client.c: Created 2 "Native client (UNIX socket client)" D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: protocol-native.c: Protocol version: remote 16, local 16 I: protocol-native.c: Got credentials: uid=1000 gid=1000 success=1 D: protocol-native.c: SHM possible: yes D: protocol-native.c: Negotiated SHM: yes D: module-augment-properties.c: Looking for .desktop file for gnome-volume-control-applet D: module-augment-properties.c: Looking for .desktop file for gnome-settings-daemon D: core-subscribe.c: Dropped redundant event due to change event. I: module-suspend-on-idle.c: Sink auto_null idle for too long, suspending ... D: sink.c: Suspend cause of sink auto_null is 0x0004, suspending Note the one section that seems to find the hardware but says it's busy (no idea if this is relevant). D: cli-command.c: Checking for existance of '/usr/lib/pulse-0.9.21/modules/module-udev-detect.so': success D: module-udev-detect.c: /dev/snd/controlC0 is accessible: yes D: module-udev-detect.c: /devices/pci0000:00/0000:00:1b.0/sound/card0 is busy: yes I: module-udev-detect.c: Found 1 cards.
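
    A hedged aside on debugging this: the PID and device paths below are the ones from the question and will differ elsewhere. The idea is to trace the respawning arecord back to its parent and stop that instead, then let PulseAudio re-detect the card.

      # who is holding the ALSA devices?
      sudo fuser -v /dev/snd/*

      # find the parent of the arecord process (1884 in the question) and see what it is;
      # killing arecord itself is pointless if the parent keeps respawning it
      ps -o ppid= -p 1884
      ps -fp "$(ps -o ppid= -p 1884)"

      # once the offending service/session process is stopped, restart PulseAudio
      pulseaudio -k; pulseaudio --start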

    Read the article

  • Amazon EC2 migration from one region to the other

    - by Gnanam
    I'm using the following Amazon EC2 resources in the US East (Virginia) region: 1 running instance, 1 Elastic IP, 2 EBS volumes, 100 EBS snapshots, 1 key pair, 2 security groups, and 5 of my own AMIs (customized based on my application stack). My instance runs a Linux distribution (CentOS) and my AMIs are S3-backed. Both EBS volumes are mounted on this running instance. We're planning to migrate our deployment to the US West region. Because Amazon EC2 resources are not shared across regions, my questions are: What are all the factors that I need to consider in advance? What are the recommended ways of migrating each EC2 resource from one region to the other? Are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.
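
    Not from the question itself, but a rough sketch of one possible path using the AWS CLI, assuming cross-region snapshot/AMI copy is available to you; all IDs below are placeholders. Elastic IPs, security groups and key pairs are region-scoped and would have to be recreated or re-imported in the target region.

      # copy an EBS snapshot from us-east-1 into us-west-1
      aws ec2 copy-snapshot --source-region us-east-1 \
          --source-snapshot-id snap-0123456789abcdef0 --region us-west-1

      # copy a registered AMI across regions, then launch from it in the target region
      aws ec2 copy-image --source-region us-east-1 \
          --source-image-id ami-0123456789abcdef0 \
          --name "my-app-stack-copy" --region us-west-1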

    Read the article

  • EBS+RAID10+XFS slower than EBS+RAID10+EXT3 using MySQL?

    - by Johann Tagle
    We're currently using EC2 with 16 EBS volumes in RAID10 configuration for our MySQL data. I know some people don't recommend to put EBS volumes to RAID but that's not what I'm concerned about at the moment. Current format is ext3, but we're experimenting with moving to xfs, given many reports that it is faster. However, we're actually experiencing a performance degradation when the partition was converted to xfs - a benchmark run with inserts, updates, selects and deletes was more than 10 seconds slower using xfs. Any idea what could be the problem? Below is the fstab entry (really only changed ext3 to xfs). Database tables are innodb and we are using innodb_file_per_table. /dev/mapper/vg_data-lv_data /data xfs noatime 0 0 Thanks.
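
    Not an answer from the thread, just a sketch of mount options that are commonly tried when benchmarking InnoDB on XFS; whether they help depends on the workload, and nobarrier is only sensible when the underlying storage is considered durable on its own.

      /dev/mapper/vg_data-lv_data  /data  xfs  noatime,nobarrier,logbufs=8,logbsize=256k  0 0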

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this and how to further debug it, I'm asking for your ideas on the topic. I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through libvirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as a container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split over three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap. Here is a diagram of the block device configuration stack: sda (Physical HDD) - md0 (RAID1) - md1 (RAID1) sdb (Physical HDD) - md0 (RAID1) - md1 (RAID1) md0 (Boot RAID) - ext4 (/boot) md1 (Data RAID) - LUKS container - LVM Physical volume - LVM volume hypervisor-root - LVM volume hypervisor-home - LVM volume hypervisor-swap - … (Virtual machine volumes) The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series. The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems. On the old setup the following things were different: Debian Lenny was the OS for both the hypervisor and almost all guests; Xen software virtualisation (therefore also no Virtio); no libvirt management; different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore); we had no IPv6 connectivity; neither on the hypervisor nor in the guests did we have noticeable I/O latency problems. According to the datasheets, the current hard drives and the ones in the old machine have an average latency of 4.12 ms.
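
    As a hedged starting point for narrowing this down (device and volume names are taken from the question or are placeholders; ioping may need to be installed separately), one can check partition and LVM alignment on the 4K-sector drive and measure latency at each layer of the stack:

      # alignment of the first partition on the 4 KiB-sector drive
      parted /dev/sda align-check optimal 1
      cat /sys/block/sda/queue/physical_block_size

      # offset at which LVM data starts inside the LUKS/RAID stack
      pvs -o +pe_start

      # latency at the hypervisor layer and inside a guest (VG name is a placeholder)
      ioping -c 10 /dev/vg0/hypervisor-root
      ioping -c 10 .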

    Read the article

  • LUNS access issue in ESX4 Cluster server

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS 6000 XV (two members in one pool) and verified that those volumes are easily detected by the iSCSI initiator software in Windows. But the problem is with ESX: the assigned disk is not visible on the ESX servers. Here is what I've done: 1. Created a cluster with HA & DRS enabled. 2. Added 3 ESX4 hosts. 3. Added and configured a VMkernel port on all 3 ESX4 hosts, with vMotion & FT enabled on the same adapter. 4. Went to the iSCSI storage adapter properties and enabled iSCSI. 5. Tried to discover the available storage with the controller IP under dynamic discovery, but could not see the assigned storage. Note: the same volume is accessible from Windows, so there should be no issue on the storage side, am I right? Note: I want to mount the same volume on all 3 ESX hosts. Please suggest. Thanks & Regards, Rashid Mustafa
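
    Not from the thread, but two things typically checked in this situation. On the EqualLogic side, each volume's access list has to allow every ESX host's iSCSI initiator (by IQN, IP or CHAP), and for a shared VMFS datastore, simultaneous access from multiple initiators has to be permitted on the volume. On the ESX side, a rescan from the service console sometimes shows whether the LUN reaches the host at all (the adapter name below is an assumption and varies per host):

      # rescan the software iSCSI adapter and list what the host can see
      esxcfg-rescan vmhba33
      esxcfg-scsidevs -l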

    Read the article

  • MacOS X 10.6 Portable Home Directory sync fails due to FileSync agent crashing

    - by tegbains
    On one of our cleanly installed Mac Pro machines running Mac OS X 10.6.6 connected to our Mac OS X 10.6.6 Server, syncing data using Portable Home Directories fails. It seems to be due to the FileSync agent crashing during the home sync. We get -41 and -8062 errors, which we suspect indicate that there is too much data or that the FileSync agent can't read the files. The user is the owner of the files and can read/write to all of them. < Logout 0:: [11/02/04 13:10:42.751] Error -41 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2. (source = NO) < Logout 0:: [11/02/04 13:10:42.758] Error -8062 copying /Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]. (source = NO) < Logout 1:: [11/02/04 13:10:42.758] -[DeepCopyContext deepCopyError:sourceError:sourceRef:]: error = -8062, wasSource = NO: return shouldContinue = NO
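
    A small sanity check, not from the thread, run on the server hosting the share: verify as the affected user that the specific files named in the log are actually readable, and look for non-printable characters hiding in the file names, which is consistent with the suspicion above that the agent can't read certain files (paths are the ones from the log):

      # can the user actually read the failing file?
      sudo -u earlpeng cp "/Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/2.2/[email protected]" /dev/null

      # show any non-printable characters in the directory's file names
      ls -lB "/Volumes/RCAUsers/earlpeng/Library/Mail/Mailboxes/email from old imac./Attachments/12081/"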

    Read the article

  • Come and see us at the Oracle Big Data Forum on April 5!

    - by Kinoa
    Big Data is coming to the forefront more and more often, and you would like to learn more about it? Generated from social networks, digital sensors and other mobile devices, Big Data (in other words, enormous volumes of data) is a goldmine of valuable information about your business and your customers' behaviour. Your challenge today is to manage the acquisition, organisation and understanding of these volumes of unstructured data, and to integrate them into your information system. You have questions? It all seems complex? Then the Oracle Big Data Forum, organised by Oracle and Intel, is made for you! We will cover several points: accelerating Big Data deployment through an integrated hardware and software approach; providing all the tools needed for the complete process, from data acquisition to reporting; integrating Big Data into your information system to deliver the essence of the information to users. We have put together a most appealing programme for April 5: 9:00 Welcome and badge pick-up 9:30 Big Data: The Industry View. Are you ready? Johan Hendrickx, Core Technology Director, Oracle EMEA Keynote: Big Data - Are you ready? George Lumpkin, Vice President of DW Product Management, Oracle Corporation Acquiring data into your Big Data with Hadoop and Oracle NoSQL Break Organise and structure the information within your Big Data with Big Data Connectors and Oracle Data Integrator Take advantage of data analytics on your Big Data with Oracle Endeca and Oracle Business Intelligence 13:00 Lunch cocktail Seating is limited, so remember to register now. Venue: Maison de la Chimie, 28 B, rue Saint Dominique, 75007 Paris

    Read the article

  • Disk boot failure in Windows 7 64-bit after installing the latest NVIDIA drivers

    - by Domchi
    I successfully installed the newest NVIDIA drivers (275.33) in Windows 7 64-bit and rebooted afterwards just in case. After the reboot, I got an error about a missing MBR. I disconnected the slave disk so that Windows doesn't get confused and got this message: DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER It seems that Windows doesn't recognize the main disk anymore. I booted from the Windows install disc, but the main disk isn't listed as a possible Windows location to repair, and I can't get to it from the recovery prompt. The BIOS does recognize it, and I'm able to see it if I run diskpart; however, "detail disk" in diskpart says that there are no volumes on the disk. I also tried bootrec /FixMbr, without effect, and bootrec /FixBoot, which gives the error message: Element not found. What else can I do? Why would diskpart say there are no volumes on the disk?
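
    Not from the thread, but a hedged sketch of what is commonly tried next from the install disc's recovery command prompt; if diskpart genuinely reports no volumes on the disk, the partition table itself may be damaged, and a recovery tool such as TestDisk would be the next stop.

      diskpart
        list disk
        select disk 0
        list partition
        exit
      bootrec /ScanOs
      bootrec /RebuildBcd
      bootsect /nt60 SYS /mbr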

    Read the article

  • UUID in Mountain Lion

    - by Naji
    I am trying to find my external HDD UUID in Mountain Lion but diskutil info /dev/disk1s1 returns: Najis-MacBook-Air:~ ****$ diskutil info disk1s1 Device Identifier: disk1s1 Device Node: /dev/disk1s1 Part of Whole: disk1 Device / Media Name: Untitled 1 Volume Name: My Book Escaped with Unicode: My%FF%FE%20%00Book Mounted: Yes Mount Point: /Volumes/My Book Escaped with Unicode: /Volumes/My%FF%FE%20%00Book File System Personality: NTFS Type (Bundle): ntfs Name (User Visible): Windows NT File System (NTFS) Partition Type: Windows_NTFS OS Can Be Installed: No Media Type: Generic Protocol: USB SMART Status: Not Supported Total Size: 2.0 TB (2000364240896 Bytes) (exactly 3906961408 512-Byte-Blocks) Volume Free Space: 212.5 GB (212506509312 Bytes) (exactly 415051776 512-Byte-Blocks) Device Block Size: 512 Bytes Read-Only Media: No Read-Only Volume: Yes Ejectable: Yes Whole: No Internal: No And there is no UUID. What is wrong exactly? Thank you.
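
    An aside, not from the thread: diskutil only prints a 'Volume UUID' line for filesystems it can derive one from, and with the stock NTFS driver on an external disk it may simply not expose one, which would explain the missing field rather than anything being wrong. A couple of hedged checks:

      # look for any UUID fields macOS does expose for this disk and its partition
      diskutil info disk1 | grep -i uuid
      diskutil info disk1s1 | grep -i uuid

      # the IOKit registry sometimes carries a UUID property for the media object
      ioreg -rc IOMedia -l | grep -i uuid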

    Read the article

  • mdadm - Recovering a 'split' RAID1 array

    - by Hamza
    I have two drives that used to be part of a single RAID1 volume but it appears that one of them went offline for some time, something I've noticed just now when I rebooted my system. I now seem to have two RAID volumes, as reported by: # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md126 : active raid1 sdc[1] 2096116 blocks super 1.2 [2/1] [_U] md127 : active (auto-read-only) raid1 sdb[0] 2096116 blocks super 1.2 [2/1] [U_] unused devices: <none> Not exactly sure where to go from here. How can I merge and re-sync these volumes without data loss? Thanks.
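
    Not from the thread, but the usual approach here, sketched under the assumption that you have a backup and have first checked which half holds the newer data: keep the array with the higher event count, stop the other one, and re-add its disk so it resyncs as a mirror member.

      # compare the event counters to see which copy is current
      mdadm --examine /dev/sdb /dev/sdc | grep -iE 'events|update time'

      # assuming md126 (on sdc) is the current one: stop the stale array
      mdadm --stop /dev/md127

      # add the stale disk back into the surviving array and let it resync
      mdadm /dev/md126 --add /dev/sdb
      cat /proc/mdstat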

    Read the article

  • LVM, Soft RAID1, and Replication?

    - by mtkoan
    Hi all, I am practicing putting together an HA file server. It is a Linux server with two 1.5 TB hard drives. My plan is to use LVM to manage the physical volumes into logical volumes for /, /home, and /var, then use md (soft RAID 1) to mirror the image onto the second HDD, and then use DRBD to mirror the entire setup to another server. Is this overkill? Would I be okay with just md and DRBD? The system will serve users' home directories (~100) and probably some groupware or another local intranet service. On my own machines I've always separated the root and /home partitions so that if I break something I can easily reinstall the OS. Should I follow that same theory here? If so I need LVM, because I really can't predict where we'll need more space, /var or /home.
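
    For reference, a minimal sketch of the layering being described (device names and sizes are placeholders): mirror the raw partitions with md first, make the md device an LVM physical volume, and carve the logical volumes out of that. DRBD, if used, would sit either below or above LVM depending on the design.

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      pvcreate /dev/md0
      vgcreate vg0 /dev/md0
      lvcreate -L 20G  -n root vg0
      lvcreate -L 1T   -n home vg0
      lvcreate -L 100G -n var  vg0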

    Read the article

  • Restore files from certain increments using Duplicity

    - by luckytaxi
    Given the following backup sets ... Found primary backup chain with matching signature chain: ------------------------- Chain start time: Tue Jun 21 11:27:26 2011 Chain end time: Tue Jun 21 11:27:59 2011 Number of contained backup sets: 2 Total number of contained volumes: 2 Type of backup set: Time: Num volumes: Full Tue Jun 21 11:27:26 2011 1 Incremental Tue Jun 21 11:27:59 2011 1 If I run the following command, it works (1308655646 was converted from Tue Jun 21 11:27:26 2011): duplicity --no-encryption --restore-time 1308655646 --file-to-restore ORIG_FILE \ file:///storage/test/ restored-file.txt However, if I run the following command, it restores from the latest set. duplicity --no-encryption --restore-time 2011-06-21T11:27:26 --file-to-restore \ ORIG_FILE file:///storage/test/ restored-file.txt What am I doing wrong with the time? I prefer the second option only because I don't want to have to do the conversion manually.
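
    One hedged guess, for what it's worth: duplicity's w3 datetime format expects the full string including a UTC offset, so a bare 2011-06-21T11:27:26 may not parse as the intended local time. Something like the following might behave as expected (the -04:00 offset is only an example; use the offset the backup host was in):

      duplicity --no-encryption --restore-time 2011-06-21T11:27:26-04:00 \
          --file-to-restore ORIG_FILE file:///storage/test/ restored-file.txt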

    Read the article

  • Does a successful exit of rsync -acvvv s d guarantee identical directory trees?

    - by user259774
    I have two volumes, one xfs and one ntfs; the ntfs volume was empty, and the xfs one had 10 subitems. I needed to sync them. I initially copied a few of the subitems by dragging them over in a GUI file manager. Several of the direct descendants which I had dragged finished, apparently. One I stopped before it was done, and the rest I cancelled while it still appeared to be gathering information about the files. Then I ran rsync -acvvv xmp/ nmp/, where xmp and nmp are the volumes' respective mountpoints, which exited with a 0 status. find xmp -printf x | wc -c and find nmp -printf x | wc -c both return 372926. My question is: am I guaranteed that the two drives' contents are identical?
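
    Not part of the question, but two hedged ways to double-check the result rather than rely on matching file counts alone: an rsync dry run that itemizes anything it would still transfer, and an independent content checksum pass (GNU tools assumed; note that rsync's 0 exit status plus -c vouches for file content, not for metadata an NTFS target cannot store).

      # should list nothing to transfer if the trees already match by checksum
      rsync -acvvn --itemize-changes xmp/ nmp/

      # independent content comparison
      (cd xmp && find . -type f -print0 | sort -z | xargs -0 md5sum) > /tmp/xmp.md5
      (cd nmp && md5sum -c --quiet /tmp/xmp.md5)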

    Read the article

  • Accessing a storage-side snapshot of a cluster-shared volume

    - by syneticon-dj
    From time to time I am in the situation where I need to get data back from storage-side snapshots of cluster shared volumes. I suppose I just never figured out a way to do it right, so I always needed to: 1. expose the shadow copy as a separate LUN; 2. offline the original CSV in the cluster; 3. un-expose the LUN carrying the original CSV; 4. make sure my cluster nodes have detected the new LUN and no longer list the original one; 5. add the volume to the list of cluster volumes and promote it to be a CSV; 6. copy off the data I need; 7. undo steps 5-1 to revert to the original configuration. This is quite tedious and requires downtime for the original volume. Is there a better way to do this without involving a separate host outside of the cluster?

    Read the article

  • Duplicating an instance into a new VPC from a Snapshot

    - by Remmus
    We have a group of instances in an Amazon VPC that we use for our live environment. We have a big release to do and want to test that the deployment will run smoothly. I have created a second VPC, created instances of the same size on the same private IPs, and then removed their original volumes and attached new volumes that were created from snapshots of the live environment. Unfortunately none of the instances will allow me to connect. They start running fine, but no system log output appears and I can't connect. The only thing I can think of is that the new instances were created from a new AMI, as the old one is deprecated due to new security fixes. Is this a problem? If so can I fix it in any way? And if this isn't a problem, does anyone have any ideas how I can fix it?
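
    Not from the thread, but two hedged checks that often narrow this down (instance and image IDs are placeholders): pull the console output of one of the new instances, and make sure the root device name the new AMI expects matches the device name the snapshot-based volume is attached on; a mismatch can leave the instance unable to find its root filesystem and produce exactly this "running but silent" behaviour.

      aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text

      aws ec2 describe-images --image-ids ami-0123456789abcdef0 \
          --query 'Images[].RootDeviceName'
      aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
          --query 'Reservations[].Instances[].BlockDeviceMappings[].DeviceName'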

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example: rsync -ax \ --partial --delete --delete-excluded --inplace \ --exclude-from=/tmp/temp_excludes \ --link-dest=/Volumes/Backup/current \ /Users /Volumes/Backup/2012-06-25 This works very well as long as I start the process from my normal user account. But as soon as I start the process using sudo it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. Well, that didn't work either. So, the question is, how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
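
    One hedged explanation, not from the thread: earlier backups made without sudo are owned by the normal user, while rsync running as root preserves the files' real owners; a file whose ownership differs from the copy in --link-dest generally cannot be hard-linked (the link would share the old metadata), so everything gets copied again. A quick way to test that theory is to drop owner/group preservation for one run (date in the target path is just an example):

      sudo rsync -ax --no-o --no-g \
          --partial --delete --delete-excluded --inplace \
          --exclude-from=/tmp/temp_excludes \
          --link-dest=/Volumes/Backup/current \
          /Users /Volumes/Backup/2012-06-26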

    Read the article

  • Is /boot safe on top of an LVM LV (logical volume)?

    - by fantoman
    The title already asks the question. More specifically, I read in some documents that logical volumes are nice in general but not for /boot on a Linux system. They say that bootloaders don't understand LVM volumes, so you should create a separate partition for /boot outside of LVM. I recently installed Ubuntu Server (9.10) for my home server, but by default /boot was created inside the LVM. Everything is fine now, but I am not sure it is safe to keep /boot in LVM. The second question is: do I really need a physical partition (physical volume, PV) for /boot, or is it equally fine if I put it into a logical volume (LV) on top of a single shared volume group? Thanks in advance.
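
    For context, hedged since it depends on the bootloader actually installed: Ubuntu 9.10 installs GRUB 2 by default, and GRUB 2 can typically read /boot from an LVM logical volume, which is presumably why the installer was happy to lay it out that way; it was legacy GRUB that needed a plain partition. Two quick checks:

      grub-install --version                  # confirms whether this is GRUB 2 (1.9x or later)
      grub-probe --target=abstraction /boot   # prints "lvm" when /boot sits on an LVM LV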

    Read the article

  • Backup Mongodb on EC2 through EBS snapshots - timing issue

    - by DmitrySemenov
    I'm following this guidance: http://docs.mongodb.org/ecosystem/tutorial/backup-and-restore-mongodb-on-amazon-ec2/ I have 4 EBS 1000 IOPS volumes assigned to the instance. These 4 volumes are assembled into a software RAID10 array through mdadm. I want to do backups through EBS snapshots as explained in the article above. Question: MongoDB says that I need to run, in the mongo shell, db.runCommand({fsync:1,lock:1}); -- this will lock the db for writing ...run snapshot creation... then, in the mongo shell, db.$cmd.sys.unlock.findOne(); -- this will unlock the db for writing. So do I need to unlock the DB for writing after I have issued the command ec2-create-snapshot, or only after it's finished and the actual snapshot is created? Thanks, Dmitry
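
    A hedged sketch of the sequence (volume IDs are placeholders): an EBS snapshot is point-in-time as of the moment the create-snapshot call is accepted, so the database only needs to stay locked until all four calls have returned; the snapshots can then finish uploading in the background while MongoDB is unlocked. With a RAID array it matters that all member volumes are snapshotted inside the same locked window.

      mongo --eval 'db.fsyncLock()'        # same effect as db.runCommand({fsync:1,lock:1})
      for v in vol-aaaa1111 vol-bbbb2222 vol-cccc3333 vol-dddd4444; do
          aws ec2 create-snapshot --volume-id "$v" \
              --description "mongo raid10 member $(date +%F)"
      done
      mongo --eval 'db.fsyncUnlock()'      # safe once the calls above have returned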

    Read the article

  • Large volume at /mnt on AWS instance

    - by rhaag71
    I know this is probably a somewhat 'dumb' question :) I have an AWS (small) instance and I just noticed that there is a ~150 GB volume attached at /mnt. Is this normal? It kinda freaked me out; I was thinking maybe someone was trying to capture whatever I mount in /mnt. There is an entry in my fstab too (and I found by googling that others have this)... the entry is as follows /dev/xvdb /mnt auto defaults,nobootwait,comment=cloudconfig 0 2 I don't have any volumes this large in my AWS volumes section though. I was just trying to understand this and be sure that someone is not trying to 'get in'... as there are many attempts daily.
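
    For what it's worth (not from the thread): on most first-generation instance types, /dev/xvdb mounted at /mnt is the instance's ephemeral instance-store disk, which comes bundled with the instance type rather than being an EBS volume, which is why nothing of that size shows up in the EBS volumes list; its contents also disappear when the instance is stopped. The instance can confirm this about itself from the metadata service:

      curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/
      curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0; echo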

    Read the article

  • LVM incorrectly reported missing after power failure

    - by mensi
    We have had a major power failure in the data center. We are using a set of servers for our storage needs. The main server has several pairs of disks mirrored with mdadm. The resulting /dev/mdX devices are LVM physical volumes and belong to a big volume group with all our data. After the power loss, we had the problem that one of the mdadm devices was not auto-detected due to a missing entry in mdadm.conf. As a consequence, the volume group had inactive logical volumes due to the missing PV. We were able to fix the mdadm config and reboot. pvscan shows all expected PVs but one LV still does not come up. vgdisplay shows: [...] Cur PV: 3 Act PV: 2 [...] Neither vgscan nor pvscan show any missing devices. What went wrong? How can we force LVM to activate all PVs?
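
    Not from the thread, but a hedged sequence that is usually tried in this state (volume group and LV names are placeholders): rescan, attempt activation, and, if the VG metadata still flags a PV as missing even though the device is back, clear that flag before activating the remaining LV.

      pvscan
      vgscan -v
      vgchange -ay

      # if the metadata still records a missing PV (available in newer LVM2 releases)
      vgextend --restoremissing <vgname> /dev/mdX
      lvchange -ay <vgname>/<lvname>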

    Read the article

  • unittest import error with virtualenv + google-app-engine-django

    - by Ray Yun
    I'm working with google-app-engine-django + zipped django. Just running "python manage.py test" succeeded without error. But with virtualenv, test was failed with "import unittest error". same error with Django 1.1. - OSX 10.5.6 - google-app-engine-django (r101 via svn) : r100 was failed with launcher 1.3.0 - GoogleAppLauncher 1.3.0 - Django 1.1 & 1.1.1 (zipped) : both failed - virtualenv 1.4.5 - virtualenvwrapper 1.24 Error Message: (django_appengine)Reiot:warclouds Reiot$ python manage.py test WARNING:root:Could not read datastore data from /var/folders/UZ/UZ1vQeLFH2ShHk4kIiLcFk+++TI/-Tmp-/django_google-app-engine-django.datastore INFO:root:zipimporter('/Volumes/data/Documents/warclouds/django.zip', 'django/core/serializers/') .WARNING:root:Can't open zipfile /Users/Reiot/.virtualenvs/django_appengine/lib/python2.5/site-packages/setuptools-0.6c11-py2.5.egg: IOError: [Errno 13] file not accessible: '/Users/Reiot/.virtualenvs/django_appengine/lib/python2.5/site-packages/setuptools-0.6c11-py2.5.egg' WARNING:root:Can't open zipfile /Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg: IOError: [Errno 13] file not accessible: '/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg' ERROR:root:Exception encountered handling request Traceback (most recent call last): File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3177, in _HandleRequest self._Dispatch(dispatcher, self.rfile, outfile, env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3120, in _Dispatch base_env_dict=env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 515, in Dispatch base_env_dict=base_env_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2379, in Dispatch self._module_dict) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2289, in ExecuteCGI reset_modules = exec_script(handler_path, cgi_path, hook) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2185, in ExecuteOrImportScript exec module_code in script_module.__dict__ File "/Volumes/data/Documents/warclouds/main.py", line 28, in <module> from appengine_django import InstallAppengineHelperForDjango File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1264, in Decorate return func(self, *args, **kwargs) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1914, in load_module return self.FindAndLoadModule(submodule, fullname, search_path) File 
"/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1264, in Decorate return func(self, *args, **kwargs) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1816, in FindAndLoadModule description) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1264, in Decorate return func(self, *args, **kwargs) File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 1767, in LoadModuleRestricted description) File "/Volumes/data/Documents/warclouds/appengine_django/__init__.py", line 44, in <module> import unittest ImportError: No module named unittest INFO:root:"GET / HTTP/1.1" 500 - INFO:root:zipimporter('/Users/Reiot/.virtualenvs/django_appengine/lib/python2.5/site-packages/setuptools-0.6c11-py2.5.egg', '') INFO:root:zipimporter('/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/site-packages/setuptools-0.6c9-py2.5.egg', '') F........................................................... ====================================================================== FAIL: a request to the default page works in the dev_appserver ---------------------------------------------------------------------- Traceback (most recent call last): File "/Volumes/data/Documents/warclouds/appengine_django/tests/integration_test.py", line 176, in testBasic self.assertEquals(rv.status_code, 200) AssertionError: 500 != 200 I also tried with console import but it was ok. > which python /Users/Reiot/.virtualenvs/django_appengine/bin/python > python >>> import unittest Here is my environments: $ mkvirtualenv --no-site-packages no-django $ mkvirtualenv --no-site-packages django-1.1 $ mkvirtualenv --no-site-packages django-1.1.1 (django-1.1)$ easy_install Django-1.1.tar (django-1.1.1)$ easy_install Django-1.1.1.tar $ mkdir google-app-engine-django-svn $ cp -r google-app-engine-django-svn google-app-engine-django-svn-django-1.1 // copy appropriate django.zip $ cp -r google-app-engine-django-svn google-app-engine-django-svn-django-1.1.1 // copy appropriate django.zip

    Read the article

  • How to Achieve Real-Time Data Protection and Availabilty....For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume making both copies unusable. This happens because remote-mirroring is a generic process. It has no  intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth; while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective.  Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection. 
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block for block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database using a different Oracle code path than the primary to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical check sum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible to do using storage remote-mirroring. So don’t be fooled by appearances.  Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity, to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper Active Data Guard vs Storage Remote-Mirroring Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • C# Generic Arrays and math operations on it

    - by msedi
    Hello, I'm currently involved in a project where I have very large image volumes. This volumes have to processed very fast (adding, subtracting, thresholding, and so on). Additionally most of the volume are so large that they event don't fit into the memory of the system. For that reason I have created an abstract volume class (VoxelVolume) that host the volume and image data and overloads the operators so that it's possible to perform the regular mathematical operations on volumes. Thereby two more questions opened up which I will put into stackoverflow into two additional threads. Here is my first question. My volume is implemented in a way that it only can contain float array data, but most of the containing data is from an UInt16 image source. Only operations on the volume can create float array images. When I started implementing such a volume the class looked like following: public abstract class VoxelVolume<T> { ... } but then I realized that overloading the operators or return values would get more complicated. An example would be: public abstract class VoxelVolume<T> { ... public static VoxelVolume<T> Import<T>(param string[] files) { } } also adding two overloading operators would be more complicated: ... public static VoxelVolume<T> operator+(VoxelVolume<T> A, VoxelVolume<T> B) { ... } Let's assume I can overcome the problems described above, nevertheless I have different types of arrays that contain the image data. Since I have fixed my type in the volumes to float the is no problem and I can do an unsafe operation when adding the contents of two image volume arrays. I have read a few threads here and had a look around the web, but found no real good explanation of what to do when I want to add two arrays of different types in a fast way. Unfortunately every math operation on generics is not possible, since C# is not able to calculate the size of the underlying data type. Of course there might by a way around this problem by using C++/CLR, but currently everything I have done so far, runs in 32bit and 64bit without having to do a thing. Switching to C++/CLR seemed to me (pleased correct me if I'm wrong) that I'm bound to a certain platform (32bit) and I have to compile two assemblies when I let the application run on another platform (64bit). Is this true? So asked shortly: How is it possible to add two arrays of two different types in a fast way. Is it true that the developers of C# haven't thought about this. Switching to a different language (C# - C++) seems not to be an option. I realize that simply performing this operation float []A = new float[]{1,2,3}; byte []B = new byte[]{1,2,3}; float []C = A+B; is not possible and unnecessary although it would be nice if it would work. My solution I was trying was following: public static class ArrayExt { public static unsafe TResult[] Add<T1, T2, TResult>(T1 []A, T2 []B) { // Assume the length of both arrays is equal TResult[] result = new TResult[A.Length]; GCHandle h1 = GCHandle.Alloc (A, Pinned); GCHandle h2 = GCHandle.Alloc (B, Pinned); GCHandle hR = GCHandle.Alloc (C, Pinned); void *ptrA = h1.ToPointer(); void *ptrB = h2.ToPointer(); void *ptrR = hR.ToPointer(); for (int i=0; i<A.Length; i++) { *((TResult *)ptrR) = (TResult *)((T1)*ptrA + (T2)*ptrB)); } h1.Free(); h2.Free(); hR.Free(); return result; } } Please excuse if the code above is not quite correct, I wrote it without using an C# editor. Is such a solution a shown above thinkable? Please feel free to ask if I made a mistake or described some things incompletely. 
Thanks for your help Martin

    Read the article
