Search Results

Search found 2808 results on 113 pages for 'volume mixer'.

Page 40 of 113

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times... Does anyone know a) why the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information
        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris
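    A possible explanation, hedged: with '-c', e2fsck re-runs the bad-block scan and rewrites the bad-block inode even when the list is unchanged, which it then reports as a modification. A rough sketch of separating the scan from the repair pass instead (the device name /dev/sdx1 is just the example from the question):

        badblocks -sv /dev/sdx1 > /tmp/badblocks.txt   # read-only scan, writes bad block numbers to a file
        e2fsck -f -l /tmp/badblocks.txt /dev/sdx1      # feed that list to e2fsck explicitly
        e2fsck -fn /dev/sdx1                           # follow-up read-only pass; should now exit clean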

    Read the article

  • Are speakers wasting power when they are not in use?

    - by Dennis Cheung
    I guess all of you have a pair or set of speakers for your MP3s. I am just interested in what they do when they are not playing. How is power actually consumed by the speakers? Do any of the cases below matter?

    - Idle / silent input
    - When your computer is off / not connected
    - Raising the volume in hardware (turning/pushing the button on the speaker)
    - Raising the volume in software (doing it with your mouse)

    BTW, to save the world, I suggest unplugging them when you leave your PC.

    Read the article

  • differencing disk opinions

    - by troth
    I've read about the performance issues with differencing disks, but I still think there is a solid place for them, and that's the OS boot partition. If I'm going to have 20 VMs on a CSV-based volume, I don't want to waste the 20+ GB per guest just for the OS boot. If I get a good base disk with all of the most-used applications installed, and have the pagefile located somewhere else, I don't think the deltas would be that great, so it should not create a performance issue. Also, with SAN-based CSV volumes, does it make any sense to have the pagefile go to a separate CSV volume? Any opinions on this? Thanks

    Read the article

  • Solaris to Linux conversion: Use VxFS or GFS?

    - by w00t
    We're a Solaris shop looking at Red Hat Enterprise Linux, and one of the things we're wondering is whether we should keep Veritas Volume Manager + FileSystem, go with LVM+ext3, or use Red Hat's preferred cluster filesystem solution, GFS. One of the things we like about Veritas is that it can use Veritas Volume Replicator to keep a remote copy of important filesystems. This functionality seems to be missing from Red Hat; DRBD doesn't seem to be packaged in RHEL... So my questions are: Does anybody use VxFS/VxVM/VVR on Linux? Thoughts, experiences? How does it compare with LVM+ext3? Anybody using GFS? Thoughts, experiences? Do you do remote replication for disaster recovery, and if so, how? Is there a standard Red Hat way?
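    For comparison, a minimal sketch of the LVM+ext3 path mentioned above (the device, volume group, and mount point names are made up for illustration):

        pvcreate /dev/sdb1
        vgcreate datavg /dev/sdb1
        lvcreate -L 100G -n datalv datavg
        mkfs.ext3 /dev/datavg/datalv
        mount /dev/datavg/datalv /data

    Note that this covers local volume management only; it has no built-in equivalent of VVR's remote replication, which is exactly the gap the question is about.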

    Read the article

  • Can I use dmraid instead of md (mdadm) to make software RAID-1 and RAID-1+0 volumes?

    - by Don MacAskill
    On a related question about SSDs and TRIM (see: Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?), it turns out that dmraid may now (or shortly) support TRIM on RAID-1. Typically, we've used md (via mdadm) to create our RAID-1 volumes, then used LVM to create volume groups, then formatted with the file system of our choice (ext4 lately). We've been doing this for years, and Google & ServerFault searches seem to confirm this is the most common way of doing software RAID with volume management. Google searches seem to suggest that dmraid is used for so-called 'fakeRAID' configurations where there's some level of hardware 'help' in the form of a RAID BIOS in the controller, which we don't have (and don't want to use - we'd like a fully software solution). Since we'd like to use TRIM on our SSDs, and since md doesn't seem to (yet?) support TRIM, I'm wondering if it's possible to use dmraid instead of md to create RAID-1 (and RAID-1+0) volumes in software, with no hardware support (i.e., just plugged into a dumb SATA/SAS bus)?
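    For reference, the md + LVM + ext4 stack described above usually looks something like this, as a rough sketch (device and volume names are illustrative):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 50G -n data vg0
        mkfs.ext4 /dev/vg0/data

    Whether dmraid can stand in for the mdadm step without a RAID BIOS underneath is the open question here.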

    Read the article

  • Forgot to unmount/eject external hard drive, lost moved files. Mac OS X

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop while holding down the Command key, to move the files rather than copy them). They moved to the drive all right, but I was having some issues and the Finder crashed after the transfer, so I was unable to eject the volume; later everything froze, so I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hard drive back in), it no longer had any of the files that I had moved onto it. As it was a lot of data, how can I recover these files?

    Read the article

  • LUN access issue in an ESX4 cluster

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS 6000 XV (2 members in 1 pool) and checked that those volumes can easily be detected by the iSCSI initiator software in Windows. But the problem is with ESX: I am not able to see the assigned disks on the ESX servers. Here is what I've done:

    1. Created a cluster with HA & DRS enabled.
    2. Added 3 ESX4 hosts.
    3. Added and configured a VMkernel port on all 3 ESX4 hosts, with vMotion & FT enabled on the same adapter.
    4. Went to the iSCSI storage adapter properties and enabled iSCSI.
    5. Tried to discover the available storage with the controller IP under dynamic discovery, but I am not able to see the assigned storage.

    Note: the same volume is accessible from Windows, which means there is no issue on the storage side, am I right? Note: I want to mount the same volume on all 3 ESX hosts. Please suggest... Thanks & Regards, Rashid Mustafa

    Read the article

  • Microsoft Expression Studio 4 Ultimate license problem.

    - by Sung Meister
    I have installed a volume-licensed version of Expression Studio 4 Ultimate. When I contacted support, I was told that a product key is not required for the volume license version. But after installing it, I get the following error message:

        A licensing error has occurred. Restart your Expression program and try again. If you continue to receive this error message, reinstall your Expression program to make sure that the license installs correctly.

    As a side note, I used to have the full version of Blend 3 and Blend 4 Beta installed side by side.

    Read the article

  • reiserfsck --rebuild-tree failed: Not enough allocable blocks

    - by mojo
    I have a reiserfs volume that required a --rebuild-tree, but the rebuild currently fails to complete. Here is the output that I receive when running it:

        reiserfsck 3.6.19 (2003 www.namesys.com)

        # reiserfsck --rebuild-tree started at Mon Oct 26 13:22:16 2009
        # Pass 0:
        # Pass 0 The whole partition (7864320 blocks) is to be scanned
        Skipping 8450 blocks (super block, journal, bitmaps) 7855870 blocks will be read
        0%....20%....40%....60%....80%....100%    left 0, 9408 /sec
        287884 directory entries were hashed with "r5" hash.
        "r5" hash is selected
        Flushing..finished
        Read blocks (but not data blocks) 7855870
            Leaves among those 6105606
            Objectids found 287892
        Pass 1 (will try to insert 6105606 leaves):
        # Pass 1
        Looking for allocable blocks .. finished
        0%....20%....40%....60%....80%....Not enough allocable blocks, checking bitmap...there are 1 allocable blocks, btw
        out of disk space
        Aborted

    I can't mount it, and I can't fsck it. I've tried extending the volume, but that hasn't helped either.

    Read the article

  • Linux: How to break a large file into smaller files?

    - by Runcible
    I have a giant file (20 gigs) sitting on my source machine and I need to transfer it to my target machine. For the purposes of this question, let's assume that I do not have network connectivity between the two machines. I need to break this file into a series of smaller files, write the smaller files to DVD(s), then re-assemble everything on the target machine. Both source and destination machines are Linux boxes. Is there a way to accomplish this using tar? I have a feeling that I need to use the --multi-volume parameter. What are my options? I need to be able to specify the size of the volume files, in order to make sure that each one will fit onto a single DVD. Thanks!
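    A rough sketch of two ways to do this (sizes and file names are only examples): GNU tar's --multi-volume mode with --tape-length (in units of 1024 bytes), which prompts for the next volume as each part fills up, or the simpler split/cat route:

        # tar multi-volume archive, roughly 4.3 GB per volume
        tar --create --multi-volume --tape-length=4300000 --file=bigfile.part1.tar bigfile

        # simpler alternative: split into DVD-sized chunks, burn them, then re-assemble on the target machine
        split -b 4300m bigfile bigfile.part_
        cat bigfile.part_* > bigfile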

    Read the article

  • Changing the partition icon for Boot Camp

    - by zneak
    Hey guys, I've installed Windows 7 for a dual-boot setup on my new Core i7 MacBook Pro. Now, just for the looks, I'd like to change the volume icon. The partition is in NTFS format. I remember that in the past (with Leopard), you just had to add a .VolumeIcon.icns file at the root of a volume to set its icon. It seems this trick stopped working with Snow Leopard. It apparently still works with CDs and DVDs, but hard drives keep that old, boring drive icon, no matter how lovely the .VolumeIcon.icns file I've put at the root. How can I change that?
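    For reference, the old trick usually involved both the icon file and the volume's "custom icon" Finder attribute; a hedged sketch from memory (SetFile comes with the developer tools, and the volume name here is an assumption):

        cp MyIcon.icns /Volumes/BOOTCAMP/.VolumeIcon.icns
        SetFile -a C /Volumes/BOOTCAMP   # set the "has custom icon" attribute on the volume root
        killall Finder                   # relaunch Finder so it re-reads the icon

    Whether Snow Leopard still honors this on NTFS volumes is exactly what's in question here.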

    Read the article

  • Virtualizing OpenSolaris with physical disks

    - by Fionna Davids
    I currently have an OpenSolaris installation with a ~1 TB RAID-Z volume made up of 3 × 500 GB hard drives. This is on commodity hardware (ASUS NVIDIA-based board with an Intel Core 2). I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and be given physical access to the three SATA drives, so that I can continue to use the zpool and use the Xen bits for other areas. I'm thinking of installing the JeOS version of OpenSolaris, having it manage just my ZFS volume and some other stuff for work (4 GB), then having a Windows (2 GB) and a Linux (1 GB) VM virtualised for testing things (there's 8 GB of RAM on that box). Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing, but wondered if the above was a better alternative. Essentially: 3 disks go to the OpenSolaris guest VM, which imports the zpool and offers it to the other VMs via CIFS.
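    In plain Xen, handing raw disks to a domU is typically done with phy: entries in the guest config; a hedged sketch (the device names and config path are assumptions, and XenServer/Oracle VM layer their own tooling on top of this):

        # /etc/xen/opensolaris.cfg (fragment)
        disk = [ 'phy:/dev/sdb,xvdb,w',
                 'phy:/dev/sdc,xvdc,w',
                 'phy:/dev/sdd,xvdd,w' ]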

    Read the article

  • Extending C drive not possible

    - by gokul
    This is my computer's partition list. You can see my C drive is running very low on disk space. I wanted to extend it, so I used the MMC Disk Management snap-in to shrink another volume, but I can't extend C because the "Extend Volume" option for C is not clickable. I've tried many packages, but none were able to do it. My C drive is simple, basic, NTFS, healthy (boot, crash dump, primary partition). The MMC window: What should I do?

    Read the article

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either just missing an obvious step or getting some mapping wrong, and hoped someone could point me in the right direction. System info: CentOS 6.4 64-bit.

    mdadm config:

        md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
              204736 blocks super 1.0 [6/6] [UUUUUU]
        md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
              3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
        md3 : active raid0 sdc1[1] sdb1[0]
              250065920 blocks super 1.1 512k chunks
        md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
              76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

        PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
        Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used:

        flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

        --- Physical volume ---
        PV Name               /dev/mapper/ssdcache
        VG Name               Xenvol
        PV Size               3.53 TiB / not usable 106.00 MiB
        Allocatable           yes
        PE Size               128.00 MiB
        Total PE              28952
        Free PE               28912
        Allocated PE          40
        PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

        --- Volume group ---
        VG Name               Xenvol
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  2
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                1
        Open LV               1
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               3.53 TiB
        PE Size               128.00 MiB
        Total PE              28952
        Alloc PE / Size       40 / 5.00 GiB
        Free PE / Size        28912 / 3.53 TiB
        VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done; however, after creating a logical volume called "test" and mounting it at /mnt/test, the sequential write is pathetic, 60-ish MB/s. /dev/md3 has 2 x SSDs in RAID0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6 x 1TB drives in RAID6. I have read a number of pages through the day, some of them here; it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf:

        filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]

    It is probably something silly, but the cache is clearly performing no writes, so I suspect I am not mapping it or have not mounted the cache correctly.

    dmsetup status:

        ssdcache: 0 7589810176 flashcache stats:
            reads(142), writes(0)
            read hits(133), read hit percent(93)
            write hits(0) write hit percent(0)
            dirty write hits(0) dirty write hit percent(0)
            replacement(0), write replacement(0)
            write invalidates(0), read invalidates(0)
            pending enqueues(0), pending inval(0)
            metadata dirties(0), metadata cleans(0)
            metadata batch(0) metadata ssd writes(0)
            cleanings(0) fallow cleanings(0)
            no room(0) front merge(0) back merge(0)
            force_clean_block(0)
            disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
            uncached reads(0), uncached writes(0), uncached IO requeue(0)
            disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
            uncached sequential reads(0), uncached sequential writes(0)
            pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
            lru hot blocks(31136000), lru warm blocks(31136000)
            lru promotions(0), lru demotions(0)
        Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.
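    For reference, the intended stacking order described above, as a minimal sketch using the names from the question (this restates the sequence rather than fixing it, plus a quick cache-bypassing write test):

        flashcache_create -p back ssdcache /dev/md3 /dev/md2   # SSD RAID0 (md3) caches the RAID6 array (md2)
        pvcreate /dev/mapper/ssdcache
        vgcreate Xenvol /dev/mapper/ssdcache
        lvcreate -L 5G -n test Xenvol
        mkfs.ext4 /dev/Xenvol/test
        mount /dev/Xenvol/test /mnt/test
        dd if=/dev/zero of=/mnt/test/ddtest bs=1M count=2048 oflag=direct   # sequential write check without the page cache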

    Read the article

  • Time Machine (archive) of pre-Leopard systems?

    - by benc
    I want to get off an older Mac OS X system permanently. It is an iBook G3, which has two important characteristics: it is PowerPC, not Intel based, and it runs only Tiger, not Leopard. This means, as far as I can tell, that it cannot run Time Machine directly. Here's the approach I have been contemplating: mount the drive in FireWire target disk mode, back it up as an external drive to the Time Machine volume, then disconnect the drive (permanently). However, I'm concerned that this drive will eventually age out when the Time Machine volume fills up, and the old-system-as-external-drive is gone. Would it be better to do a single backup with another utility, to a shared disk?
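    If the single-backup route wins out, a hedged sketch using the rsync that ships with Mac OS X (the volume names are placeholders; Apple's rsync uses -E to carry over extended attributes and resource forks):

        rsync -avE /Volumes/OldiBook/ /Volumes/ArchiveDisk/OldiBook-archive/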

    Read the article

  • Can you link an NTFS junction point to a directory on network-attached storage?

    - by Zachary Burt
    I'm using Windows, and I want to use Dropbox to back up a folder outside my Dropbox directory. So I want to create a junction point from my target directory to my Dropbox folder. According to the Wikipedia article on NTFS junction points, which the Dropbox answer links to: "Junction points can only link to directories on a local volume; junction points to remote shares are unsupported." I am looking to link to a directory on network-attached storage, which would not be a local volume, I believe. What should I do?

    Read the article

  • Should I use EXT4 or XFS to be able to 'sync'/back up to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB of disk, RAID 1 via a HW controller, 15000 RPM). It's a production web server (with MySQL on it too, not just serving web requests), not a personal desktop computer or similar. Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3 via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3. The main reason is the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:

    1. Do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
    2. Do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server. Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose between 2 ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these 2 options are more like it:

    - With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.
    - With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
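    A rough sketch of those two snapshot flows (device names, sizes, and mount points are illustrative only):

        # XFS only: freeze, copy the frozen tree, unfreeze
        xfs_freeze -f /data
        rsync -a /data/ /backup/data-snapshot/
        xfs_freeze -u /data

        # LVM + XFS: freeze briefly, take an LVM snapshot, unfreeze, then back up the snapshot at leisure
        xfs_freeze -f /data
        lvcreate -s -L 10G -n data_snap /dev/vg0/data
        xfs_freeze -u /data
        mount -o ro,nouuid /dev/vg0/data_snap /mnt/snap   # nouuid because the snapshot shares the origin's UUID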

    Read the article

  • UDF filesystem -> Maximum number of files

    - by user978122
    I am considering partitioning a rather large hard drive with the UDF filesystem for an experiment, and would like to ask if anyone knows the maximum number of files, either per directory or as a whole, that the UDF filesystem can handle. For some background, I looked at the JFS and XFS filesystems (NTFS has a limit on the number of files per volume); however, since I run Windows, those are kind of out. UDF, on the other hand, does not appear to have these limitations, but then, I cannot really find any information on just how many files per volume the UDF file system supports.

    Read the article

  • How should I use LVM with Ganeti?

    - by javano
    I am building a small Ganeti cluster on some low-end hardware (sadly, I only have the resources I've been given). I am confused as to the use of LVM with DRBD. I have two instances and three nodes. What I want is instance1 replicated between nodes 1 & 2, and instance2 replicated between nodes 3 & 2 (so node2 is doing nothing except waiting for either node1 or node3 to fail, as it is the secondary node for both instances). This is because node2 is a lower hardware spec than 1 and 3, so I just want it as a hot spare. How can I achieve this? I don't want instance1 being replicated to node3, for example, nor instance2 replicated to node1. Nodes 1 & 2 have /dev/sda5, which is 150 GB (for example). Nodes 2 & 3 have /dev/sda6, which is 75 GB (for example). Using just nodes 1 & 2, after looking at the Ganeti docs, I would run vgcreate my-vg. Next I would create the cluster via gnt-cluster with VG = "my-vg". It is here that I believe I am missing some knowledge. I believe that what I need to do is create the same logical volume on nodes 1 & 2 in volume group "my-vg", consisting solely of /dev/sda5, and call it "lv1"; then create a logical volume on nodes 2 & 3, consisting solely of /dev/sda6 in "my-vg", called "lv2". When creating instance1 I would then use "-vg=lv1 -n node1:node2", and when creating instance2 I would use "-vg=lv2 -n node3:node2". I briefly had a go at this today and I'm dubious whether this will be possible. When trying to create instance2, "lv2" won't exist on node1 (the cluster master), so I don't believe it will allow the instance creation. Could I create a 1 KB partition (/dev/sda6) on node1 and put it into an LV called "lv2", or is that too flaky? Is this setup possible? Thank you.
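    For what it's worth, a hedged sketch of the commands involved, assuming placement is steered per instance with -n (the cluster name, OS name, and sizes are placeholders; whether a per-node pair of volume groups like the lv1/lv2 idea above is workable is exactly the open question):

        vgcreate my-vg /dev/sda5                               # on each node, as described above
        gnt-cluster init --vg-name my-vg cluster.example.com
        gnt-instance add -t drbd -n node1:node2 -o debootstrap+default -s 20G instance1
        gnt-instance add -t drbd -n node3:node2 -o debootstrap+default -s 20G instance2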

    Read the article

  • Webcam microphone input in Gnome/pulseaudio

    - by sdaau
    I just got a "Trust" webcam, which gets recognized on my Ubuntu Lucid. It has a built-in microphone, which also gets recognized; however, I cannot really get it to act as the system microphone input. Here are some screenshots of what is shown by gnome-volume-control: The default window shows the Trust webcam, which has two profiles, "Analog Mono Input" and "Off" - of course, I have it on "Analog Mono Input". However, on the "Input" tab there is no matching device for sound input, and no matching connector either. Then I installed pavucontrol, but that doesn't show much more; it first tells me that gnome-volume-control reads from "Internal Audio Analog Stereo". Then in the "Input devices" tab, there is again nothing resembling the mic input from the webcam. Finally, under the "Configuration" tab, the "Trust" webcam shows up, but even with its profile on "Analog Mono Input", nothing much happens. So, does anyone know how I could get this webcam microphone to be recognized as the system input? Many thanks in advance for any answers. Cheers!
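    From a terminal, a hedged first check is whether PulseAudio exposes the webcam's mic as a capture source at all, and if so, to make it the default (the source name below is purely illustrative):

        pacmd list-sources | grep -e index -e 'name:'    # list every capture source PulseAudio knows about
        pacmd set-default-source alsa_input.usb-Trust_Webcam-02.analog-mono
        arecord -f cd -d 5 test.wav                      # quick 5-second test recording from the default device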

    Read the article

  • How to interpret IOZone results?

    - by homer5439
    Here are the results of running IOZone on an ext3 filesystem on an LVM volume residing on a SAN LUN (it was run with 5 parallel processes):

        "Throughput report Y-axis is type of test X-axis is number of processes"
        "Record size = 4 Kbytes "
        "Output is in Kbytes/sec"
        " Initial write "   81628.55
        " Rewrite "         83354.72
        " Read "           115595.02
        " Re-read "        119306.09
        " Reverse Read "    47684.20
        " Stride read "     10011.09
        " Random read "     16751.27
        " Mixed workload "   5659.77
        " Random write "     1661.85
        " Pwrite "          36030.83

    Now this is all nice and dandy, but my question is: how do I know whether the values are as good as they could be, or whether there is something to tweak (and if so, what)? The actual usage I will have for that logical volume is to act as a virtual disk for a VM.
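    For context, a throughput report like this typically comes from an invocation along these lines; this is a hedged guess at the shape of the command, not the exact one used (file paths and sizes are placeholders):

        iozone -t 5 -r 4k -s 1g -i 0 -i 1 -i 2 -F /mnt/lv/f1 /mnt/lv/f2 /mnt/lv/f3 /mnt/lv/f4 /mnt/lv/f5

    Comparing against a baseline is usually more telling than the absolute numbers: running the same command against a local disk, or against the raw LUN without LVM/ext3, shows how much each layer costs.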

    Read the article

  • Disk Utility Restore causes "Could not validate resource - Invalid Argument"

    - by Yahoo
    I have a problem with Disk Utility on Mac OS X 10.6. I have an image of Windows that I would like to use as a bootable volume on a pen drive or external hard drive. The thing is: When I try to restore the volume from the image I get an error: "Restore Failure: Could not validate resource - Invalid Argument" I read some information about that error on the Internet. I converted the image into .iso (Mac OS Extended/ISO (Joliet) Hybrid Image) format and then got this error: "Restore Failure: Could not find any scan information. The source image needs to be imagescanned before it can be restored." When I try to scan the image for Restore, I get the first message. I really read a lot of information about this topic on the Internet, but I haven't found the solution. I tried both ISO and DMG formats; I don't know which is best.
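    The second error message points at the missing checksum ("scan") data; from the command line that step is normally done with asr, roughly like this (the image path and target volume are placeholders):

        sudo asr imagescan --source /path/to/windows.dmg
        sudo asr restore --source /path/to/windows.dmg --target /Volumes/PenDrive --erase

    Whether a Windows image restored this way will actually boot from the pen drive is a separate question from the scan error itself.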

    Read the article

  • How do I restore the default applets to Gnome's notification area?

    - by gbacon
    I have a fresh install of Karmic Koala. In a botched attempt at changing my default window manager, I somehow removed at least three applets from the notification area: network manager (nm-applet), volume control (gnome-volume-control-applet), and the battery meter (???). Now if I log out and back in, these applets don't run, but I can start them from the command line. Because it's a fresh install, I completely removed my luser account and home directory. After recreating my account, I was frustrated to find that the applets are still missing, with no obvious way to add them back. How can I restore the default configuration?
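    As a first check, the entries that normally launch those status icons at login live under /etc/xdg/autostart; a hedged sketch for confirming they are present and starting them by hand (the battery meter on Karmic is drawn by gnome-power-manager, to the best of my knowledge):

        ls /etc/xdg/autostart | grep -Ei 'nm-applet|volume|power'
        nm-applet &
        gnome-volume-control-applet &
        gnome-power-manager &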

    Read the article
