Search Results

Search found 777 results on 32 pages for 'volumes'.

  • Back up Xen domU machines while running

    - by Jonathan Hawkes
    The host machine is running CentOS 5.3 and uses LVM to create Logical Volumes (LVs) and to allow live snapshots to be taken of those LVs. My thought was to store all of the image files for the Xen unprivileged domains (domU) in a single LV, periodically take a snapshot of that LV, and copy the disk images out of the snapshot to make a live backup of these systems. Is this doable? Is there a better way? Thanks!
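
    This is the classic LVM-snapshot backup pattern. Below is a minimal sketch of the idea in Python; the volume group, LV, snapshot size, mount point, and destination are all hypothetical, and the copied images are only crash-consistent unless the domUs are paused around the lvcreate call.

        import subprocess

        VG, LV, SNAP = "vg0", "xen_images", "xen_images_snap"  # hypothetical names
        MOUNT, DEST = "/mnt/snap", "/backup/xen"               # hypothetical paths

        def run(*cmd):
            subprocess.check_call(cmd)

        # Snapshot the LV holding the domU disk images; the snapshot is a
        # point-in-time copy-on-write view, so the domains keep running.
        run("lvcreate", "--snapshot", "--size", "10G",
            "--name", SNAP, "/dev/%s/%s" % (VG, LV))
        try:
            run("mount", "-o", "ro", "/dev/%s/%s" % (VG, SNAP), MOUNT)
            try:
                # Copy the frozen images out of the snapshot.
                run("rsync", "-a", MOUNT + "/", DEST + "/")
            finally:
                run("umount", MOUNT)
        finally:
            # Drop the snapshot so its copy-on-write space can't fill up.
            run("lvremove", "-f", "/dev/%s/%s" % (VG, SNAP))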

  • How many disks is too many in this RAID 5 configuration?

    - by Tom
    HP 2012i SAN, 7 disks in RAID 5 with 1 hot spare; it took several days to expand the volume from 5 to 7 300GB SAS drives. I'm looking for suggestions on when and how to determine whether having two RAID 5 volumes in the SAN would be better than one. I can add 3 more drives to the controller someday; the SAN is used for ESX/vSphere VMs. Thank you.

  • SUBST for OS X? Error when trying to map a local folder as a network drive on Mac OS X 10.9

    - by Taylor Wright
    I would like to map a local folder as a drive (similar to Windows' SUBST). One solution I found was to map it as a shared folder, but I get the following error when using a local folder: "There was a problem connecting to the server “MyDrive.local”. This file server is available on your computer. Access the volumes and files locally." I was using this guide: Mapping Drives (Shared Folders) on Mac OS X.

  • Problem with Quotas and File Screening on Mount Points in Windows 2008

    - by James P
    Hello, I have a Windows 2008 server running the File Server role, and I would like to use mount points for my volumes instead of drive letters. However, I need the quota and file-screening features of File Server Resource Manager, and they do not seem to apply correctly to mount-point folders: I am able to upload oversized files and excluded file types without any warnings. Could someone help me with a fix or workaround for this issue? Thanks, Jamie

  • Win 2008 - Extend volume size on SAN-attached storage in a failover cluster

    - by user53207
    Running Windows 2008, I'd like to extend a volume on a SAN-attached drive that is part of a failover cluster. The SAN team has allocated additional drive space, which is visible in Windows Storage Manager; however, the "Extend Volume" option is disabled, as is the option to convert the disk to dynamic. Is the ability to extend volumes disabled because the disk is part of a failover cluster, or because it is SAN-attached storage?
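
    For reference, the Disk Management GUI in Server 2008 often greys out "Extend Volume" in cases where diskpart can still extend the volume, and dynamic disks are not supported in failover clusters at all. A hedged sketch of driving diskpart from Python; the volume number is hypothetical (check it with "list volume" first), and the clustered disk should be placed in maintenance mode before resizing.

        import subprocess, tempfile

        # Hypothetical volume number; verify with "diskpart> list volume" first.
        SCRIPT = "rescan\nselect volume 3\nextend\n"

        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write(SCRIPT)
            script_path = f.name

        # diskpart /s runs the scripted commands non-interactively.
        subprocess.run(["diskpart", "/s", script_path], check=True)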

  • DPM 2010 iSCSI Mirror

    - by Thermionix
    We're using DPM 2010 for Exchange backups. The backup disks are iSCSI-attached drives from multiple NAS boxes, and we'd like to mirror iqn.2009-07.com.example.example:RAID.iscsi4.vg0.iscsi05 onto iqn.2012-3.com.example.example:RAID.iscsi4.vg0.iscsi05. DPM 2010 requires the disk for itself and handles volume creation, so we can't just create a mirrored volume in Disk Management, and DPM itself doesn't seem to have any ability to mirror the disks in its storage pool. Any tips on how to mirror the volumes from one drive to the other?

  • Windows DFS File System Clustering

    - by tearman
    We're attempting to set up a high-availability network for our file servers, and we want to build a DFS file-system cluster on the same back-end storage (our back-end storage has its own clustering mechanisms that it manages itself). The questions are: (a) how would one go about setting up DFS clustering, and (b) how can we get Windows to cooperate with multiple servers accessing the same SAN volumes?

  • Back up a Server 2008 system state without using wbadmin?

    - by Beuy
    Is it possible to back up the System State of a 2008 server without using wbadmin? The setup in question does support the requirements that wbadmin forces (all volumes are marked as critical). Third-party tools are an option, but I would like to keep away from the big money sinks (BE, etc.).

  • Do I dare click Delete Volume instead of Delete Partition?

    - by Olle
    I have a VMware machine with one VM. That VM has a virtual disk which, in Windows, is configured with two partitions and then a lot of slack space, as illustrated here: http://piclair.com/q8g5s What I want to do is delete the 639 GB partition. However, since it's a dynamic disk, the menu item says "Delete Volume" instead of "Delete Partition" when I right-click the 639 GB space. My question is whether I dare use "Delete Volume"; I have read that doing this sort of thing on a dynamic volume can cause other partitions/volumes to become corrupt.

  • Will using FAT32 provide better pagefile performance than NTFS?

    - by llazzaro
    Hello, I was discussing this with my other personalities and came up with a conflict. According to http://technet.microsoft.com/en-us/library/cc938440.aspx, FAT32 is faster on smaller volumes. A separate disk will obviously give more performance than the same disk, but has anyone actually tested this? Scenario 1: separate hard disk, FAT32 (small volume). Scenario 2: separate hard disk, NTFS. Which one will win, and by how much?
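
    One way to settle the conflict is to measure it. Below is a rough sequential write/read timing sketch in Python; the drive letters are hypothetical stand-ins for the FAT32 and NTFS test volumes, the OS cache will flatter the read numbers, and real pagefile I/O is mostly random, so treat the results as a first approximation only.

        import os, time

        def bench(path, size_mb=256, block=1024 * 1024):
            data = os.urandom(block)
            t0 = time.time()
            with open(path, "wb") as f:
                for _ in range(size_mb):
                    f.write(data)
                f.flush()
                os.fsync(f.fileno())      # force the writes out to disk
            write_s = time.time() - t0
            t0 = time.time()
            with open(path, "rb") as f:
                while f.read(block):
                    pass
            read_s = time.time() - t0
            os.remove(path)
            return size_mb / write_s, size_mb / read_s

        for path in (r"F:\bench.tmp", r"N:\bench.tmp"):   # hypothetical volumes
            w, r = bench(path)
            print("%s  write %.1f MB/s  read %.1f MB/s" % (path, w, r))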

  • Generating/managing config files for a hosted application

    - by mfinni
    I asked a question about config management and haven't seen a reply, so it's possible my question was too vague; let's get down to brass tacks. Here's the process we follow when onboarding a new customer instance into our hosted application: how would you manage this? I'm leaning towards a Perl script that populates templates to generate shell scripts, config files, XML config files, etc. (a rough sketch of that idea follows the list below). Looking briefly at CFengine and Chef, it seems they would not reduce the amount of work, because I'd still have to manually specify all of the changes/edits within the tool; that doesn't seem to be much of a gain over touching the config files directly.

    1. We add a stanza to the main config file for the core (3rd-party) application. This stanza has values that define:
       - the instance (customer) name
       - the TCP listener port for this instance (not one currently in use)
       - the DB2 database name (a serial numeric identifier; these already exist, prestaged for us by the DBAs)
       - three sub-config files, by name; they need to be created from 3 templates and be named after the instance
       The sub-config files define:
       - the filepath for the DB2 volumes
       - the filepath for the storage of objects
       - the filepath for just one of the DB2 volumes (yes, redundant to the first item)
    2. We run some application commands and start the instance.
    3. We do some LDAP thingies (make an OU for the instance, etc.).
    4. We add a stanza to the config file for our security listener, which acts as a passthrough to LDAP:
       - instance name
       - LDAP OU
       - TCP port for the instance
       - DB2 database name
    5. We restart the security listener (off-hours), change the main config file from item 1, and stop and restart the instance. It is now authenticating via LDAP.
    6. We add the stop and start commands for this instance to the HA failover scripts.
    7. We import an XML config file into the instance that defines things for the actual application for the customer: user names, groups, permissions, and business rules. The XML is supplied by the implementation team.
    8. Now we configure the data-loading application: we add a stanza to the existing top-level config file that points to a new customer-level config file. The new customer-level config file includes:
       - the instance (customer) name
       - the DB2 database name
       - an arbitrary number of sub-config files, by name
       Each of the sub-config files defines filepaths to the directories for ingestion, feedback, backup, and failure. Those filepaths share a common path to a customer-specific folder, with one folder per sub-config file, and each of those filepaths needs to be created.
    9. We add this customer instance to our monitoring scripts, which confirm that the proper processes are running and can be logged into. Of course, those monitoring config files include the instance name, the TCP port, the DB2 database name, etc.
    10. There's also a reporting application that needs to be configured for the new instance. You get the idea.
    11. Finally, there's XML that is loaded into WAS by the middleware team. We give them the values to plug into the XML; they could very easily hand us the template and we could give them back completed XML.
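
    As a sketch of that template-population idea (shown in Python rather than Perl, but the shape is identical): keep one authoritative dict of per-instance values and stamp every config file out of a template directory, so onboarding a customer means supplying the values exactly once. All names, paths, and placeholders below are hypothetical.

        from pathlib import Path
        from string import Template

        # The single place the per-instance facts live (hypothetical values).
        params = {
            "instance": "acme",
            "tcp_port": "52013",
            "db2_name": "CUST0042",
            "ldap_ou":  "ou=acme,ou=instances,dc=example,dc=com",
        }

        TEMPLATES = Path("templates")             # one *.tmpl per generated file
        OUTDIR = Path("generated") / params["instance"]
        OUTDIR.mkdir(parents=True, exist_ok=True)

        # Each template uses $instance, $tcp_port, etc. as placeholders;
        # substitute() raises if a template references a missing value.
        for tmpl in sorted(TEMPLATES.glob("*.tmpl")):
            text = Template(tmpl.read_text()).substitute(params)
            (OUTDIR / tmpl.stem).write_text(text)

    The same dict can also drive the monitoring-config and HA-script fragments, which is where this approach gains over editing each file by hand.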

  • Ubuntu Software RAID 0 on AWS Does Not Survive Reboot

    - by Eric J.
    I'm experimenting with creating a software RAID 0 device from 4 EBS volumes on Ubuntu 9.10 running on Amazon AWS, following this guide: http://alestic.com/2009/06/ec2-ebs-raid The device appears (and according to SysBench it is 3.5x faster than a regular attached EBS volume). The problem is that when I reboot the instance, all files on the RAID device are gone. The device is available and mounted where expected, but it contains no files. I am able to write new files to it, which survive until the next reboot.
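
    A hedged guess at the cause: if the md array is not reassembled and remounted by the boot scripts, writes land in the bare mount-point directory and are hidden again whenever the array is mounted over it, so files appear to vanish across reboots. A sketch of persisting both the assembly and the mount, with hypothetical device names and an XFS filesystem assumed:

        import subprocess

        MD = "/dev/md0"          # hypothetical array device
        MOUNT = "/data"          # hypothetical mount point

        # Record the array definition so the initscripts/initramfs can
        # reassemble it at boot (the path is /etc/mdadm.conf on some distros).
        scan = subprocess.check_output(["mdadm", "--detail", "--scan"])
        with open("/etc/mdadm/mdadm.conf", "ab") as f:
            f.write(scan)

        # Make sure the filesystem is remounted at boot as well.
        with open("/etc/fstab", "a") as f:
            f.write("%s %s xfs noatime 0 0\n" % (MD, MOUNT))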

  • Filtering iTunes library items by file location

    - by Cawas
    Three answers and unfortunately no solution yet.

    The problem: I've got way more than 1000 duplicated items in my iTunes library pointing to a non-existent place (the "Where" field in the Get Info window), along with other duplicated items and other MIAs (missing in action). Is there any simple way to delete all of them, and only them, from the library? Some MIAs point to /Volumes while others point to .../music/Music/... or just .../music/...; I want to delete everything pointing to /Volumes, and I'll recover the rest later. Check the image below.

    Some background: I tried searching for a specific keyword in the path and creating a smart playlist, but with no result. Being able to just sort the whole library by path would be a perfect solution! I believe old versions of iTunes could do that. PowerTunes can sort by path, but I can't do anything with its list. I would also welcome any program able to handle this, then import and properly export back the iTunes library.

    AppleScript doesn't work, because AppleScript just can't gather the missing info anywhere in the iTunes library. Maybe we could use AppleScript by opening the XML file, but that's a whole other issue. Here's a quote from my conversation with Doug Adams, the man himself, last December:

        Doug: I don't think you do understand. There is no way to get the path to the file of a dead track because iTunes has "forgotten" it. That is, by definition, what a dead track is.

        Me (Dec 21, 2010, 7:08 AM): Yes, I understand that and have seen the script, but I'm not looking for the file - just the old broken path reference to it.

        Doug (21/12/2010, 10:00): You cannot locate missing files of dead tracks because, by definition, a dead track is one that doesn't have any file information. If you look at "Super Remove Dead Tracks", you will notice it looks for tracks that have "missing value" for the location property.
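
    Since parsing the XML came up: the iTunes Music Library.xml does keep the last-known Location for each track, so a short script can list exactly the items whose paths point under /Volumes (or that have no location at all). A sketch in Python, assuming the default library path; it only prints Track IDs and names for you to act on, since editing the XML does not change the library itself.

        import plistlib
        from urllib.parse import unquote

        LIB = "/Users/you/Music/iTunes/iTunes Music Library.xml"  # adjust path

        with open(LIB, "rb") as f:
            library = plistlib.load(f)

        for track in library["Tracks"].values():
            loc = unquote(track.get("Location", ""))
            # Older iTunes versions wrote file://localhost/... URLs.
            if not loc or loc.startswith(("file:///Volumes/",
                                          "file://localhost/Volumes/")):
                print(track.get("Track ID"), track.get("Name"),
                      loc or "<no location>")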

  • EBS with RAID0 (striping) and restoring snapshots

    - by grourk
    We have a MySQL database on EC2 and are looking at disk I/O performance there. Currently we have a single EBS volume with XFS and take snapshots for backup. It seems that a lot of people have seen significant performance gains by striping across multiple EBS volumes with software RAID. If we do this, how do we take snapshots and ensure the consistency of the file system? It seems to me that restoring the file system from multiple snapshots could be tricky.
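
    The usual trick is to freeze the filesystem for the instant it takes to initiate the snapshots, so every stripe member is captured at the same filesystem state; snapshots only need to be started, not finished, while the freeze holds (for MySQL, also take FLUSH TABLES WITH READ LOCK first). Alestic's ec2-consistent-snapshot script automates this. A minimal sketch of the same sequence, using boto3 and hypothetical mount point and volume IDs:

        import subprocess
        import boto3

        MOUNT = "/var/lib/mysql"                    # XFS filesystem on the array
        VOLUMES = ["vol-1111aaaa", "vol-2222bbbb"]  # hypothetical member volumes

        ec2 = boto3.client("ec2")

        subprocess.check_call(["xfs_freeze", "-f", MOUNT])      # quiesce writes
        try:
            snaps = [ec2.create_snapshot(VolumeId=v, Description="raid0 member")
                     for v in VOLUMES]
        finally:
            subprocess.check_call(["xfs_freeze", "-u", MOUNT])  # thaw right away

        print("started:", [s["SnapshotId"] for s in snaps])

    To restore, create volumes from the whole snapshot set, attach them, and let mdadm reassemble the array; because the members were frozen together, the striped filesystem comes back consistent.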

  • How to get physical partition name from iSCSI details on Windows?

    - by Barry Kelly
    I've got a piece of software that needs the name of a partition in \Device\Harddisk2\Partition1 style, as shown e.g. in WinObj, and I want to derive this partition name from the details of the iSCSI connection that underlies the partition. The trouble is that the disk order is not fixed: depending on which devices are connected and initialized in which order, it can move around. So suppose I have the portal name (the DNS name of the iSCSI target), the target IQN, etc.; I'd like to discover, in an automated fashion, which volumes in the system relate to it. I can write some PowerShell WMI queries that get somewhat close to the desired info:

        PS> get-wmiobject -class Win32_DiskPartition

        NumberOfBlocks   : 204800
        BootPartition    : True
        Name             : Disk #0, Partition #0
        PrimaryPartition : True
        Size             : 104857600
        Index            : 0
        ...

    From the Name here, I think I can fabricate the corresponding name by adding 1 to the partition number: \Device\Harddisk0\Partition1 (Partition0 appears to be a fake partition mapping to the whole disk). But the above doesn't have enough information to map to the underlying physical device, unless I take a guess based on exact size matching. I can get some info on SCSI devices, but it's not helpful in joining things up (the iSCSI target is Nexenta/Solaris COMSTAR):

        PS> get-wmiobject -class Win32_SCSIControllerDevice

        __GENUS    : 2
        __CLASS    : Win32_SCSIControllerDevice
        ...
        Antecedent : \\COBRA\root\cimv2:Win32_SCSIController.DeviceID="ROOT\\ISCSIPRT\\0000"
        Dependent  : \\COBRA\root\cimv2:Win32_PnPEntity.DeviceID="SCSI\\DISK&VEN_NEXENTA&PROD_COMSTAR...

    Similarly, I can run queries like these:

        PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_TargetClass
        PS> get-wmiobject -namespace ROOT\WMI -class MSiSCSIInitiator_PersistentDevices

    These return information relating to my iSCSI target name and the GUID volume name respectively (a volume name like \\?\Volume{guid-goes-here}), but the GUID volume name is no good to me, and there doesn't appear to be a reliable correspondence between the target name and the volume that I can join on. In short, I can't find an easy way of getting from an IQN (e.g. iqn.1992-01.com.example:storage:diskarrays-sn-a8675309) to the physical partitions mapped from that target. How do I do it by hand? I open Disk Management, look for a partition of the correct size, verify that its driver says NEXENTA COMSTAR, and note the disk number. Even this is unreliable if I have multiple iSCSI volumes of exactly the same size. Any suggestions?
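
    One possible join is the initiator's session class, whose Devices property carries the Windows device number for each LUN on a session; from that number, \Device\HarddiskN\PartitionM can be composed. A sketch in Python using the third-party wmi module (pip install wmi); the class and property names (MSiSCSIInitiator_SessionClass, Devices, DeviceNumber, LegacyName) are quoted from memory of the iSCSI initiator WMI schema, so verify them on your system before relying on this.

        import wmi  # third-party: pip install wmi

        TARGET_IQN = "iqn.1992-01.com.example:storage:diskarrays-sn-a8675309"

        c = wmi.WMI(namespace=r"root\wmi")
        for session in c.MSiSCSIInitiator_SessionClass():
            if TARGET_IQN not in (session.TargetName or ""):
                continue
            for dev in session.Devices or []:
                # DeviceNumber should be the N in \Device\HarddiskN;
                # LegacyName is typically \\.\PhysicalDriveN for disks.
                print(session.TargetName, dev.DeviceNumber, dev.LegacyName)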

  • Freeware (preferably open-source) tool for creating multi-file spanning archives as a self-merging SFX

    - by Lockszmith
    I have a large file that I want to transfer using Internet storage hosting, DVD-Rs, or USB storage, which is sometimes limited to FAT file systems (for example, on mobile phones). What I'm basically looking for is a tool that creates multiple files/volumes (less than 2GB each, FAT's file-size limit) packed with a self-extracting executable. Currently the only tool I've found that does this is WinRAR, but that's shareware, not free. Is there any free, preferably open-source, tool that does this? Thanks in advance.
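
    If a plain split-and-merge is acceptable in place of a true self-extractor (7-Zip, as far as I know, refuses to combine its SFX option with multi-volume output, which is why WinRAR fills this niche), the mechanics are easy to script. A sketch in Python; the chunk size is a hypothetical figure kept under the 2GB limit mentioned above, and each part may overshoot it by at most one buffer.

        import glob, shutil

        CHUNK = 1_900_000_000      # stay safely under the FAT file-size limit
        BUF = 16 * 1024 * 1024

        def split(path):
            part_no, written, part = 0, 0, None
            with open(path, "rb") as src:
                while True:
                    buf = src.read(BUF)
                    if not buf:
                        break
                    if part is None:
                        part = open("%s.%03d" % (path, part_no), "wb")
                    part.write(buf)
                    written += len(buf)
                    if written >= CHUNK:   # close this part, start the next
                        part.close()
                        part, written, part_no = None, 0, part_no + 1
            if part:
                part.close()

        def merge(path):
            # Rebuilds the original from path.000, path.001, ... in order.
            with open(path, "wb") as dst:
                for name in sorted(glob.glob(path + ".[0-9][0-9][0-9]")):
                    with open(name, "rb") as src:
                        shutil.copyfileobj(src, dst, BUF)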

  • Bacula backup process always blocks the restore

    - by georgehu
    Every day we have a long-running catalog backup process, and I've found there is no way to restore a file while the backup is running. Is Bacula designed to block restores while a backup is in progress? I'm using disk-based backup, and I don't understand why I can't restore files from earlier written volumes, since the backup process should not be writing to the same volume file.

  • How to encrypt and share a directory on OS X via NFS?

    - by dgAlien
    We have an OS X desktop environment with NFS shares, using Linux/VMs as NFS clients, and we want to encrypt the NFS data/directories on our OS X machines. Is that possible? Apple's FileVault uses Kerberos, but FileVault data isn't accessible via NFS. Is there a way to use FileVault anyway, or should we use TrueCrypt volumes? How would we set up TrueCrypt/FileVault + NFS?

  • Suppress "running out of disk space" Message (per drive) on Windows Server 2003

    - by Shoeless
    We have a database server with separate drives for the OS, various data files, and the transaction log. Our transaction log spills over onto other volumes as well; this is expected behavior. The problem is that we constantly get popups saying that our transaction-log drive is out of space (and that we can free space by deleting old or unnecessary files). Is there some way to prevent this message from popping up for this particular drive?
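
    As far as I know, Windows has no supported per-drive switch for this balloon; the known workaround is the Explorer policy value NoLowDiskSpaceChecks, which silences the low-disk-space check for all drives for that user. A sketch of setting it from Python (log off and back on, or restart Explorer, for it to take effect):

        import winreg

        key = winreg.CreateKey(
            winreg.HKEY_CURRENT_USER,
            r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer")
        # 1 disables Explorer's low-disk-space checks -- note this applies
        # to every drive, not just the transaction-log volume.
        winreg.SetValueEx(key, "NoLowDiskSpaceChecks", 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)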

  • Bacula Volume Retention and Automatic Recycling

    - by Kyle Brandt
    Will a Bacula volume be recycled if:
    - no more volumes are appendable,
    - the date of the last job on it is past the volume retention period, and
    - jobs on that volume are not yet past the job and file retention periods?
    This is the way the manual reads to me, but I saw a post saying all retention periods have to be up before a volume is recycled (which makes less sense to me). Does anyone know for sure from experience?
