Search Results

Search found 777 results on 32 pages for 'volumes'.

Page 21/32 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • RAID 5 is off on my LeftHand P4500

    - by Soeren
    I have one cluster with several P4500s. They use hardware RAID 5, split into two sets: the first six disks form one RAID 5 array and the other six form another. Two disks came up with a fault, and I replaced both disks in the same RAID 5 set. Now the storage system says the volumes are inaccessible and the RAID 5 on the P4500 is down. How can I get the LeftHand up and working again? I've tried putting the old disks back into the P4500, but that doesn't help. I also tried restarting the disk shelf. Do you have any ideas, or anything else I could try to get it working again?

    Read the article

  • Issue with Windows Server backup

    - by mamu
    I have Windows Server 2008 R2 installed; the only role running on it is Hyper-V. I am trying to take a backup using the Windows Server Backup feature, and it fails with the following error in the event log:

    The backup operation that started at '2009-08-22T18:42:14.123000000Z' has failed because the Volume Shadow Copy Service operation to create a shadow copy of the volumes being backed up failed with following error code '2155348129'. Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.

    The error itself points to other event logs for more detail, but I can't find anything in them. I then ran the following command:

        vssadmin list writers

    It had the following out-of-the-ordinary entry in the list:

        Writer name: 'Microsoft Hyper-V VSS Writer'
           Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
           Writer Instance Id: {d15c5f78-121c-464f-b23b-f285e919b05c}
           State: [8] Failed
           Last error: Inconsistent shadow copy

    How can I resolve this?
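
    A hedged first step, assuming the Hyper-V VSS writer is merely stuck rather than broken: restarting the Hyper-V Virtual Machine Management service (service name vmms) usually resets the writer state, after which the writer list can be re-checked. A sketch using standard Windows tooling only:

        net stop vmms
        net start vmms
        vssadmin list writers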

    Read the article

  • Rebooting an EC2 Instance

    - by ABrown
    I'm working on a project involving EC2 and I'm having a difficult time wrapping my head around this concept. With EC2 instances, will non-EBS-backed volumes (standard instance storage) survive a reboot of the OS? For example, I have an Ubuntu instance. If I type "/sbin/shutdown -r now", will I lose all data on the drive that isn't in the AMI? I understand that if I terminate the instance via the tools or the control panel I lose everything, but I can't find a concrete answer on the restart issue. An extra gold star goes to anyone who can link to documentation clearly explaining this. ;) Thanks for your time!

    Read the article

  • Using rsync to synchronise folders without overwriting files of same name on Mac OS X

    - by Adam
    I would like to synchronise the contents of two directories:
    - without overwriting, but creating a copy if two files have the same name but different sizes;
    - without duplicating, if two files have the same name and size;
    - working recursively.
    So far I have found the following command, which might work:

        $ rsync -varE --progress ~/folder /volumes/server/folder

    But I'm not entirely sure what the -E flag does. It was suggested by a user on bananica.com, but I couldn't see a description of it in the manual. Would this do what I require? Thanks
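
    A possible sketch, assuming Apple's bundled rsync 2.6.9 (where -E copies extended attributes and resource forks): --backup with --suffix keeps the existing destination file under a new name instead of overwriting it, and files that already match (same size and modification time) are skipped by default; adding --size-only would make size alone the test:

        # trailing slashes sync contents-to-contents rather than nesting the folder
        rsync -avE --progress --backup --suffix=.previous ~/folder/ /Volumes/server/folder/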

    Read the article

  • Any programs for getting rid of .DS_Store files? [closed]

    - by mcandre
    Possible Duplicate: How to prevent Mac OS X creating .DS_Store files on non Mac (HFS) Volumes? I dual boot between Mac and Windows. When I browse my Windows partition with Finder, it drops little .DS_Store turds all over the folders. They show up when I boot back into Windows. Right now I've got one on my Desktop, sigh. Are there any (free) programs I can use to stop this from happening? I know, I know, there's a Finder setting to stop dropping .DS_Store files on network drives, but my local Windows partition is NOT a network drive.
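
    Not a prevention switch, but as a stopgap the existing files can be swept off the Windows partition while booted into OS X; a sketch, where /Volumes/BOOTCAMP is an assumed name for the mounted Windows partition:

        find /Volumes/BOOTCAMP -name '.DS_Store' -type f -delete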

    Read the article

  • Windows Server 2008 R2 and OSX 10.5/.6/.7

    - by Keith Loughnane
    I'm handling a migration from an old Mac server to a Windows Server 2008 R2 machine with a 12 TB (10 TB usable) RAID 5 array. It's shared over SMB, and the OS X 10.5/10.6 users can now only search intermittently; when it does work, it takes up to 10 minutes. The OS X 10.7 machine seems to be fine. I've looked in the root of the shared drive for a .Spotlight-V100 file (ls -a), but it doesn't seem to be there. mdutil says indexing is on for that volume, and I have cleared the index using mdutil -E /Volumes/MeSharedVolume numerous times. Any ideas?
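
    One hedged diagnostic, assuming the share is mounted at /Volumes/MeSharedVolume on the clients: check what Spotlight actually thinks of the mount before clearing the index again, since SMB mounts frequently report no index at all, which would explain slow brute-force searches:

        mdutil -s /Volumes/MeSharedVolume           # report indexing status for this volume
        sudo mdutil -i on /Volumes/MeSharedVolume   # (re)enable indexing if it reports off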

    Read the article

  • Unable to mount Amazon Public Dataset using ec2-create-volume

    - by the0ther
    I am trying to use a Public Dataset with the snapshot id snap-e1608d88. I am looking at these instructions, but they do not seem to help. The first suggestion there says I should click on Volumes and create a new volume, setting its size and availability zone as well as specifying the snapshot id. The problem is, the snapshot id is a dropdown, not a text field, and there are over 100 options in the dropdown. Next I installed the EC2 command-line tools and tried to run the ec2-create-volume command. For my first attempt I tried:

        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1

    but that gave output indicating I need to provide a certificate with the --cert switch. Which certificate exactly? I tried my SSH key at ~/.ssh/id_rsa. No dice. I got the following Java error: "org.codehaus.xfire.fault.XFireFault: General security error;"
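
    For what it's worth, the old EC2 API tools want the X.509 certificate and private key from the AWS security credentials page rather than an SSH key, and --availability-zone expects a zone such as us-east-1a, not the bare region. A sketch under those assumptions (file names are placeholders):

        export EC2_PRIVATE_KEY=~/.ec2/pk-XXXXXXXX.pem
        export EC2_CERT=~/.ec2/cert-XXXXXXXX.pem
        ec2-create-volume --snapshot snap-e1608d88 --availability-zone us-east-1a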

    Read the article

  • Windows XP 32-bit + RocketRaid 622 + 4 x 3TB = not quite a RAID setup

    - by gmoney
    I'm looking to make a 6TB RAID 10 array from my new pile of drives under Windows XP 32-bit; they are only for auxiliary storage. After adding all the drives to an array and initializing them, XP sees only a fraction of the storage: 2TB. I'm assuming this has to do with MBR vs. GPT. Is making a series of 2TB volumes and then spanning them my only solution? Most questions online have to do with booting from this setup, but I'm just using the drives as extra storage. Hardware: 4 x 3TB Hitachi Deskstars + RocketRAID 622 + Sans Digital TR8M TowerRAID. The array is connected via eSATA.

    Read the article

  • How do I move an Amazon micro instance to a small instance?

    - by Navetz
    I want to move my instance from a micro instance to a small instance, but when I try to launch a new AMI based on my micro instance's AMI, it only gives me the option of 64-bit instances. My initial AMI is based off an Ubuntu 10.04 image. Is it not possible to move between 64-bit and 32-bit instances? Would it be possible to use a load balancer to have a 32-bit instance and a 64-bit instance work together? I have a website/web app that I will be uploading huge volumes of data to. I will be starting with 65 GB of images and then moving up to 100+ GB. I am not sure which instance type would be best for this. I was going to use a load balancer and auto scaling to increase the number of instances when the load is high. Also, when using a load balancer, does one of the AMI instances become the primary image and the rest act as clones of it?

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems strange, since our target audience is UK based and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, so it seems kind of strange. Later in the day we get traffic from the UK as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:
    - sitemap.xml
    - proper caching policies across the board
    - Google Merchant
    - Dublin Core microdata
    - HTML5
    - pretty URLs
    - meta and content are reviewed as an ongoing concern
    - decent sitelinks for direct queries through Google on the site name
    - a decent amount of inbound links
    - FB, Twitter, Google +1
    - Google Maps listing [verified]
    The site has been selling for ~4 months and is getting ~250 users per day, so I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article

  • Logical volume that spans raid1 sets: what happens if a RAID fails?

    - by Jeff Shattock
    Consider the following scenario:
    - /dev/md0 - 10GB RAID 1 volume built from /dev/sda and /dev/sdb
    - /dev/md1 - 10GB RAID 1 volume built from /dev/sdc and /dev/sdd
    - /dev/vg0 - volume group containing md0 and md1
    - /dev/vg0/lv0 - 15GB logical volume
    The RAID devices are created with mdadm, the logical volumes by LVM. What happens to lv0 if md0 fails entirely? That is, if both sda and sdb disintegrate so that the md0 device cannot start. Is the portion of the data that resided on md1 still accessible, or is the entire LV gone? Would the answer change if lv0 were created as a striped volume vs. non-striped?
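
    Not the failure semantics themselves, but a hedged way to see exactly which PV each part of lv0 lives on, which is what decides how much data md0 takes down with it; standard LVM reporting commands:

        sudo lvs -o +devices            # list LVs with the devices backing each segment
        sudo pvdisplay -m /dev/md0      # physical-extent map for one PV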

    Read the article

  • Extract duplicity difftar files manually

    - by isnogud
    I have a duplicity backup which I am not able to recover with duplicity. Calling duplicity file:///path/to/backups /path/to/dir returns "Local and Remote metadata are synchronized, no sync needed." but /path/to/dir is empty. I decrypted all the backup volumes and I'm able to view and extract the files from the various difftar files. My only problem is that some files are partitioned and saved in folders named after the files. Can anyone give me a simple script, or at least a hint, on how to untar these difftar files so I get the actual files instead of the partitioned ones?
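
    A hedged sketch of the manual reassembly, assuming the layout duplicity normally uses inside difftars: oversized files are split into numbered chunks under multivol_snapshot/<original-path>/<filename>/, so concatenating the chunks in numeric order restores the file (paths below are placeholders):

        tar -xf duplicity-full.20120101T000000Z.vol1.difftar   # one decrypted volume
        cd multivol_snapshot/path/to/bigfile
        ls | sort -n | xargs cat > /tmp/bigfile                # reassemble in part order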

    Read the article

  • How can I prevent JungleDisk/MacOS X (10.6) creating a local volume for a removed external drive?

    - by Rew
    OK, here is the situation: I use JungleDisk to sync an online folder onto an external drive connected to my Mac. If I right-click in Finder, click Go to Folder... and type /Volumes/, I see the drive linked there. Once I remove the external drive, an actual folder is created there with the name of the external drive, and JungleDisk continues to copy files into this folder rather than stopping. Is this a feature of Mac OS X? Can I turn it off? After I re-connect the external drive, the link to the drive is appended with a 1 (so if I called the drive SpareDrive, it becomes SpareDrive 1, as the newly created folder is called SpareDrive). I realise my explanation isn't very clear, but if anyone understands this and knows how to prevent it happening, please let me know. PS: I have a low reputation as I don't use this often; I tend to use stackoverflow, but will check back here for answers.
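
    Not a JungleDisk fix, but a hedged cleanup: with the drive disconnected and JungleDisk paused, deleting the leftover plain folder at the mount point lets the real drive reclaim its name on reconnect. SpareDrive is the example name from the question; make sure anything inside it has already been synced elsewhere first:

        sudo rm -r "/Volumes/SpareDrive"   # remove the stale folder, not the real drive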

    Read the article

  • Partition tool with console UI (as in server installation)?

    - by lepe
    Back in 2006, Ray (3DLover) posted the same question at http://ubuntuforums.org/showthread.php?t=309680, but none of the answers were really useful. Now, with a little help from the AskUbuntu community, I would like to repeat his question to see if this time it can be answered properly. So this is the question (and what I wish for too): I'm looking for a UI tool for managing partitions in a console. I have installed Ubuntu Server, so I don't have X Windows at all. fdisk and sfdisk are entirely command-line. parted is slightly better, but it's not really a UI. cfdisk has somewhat of a UI, but it only works on one disk at a time, and there are no advanced options like configuring LVM or RAID - just partitioning. I love the partition tool that is available during the OS install procedure. You can partition, configure RAID and LVM sets, format the partitions with several different filesystems, set labels and mount options, and have your volumes inserted into your fstab. Is this tool available as a stand-alone program? I can't find it anywhere. I think it's called parted_server, but I can't find much information about where to get it. In the past, I have run the Ubuntu install procedure just to use the partition manager that comes with it (cancelling the install after making my partition edits). Can anyone help me with this? Thanks -Ray Thanks in advance.
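
    Not the installer's own partitioner, but on a headless server parted can at least be driven non-interactively; a minimal sketch (the device name is an assumption), with LVM and RAID still left to the lvm2 and mdadm tools afterwards:

        sudo parted /dev/sdb --script mklabel gpt mkpart primary ext3 1MiB 100%
        sudo mkfs.ext3 /dev/sdb1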

    Read the article

  • Developing an Interface to a Dynamic System

    - by radix07
    I work for a small company and have been designing a GUI to interface with our embedded system. The problem with this embedded system is that it is not a finished product (and may never be); it is constantly under development and being tweaked and updated for different customers and applications in small volumes. To deal with this, I made a program that exports all the data from the spreadsheet where most of the embedded-system variables are sourced, and throws it into a small database for the GUI application to use. This database program also spits out a cross-reference file for the embedded system, which allows the GUI to look up all the variables. The system works pretty well so far, and is even integrated with version control across the GUI, database, and embedded system. The big problem is that there is constant development on several projects that use this system, and it gets terribly tedious to keep everything up to date and bring in new changes. It has gotten to the point where I have had to code the GUI to generate all interfaces dynamically (generically), since I am never guaranteed to find the same data the same way. I have not been able to come up with a good way to uniquely identify the data I import from Excel, since all fields can change (due to engineering stubbornness, code refactoring and/or Excel issues) and I cannot assign a fixed reference within the sheet itself. So, are there any good methods or ideas for handling the chaos?

    Read the article

  • Strange zsh autocomplete behaviour

    - by Leda
    Every time I use tab autocompletion with zsh, instead of completing the current string it gives me a new string plus options to complete. It's hard to explain, so here is an example. This is what happens if I type 'ls Neu' and hit tab:

        [me@mbp:/Volumes/hdd/music]: ls Neu
        ls Neu
        Neuraxis/  Neurosis/  Neutral\ Milk\ Hotel/

    If I delete the second 'ls Neu', I am unable to delete the whitespace and the first one. If I hit return, it is as if I have just entered a blank line. Does anyone know what is going on?
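
    One hedged guess: the symptom resembles zsh printing the completion listing without redrawing the prompt above it, which the always_last_prompt option controls; whether that matches this particular setup is an assumption. In ~/.zshrc:

        setopt always_last_prompt   # redraw the prompt after showing the completion list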

    Read the article

  • Where does truecrypt store the backup volume header?

    - by happygolucky
    When using WDE, where (if anywhere) does TrueCrypt store a backup volume header? I know there is always a backup header for regular TrueCrypt volumes, but I am not sure whether this applies when system encryption is used, because if I damage the volume header in track 0, my password won't boot my system anymore. So is there no backup header on the drive? I read somewhere on a forum that TrueCrypt might keep a backup header at some position relative to the END of the HDD, but this doesn't make sense, as it could easily be overwritten by programs running in Windows. And how would TrueCrypt know where this backup is, anyway?

    Read the article

  • Red Hat Kickstart: How do I prevent partitioning?

    - by frio
    Hey all, I'm currently working on a new virtualisation setup using Xen and CentOS for my workplace. We intend to deploy the domUs into LVM volumes. Currently, the only thing preventing this from working as smoothly as we'd like is the Kickstart script's insistence on partitioning. This is the relevant part of our current KS template (which I've been messing with):

        # Partitioning
        clearpart --all --initlabel --drives=xvda
        part / --size=0 --grow --ondisk=xvda --fstype=ext3

    This sets up a single partition and installs to it - which would be fine, but I'd prefer there were no partitions at all, installing directly to the existing LVM volume (so that we could then mount it from the dom0 for backup and maintenance purposes). It's possible I'm doing something wrong and should be exporting the volume as xvda1 rather than xvda - which I'm more than happy to amend - but I'm still not sure how I'd navigate the Kickstart! I'd really appreciate any help :). Cheers in advance!
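
    If anaconda will accept it for the exported volume, a hedged alternative is to point the install at an existing partition rather than letting Kickstart carve its own; --onpart is documented Kickstart syntax, but whether it behaves with a Xen xvda device in this setup is an assumption:

        # use the existing first partition instead of repartitioning the disk
        part / --fstype=ext3 --onpart=xvda1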

    Read the article

  • Linux virtual disk striping or multi-path Samba share?

    - by wachpwnski
    I am trying to build a file-storage box for media. It needs to span two or more directories or partitions as one share. There are a few solutions, but also reasons why I want to avoid them, among these:
    - Using LVM2 for striping. I don't really have the resources to back up everything on the volumes in case one HDD goes south; I would end up losing everything. Maybe there is a better option that prevents data loss, with hot-swappable drives or some kind of RAID.
    - Using symbolic links in the share. This will get tedious every time a new sub-directory is added.
    Is there some kind of software RAID I can use to merge two directories virtually? I am aware of the issue where /dev/hda1/media/file.1 and /dev/hdb1/media/file.1 both exist, but I'm sure there are some creative solutions for this.
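
    One technique matching the "merge two directories virtually" idea is a union filesystem rather than RAID; a hedged sketch using mhddfs (a FUSE filesystem; the package name is Debian-style and the mount points are assumptions), which presents several directories as one tree and, when the same name exists on both branches, serves the file from the first branch listed:

        sudo apt-get install mhddfs
        sudo mhddfs /mnt/disk1,/mnt/disk2 /srv/media -o allow_other   # then share /srv/media via Samba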

    Read the article

  • Best way to copy large amount of data between partitions

    - by skinp
    I'm looking to transfer data between two LVs on an HP-UX server. I have a couple of these transfers to do, some of which are mostly binary (Oracle tablespaces...) and some of which are more text files (logs...). The used data size of the volumes is between 100 GB and 1 TB. Also, I will be changing the block size from 1K to 8K on some of these partitions. Things I'm looking for:
    - guaranteed data integrity
    - the fastest data transfer speed
    - preserved file ownership and permissions
    Right now I've thought about dd, cp and rsync, but I'm not sure which is best to use, or the best way to use it...
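
    For the file-level copies (the block-size change rules out a raw dd of the LV), a hedged rsync invocation: -a keeps ownership and permissions, -H keeps hard links, --numeric-ids avoids UID remapping, and a --checksum dry run afterwards double-checks integrity. Whether HP-UX's rsync build carries all of these flags is an assumption:

        rsync -aH --numeric-ids --progress /mnt/src/ /mnt/dst/
        rsync -aH --numeric-ids --checksum --dry-run /mnt/src/ /mnt/dst/   # should report no transfers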

    Read the article

  • Why is my portable WD My Passport drive not recognized?

    - by kloop
    My "My Passport" (Western Digital) portable drive is not recognized by OS X. It used to be recognized, but not anymore. It does not appear in /Volumes. The hard drive is recognized by a Linux machine. I am not sure what happened -- any ideas how to fix that? Thanks.

    EDIT: here is the disk listing:

        /dev/disk0
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:      GUID_partition_scheme               *750.2 GB  disk0
           1:                        EFI               209.7 MB   disk0s1
           2:                  Apple_HFS Macintosh HD  749.3 GB   disk0s2
           3:                 Apple_Boot Recovery HD   650.0 MB   disk0s3
        /dev/disk1
           #:                       TYPE NAME          SIZE       IDENTIFIER
           0:     FDisk_partition_scheme               *1.0 TB    disk1
           1:                 DOS_FAT_32 MY PASSPORT   1.0 TB     disk1s1
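
    Given that the listing does show the FAT32 partition as disk1s1, a hedged next step is to try mounting it by hand and read whatever error comes back (standard OS X tools; the mount-point name is arbitrary):

        diskutil mount disk1s1
        # or, more manually:
        sudo mkdir -p /Volumes/passport
        sudo mount -t msdos /dev/disk1s1 /Volumes/passport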

    Read the article

  • Poor disk performance with high disk capacity usage

    - by GoldenNewby
    I've heard numerous times in the web-hosting industry that using "too much" disk space on a drive is bad for performance. Is this just a myth? Can someone explain why this is an issue, even in a situation where the amount of I/O done to the drive would be the same at 10% full as it would be at 90%? I'm especially curious about the case of virtual servers: if I set up 10 logical volumes as the virtual disks for some VMs, is it going to run better if I "waste" 20% of the disk space?

    Read the article

  • Software RAID underneath ESXi datastore

    - by carlpett
    I'm building a virtual environment for a small business. It is based around a single ESXi 5.1 host, which will host half a dozen or so VMs. I'm having some doubts about how to implement the storage, though. I naturally want the datastore to be fault-tolerant, but I can't get the funds for a separate storage machine, nor for expensive hardware RAID solutions, so I would like to use software RAID (LVM/mdadm, most likely). How can this be implemented? My only idea so far is to create a VM which has the storage adapter as passthrough, put software RAID on top of the disks, and then present the resulting volumes "back" to the ESXi host, which then creates a datastore from which the other VMs get their storage. This does seem kind of round-about; do I have any better options? From my research, passthrough seems to come with quite a few drawbacks, such as no suspend/resume, etc.
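
    For reference, a hedged sketch of what the storage VM in that design would run: mdadm mirroring the passed-through disks, with the result exported back to the host (NFS being one common way to present it as an ESXi datastore; device names, network range, and paths are assumptions):

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        sudo mkfs.ext4 /dev/md0
        sudo mkdir -p /srv/datastore && sudo mount /dev/md0 /srv/datastore
        echo '/srv/datastore 192.168.0.0/24(rw,no_root_squash,sync)' | sudo tee -a /etc/exports
        sudo exportfs -ra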

    Read the article

  • Turning a running Linux system into a KVM instance on another machine

    - by Charles
    I have two physical machines that I wish to virtualize. I cannot (physically) plug the hard drives from either machine into the new machine that will act as their VM host, so I think that copying the entire structure of each system over using dd is out of the question. How can I best go about migrating these machines from their hardware to the KVM environment? I've set up empty, unformatted LVM logical volumes to host their filesystems, on the understanding that giving the VMs a real block device to work with achieves higher performance than sticking an image on the filesystem. Would I be better off creating new OS installs and rsyncing the differences over? FWIW, the two machines to be VM'd are running CentOS 5, and the host machine is running Ubuntu Server 10.04 for no particularly important reason. I doubt this matters too much, as it's still going to be KVM and libvirt that matter.
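
    A hedged sketch of the rsync route, assuming a minimal CentOS 5 install already sits in the logical volume and the source machine's data is overlaid onto it; the LV name, mount point, and exclude list are assumptions, and the pseudo-filesystems must be left alone:

        sudo mount /dev/vg0/centos5_root /mnt/vm
        sudo rsync -aHx --numeric-ids \
            --exclude=/dev --exclude=/proc --exclude=/sys \
            root@oldhost:/ /mnt/vm/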

    Read the article

  • If a SQL Server Replication Distributor and Subscriber are on the same server, should a PUSH or PULL subscription be used?

    - by userx
    Thanks in advance for any help. I'm setting up a new Microsoft SQL Server replication, and I have the Distributor and Subscriber running on the same server. The Publisher is on a remote server (as it is a production database, and MS recommends that for high volumes the Distributor should be remote). I don't know much about the inner workings of PUSH vs. PULL subscriptions, but my gut tells me that a PUSH subscription would be less resource-intensive because (1) the Distributor is already remote from the Publisher, so this shouldn't negatively affect it, and (2) pushing the transactions from the Distributor to the Subscriber is more efficient than the Subscriber polling the distribution database. Does anyone have any resources or insight into PUSH vs. PULL that would recommend one over the other? Is there really going to be that big a difference in performance/reliability/security?

    Read the article
