Search Results

Search found 1650 results on 66 pages for 'sas san'.

Page 11 of 66

  • Can I get redundancy with a JBOD storage subsystem

    - by Dat Chu
    I have a Promise Technology J610S. This is a JBOD subsystem. Is it possible for me to buy a SAS hardware RAID controller and provide some type of redundancy for these drives? I am unsure whether I will use Linux or Windows yet, so an answer covering both would be highly appreciated. One solution I thought of: if my J610S can export each drive as a target, my server will simply see 16 drives, and the RAID controller can then provide RAID 5/RAID 6 if I want.
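
    On the Linux side, if the J610S really does present all 16 disks individually, one option that needs no hardware controller at all is Linux software RAID. A minimal sketch, assuming the JBOD disks show up as /dev/sdb through /dev/sdq (device names are hypothetical):

        # Build a 16-disk RAID 6 array from the JBOD-presented drives
        mdadm --create /dev/md0 --level=6 --raid-devices=16 /dev/sd[b-q]
        # Watch the initial resync and confirm the array state
        cat /proc/mdstat
        mdadm --detail /dev/md0

    A SAS hardware RAID controller would achieve the same on either OS, as long as it sees the individual drives rather than a single pre-aggregated LUN.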

    Read the article

  • spontaneous hard disk password

    - by sc
    I had an HP ProLiant server go down recently. All of a sudden the SAS controller (E200i) would not see any of the physical disks. New disks were detected just fine. I thought it was odd that all 6 disks would go down at one time, so I sent them to a data recovery firm to find out what happened. I'm being told that, somehow, all of the disks were spontaneously password protected. These are Hitachi 2.5" drives, and I gather this is something of a known issue. The company has worked for a while to try to recover them, with no luck. Has anyone had experience with this? Any recommendations for how to recover the drives, or for a company that might have the expertise to do so?
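
    For what it's worth, if the affected drives are SATA models locked via the ATA security feature (true SAS drives don't implement it), hdparm on a Linux box can at least report the lock state and, if a usable password or master password is known, clear it. A hedged sketch, with /dev/sdX standing in for one of the affected drives:

        # Show the drive's ATA security state (look for "enabled" / "locked")
        hdparm -I /dev/sdX | grep -A 8 "Security:"
        # If a usable password is known, unlock and then disable the security feature
        hdparm --security-unlock "PASSWORD" /dev/sdX
        hdparm --security-disable "PASSWORD" /dev/sdX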

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application with PHP using the Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading lots of files, an SSD would only help on the read/write side? I was thinking of using a high-performance SSD for the PHP application, but for user image uploads I would just continue using a 15k SAS drive. Are there any performance issues with using an SSD in this kind of situation? And would it actually help speed up PHP/Apache and ease the CPU problem?
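
    Before buying hardware, it may be worth confirming where the time actually goes. One rough check is to attach strace to a busy Apache/PHP worker and see whether wall time is spent in file-related syscalls or in plain CPU (the PID below is a placeholder):

        # Summarize syscall counts and time for one busy Apache/PHP worker; Ctrl-C after ~30 seconds
        strace -c -f -p 12345
        # Note that frequently-read PHP files are usually served from the OS page cache after the
        # first hit, so an opcode cache (e.g. APC) is likely to help more than faster disks

    If the summary shows most of the time going to CPU rather than open/read/stat calls, an SSD probably won't move the needle much for the PHP side.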

    Read the article

  • RAID 10 over RAID 5 when using SSDs

    - by root
    I am considering implementing an iSCSI shared storage array using SATA SSDs instead of the 15k RPM SAS drives we normally purchase. We normally use RAID 10 because of spindle contention with the random IO produced by virtualized workloads. I was wondering if we could switch to RAID 5 or RAID 6 to have more usable space now that spindle contention is less of an issue. One question in my mind is how much overhead there is from the controller calculating parity. I am aware that this configuration will not allow TRIM to function. Our current workloads are running on a Dell H800 with a 24-bay external enclosure.
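
    For the usable-space side of the trade-off, the arithmetic is simple enough to sanity-check up front. A purely hypothetical example with all 24 bays populated with 400 GB SSDs (substitute your actual drive size and spare count):

        # RAID 10 usable: 24/2   x 400 GB = 4800 GB
        # RAID 5  usable: (24-1) x 400 GB = 9200 GB
        # RAID 6  usable: (24-2) x 400 GB = 8800 GB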

    Read the article

  • vDS - vCenter Problem

    - by rbmadison
    We are implementing a vSphere farm and are using a distributed switch. The VC is a VM within the farm, connected to the distributed switch. We had a SAN issue and all of our VMs went down. When the SAN recovered and we restarted the ESX host containing the VC, the VC couldn't connect to the network through the vDS. We had to remove a NIC from the vDS on that host, create a regular vSwitch, and connect the VC to that before the VC would connect to the network. Is this typical behavior? If the VC goes down, does all vDS networking stop on all the hosts? That seems like a very bad thing. I thought networking would keep working even though the VC is down, because the hosts have the vDS configuration cached. Is there a better way to configure this to prevent it from happening? We want to keep the VC as a VM for HA and recoverability purposes. Can anyone offer suggestions or explanations? I appreciate the help. Thanks, Rick
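
    For reference, the manual recovery described above can be scripted from the ESX service console with the esxcfg-* tools, which keep working when vCenter is unreachable. A sketch, with the vSwitch, port group, and vmnic names as placeholders:

        # If the NIC is still claimed by the vDS, detach it first:
        #   esxcfg-vswitch -Q vmnic1 -V <dvPort ID> <dvSwitch name>
        # Create a standard vSwitch, give it the uplink, and add a port group for the VC VM
        esxcfg-vswitch -a vSwitch1
        esxcfg-vswitch -L vmnic1 vSwitch1
        esxcfg-vswitch -A "VC Recovery" vSwitch1
        # Confirm the layout
        esxcfg-vswitch -l

    Once the VC is reachable again, the mitigation usually discussed for this scenario is to give the management/VC port group ephemeral binding on the vDS, so a host can connect the VM without vCenter being up.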

    Read the article

  • Cannot increase Datastore

    - by k4w4zz
    Hello, we have an ESX 4.0 cluster with 2 hosts and EMC Clariion SAN storage with 10 LUNs. We have added 2 new 400 GB LUNs. All the LUNs are visible from both hosts. I have extended an existing 500 GB datastore with one of these 400 GB LUNs; the new datastore size is now 900 GB. I'd like to do the same operation with the second 400 GB LUN to extend another existing datastore, but I'm not able to. The LUN is offered for creating a brand-new datastore, but it is not visible for extending an existing one. I don't understand why everything was fine with the other one and why I can't do the exact same operation with this LUN. The result is the same on both hosts. The SAN admin has erased and re-created this LUN several times, and I have rescanned the HBAs each time. Attached you can find the output of the esxcfg-mpath -l and fdisk -l commands from both servers. Does anybody have an idea?
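
    A couple of service-console checks may help narrow down what the hosts think is on the new LUN after the rescans, since a stale or foreign partition table can keep a LUN out of the extent wizard (the vmhba, datastore, and device names below are placeholders):

        # Rescan and list the SCSI devices the host can see
        esxcfg-rescan vmhba1
        esxcfg-scsidevs -c
        # Show the extents currently backing the datastore you are trying to grow
        vmkfstools -P /vmfs/volumes/datastore2
        # Check the partition table on the new LUN
        fdisk -l /dev/sdX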

    Read the article

  • HP/Lenovo alternative to Buffalo iSCSI TerraStation?

    - by Robin Day
    I'm looking at virtualising some of our infrastructure in order to allow for more resilience and future expandability. We have successfully virtualised on single servers with Direct Attached Storage and are now looking for a more future-proof solution using a high-powered host (or two) and a SAN (or two). I'm thinking that the host machine will probably be an HP ProLiant DL360 G7 (all of our existing infrastructure is HP). Unfortunately, I am new to the world of SANs. From what I can see, the Buffalo TerraStation III is all I would need in order to set up an iSCSI SAN for VMware to use. However, I'm a little hesitant to go that way as it's a bit too "entry level" for my liking. In particular I would be very keen on more redundancy, power, networking, etc. I'm also very aware that you "get what you pay for". Therefore, can anyone recommend equivalents from the big boys, HP/Lenovo? I have searched high and low on the HP site and seen many options, but am struggling to work out whether any of them is all the hardware I will need. Some options appear to need separate controllers and disk enclosures, etc.

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course where we discussed the usage scenarios for multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning data. However, with a SAN you obviously don't really have the same disk contention issue that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act on each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
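
    If you do end up adding files, the change itself is simple and can be done online. A hedged sketch (server, database, logical file names, paths, and sizes are all hypothetical), adding a second data file so the engine can spread allocations across both:

        sqlcmd -S MYSQLSERVER -E -Q "ALTER DATABASE MyBigDb ADD FILE (NAME = N'MyBigDb_data2', FILENAME = N'E:\SQLData\MyBigDb_data2.ndf', SIZE = 10240MB, FILEGROWTH = 1024MB)"

    The often-quoted one-file-per-core guidance comes from tempdb allocation contention; for user databases the benefit of extra files depends mainly on whether allocation contention, rather than the SAN itself, is the bottleneck.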

    Read the article

  • EMC ESRS stops working when it is VMotioned

    - by makerofthings7
    EMC is on site and told me: the ESRS SAN monitoring solution will cease to function if that host is vMotioned. In case anyone doesn't know, ESRS is a dial-home solution that works over IP. An EMC SecurID is required to add or modify the list of devices that are monitored. The ESRS software is installed on the customer premises. Question: if ESRS truly fails to work, as the EMC engineer stated and as our own experience suggests, what is it within VMware that is exposed to the virtualized host that allows this behavior to happen?

    Read the article

  • SQL Server iSCSI session issue with a NetApp SAN

    - by Matt Beckman
    We had an issue early this morning when iSCSI problems broke connectivity with a few of our databases (resulting in SQL Server error 21). Attempts to run DBCC CHECKDB did not work, and the only solution was to restart the SQL Server service. Is there a known reason why an iSCSI initiator session would reset itself out of the blue? Example below from the NetApp syslog. This set of errors was repeated 4 times (once for each SQL Server in production); only one SQL Server was noticeably impacted, however.

        [san1: iscsi.notice:notice]: ISCSI: iswta, ISID Rule: new connection from same initiator, shutting down old session 7
        [san1: iscsi.notice:notice]: ISCSI: iswta, New session from initiator iqn.1991-05.com.microsoft:sql1.example.corp at IP addr 10.xxx.xxx.123

    Read the article

  • Problems connecting Clariion LUNS to Solaris 10

    - by vialde
    I've got a Clariion SAN and a Solaris 10 server with an Emulex HBA. The thin LUNs are visible in Solaris and I can happily format the raw devices. Unfortunately, that's all I can do: all other operations result in I/O errors. I'm running current versions of PowerPath, and Solaris is patched as far as it will go. Has anyone had any similar experiences?
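
    A few first checks that may help localize this (device names are placeholders). On a Clariion, I/O errors on LUNs that are otherwise visible are often a path-state problem, e.g. the host only reaching the non-owning storage processor, so the path view is a good place to start:

        # Confirm PowerPath sees both SPs and that the paths are alive
        powermt display dev=all
        # Check the HBA / fabric view from Solaris
        cfgadm -al
        luxadm -e port
        # Look for SCSI transport errors logged when the I/O errors occur
        tail -50 /var/adm/messages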

    Read the article

  • HP StorageWorks P4500 G2 Manager Management

    - by MDMarra
    According to the documentation, a management group should have an odd number of managers greater than one. I have a four-node SAN consisting of P4500 G2s. I plan on having two clusters with two nodes each in this management group, i.e.:

        Management_Group1
          Cluster1
            Node1
            Node2
          Cluster2
            Node3
            Node4

    Are there any issues running standard managers on Node1, Node2, and Node3? After reading the documentation, I'm still unclear about whether or not cluster membership matters for quorum consistency, or if it doesn't matter at all.

    Read the article

  • SQL IO and SAN troubles

    - by James
    We are running two servers with identical software setups but different hardware. The first is a VM on VMware on an ordinary tower server with dual-core Xeons, 16 GB RAM, and a 7,200 RPM drive. The second is a VM on XenServer on a powerful brand-new rack server with quad-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. All runs used sqlio v1.5.SG against a 5,120 MB test file with 8 threads, 8 outstanding I/Os per thread, 120 seconds per test, and the hardware disk cache (but not the file cache) enabled.

    Slow tower server (C:\TestFile.dat):

        8 KB random write:       226.97 IOs/sec,  1.77 MB/s, latency min 0 / avg 281 / max 467 ms (99% of IOs at 24 ms or more)
        8 KB random read:         91.34 IOs/sec,  0.71 MB/s, latency min 14 / avg 699 / max 1124 ms (100% at 24 ms or more)
        64 KB sequential write: 1094.50 IOs/sec, 68.40 MB/s, latency min 0 / avg 58 / max 467 ms (100% at 24 ms or more)
        64 KB sequential read:  1155.31 IOs/sec, 72.20 MB/s, latency min 17 / avg 55 / max 205 ms (100% at 24 ms or more)

    Fast rack server: the first four runs against E:\TestFile.dat all failed with "open_file: CreateFile (E:\TestFile.dat): The system cannot find the path specified", so the tests were re-run against C:\TestFile.dat:

        8 KB random write:      2575.77 IOs/sec, 20.12 MB/s, latency min 1 / avg 24 / max 655 ms (spread mostly between 3 and 14 ms, 37% at 24 ms or more)
        8 KB random read:       1141.39 IOs/sec,  8.91 MB/s, latency min 1 / avg 55 / max 652 ms (91% at 24 ms or more)
        64 KB sequential write:  341.37 IOs/sec, 21.33 MB/s, latency min 5 / avg 186 / max 120037 ms (100% at 24 ms or more)
        64 KB sequential read:  1024.07 IOs/sec, 64.00 MB/s, latency min 5 / avg 61 / max 81632 ms (100% at 24 ms or more)

    Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64 KB sequential write test is incredibly slow on the rack server (68 MB/s on the slow tower vs. 21 MB/s on the rack), and the 64 KB read speed also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.

    Read the article

  • Shared storage setup for Windows

    - by KarmaVille
    This is a n00b question. I want to set up a SAN that will be used as shared storage between multiple Windows 2008 R2 servers. By shared storage, I mean that the files can be seen by all servers. How do I do that? Is it possible to implement this without a dedicated Windows file server? (I don't want replication.) I'm doing this so that I can set up: http://activemq.apache.org/shared-file-system-master-slave.html

    Read the article

  • How to dump the Subject Alternative Name (SAN) from an SSL certificate file

    - by LonelyPixel
    I know that I can dump the entire information from a PEM certificate file with this command: openssl x509 -in certfile -noout -text. And I've already found another direct parameter to show me only the expiry date of a certificate: openssl x509 -in certfile -noout -enddate. But is there also a shortcut to get only the alternative names? Like when a certificate can be used for example.com as well as www.example.com. In the full dump, it's here:

        Certificate:
            Data:
                X509v3 extensions:
                    X509v3 Subject Alternative Name:
                        DNS:www.example.com, DNS:example.com

    I'd just like to save myself the hassle of parsing this output and get only the domain names. Is that possible? Otherwise, what would be the best practice for parsing this output? What can be assumed, and what may change? Could I use a regexp like X509v3 Subject Alternative Name:\s*DNS:(\S+)(?:, DNS:(\S+))*?
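
    One approach, with the caveat that the first form needs a reasonably recent OpenSSL (the -ext option arrived in the 1.1.1 era), while the second just post-processes the text dump and should work on older builds too:

        # Newer OpenSSL: print only the SAN extension
        openssl x509 -in certfile -noout -ext subjectAltName

        # Older OpenSSL: pull the line after the SAN header out of the text dump
        openssl x509 -in certfile -noout -text \
          | awk '/X509v3 Subject Alternative Name/ {getline; print}' \
          | tr -d ' ' | tr ',' '\n' | sed 's/^DNS://'

    Note that the second form only strips DNS: prefixes; IP: or email: entries, if present, would need similar handling.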

    Read the article

  • Best practice RAID groups for EqualLogic PS6510X

    - by 20th Century Boy
    We are thinking about purchasing 4 x EqualLogic PS6510X SANs (the Sumo boxes). Each has 48 x 600 GB 10k SAS drives. They will be stacked to form a logical pool of storage (all in the same location). I understand that when you create a RAID group, it's done on a per-box basis, so one box could be RAID 50, another RAID 10, etc. My question is: should I make one box a "performance" box, i.e. RAID 10, and the other boxes "standard", i.e. RAID 50? How do people configure their EQL arrays in the real world?

    Read the article

  • Storage product testing

    - by wildchild
    Hello, I know this is out of place (being an active member here, I am coming to the seniors for help), but I need some information regarding storage testing: testing of RAID arrays, SCSI, SAS, SATA, and also tests carried out on Fabric Manager (Cisco MDS series switches). I am aware that this is an administrative forum and I would really appreciate it if you could direct me to the correct forums or links where I can learn these things. @moderators: sorry for posting in the wrong place; I will delete this as soon as I get the help. Thanks!
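
    Not a forum pointer, but for the block-device side of that testing a synthetic load generator such as fio covers RAID, SCSI, SAS, and SATA targets alike. A hedged example (the target device is hypothetical, and writing to it destroys its contents):

        # 8 KB random 70/30 read/write mix against a raw test device for 2 minutes
        fio --name=randrw --filename=/dev/sdz --direct=1 --rw=randrw --rwmixread=70 \
            --bs=8k --iodepth=16 --numjobs=4 --runtime=120 --time_based --group_reporting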

    Read the article

  • Areca 1880ix RAID hangs

    - by Dave
    Our Areca RAID controller, an ARC-1880ix-12 (firmware 1.50), hangs under high load. My setup is:

        Chenbro 3U chassis
        Intel S5500BC mainboard
        Xeon 5603 CPU
        16 GB of RAM
        12 Seagate SAS drives ST32000645SS (2 of them as hot spares, 10 as RAID 10)
        Mellanox InfiniBand HBA card

    This server works as external InfiniBand storage for Xen VMs. When the load gets quite high, the Areca's firmware hangs; it becomes unreachable even through Areca's Ethernet adapter. After resetting the server power it returns to normal operation. While the Areca is hung I can confirm that it is powered (the Ethernet link is active) and the InfiniBand HBA works OK. Thanks in advance for any idea or suggestion as to where the problem might be!

    Read the article

  • How to block some disks from probes on Linux boot?

    - by Igor Velkov
    My Linux host is connected to a SAN over an FC interface. It connects with one path and sees some LUNs that it can't access, because they need another path that is not available to the host. On boot, Linux probes all the LUNs it can see, gets read errors on the inaccessible LUNs, and hangs there for a long, long time. Is there a way to disable any access to certain LUNs at boot time, and later? I found filters for ignoring devices in LVM and multipath, but they don't help during the boot process. Generally, LVM is still affected despite the filter and gives me an IO error on every operation like lvdisplay and vgdisplay, but that is another question.
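
    One detail that is easy to miss with the LVM and multipath filters: early in boot they are read from copies inside the initrd, so they only take effect there after the initrd is regenerated. A sketch of the relevant pieces (device paths and the WWID are placeholders, and mkinitrd is the RHEL 5-style tool; other distros use dracut or update-initramfs):

        # /etc/lvm/lvm.conf, inside the devices { } section:
        #   reject the unreachable LUNs, accept everything else
        filter = [ "r|^/dev/sdc$|", "r|^/dev/sdd$|", "a|.*|" ]

        # /etc/multipath.conf - keep multipathd away from them as well
        blacklist {
            wwid 360060160xxxxxxxxxxxxxxxxxxxxxxxx
        }

        # Rebuild the initrd so the filters are honoured during early boot, then reboot
        mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)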

    Read the article

  • CORAID using only 1 of the 2 available NICs for AoE traffic

    - by Peter Carrero
    We have 6 CORAID shelves at my workplace. On 2 of them I see AoE traffic on only 1 of the 2 NICs that are attached to the SAN switch. We have jumbo frames enabled on all devices. Both NICs show up when I issue the aoe-interfaces command. This wouldn't bother me too much if the throughput observed on the "bad" shelves using bonnie++ weren't half that of the "good" shelves. The "good" shelves are the older SR1521 model and have ReiserFS on their LUNs (not that I think it makes a difference), and the "bad" shelves are the newer SR2421 model and have JFS. Any help as to what is going on and how to rectify this would be greatly appreciated. BTW: even the lower-performing shelves outperform another iSCSI device we have, but that is another story... Thanks.
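
    A few things that may be worth checking on the hosts that only use one port, since aoe-interfaces only restricts which NICs the initiator may use rather than forcing both (interface names are placeholders):

        # Confirm which interfaces the aoe module may use, then re-discover targets
        aoe-interfaces eth2 eth3
        aoe-discover
        aoe-stat
        # Check link state, speed, and MTU on the second port
        ethtool eth3
        ip link show eth3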

    Read the article

  • Just curious if anybody ever tried this - Hyper-V R2

    - by tony roth
    I have a server that SAN-boots that I want to P2V. I have many options (Disk2vhd, SCVMM, etc.), but I was thinking about cloning the LUN (FlexClone, NetApp) and presenting it to my Hyper-V R2 server. Within Hyper-V Manager, do a "create new disk" and have it copy the cloned LUN to a VHD file, then do the bcdedit/bootsect stuff to it. Should work, right? I'm also curious whether anybody is booting VHDs that live on bootable LUNs. I've booted native VHDs just fine; I was just curious about running them off a bootable LUN. I think this has quite a few advantages, like near-instant P2V. Any thoughts on this? Hmm, dang, as I was typing this I realized that I should not use the Hyper-V Manager new-disk copy routine; I should just Disk2vhd the mounted LUN. This has the advantage that it should be a lot faster! Thanks.
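
    For the "bcdedit/bootsect stuff", the native-VHD-boot entries usually amount to something like the following, run from an elevated prompt (the path and the {NEW-GUID} returned by the copy are placeholders):

        rem Copy the current boot entry and point the copy at the VHD
        bcdedit /copy {current} /d "VHD Boot"
        bcdedit /set {NEW-GUID} device vhd=[C:]\VHDs\server.vhd
        bcdedit /set {NEW-GUID} osdevice vhd=[C:]\VHDs\server.vhd
        bcdedit /set {NEW-GUID} detecthal on
        rem Refresh the boot sector if the volume came from an older OS
        bootsect /nt60 C: /mbr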

    Read the article

  • Error adding 4 TB LUN (Raw Device Mapping) to ESX4 VM

    - by Tom Gardiner
    Hi guys, I'm trying to map an existing 4 TB LUN from a Fibre Channel SAN through to a VM in my ESX4 environment. It keeps telling me that the VMDK file size exceeds the maximum size supported by the datastore. I've tried physical compatibility mode and also both virtual styles. I'm a little confused by this, as we had the same LUN mapped through to another VM when we were running ESX 3.5... I've also noticed that some of my other raw mappings are generating extremely large VMDK files on the ESX servers. Does anyone know if this change in behaviour is intentional, and if so, why? It doesn't seem to me that if the LUN is mapped directly to the VM, its size should be relevant. We're running 4.0.0 build 236512 and 4.0.0 build 219382, and I've not had any success on either. Any insight or advice would be much appreciated! TG
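
    If it helps the investigation, the mapping can also be attempted from the service console, which tends to give a more specific error than the UI, and the virtual-compatibility RDM limit depends on the block size of the VMFS datastore holding the mapping file. A sketch (device ID and paths are placeholders):

        # Physical compatibility RDM (-z); use -r instead for virtual compatibility
        vmkfstools -z /vmfs/devices/disks/naa.60060160xxxxxxxxxxxxxxxx \
            /vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk
        # Show the block size of the datastore that will hold the mapping file
        vmkfstools -P /vmfs/volumes/datastore1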

    Read the article
