Search Results

Search found 24 results on 1 page for 'equallogic'.


  • Dirty Cache Dell EqualLogic Storage Array

    - by Jermal Smith
    Has anyone ever run into a dirty cache issue with an EqualLogic SAN? Even after replacement of the controller cards, the EqualLogic storage array fails offline with a dirty cache. I have listed steps on my blog to bring the SAN online again, but this is not a real fix because the array continues to fail: http://jermsmit.com/dirty-cache-dell-equallogic-storage-array/ If you have any information on this, please share. Thanks, Jermal

    Read the article

  • Best practice RAID groups for EqualLogic PS6510X

    - by 20th Century Boy
    We are thinking about purchasing 4 x EqualLogic PS6510X SANs (the Sumo boxes). Each has 48 x 600GB 10K SAS drives. They will be stacked to form a single logical pool of storage (all in the same location). I understand that when you create a RAID group it's done on a per-box (per-member) basis, so one box could be RAID 50, another RAID 10, and so on. My question is: should I make one box a "performance" box (RAID 10) and the other boxes "standard" (RAID 50)? How do people configure their EQL arrays in the real world?

    Read the article

  • iSCSI SAN Implementation with several ESXi hosts and two EqualLogic SANs

    - by Sergey
    I work for a small state college. We currently have 4 ESXi hosts (all made by Dell), 2 EqualLogic SANs (PS4000 and PS4100) and a bunch of old HP ProCurve switches. The current setup is far from redundant or fast, so we want to improve it. I have read several threads but only got more confused. The ProCurve switches are 2824s. I know they don't support jumbo frames and flow control at the same time, but we have plans to upgrade to something like the ProCurve 3500yl. Any suggestions? I have heard the Dell PowerConnect 6xxx series is pretty good, but I'm not sure how it compares to the HPs. There will be a 4-port EtherChannel (link aggregation) between the switches, and all controller modules on the SANs will be connected to different switches. Is there anything else that would make the setup better? Are there better switches than the ProCurve 3500yl that cost less than $5k? What kind of bandwidth can I expect between the ESXi hosts (which will also be connected to the 2824s with multiple cables) and the SANs?
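
    Whichever switches end up in place, one quick sanity check once jumbo frames are (supposedly) enabled end-to-end is a do-not-fragment ping at near-9000-byte size from each ESXi host to each SAN port. This is only a generic verification sketch; the addresses below are placeholders, not taken from the question.

        # From an ESXi host (SSH/console): verify jumbo frames reach the SAN without
        # fragmentation (8972 bytes = 9000 MTU minus 28 bytes of IP/ICMP headers).
        vmkping -d -s 8972 10.10.1.10    # EqualLogic eth0 (placeholder address)
        vmkping -d -s 8972 10.10.1.11    # EqualLogic eth1 (placeholder address)

        # The equivalent test from a Linux box on the same storage VLAN:
        ping -M do -s 8972 10.10.1.10

    If these fail while ordinary pings succeed, something in the path is not passing jumbo frames, which usually costs far more performance than the choice between the switch models discussed above.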

    Read the article

  • Samba 3.5 Shadow Copy for Windows 7

    - by Prashanth Sundaram
    Over the past several days I have been trying to get shadow copies to work with Samba, but I haven't been successful. Can someone check the config below and let me know if I am missing something? We are using an EqualLogic SAN and iSCSI LUNs to mount the volumes. I can cleanly access the Samba shares from Windows 7 clients, just not the shadow copies. I have followed the official how-to but couldn't get it to work. I see these messages in the logs. Any help is deeply appreciated.

    [2012/10/31 12:20:53.549863, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
      FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.
    [2012/10/31 12:21:13.887198, 0] modules/vfs_shadow_copy2.c:734(shadow_copy2_get_shadow_copy2_data)
      shadow:snapdir not found for /fs/test-01 in get_shadow_copy_data
    [2012/10/31 12:21:13.887265, 0] smbd/nttrans.c:2170(call_nt_transact_ioctl)
      FSCTL_GET_SHADOW_COPY_DATA: connectpath /fs/test-01, failed.

    == Samba packages ==
    samba-3.5.10-116.el6_2.x86_64
    samba-common-3.5.10-116.el6_2.x86_64
    samba-winbind-clients-3.5.10-116.el6_2.x86_64
    samba-client-3.5.10-116.el6_2.x86_64

    == df -h == (first is the iSCSI LUN, the other two are snapshots)
    /dev/mapper/eql-0-fs-test01     5.0G  2.3G  2.5G  48%  /fs/test-01
    /dev/mapper/eql-2-0+fs-test01   5.0G  2.3G  2.5G  48%  /fs/test-01/@GMT-2012.10.26-17.32.42/fs/test-01   (snapshot 1)
    /dev/mapper/eql-d-0+fs-test01   5.0G  2.3G  2.5G  48%  /fs/test-01/@GMT-2012.10.31-11.52.42/fs/test-01   (snapshot 2)

    === /etc/samba/smb.conf ===
    [global]
    workgroup = DOMAIN
    server string = Samba Server Version %v
    security = ads
    realm = DOMAIN.CORP
    encrypt passwords = yes
    guest account = nobody
    map to guest = bad uid
    log file = /var/log/samba/%m.log
    domain master = no
    local master = no
    preferred master = no
    os level = 0
    load printers = no
    show add printer wizard = no
    printable = no
    printcap name = /dev/null
    disable spoolss = yes
    follow symlinks = yes
    wide links = yes
    unix extensions = no

    [test]
    comment = Test Directories
    path = /fs/test-01
    vfs objects = shadow_copy2
    #shadow_copy2: sort = desc
    #shadow: localtime = yes
    #shadow: snapdir = /fs/test-01/test
    #shadow: basedir = /fs/test-01
    guest ok = yes
    writeable = yes
    map archive = no
    force create mode = 0660
    force directory mode = 2770
    inherit owner = yes
    inherit permissions = yes

    All feedback is welcome. Thanks!
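
    For reference, a minimal sketch of the layout vfs_shadow_copy2 expects: the "shadow:snapdir not found" message usually means the module cannot find a snapshot directory (by default ".snapshots" under the share path) containing @GMT-formatted entries. The directory names and mounts below are illustrative only, modelled on the mounts shown above, and are not a confirmed fix.

        # Illustrative only: give shadow_copy2 one snapdir inside the share and mount
        # each EqualLogic snapshot LUN under a @GMT-YYYY.MM.DD-HH.MM.SS directory.
        mkdir -p /fs/test-01/.snapshots/@GMT-2012.10.26-17.32.42
        mkdir -p /fs/test-01/.snapshots/@GMT-2012.10.31-11.52.42
        mount /dev/mapper/eql-2-0+fs-test01 /fs/test-01/.snapshots/@GMT-2012.10.26-17.32.42
        mount /dev/mapper/eql-d-0+fs-test01 /fs/test-01/.snapshots/@GMT-2012.10.31-11.52.42

        # ...and in the [test] share, uncommented and with the exact option names:
        #   shadow:snapdir = .snapshots
        #   shadow:basedir = /fs/test-01
        #   shadow:sort = desc

    Note that one of the commented lines in the config above uses the prefix "shadow_copy2:"; the module reads its options under the "shadow:" prefix.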

    Read the article

  • How to configure multiple iSCSI Portal Groups on an EqualLogic PS6100?

    - by kce
    I am working on a migration from a VMware vSphere environment to a Hyper-V cluster running Windows Server 2012 R2. The setup is pretty small: an EqualLogic PS6100e, two Dell PowerConnect 5424 switches, and a handful of R710s and R620s. The SAN was configured on a non-RFC1918 network that is not assigned to our organization, and since I am building a new virtualization environment anyway, I figured this would be an appropriate time to do a subnet migration. I configured a separate VLAN and subnet on the switches and on the two previously unused NICs on the PS6100's controllers. At this time I only have a single Hyper-V host cabled in, but I can successfully ping the PS6100 from that host, and from the PS6100 I can ping each of the four NICs that are currently on the new storage network. However, I cannot connect the Microsoft iSCSI Initiator to the target. I have successfully added the target portals (the IP addresses of the PS6100 NICs) and the targets are discovered, but they are listed as inactive. If I try to connect to them I get the error "Log onto Target - Connection Failed", and iScsiPrt events 1 and 70 are recorded in the Event Log. I have verified that access control to the volume is not the problem by temporarily disabling it.

    I suspect the problem is the Portal Group IP address, which is still listed as the Group Address on the old subnet (I know, I know, I might be committing the sin of the X/Y problem, but everything else looks good). RFC 3720 has this to say about Network Portals and Portal Groups:

    "Network Portal: The Network Portal is a component of a Network Entity that has a TCP/IP network address and that may be used by an iSCSI Node within that Network Entity for the connection(s) within one of its iSCSI sessions. A Network Portal in an initiator is identified by its IP address. A Network Portal in a target is identified by its IP address and its listening TCP port."

    "Portal Groups: iSCSI supports multiple connections within the same session; some implementations will have the ability to combine connections in a session across multiple Network Portals. A Portal Group defines a set of Network Portals within an iSCSI Network Entity that collectively supports the capability of coordinating a session with connections spanning these portals. Not all Network Portals within a Portal Group need participate in every session connected through that Portal Group. One or more Portal Groups may provide access to an iSCSI Node. Each Network Portal, as utilized by a given iSCSI Node, belongs to exactly one portal group within that node."

    The EqualLogic Group Manager documentation has this to say about the group IP address: "You use the group IP address as the iSCSI discovery address when connecting initiators to iSCSI targets in the group. If you modify the group IP address, you might need to change your initiator configuration to use the new discovery address. Changing the group IP address disconnects any iSCSI connections to the group and any administrators logged in to the group through the group IP address." Which sounds equivalent to me (I am following up with support to confirm).

    I think a reasonable explanation at this point is that the initiator can't complete the connection to the target because the Group IP Address / Network Portal is on a different subnet. I really want to avoid a cutover and would prefer to run both subnets side by side until I can install and configure each Hyper-V host. Questions: Is my assessment at all reasonable? Is it possible to configure multiple Group IP Addresses on the EqualLogic PS6100? I don't want to just change it, as that will disconnect the remaining ESXi hosts. Or am I just Doing It Wrong(TM)?

    Read the article

  • Cannot log in to iSCSI target - hangs after sending login details

    - by Frank
    I have an iSCSI target volume to which I am trying to connect from a CentOS Linux server. Everything works fine up to the login step, where it gets stuck. Here are the steps I am performing:

    [root@neon ~]# iscsiadm -m node -l
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session20
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session21
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session22
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session23
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session30
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session31
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session78
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session79
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session80
    iscsiadm: could not read session targetname: 5
    iscsiadm: could not find session info for session81
    Logging in to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)

    After this step it hangs, waits for some time and then gives this output:

    Logging in to [iface: iface1, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple)
    iscsiadm: Could not login to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260].

    My iscsi.conf is this:

    node.startup = automatic
    node.session.timeo.replacement_timeout = 15    # default 120; RedHat recommended
    node.conn[0].timeo.login_timeout = 15
    node.conn[0].timeo.logout_timeout = 15
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 5
    node.session.err_timeo.abort_timeout = 15
    node.session.err_timeo.lu_reset_timeout = 20
    node.session.initial_login_retry_max = 8       # default 8; Dell recommended
    node.session.cmds_max = 1024                   # default 128; EqualLogic recommended
    node.session.queue_depth = 32                  # default 32; EqualLogic recommended
    node.session.iscsi.InitialR2T = No
    node.session.iscsi.ImmediateData = Yes
    node.session.iscsi.FirstBurstLength = 262144
    node.session.iscsi.MaxBurstLength = 16776192
    node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
    discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
    node.conn[0].iscsi.HeaderDigest = None
    node.session.iscsi.FastAbort = Yes

    Also, in access control I have given full access: any IP, any CHAP user, and the fixed iSCSI initiator name. With the same access level, all other volumes on the rest of the servers are working, except this one.
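
    The "could not find session info for sessionNN" lines suggest stale session/node records left over from earlier login attempts. Below is a generic cleanup-and-retry sketch with open-iscsi; the IQN and portal are copied from the output above, but treat it as a troubleshooting idea rather than a confirmed fix (logging out and deleting records will drop any sessions that are actually in use).

        # Show what open-iscsi currently believes is logged in
        iscsiadm -m session -P 1

        # Log out of this target and remove its stale node records
        iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -u
        iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -o delete

        # Re-discover against the EqualLogic group IP and log in again
        iscsiadm -m discovery -t sendtargets -p 10.10.1.1:3260
        iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -l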

    Read the article

  • Dell EqualLogic vs. EMC VNXe [closed]

    - by Untalented
    We've been looking into SMB SANs, and based on the competitive pricing I've been getting we really like these two arrays. There are some pros to both solutions, but I've been unable to decide which to choose. The EMC offers better expandability, since you can buy an additional shelf (roughly $1200) and then add drives to the array. However, the Dell unit is still very nice. Can anyone comment on their experiences with the two and share thoughts on this? Also, to get VMware Storage API support you need VMware Enterprise. How much additional performance does this provide? It's roughly $15k more than the Essentials Plus bundle we're looking at (this is a small environment: 3 hosts, 1 array).

    Read the article

  • Getting Partial / No Redundancy on VMs created on latest datastore

    - by Germano
    Hi. First, some background: I'm in the process of upgrading my ESX servers from 3.5 to vSphere 4, and so far I have set up the new vCenter Server. Before I start upgrading the ESX hosts I needed more storage, so I created 3 new datastores from available space on my EqualLogic PS6000, which has been connected for a while, so as far as connectivity goes nothing has changed. Here's my problem: I get a "Partial / No Redundancy" status on any VM that I create on any of these new datastores. I can create VMs on any of the older datastores, on LUNs from exactly the same EqualLogic, and it works fine, just not on the new ones. Keep in mind that these new datastores are the only ones created under the new vCenter, so I believe it must have something to do with that. Is anyone aware of any issues with creating datastores using the new vCenter but on a 3.5 ESX host? This is iSCSI with QLogic QLE406x HBAs. Thanks in advance for any help. Germano

    Read the article

  • Citrix XenServer iSCSI shared disk?

    - by chsguy
    I am running Citrix XenServer Essentials 5.5, with VMs stored on an EqualLogic iSCSI shelf, using XenServer's StorageLink. I would like to create a "virtual disk" that can be attached to multiple VMs. This would be used for a cluster file system like OCFS2 or GFS. This doesn't seem possible using the XenCenter GUI and I can't find anything online about how to do it. I realize I could simply expose the iSCSI network to the VM and have the VM initiate its own iSCSI, but that creates a lot of security challenges. This was pretty easy to do on Oracle VM Server, which is Xen based, so I know it's not a limitation of Xen itself. Maybe there is an "xe-" command for this? Thanks for any suggestions you can provide.
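
    One approach that may be worth testing from the dom0 command line is to create a single VDI marked sharable and give each VM its own VBD pointing at it. This is a sketch based on the generic xe CLI, not verified against StorageLink-backed SRs; every UUID, name and device position below is a placeholder.

        # Create one shared virtual disk in an SR (UUIDs/names are placeholders)
        VDI=$(xe vdi-create sr-uuid=<sr-uuid> name-label=ocfs2-shared \
              virtual-size=100GiB type=user sharable=true)

        # Attach the same VDI to two (or more) VMs, then plug the VBDs
        VBD1=$(xe vbd-create vm-uuid=<vm1-uuid> vdi-uuid=$VDI device=1 mode=RW type=Disk)
        VBD2=$(xe vbd-create vm-uuid=<vm2-uuid> vdi-uuid=$VDI device=1 mode=RW type=Disk)
        xe vbd-plug uuid=$VBD1
        xe vbd-plug uuid=$VBD2

    The guests still need a cluster-aware filesystem such as OCFS2 or GFS on that disk, exactly as described above.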

    Read the article

  • iSCSI SAN RAID 10 Performance -- Poor Read, Good Write

    - by Litzner
    I have an EqualLogic PS4000 SAN unit with the latest firmware, set up in RAID 10. I have three 2TB volumes on the SAN, shared out via iSCSI on 2 Ethernet ports on two different subnets. I have moved a test server over to this newly set-up SAN, and my testing is showing me a problem: I am getting dismal read performance in everything except a test with a queue depth of 32 (see attached image), while write performance seems to be right about where it should be. I have tried MPIO on and off; on was slightly better, but not by much.
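
    Single-outstanding-IO reads hide all of the array's parallelism, while small writes can be acknowledged from the controller's cache, so poor reads at low queue depth alongside healthy writes is a common pattern; it is worth confirming how read IOPS scale with queue depth. If the test host happens to be Linux, a rough sweep with fio might look like the sketch below (the device name and run time are placeholders; random reads against the raw device are non-destructive).

        # Random-read IOPS at increasing queue depths against the iSCSI block device
        # (replace /dev/sdX with the EqualLogic volume's multipath or raw device).
        for QD in 1 4 8 16 32; do
          fio --name=read-qd$QD --filename=/dev/sdX --rw=randread --bs=4k \
              --ioengine=libaio --direct=1 --iodepth=$QD --runtime=60 --time_based \
              --readonly --group_reporting
        done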

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with:

    - VMware vSphere 4.1 cluster
    - PS4000XV EqualLogic storage array (1.6TB volume dedicated to backup-to-disk)
    - Physical backup server with a single LTO4 drive
    - BackupExec 2010 R3 with the Exchange, SQL, Active Directory and VMware agents
    - Dual gigabit MPIO connections between all devices (storage array, backup server, VM hosts)

    What we would like to accomplish: an efficient backup-to-disk-to-tape solution where all of our VMs are backed up to the storage array first and, once completely backed up to the array, are then replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: of the several ways I have set up the jobs in BackupExec 2010 R3, the backup jobs all queue up at the same time; as soon as a job has finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of a restore I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • Performance of Cluster Shared Volume file copy from SAN

    - by Sequenzia
    I am hoping someone can help me out with a strange issue. We are running a Microsoft failover cluster with Server 2008 R2 and an EqualLogic PS4000 SAN. Our main configuration has 2 Dell PowerEdge T710 servers in the cluster, with CSV and quorum set up. The servers each have 10 Broadcom 1Gb NICs; right now 4 of the NICs are on the iSCSI network for accessing the SAN, using MPIO and the Dell HIT kit. We have 5 VMs running on each node and everything runs smoothly, with no noticeable performance issues. From the SAN I can see the 4 iSCSI connections from each server to each volume (CSV and quorum). Again, it seems to perform great.

    The problem I am running into is with backups. I have tried a few backup programs, like BackupChain and Veeam, and both are very, very slow at backing up the VMs. For instance, I have a 500GB (fixed disk) VHD running on the cluster, and it takes over 18 hours to back up that VHD, and that's with compression and deduplication turned off, which is supposed to be the fastest.

    We also have a separate server that is just for backups, with a lot of direct-attached storage. As part of the troubleshooting I decided to bring that server into the cluster as a node. It now has access to the CSV and can read from C:\clusterstorage\volume1, which is where our VHDs live. This backup server only has 2 NICs: one on the iSCSI network and the other on the main network. They are Intel NICs without any sort of MPIO or teaming.

    With the third server now in the cluster I started doing some benchmarking. I have a test VHD of about 7GB stored on the CSV, and I have tested copying that file from all 3 servers to direct-attached storage in the respective server. The 2 Dell servers that are the main nodes in the cluster (they host the VMs) read that file at about 20MB/sec, which is far too slow for the backups. The other server, which has only 1 NIC to the SAN, reads at around 100MB/sec. I spent a few hours on the phone with Dell today about this; we went through all kinds of tests and the rep was pretty dumbfounded. He really has no idea why the server with only 1 NIC reads about 5 times as fast as the servers with 4 NICs and MPIO. We looked at the network utilization of the NICs during the file copy: the servers with the 4 NICs showed a small increase in activity, only up to around 8-10% on all 4 NICs, while the server with the 1 NIC jumped to over 80%.

    I plan on doing some more testing after hours and calling Dell back tomorrow, but I really am confused (and so is Dell's support rep) about why I cannot get faster file-copy access to the CSV on those servers. Does anyone have any input on this? Any feedback would be greatly appreciated. Thanks in advance.

    Read the article

  • Expected IOPS for log writing on PS6000X SAN?

    - by dssz
    A customer is experiencing poor Sybase ASE 15 performance on a PS6000X SAN with 16 x 450GB 10K drives in RAID-50. The server is a Dell R710 running Windows Server 2003 R2 64-bit on ESX 4.0.0 build 256968. I've used sqlio to benchmark the sequential write performance of 4KB blocks on the drive:

    sqlio -kW -t1 -s600 -dE -o1 -fsequential -b4 -BH -LS sqliotestfile.dat

    The result is 1900 IOPS. However, when Sybase is running a sustained workload of small inserts, SAN HQ shows a consistent 590 IOPS (and 100% 4K write activity). It also shows the write latency increasing from <1ms to 1.2ms. Monitoring and tests in Sybase demonstrate that the performance problem is IO related; in particular there is a lot of wait time writing to the log. The SAN indicates that write caching is enabled. What IOPS should the SAN be capable of for 4K sequential write activity? Also, with write caching enabled, shouldn't the controller be batching up the 4K writes into something more efficient? Any tips on Sybase on ESX would also be appreciated.
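
    For a very rough sanity check (rule-of-thumb figures only, ignoring cache, spares and the sequential-versus-random distinction), the sustained number reported by SAN HQ is in the same ballpark as what the spindles alone can absorb once the RAID-50 write penalty is applied:

        # Back-of-envelope estimate, not a vendor figure:
        #   ~140 random IOPS per 10K SAS spindle, 16 spindles, RAID-5/50 write penalty of 4
        DISKS=16; IOPS_PER_DISK=140; WRITE_PENALTY=4
        echo $(( DISKS * IOPS_PER_DISK ))                  # ~2240 aggregate read IOPS
        echo $(( DISKS * IOPS_PER_DISK / WRITE_PENALTY ))  # ~560 sustained write IOPS

    On those assumptions, the ~590 IOPS figure looks like the disks running flat out once the workload outruns the write cache, while the 1900 IOPS sqlio result is plausible as a burst that the controller cache absorbs.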

    Read the article

  • LeftHand SAN questions

    - by Gk
    I'm curious about the LeftHand SAN solutions from HP. People from Dell have told me that LeftHand SANs require at least two nodes and that data must be mirrored between them, so usable capacity is half that of other SAN technologies (e.g. EqualLogic). Is that true? Can an HP LeftHand SAN be used as a stand-alone storage server with full RAID functionality (1, 10, 5)? TIA, -giobuon

    Read the article

  • LeftHand SAN questions

    - by Gk
    I'm curious about the LeftHand SAN solution from HP. People from Dell told me that a LeftHand SAN requires at least two nodes and that data must be mirrored between them, so usable capacity is half that of other SAN technologies (e.g. EqualLogic). Is that true? Can an HP LeftHand SAN be used as a stand-alone storage server with full RAID functionality (1, 10, 5)? TIA, -giobuon

    Read the article

  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible, which is why nothing is posted about that. Okay, there will be a lot here, so please bear with me. I've been working on this for a few hours now (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a pre-existing Nagios server where several other plugins and checks are already running and working. Now I want to add another device to check, so I made the following modifications.

    First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. The permissions are 755, the same as all the others. I have chowned it to nagios:nagios, and the listing shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects:

    # 'check_equallogic' command definition
    define command{
        command_name    check_equallogic
        command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
    }

    Following this, I created a file named equallogic.cfg in the objects directory containing (more or less):

    define host{
        use             linux-server    ; Inherit default values from a template
        host_name       172.16.50.11    ; The name we're giving to this device
        alias           EqualLogic      ; A longer name associated with the device
        address         172.16.50.11    ; IP address of the device
        contact_groups  admins
    }

    # Check EqualLogic information
    define service{
        use                 generic-service
        host_name           172.16.50.11
        service_description General Information
        check_command       check_equallogic!public!info
    }

    After ensuring that permissions are okay on all files, I restart the nagios service with no errors. But when I go into the web GUI, I get the following error after the check runs: (Return code of 127 is out of bounds - plugin may be missing).

    Extra, probably unrelated problem: when I log into the EqualLogic unit, under the audit logs I get the following:

    Level: AUDIT
    Time: 26/05/2014 3:59:13 PM
    Member: ps4100-1
    Subsystem: agent
    Event ID: 22.7.1
    SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow; the reason I mention it is that I want to make sure it is only a MIB issue for SNMP. If it is, then ignore this part. I am entirely unsure of what to do here.
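
    A return code of 127 from Nagios generally means the shell could not find or execute the command, so a useful first step is to run the plugin by hand. The sketch below simply mirrors the arguments from the command definition quoted above; the only extra assumption is that the plugin relies on the net-snmp tools being installed. Note also that the command definition calls $USER1$/check_equallogic while the file that was added is check_equallogic.sh; if the command_line does not match the actual filename, that mismatch alone would produce exactly this error.

        # Run the plugin by hand, as the nagios user, with the same arguments
        # Nagios would pass (values taken from the definitions above):
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info
        echo $?    # Nagios plugins should exit 0-3; anything else means it did not run cleanly

        # Independently confirm SNMP works from the Nagios box to the array:
        snmpwalk -v 2c -c public 172.16.50.11 system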

    Read the article

  • Setup of HP ProCurve 2810-24G for iSCSI?

    - by 3molo
    I have a pair of ProCurve 2810-24G switches that I will use with a Dell EqualLogic SAN and VMware ESXi. Since ESXi does MPIO, I am a little uncertain about the configuration of the links between the switches. Is a trunk the right way to go between the switches? I know that the ports for the SAN and the ESXi hosts should be untagged, so does that mean I want the VLAN tagged on the trunk ports? This is more or less the configuration:

    trunk 1-4 Trk1 Trunk
    snmp-server community "public" Unrestricted
    vlan 1
       name "DEFAULT_VLAN"
       untagged 24,Trk1
       ip address 10.180.3.1 255.255.255.0
       no untagged 5-23
       exit
    vlan 801
       name "Storage"
       untagged 5-23
       tagged Trk1
       jumbo
       exit
    no fault-finder broadcast-storm
    stack commander "sanstack"
    spanning-tree
    spanning-tree Trk1 priority 4
    spanning-tree force-version RSTP-operation

    The EqualLogic PS4000 SAN has two controllers with two network interfaces each, and Dell recommends that each controller be connected to each of the switches. From the VMware documentation, it seems that creating one VMkernel port per pNIC is recommended; with MPIO, this could allow for more than 1 Gbps of throughput.

    Read the article

  • How to share a volume between VMs in ESX 4?

    - by edomaur
    I want to access a single volume from several VMware ESX 4 VMs running on three ESX hosts, with the datastores on an EqualLogic PS6000 SAN. I know how to manage the data itself, but I cannot seem to find a way to do this. How can I share a VMDK across hosts? (The relevant files are on the SAN.) Is this even possible? Is there a means to do this with RDM?
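
    As a hedged sketch only: sharing one VMDK between VMs on different hosts generally requires the disk to be created eagerly zeroed, with the VMs configured for SCSI bus sharing (or the multi-writer flag where supported), and the guests still need a cluster-aware filesystem on top. The datastore path and size below are placeholders, and this is not a claim that every combination is supported on ESX 4.0; physical-compatibility RDMs are the other common route for disks shared across hosts.

        # On one ESX host: create an eagerly zeroed disk on the shared VMFS datastore
        # (placeholder path), the format normally required for shared/clustered disks.
        vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/eql-datastore/shared/shared-disk.vmdk

        # Each VM then adds this existing VMDK on its own (shared) SCSI controller,
        # with bus sharing set appropriately in the VM's settings.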

    Read the article

  • SQL server in VMware

    - by UndertheFold
    Please provide your tips and best practices for virtualizing SQL Server on VMware ESX. I am interested in advanced configurations and settings; please provide the reasoning behind your recommendations. Edit: just to clarify, I already have over 70 virtual SQL servers in separate clusters using an iSCSI EqualLogic SAN. What I am really looking for are the advanced configurations, such as how you configured your disks / RDMs, and whether you make use of settings that are not well documented, like Mem.ShareScanGHz (http://communities.vmware.com/thread/143828).

    Read the article

  • Removing an iSCSI Target - iSCSI initiator 2.0 on Windows Server 2003 R2

    - by DWong
    For the life of me I cannot figure out how to remove an iSCSI target (Dell EqualLogic SAN) from a Windows Server 2003 box. The volume shows up in Windows as drive letter Y:\. Using the iSCSI Initiator, I can remove the target portal, but I cannot remove the target itself. Can someone give me some guidance on this? I've gone as far as setting the volume offline in the Dell SAN management tool, and even permanently deleting the volume. The target no longer shows up in the iSCSI Initiator properties, but the drive letter is still there under My Computer, and now Windows is throwing delayed-write errors for that drive. There must be a proper way to cleanly remove an attached target. TIA!

    Read the article

  • LUN access issue in ESX4 cluster

    - by rmustafa
    Hi, I've created volumes on an EqualLogic PS6000XV (2 members in 1 pool) and verified that those volumes are easily detected by the iSCSI software initiator in Windows. The problem is with ESX: the assigned disks are not visible on the ESX servers. Here is what I've done:

    1. Created a cluster with HA and DRS enabled.
    2. Added 3 ESX4 hosts.
    3. Added and configured a VMkernel port on all 3 ESX4 hosts, with vMotion and FT enabled on the same adapter.
    4. Went to the iSCSI storage adapter properties and enabled iSCSI.
    5. Tried to discover the available storage using the controller IP under dynamic discovery, but the assigned storage does not show up.

    Note: the same volume is accessible from Windows, which means there is no issue on the storage side, am I right? Note: I want to mount the same volume on all 3 ESX hosts. Please advise. Thanks & regards, Rashid Mustafa
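
    A generic checklist from the ESX 4 service console is sketched below; the group IP is a placeholder. On EqualLogic, the dynamic discovery (send targets) address should normally be the group IP rather than an individual member or controller IP, and the volume's access-control list must allow each ESX host's initiator, with shared/multi-initiator access enabled if all three hosts are to mount the same volume.

        # Placeholder group IP; run from each ESX 4 host's service console.
        vmkping 10.10.1.1        # confirm the VMkernel port can reach the group IP
        esxcfg-swiscsi -q        # confirm the software iSCSI initiator is enabled
        esxcfg-rescan vmhba33    # rescan the software iSCSI adapter (name may differ)
        esxcfg-scsidevs -c       # list the SCSI devices/LUNs the host can now see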

    Read the article

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy. I previously asked this ServerFault question: "Does anyone have experience with LeftHand's VSA SAN?" The general consensus was that it does not perform well enough for a production SQL Server even at a light load. So the new question is: how does the LeftHand SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, with Dell's EqualLogic I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant: if a node goes down you have lost the only copy of the data sitting on that hardware (one thing is certain, all hardware fails; when is the only question). I have used an XioTech SAN as well, well worth the money by the way, but I think it is overkill for the size of office I am targeting, and the cost to get hardware redundancy in the XioTech puts it a little out of reach for the budget I am working with. Thank you, Keith

    Read the article

  • What are the most important aspects to consider when choosing a SAN for a small office virtualization

    - by Prof. Moriarty
    I am in the process of consolidating 6 physical servers running 6 different operating system flavors (don't ask) onto two identical physical servers (Dell PowerEdge 2900), using the free VMware ESXi 4.0 platform. We will install an iSCSI SAN over a 1GbE network and store all virtual machine images on the SAN. Each physical server would run 3 VMs, and in the case of a physical server failure, we would manually switch the other 3 over. These are all internal servers; while important, they can tolerate some amount of downtime (say <1h), which keeps the cost and complexity associated with HA down. I now need to choose the SAN for this setup, on a low budget. We currently have about 2TB of data, but of course I want to be able to grow, take backups of VM snapshots onto other drives and move them to a different location, etc. So what I would like to know is: which are the must-have features for this setup, without which using a SAN is not worth it? We are mostly a Dell shop, so I have been looking at the EqualLogic PS4000E High Availability model. Any opinions, anecdotes, or bad experiences with this model? (This is one of the few models which could accommodate our existing disks from the physical servers.) If you can recommend something that is not Dell but has better value, I would most definitely consider it. Caveats, things to look out for?

    Read the article

  • iSCSI targets don't appear after rescan

    - by asmr
    Hi everybody, I have an EqualLogic PS4000 SAN box to which I have connected 2 x ESX 4.0.0 hosts sharing the LUNs. I have an older ESX 3.5 host which I want to set up to share the same LUNs. I have set up a VMkernel port with 2 NICs attached to the iSCSI switch. When I perform an iSCSI software adapter rescan, it takes a long time and it doesn't find the targets. In the ESX 3.5 host's log file I find these messages:

    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.394 cpu5:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.397 cpu0:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:52:48 sc59 vmkernel: 368:19:23:11.442 cpu1:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.874 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.884 cpu4:1041)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:32.888 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.042 cpu7:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.044 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:09 sc59 vmkernel: 368:19:27:33.045 cpu4:1041)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.308 cpu3:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.309 cpu3:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.309 cpu3:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.598 cpu2:1040)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: SCSI: 279: SCSI device type 0xd is not supported. Cannot create target vmhba1:288:0
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: SCSI: 1293: LegacyMP Plugin could not claim path: vmhba1:288:0. Not supported
    Mar 30 08:57:10 sc59 vmkernel: 368:19:27:33.600 cpu7:1039)WARNING: ScsiPath: 3187: Plugin 'legacyMP' had an error (Not supported) while claiming path 'vmhba1:C0:T288:L0'. Skipping the path.

    Any ideas what the problem is?

    Read the article
