-
as seen on Server Fault
The more I look into ESX, the more often I have to handle cases where the partition table of a disk holding a VMFS volume gets corrupted.
Possible causes include:
* user error
* failed update
* power failure
* ....
I guess you guys already have something like a standard procedure for how to work…
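A common recovery for this situation is to recreate the partition table by hand from the service console. The sketch below assumes the classic single-partition VMFS3 layout (partition type fb, starting at sector 128) and a placeholder device name /dev/sdX; verify both against your own disk before writing anything.

```shell
# Hedged sketch: recreating a lost VMFS3 partition table from the ESX
# service console. /dev/sdX is a placeholder -- identify the real device
# first, and only proceed if the table is genuinely empty or garbled.
fdisk -l /dev/sdX    # confirm the current (broken) partition table
fdisk /dev/sdX
# inside fdisk:
#   n  -> new primary partition 1, accept the default size
#   t  -> set the partition type to fb (VMFS)
#   x  -> enter expert mode
#   b  -> move the start of partition 1 to sector 128
#   w  -> write the table and exit
vmkfstools -V        # rescan VMFS volumes; the datastore should reappear
```

Nothing here touches the VMFS data itself; only the partition table is rewritten, which is why the start sector must match the original layout exactly.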
-
I want to add a new hard disk to an existing VM and get the best performance possible. The new hard disk will exist on an NFS datastore. So far I have done the following:
Created a new vmdk on the NFS datastore
Created a new LVM partition using fdisk
Created a new physical volume, volume group, and logical volume…
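The steps listed above can be sketched as the following commands inside the guest. The device name /dev/sdb and the volume group and logical volume names are assumptions for illustration; check dmesg or fdisk -l for the device the new vmdk actually appeared as.

```shell
# Hedged sketch of the LVM setup described above, run inside the guest VM.
# /dev/sdb and the names "datavg"/"datalv" are examples, not fixed values.
fdisk /dev/sdb                          # create partition 1, type 8e (Linux LVM)
pvcreate /dev/sdb1                      # initialise it as an LVM physical volume
vgcreate datavg /dev/sdb1               # create a new volume group on it
lvcreate -n datalv -l 100%FREE datavg   # one logical volume using all free space
mkfs.ext3 /dev/datavg/datalv            # filesystem choice is an assumption
```

For the performance question itself, partition alignment and the guest filesystem matter more than the LVM layering, so it is worth checking where the first partition starts relative to the NFS backend's block size.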
-
Existing setup:
host1 and host2, ESX 4.0, 2 HBAs each.
lun1 and lun2, 2 LUNs belonging to the same RAID set (my terminology might be sketchy here).
This has been working just fine all along.
I added host3, ESXi 4.1, 2 HBAs.
If I view Configuration / Storage Adapters, I can see that both HBAs…
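Beyond the Configuration / Storage Adapters view, the new ESXi 4.1 host's view of the LUNs and paths can be checked from the command line. This is a hedged sketch using standard ESXi 4.x tools; run it from Tech Support Mode on host3, and expect device names to differ per host.

```shell
# Hedged sketch: verifying that host3 (ESXi 4.1) sees both HBAs' paths
# to lun1 and lun2. Run from the ESXi shell on the host in question.
esxcfg-scsidevs -c       # list the SCSI devices this host can see
esxcfg-mpath -l          # list every path with its HBA, target, and state
esxcli nmp device list   # show the multipathing policy chosen per device
```

Comparing this output against the same commands (or the esxcfg equivalents) on the ESX 4.0 hosts makes it easy to spot a path that one host sees and another does not.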
-
I currently have a cluster of two ESX 3.5U2 servers connected directly via Fibre Channel to a NetApp 3020 cluster. These hosts mount four VMFS LUNs for virtual machine storage. Currently these LUNs are only made available via our Fibre Channel initiator in the NetApp configuration.
If I were to add…
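The question is truncated, but the current state it describes (LUNs exposed only to the Fibre Channel initiators) can be inspected on the filer side. This is a hedged sketch using Data ONTAP 7-mode commands; paths and igroup names below are illustrative assumptions, not values from the question.

```shell
# Hedged sketch: checking how the four VMFS LUNs are currently exposed
# on the NetApp filer (Data ONTAP 7-mode CLI; output varies by version).
lun show -m      # which LUNs are mapped to which initiator groups
igroup show      # initiator groups and their protocol (FCP vs iSCSI)
# If a second access protocol were ever added, the usual pattern is a
# separate igroup for it, mapped to the same LUNs, e.g. (hypothetical):
#   igroup create -i -t vmware iscsi_hosts iqn.1998-01.com.vmware:host3
#   lun map /vol/vmfs/lun1 iscsi_hosts
```

Mapping the same LUN through two protocols at once needs care on the ESX side, so treat the hypothetical lines above as a direction to investigate rather than a recommendation.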