Search Results

Search found 1650 results on 66 pages for 'sas san'.

Page 52/66 | < Previous Page | 48 49 50 51 52 53 54 55 56 57 58 59  | Next Page >

  • Connecting XRAID to Sunfire via LSI FC card - Drives not showing up

    - by Matthew Watson
    Hi, I've got a Sunfire T1000 machine (Solaris 10 10/09 s10s_u8wos_08a SPARC) with an LSI7404EP-LC fibre channel card in it. This is plugged into an XRAID. The system seems to have picked up the card:

      > /usr/platform/`uname -i`/sbin/prtdiag
      IO
      Location    Type  Slot  Path                              Name                   Model
      ----------- ----- ----  --------------------------------  ---------------------  ---------
      MB/PCIE0    PCIE  0     /pci@780/fibre-channel            fibre-channel
      MB/PCIE0    PCIE  0     /pci@780/fibre-channel            fibre-channel
      MB/NET0     PCIE  MB    /pci@7c0/pci@0/network@4          network-pci14e4,1668
      MB/NET1     PCIE  MB    /pci@7c0/pci@0/network@4,1        network-pci14e4,1668
      MB/NET2     PCIX  MB    /pci@7c0/pci@0/pci@8/network@1    network-pci108e,1648
      MB/NET3     PCIX  MB    /pci@7c0/pci@0/pci@8/network@1,1  network-pci108e,1648
      MB/PCIX     PCIX  MB    /pci@7c0/pci@0/pci@8/scsi@2       scsi-pci1000,50        LSI,1064

    However, it doesn't seem to be able to see the XRAID attached to it; lsiutil only reports the onboard SAS controller:

      > /usr/local/bin/lsiutil

      LSI Logic MPT Configuration Utility, Version 1.62, January 14, 2009

      1 MPT Port found

           Port Name         Chip Vendor/Type/Rev    MPT Rev  Firmware Rev  IOC
       1.  mpt0              LSI Logic SAS1064 A3      105      010a0000     0

      Select a device:  [1-1 or 0 to quit]

    I've tried adding the configuration to /kernel/drv/sd.conf and /kernel/drv/ssd.conf as per this thread, however format still cannot see any drives on the XRAID. I'm not sure where to go next. Any suggestions? From what I've read, this should pretty much just be "plug it in and they show up in format".
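
    A hedged checklist of the usual Solaris 10 rescan steps is sketched below; the controller number c2 is a placeholder, and it assumes the FC HBA binds to the Leadville (fp/ssd) stack rather than the mpt driver that lsiutil talks to.

      # Sketch only - verify device paths before running; c2 is a hypothetical controller number.
      luxadm -e port              # confirm the FC HBA ports show CONNECTED
      cfgadm -al                  # list attachment points; XRAID LUNs appear as cN::<WWN>
      cfgadm -c configure c2      # configure the FC attachment point if it shows "unconfigured"
      update_drv -f ssd           # force ssd to re-read /kernel/drv/ssd.conf after edits
      devfsadm -Cv                # rebuild /dev links
      format                      # the XRAID LUNs should now be listed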

    Read the article

  • Will adding a SSD cache device to my ZFS storage improve performance?

    - by Sysadminicus
    The server has 4GB of RAM and my zpool is made up of 15.5k SAS drives arranged like this:

      NAME         STATE   READ WRITE CKSUM
      tank         ONLINE     0     0     0
        raidz1-0   ONLINE     0     0     0
          c0t2d0   ONLINE     0     0     0
          c0t3d0   ONLINE     0     0     0
          c0t4d0   ONLINE     0     0     0
          c0t5d0   ONLINE     0     0     0
          c0t6d0   ONLINE     0     0     0
          c0t7d0   ONLINE     0     0     0
          c0t8d0   ONLINE     0     0     0
        raidz1-1   ONLINE     0     0     0
          c0t10d0  ONLINE     0     0     0
          c0t11d0  ONLINE     0     0     0
          c0t12d0  ONLINE     0     0     0
          c0t13d0  ONLINE     0     0     0
          c0t14d0  ONLINE     0     0     0
      spares
        c0t9d0     AVAIL
        c0t1d0     AVAIL

    The primary use is as an NFS store for a couple of VMware ESXi servers. I can't do any "true" benchmarks because this is a production system (no budget for test systems), but using dd and bonnie++ I can't get more than ~40-50MB/s writes and ~70-90MB/s reads. It seems I should be able to do much better, but I'm not sure where to optimize. Based on what I've read, I think dropping in an OCZ Vertex 2 Pro SSD as my L2ARC is going to be the best bang for the buck to improve throughput. Is there something else I should be looking into to help performance? If not, how do I know how big a cache device I need? Am I safe with only a single SSD as my cache device?
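
    For reference, adding an L2ARC device is a one-liner; the sketch below assumes the SSD shows up as c0t15d0, which is a hypothetical device name.

      # Sketch only - the SSD's device name is an assumption; confirm with format/cfgadm first.
      zpool add tank cache c0t15d0            # attach the SSD as an L2ARC (cache) vdev
      zpool iostat -v tank 5                  # watch the cache device fill and serve reads as it warms up
      kstat -m zfs -n arcstats | grep l2_     # L2ARC hit/miss/size counters on Solaris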

    Read the article

  • ESXi Guests will not boot on IBM x3550 M3

    - by Adrian
    I have a problem with guests not booting under VMware ESXi 5.0 on my IBM x3550 M3 server.

    VM host server: IBM x3550 M3 7944AC1 w/ 2x Intel Xeon E5607 2.27GHz CPUs; ESXi 5.0.0 Build 623860, built for IBM hardware, downloaded from IBM; storage: 2x 500GB SAS local storage; 8GB RAM; VT is verified to be ENABLED; Server Health Status: Normal.

    The ESXi host boots just fine. The client connects just fine. Guests can be configured but do not successfully boot. The initial guest memory consumption jumps up to 560MB and drops down to 40MB after a few seconds. Initial CPU usage is 1 full CPU (3000MHz per the chart) and immediately drops down to 29MHz. Guests do not display any output in the Console tab but show a state of 'Powered On'. VMs are listed as Version 7 and the behavior is duplicated across all available guest OS flavors. The problem is also duplicated when the server is booted up in Legacy Only mode. Logs do not contain anything particularly suspicious.

    Edit: No firewalls, routers, or VLANs in between the client and server.
    Edit 2: We have tried the "Boot Guest into BIOS screen at Next Boot" checkbox in the guest settings. Was not successful.
    Edit 3: 500GB datastore with one 40GB VM on it. Plenty of space.
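
    A hedged sketch of things to check from the ESXi shell is below; the VM id and datastore path are placeholders.

      # Sketch only - run getallvms first to find the real VM ids and .vmx paths.
      esxcfg-info | grep -i 'HV Support'        # 3 = VT-x/AMD-V enabled and in use by the vmkernel
      vim-cmd vmsvc/getallvms                   # list VM ids and their .vmx paths
      vim-cmd vmsvc/power.on 1                  # power a guest on from the CLI to rule out client issues
      tail -n 100 /vmfs/volumes/datastore1/guest1/vmware.log   # power-on errors land here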

    Read the article

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150GB Cheetah SAS drives in them. They are going to be VM hosts, CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, and others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which is more optimal given this situation.

    Read policy: the options are Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead; the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Now I think Adaptive Read-Ahead would be the better compromise, but I don't know much about this option; how does it compare in performance to the others?

    Write policy: the options are write-back caching and write-through caching; the default is write-back caching. Write-back caching, the default, is faster than write-through caching, but that performance comes at a safety cost. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so I should favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
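
    For reference, the PERC controllers in the 2950 are LSI MegaRAID-based, so the policies can be inspected and flipped per virtual disk with MegaCli; a minimal sketch, assuming MegaCli is installed and that applying to all logical drives on all adapters (-LAll -aAll) is acceptable:

      MegaCli -LDInfo -LAll -aAll           # show current read/write cache policy per virtual disk
      MegaCli -LDSetProp ADRA -LAll -aAll   # adaptive read-ahead
      MegaCli -LDSetProp NORA -LAll -aAll   # ...or no read-ahead
      MegaCli -LDSetProp WB -LAll -aAll     # write-back (sensible only with a healthy battery)
      MegaCli -LDSetProp WT -LAll -aAll     # write-through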

    Read the article

  • Possible to IPSec VPN Tunnel Public IP Addresses?

    - by caleban
    A customer uses an IBM SAS product over the internet. Traffic flows from the IBM hosting data center to the customer network through Juniper VPN appliances. IBM says they're not tunneling private IP addresses. IBM says they're tunneling public IP addresses. Is this possible? What does this look like in the VPN configuration and in the packets? I'd like to know what the source/destination IPs/ports would look like in the encrypted, tunneled IPSec payload and in the IP packet carrying the IPSec payload:

      IPSec payload:  source 1.1.1.101:1001   destination 2.2.2.101:2001
      IP packet:      source 1.1.1.1:101      destination 2.2.2.1:201

    Is it possible to send public IP addresses through an IPSec VPN tunnel? Is it possible for IBM to send a print job from a server on their network, using its static-NAT public address, over a VPN to a printer at a customer network, using the printer's static-NAT public address? Or can a VPN not do this? Can a VPN only work with interesting traffic from and to private IP addresses?
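
    The Juniper configuration will look different, but as a hedged illustration that IPSec selectors (proxy IDs) are just prefixes and can be public addresses, this is what an equivalent policy looks like in Linux iproute2 terms, using the addresses from the example above:

      # Illustration only - the SA itself (keys, SPIs) is omitted; addresses come from the question.
      ip xfrm policy add src 1.1.1.101/32 dst 2.2.2.101/32 dir out \
          tmpl src 1.1.1.1 dst 2.2.2.1 proto esp mode tunnel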

    Read the article

  • Fibre channel long distance woes

    - by Marki
    I need a fresh pair of eyes. We're using a 15km fibre optic line across which Fibre Channel and 10GbE are multiplexed (passive optical CWDM). For FC we have long-distance lasers suitable for up to 40km (Skylane SFCxx0404F0D). The multiplexer is limited by the SFPs, which can do max. 4Gb Fibre Channel. The FC switch is a Brocade 5000 series. The respective wavelengths are 1550, 1570, 1590 and 1610nm for FC and 1530nm for 10GbE.

    The problem is the 4GbFC fabrics are almost never clean. Sometimes they are for a while, even with a lot of traffic on them. Then they may suddenly start producing errors (RX CRC, RX encoding, RX disparity, ...) even with only marginal traffic on them. I am attaching some error and traffic graphs. Errors are currently in the order of 50-100 errors per 5 minutes with 1Gb/s of traffic.

    Optics

    Here is the power output of one port summarized (collected using sfpshow on different switches), units=uW (microwatt):

             SITE-A                               SITE-B
      **********************************************
      FAB1   SW1   TX 1234.3      RX   49.1   SW3   1550nm (ko)
                   RX   95.2      TX 1175.6
      FAB2   SW2   TX 1422.0      RX  104.6   SW4   1610nm (ok)
                   RX   54.3      TX 1468.4

    What I find curious at this point is the asymmetry in the power levels. While SW2 transmits with 1422uW, which SW4 receives with 104uW, SW2 receives the SW4 signal (of similar original power) with only 54uW. Vice versa for SW1-SW3. Anyway, the SFPs have RX sensitivity down to -18dBm (ca. 20uW), so in any case it should be fine... but nothing is.

    Some SFPs have been diagnosed as malfunctioning by the manufacturer (the 1550nm ones shown above with "ko"). The 1610nm ones apparently are OK; they have been tested using a traffic generator. The leased line has also been tested more than once. All is within tolerances. I'm awaiting the replacements, but for some reason I don't believe it will make things better, as the apparently good ones don't produce ZERO errors either.

    Earlier there was active equipment involved (some kind of 4GFC retimer) before putting the signal on the line. No idea why. That equipment was eliminated because of the problems, so we now only have: the long-distance laser in the switch, a (new) 10m LC-SC monomode cable to the mux (for each fabric), the leased line, and the same thing but reversed on the other side of the link.

    FC switches

    Here is a port config from the Brocade portcfgshow (it's like that on both sides, obviously):

      Area Number:              0
      Speed Level:              4G
      Fill Word(On Active)      0(Idle-Idle)
      Fill Word(Current)        0(Idle-Idle)
      AL_PA Offset 13:          OFF
      Trunk Port                ON
      Long Distance             LS
      VC Link Init              OFF
      Desired Distance          32 Km
      Reserved Buffers          70
      Locked L_Port             OFF
      Locked G_Port             OFF
      Disabled E_Port           OFF
      Locked E_Port             OFF
      ISL R_RDY Mode            OFF
      RSCN Suppressed           OFF
      Persistent Disable        OFF
      LOS TOV enable            OFF
      NPIV capability           ON
      QOS E_Port                OFF
      Port Auto Disable:        OFF
      Rate Limit                OFF
      EX Port                   OFF
      Mirror Port               OFF
      Credit Recovery           ON
      F_Port Buffers            OFF
      Fault Delay:              0(R_A_TOV)
      NPIV PP Limit:            126
      CSCTL mode:               OFF

    Forcing the links to 2GbFC produces no errors, but we bought 4GbFC and we want 4GbFC. I don't know where to look anymore. Any ideas what to try next or how to proceed? If we can't make 4GbFC work reliably, I wonder what the people working with 8 or 16 do... I don't assume that "a few errors here and there" are acceptable.

    Oh, and BTW, we are in contact with every one of the manufacturers (FC switch, MUX, SFPs, ...). Except for the SFPs to be changed (some have been changed before), nobody has a clue. Brocade SAN Health says the fabric is OK. The MUX, well, it's passive, it's only a prism, nature at its best. Any shots in the dark?

    APPENDIX: Answers to your questions

    @Chopper3: This is the second generation of Brocades exhibiting the problem. Before we had 5000s, now we have 5100s. In the beginning, when we still had the active MUX, we rented a long-distance laser once to put into the switch directly in order to run tests for a day; during that day, of course, it was clean. But as I said, sometimes it's clean just like that. And sometimes it's not. Alternative switches would mean rebuilding the entire SAN with those only to test. Alternative SFPs, well, they're hard to come by just like that.

    @longneck: The line is rented. It's a dark fibre (9um monomode) so there's no one else on it. Sure, there are splices. I can't go and look, but I have to trust they have been done correctly. As I said, the line has been checked and rechecked (using an optical time-domain reflectometer). Obviously you don't have all this equipment yourself because it's way too expensive.

    @mdpc: What would be the "wrong" type of cable according to you? Up to the switch everything is monomode, yes. The connectors are the correct ones too. Yeah, I know there are the green ones where the fibre is cut off at a certain angle etc. But we have the correct ones, for all that I know.

    Progress Report #1

    We have had two fabrics (=2x2 switches) with Brocade 5100s on FabricOS 6.4.1 and two fabrics (another 2x4 switches) on FabricOS 7.0.2. On the long-distance ISLs (one in each fabric) it turned out that with FOS 6.4.1, setting the port to long distance issues warnings about the VC Init setting and consequently the fill word. But those are only warnings. FOS 7.0.2 requires you to make modifications to VCI and the fill word for long-distance links. Setting FOS 6.4.1 to the LS (long-distance static distance) setting with the wrong VCI and fill word setting made the whole fabric inoperational (stuck in an SCN loop; use fabriclog -s to see it, you don't see it anywhere else, no port error counters or anything increasing). Currently I'm giving the one fabric with the IMHO more correct settings a beating and it seems to do fine, whereas the other one, without much traffic, still has errors here and there.

    In short: we have eliminated the active part of the MUX (the FC retimer). We are putting the long-distance SFPs into the end equipment themselves. Just to be sure, we bought new monomode cables to connect the end equipment to the remaining passive part of the MUX. We are now trying out several long-distance configs. It's almost black magic. Everything that happens is mostly empirical; no one seems to have a clue what the exact reasons are for doing something. ("We have tried this, and it didn't work, then we tried that and it worked, so we stuck with that." But no one really seems to know why.) I'll keep you updated.

    Progress Report #2

    We got the new lasers for one of the fabrics on warranty. It's ultra clean, even at 4GbFC. They're transmitting with roughly 2mW (3dBm) whereas the others are only at 1.5mW (1.5dBm), although that should really be enough. The other fabric (where the lasers are apparently OK) still produces one or two CRCs infrequently. Using sfpshow, the SFP producing the actual RX errors shows:

      Status/Ctrl: 0x82
      Alarm flags[0,1] = 0x5, 0x40
      Warn Flags[0,1] = 0x5, 0x40

    Now I'll have to find out what that means. Not sure if it was there before. Well, I'll first clear my head with a week of vacation. 8-)
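
    As a sanity check on the optics numbers above, a quick shell one-liner (awk assumed available) converts the measured receive powers to dBm; all four sit comfortably above the quoted -18dBm sensitivity floor:

      echo "49.1 54.3 95.2 104.6" | awk '{ for (i=1; i<=NF; i++) printf "%6.1f uW = %6.1f dBm\n", $i, 10*log($i/1000)/log(10) }'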

    Read the article

  • Computer hangs at boot screen with new RAID card

    - by shanethehat
    I am trying to build a new server around a Biostar TH61 motherboard and an Adaptec 6405E RAID controller card. The machine booted fine from USB before the RAID card and drives were installed. After installing, on the first boot the card was detected, but then started to spit out the following message every 10 seconds:

      Error: Controller Kernel Stopped Running << Press any key to continue ...

    Following the troubleshooting guide I unplugged everything, reseated the card, and reattached all the drives. This time the machine is sitting on the boot screen without any error messages and flashing a cursor, but after 15 minutes of this, nothing seems to be happening. Given that there are no error messages, I'm hesitant to reboot again. Is it normal for a RAID card to sit without a status message when it first boots, maybe to initialise the drives or something? The current screen output looks a bit like this:

      Controller #00 found at PCI Slot:01, Bus:01, Dev:00, Func:00
      Controller Model: Adaptec 6405E
      Firmware Version: 5.2-0[18512]
      Memory Size: 128MB
      Serial number: 111111111111111
      SAS WWN: 50000D1104AE9180
      _

    Update: After waiting 30 minutes I rebooted, and I'm back to the Kernel Stopped Running error. Maybe it's time to update the RAID BIOS.
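
    If the card does eventually come up under an OS, Adaptec's arcconf CLI (assumed to be installed from the Adaptec support site) is a hedged way to confirm firmware and controller state before and after a flash:

      arcconf getconfig 1 AD       # adapter info: firmware/BIOS versions, controller status
      arcconf getlogs 1 DEVICE     # per-device error counters logged by the controller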

    Read the article

  • Resize Win2003 system+boot partitions to bigger disks & different controller?

    - by ane
    Have an old Win2003 server with 1 SCSI hard drive partitioned as follows:

      D: boot (includes D:\ntldr, boot.ini, etc.)
      C: system (includes C:\WINDOWS)

    Want to move the whole system to new hardware with bigger drives and different controllers. Specifically, C: to a 300GB SAS drive, and D: to a 2TB SATA drive. Tried:

      • VMWare Converter - VMWare Server - Diskpart. Result: Diskpart refuses to resize system or boot disks.
      • VMWare Converter - VMWare Server - GParted. Result: Will not boot (see http://serverfault.com/questions/219868/resize-ntfs-system-partitions-with-gparted ).
      • Attach original VMWare disk to a duplicate VMWare install - Diskpart. Result: Will not boot (goes to Directory Services Restore mode).
      • Backup Exec System Recovery Server Edition 2010 with Restore Anywhere (tried restoring both to VMWare and to the bare system, without VMWare). Result: Windows boot error: "Could not read from the selected boot disk. Check boot path." Supposedly this is a boot.ini problem, so I tried bootcfg /rebuild from the recovery console; it says it can't find a Windows partition, so it can't rebuild.

    Thought about Ghost, but it's completely different hardware/controllers that we're restoring to, so I doubt it would boot. Reinstalling Windows from scratch is not an option due to critical custom software heavily embedded on the original machine. Has anyone been in a similar situation (with unusual boot/system partitions) before and figured out how to resize onto different disks?
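
    Since the failures point at boot.ini, for reference this is roughly what an ARC-path entry for this layout would look like if the system partition ends up as the second partition on the first disk; the rdisk/partition numbers are assumptions that have to match where the restore actually placed C::

      [boot loader]
      timeout=30
      default=multi(0)disk(0)rdisk(0)partition(2)\WINDOWS
      [operating systems]
      multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Windows Server 2003" /fastdetect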

    Read the article

  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).

    I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications. But many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?

    Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations. And I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128kb is extremely small for a deployment on server-class hardware.

    The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

      http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
      http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
      http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

      echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
      blockdev --setra 65536 /dev/cciss/c0d0
      echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
      echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
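
    On the sysctl side, the dirty-page writeback knobs are the ones most often examined alongside the block-queue settings above; a hedged sketch, with values that are illustrative starting points rather than recommendations:

      sysctl -w vm.dirty_background_ratio=5   # start background writeback earlier
      sysctl -w vm.dirty_ratio=15             # cap dirty pages before writers get throttled
      sysctl -w vm.swappiness=10              # bias reclaim toward page cache instead of swapping application memory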

    Read the article

  • How to interpret iozone values

    - by Henno
    I ran a test to measure my I/O IOPS on Linux:

      iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b /tmp/results.xls

    iozone claims that output is in operations per second, yet the numbers are too big for that to be plausible. I'm observing some 320 CMDs/s maximum on the VMware ESX console (esxtop, then v).

      File size set to 4194304 KB
      Record Size 2 KB
      Record Size 4 KB
      Record Size 8 KB
      Record Size 16 KB
      Record Size 32 KB
      OPS Mode. Output is in operations per second.
      Command line used: iozone -s 4g -r 2k -r 4k -r 8k -r 16k -r 32k -O -b tmpresults.xls
      Time Resolution = 0.000001 seconds.
      Processor cache size set to 1024 Kbytes.
      Processor cache line size set to 32 bytes.
      File stride size set to 17 * record size.

                                                              random   random     bkwd   record   stride
              KB  reclen   write  rewrite    read   reread      read    write     read  rewrite     read   fwrite  frewrite   fread  freread
         4194304       2   19025     5580   27581    29848       284      198      415  1103217     1498    18541      4340   24245    25618
         4194304       4   15650    21942   18962    21068       252     1198      193   976164     1677    22802     23093   21089    21232
         4194304       8   11121    11638   10273    10165       247     1196      202   625020^C

    The test ran for 15 hours before I pressed ^C. Is that an ordinary expectation for such a command line (dedicated 4-drive RAID10 LUN, 10k RPM SAS drives in an EMC CX300)?
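
    The larger numbers are likely being served at least partly from the page cache rather than the CX300; a hedged re-run that takes the cache out of the picture (flags from the iozone man page: -I for O_DIRECT, -e to include flush in timing, -o for synchronous writes) should land much closer to what esxtop reports:

      iozone -s 4g -r 4k -I -e -o -O -b /tmp/results-direct.xls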

    Read the article

  • Cloning a NAS drive which hosts a SQL Server DB

    - by Adrian Hand
    We have a system in the field running a server application which is suffering from major performance issues. The system in question has 2 onboard 300GB SAS drives in RAID 5, from which it boots Windows Server 2003, and a 6TB Buffalo TeraStation NAS unit (also RAID 5) to which the server app does all of its reading and writing. I believe the TeraStation is the source of all our woes.

    While under load, reads and writes tick by at something of the order of 1MB/sec, though the network in question is hardly utilised. The TeraStation contains various data, but crucially hosts a full instance worth of SQL Server .mdf and .ldf files (master etc. - the whole shooting match).

    I wish to stop all the services on the server, then take everything on the TeraStation and essentially clone it to some alternative onboard storage, so as to eliminate the TeraStation from the equation as far as poor performance is concerned. I.e. the TeraStation is currently drive D:; I want to copy everything off and then have the duplicate assume the drive letter, so that as far as the software is aware, nothing is different. This is tricky because of the .mdf and .ldf files - everything else will work with a straight-up file copy. Can anyone suggest a means to achieve what I am describing? Many thanks!

    Read the article

  • What is the recommended glusterFS configuration for a growing website?

    - by montana
    Hello, I have a website that is tracking towards 50 million hits per day on average, and within the next 3 months should be over 100 million hits per day. We are trying to use GlusterFS v3.0.0 (with the latest patches as of 1-17-2010).

    Currently, we've just upgraded to a load balancer environment that has 3 physical hosts with 6 XenServer 5.5u1 VMs (2 on each host) to serve webpage traffic. Each machine has 6 local storage drives in RAID-6 (7200RPM SATA). The old machine we came from had a single mirrored 10k SAS drive. We also set up GlusterFS currently with 3 bricks, one on each host, and it is serving the 6 VMs as clients.

    In testing, everything seemed fine. However, when we went to production, it seemed that there just weren't enough I/Os available to serve traffic even upwards of 15 million hits. Weeks prior, our old server was able to handle traffic, maxed out, at 20 million. Are there any recommended configurations for such an application, or things to be aware of that aren't apparent in the documentation at gluster.org for a site our size?

    Read the article

  • ssd firmware, linux: updating large batch of drives

    - by wryfi
    I was recently hit with a fatal firmware bug that affected dozens of Crucial SSDs deployed in my datacenter. Many of the affected machines use LSI or other proprietary SAS controllers, which Crucial's bootable ISO does not recognize. None of the affected machines has a Windows license. The story is roughly similar for other SSD mfrs, including Samsung and Intel. To resolve this issue, I was forced to stop each machine, remove the affected SSD, remove the SSD from its hotswap caddy, install it temporarily into my ThinkPad, flash the firmware, reverse, rinse, repeat. It took the better part of a day to get through all the affected devices. I am looking for hardware, software, and/or purchasing strategies to ease this pain, as SSD firmware bugs seem inevitable, and our SSD footprint is growing. My first thought is to get a laptop with eSATA and one of these cables (http://www.newegg.com/Product/Product.aspx?Item=N82E16812311004). That should at least make it so I don't have to remove the drives from their caddies. Surely others have run into this. Any novel solutions?
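
    As a side note, taking a firmware inventory before and after a flashing campaign is scriptable; a hedged sketch with smartmontools (the -d megaraid,N addressing only applies to drives behind LSI MegaRAID-family controllers, and the device path and slot range are assumptions):

      for n in $(seq 0 23); do
        smartctl -i -d megaraid,$n /dev/sda 2>/dev/null \
          | egrep 'Device Model|Serial Number|Firmware Version'
      done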

    Read the article

  • setup lowcost image storage server with 24x SSD array to get high IOPS?

    - by Nenad
    I want to build - let's name it a low-cost Ra*san - which would host the images for our social site (many millions of them). We keep 5 sizes of every photo, at 3 KB, 7 KB, 15 KB, 25 KB and 80 KB per image. My idea is to build a server with 24x consumer 240 GB SSDs in RAID 6, which will give me some 5 TB of disk space for the photo storage. For HA I can add a second one and use DRBD. I'm looking to get above 150'000 IOPS (4K random reads). As we mostly have read-only access and rarely delete photos, I think I'll go with consumer MLC SSDs. I've read many endurance reviews and don't see a problem there as long as we don't rewrite the cells. What do you think about my idea?

      • I'm not sure between RAID 6 or RAID 10 (more IOPS, SSD cost).
      • Is ext4 OK for the filesystem?
      • Would you use 1 or 2 RAID controllers, with an expander backplane?

    If anyone has realized something similar I would be happy to get real-world numbers.

    UPDATE: I have bought 12 (plus some spare) OCZ Talos 480GB SAS SSD drives. They will be placed in a 12-bay DAS and attached to a PERC H800 (1GB NV cache, manufactured by LSI, with FastPath) controller. I plan to set up RAID 50 with ext4. If someone is wondering about some benchmarks, let me know what you would like to see.
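
    For the benchmark side, a hedged fio sketch for the 4K random-read target; the device name, job count and queue depth are illustrative assumptions, and it should be run against the raw device before a filesystem goes on it:

      fio --name=randread --filename=/dev/sdb --rw=randread --bs=4k --direct=1 \
          --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting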

    Read the article

  • Load is 0, yet site crawls (sometimes). What gives?

    - by Yegor
    I have a ~1.5-2 million page views per day site running on 2 servers: one for MySQL, the other for everything else. The MySQL box has a load of 3; the frontend is usually 0.0-0.1. Both are dual quad-core with 8GB RAM, running SAS drives in RAID 5. CPU is idle for the majority of the time, and iowait is non-existent. I'm running nginx and memcache, and the site is built on PHP.

    Half the time everything runs perfectly, while at other times it lags something severe, when it takes 10-15 seconds for a page to load. Page execution time is always super low, but it seems to hang, waiting for something before it actually loads the page. What's even more weird is that it only happens to 1 file on the site (but it's the one that's most commonly accessed, the one that actually loads the content on the site). Other pages are super fast at all times, even when it takes 15 seconds to load the actual content.

    I have the nginx_stats plugin installed, and if I monitor it, the lag spikes happen when the write column starts going above 100, and it frequently does... all the way to 500-1000. It does so at totally random times... not when traffic is heavy... it can do this in the middle of the night, and work perfectly at 5pm when traffic is at its highest. Any ideas?
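
    One hedged way to see what that one script is blocked on during a spike (the process name is an assumption; adjust for php-fpm vs. php-cgi/spawn-fcgi):

      # Attach to a busy PHP worker during a lag spike and watch which syscalls stall:
      strace -f -tt -T -p "$(pgrep -f php | head -n 1)" 2>&1 | tail -n 100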

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD value on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that also did not help. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation. How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it entire physical access to the disk (to allow for SMART values to be read etc.)?

    Read the article

  • 100% CPU load on Ubuntu 10.04.3 LTS 64bit

    - by deadtired
    I have spent 2 days trying to fix this issue, with no success. The server is a MySQL database server.

    Hardware: Dell PowerEdge 1950, 2x Intel Xeon Quad Core E5345 @ 2.33GHz, 16 GB RAM, 2x 146GB SAS (software RAID 1)
    Software: Ubuntu 10.04.3 LTS, MySQL 5.1.41

    Issue: while MySQL is not used and runs with no database, everything seems alright. As soon as I install a database, something brings all 8 cores to 100% with low memory consumption. So, as you can imagine, the load average goes high (I saw a 212 load average for the first time). The server doesn't become unresponsive, but you can see it's slow while browsing the installed project.

    Additional info: the database used is not more than 24MB and it was moved from a server with fewer resources and a lot of larger databases, so it's not the database/project. my.cnf is not the reason either, as I used both the default one and the one I use on the same distribution on another server. What is interesting is that MySQL doesn't close any process and runs to the limit of max_connections. Logs are quiet - nothing there. I switched to this Ubuntu version after I suspected some problems in the new Ubuntu 11.10 server; that one worked alright for an hour after I made a kernel upgrade to 3.0.1 (it was using the memory as well). I tested disk speed and it seems alright.

    Some more output on the running server: dstat -cndymlp -N total -D total 3: htop command: Idea? Did anyone meet the same problem? Any fix you can think of?
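
    A hedged first pass at seeing what those busy threads are actually doing (assumes shell access with working MySQL credentials):

      mysql -e "SHOW FULL PROCESSLIST\G"                  # what each connection is executing right now
      mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_%'"      # running vs. connected vs. cached threads
      mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%'"   # runaway on-disk temp tables often burn CPU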

    Read the article

  • System Center 2012 VMM UI is very slow

    - by Grant
    I've recently set up System Center 2012 on a new Server 2008 R2 server which I'm using for virtual machines. Everything seems to be working fine, and the virtual machines are nice and fast. But the Virtual Machine Manager interface is always excruciatingly slow, sometimes taking up to 15 seconds to move between screens. It's very frustrating trying to use it when a task that just involves a couple of clicks ends up taking several minutes. Pages that have a lot of form fields seem to take the longest to load, such as the page to change the hardware settings of a virtual machine. Is this just normal performance for VMM? If not, where can I look to find what is slowing it down? Nothing else on the system seems to suffer. I can load and use Hyper-V Manager with no noticeable slowness. Even programs like Event Viewer that are usually rather slow seem to load fairly fast. Only the System Center programs seem slow. The server is a Dell R710, 2x 16-core Opteron 6274 processors, 96GB RAM. The OS drive is 2x 500GB 7.2k RPM SAS drives in RAID 1 (opted for the less expensive 7.2k drives since pretty much everything is stored on the SAN). Am I just being impatient? Does anyone else use VMM 2012 and find it slow?

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially getting a dedicated box worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts so I can spin them up there until the failing host is fixed.

    I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't ever want to have any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

      • Set up a DRBD partition between box 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between box 2 & 3, and box 3 & 1. (This could be insane; I have never used DRBD, only read about it.)
      • Just use standard KVM CLI clone options to perform live clones. (I'm dubious about this because I don't know how long it will take and what the performance impact will be during the clone.)
      • Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).
      • Some other way? Ideas welcome!

    Side note: these servers have 4x 15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage - no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host-failure situation.
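
    As one hedged illustration of the "standard CLI" route (host names, paths and image format are assumptions, and copying a running guest's disk this way yields at best a crash-consistent replica):

      virsh dumpxml vm1 > /tmp/vm1.xml                                      # capture the guest definition
      rsync -av --sparse /var/lib/libvirt/images/vm1.img box2:/var/lib/libvirt/images/
      scp /tmp/vm1.xml box2:/tmp/ && ssh box2 "virsh define /tmp/vm1.xml"   # register it, ready to start on failover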

    Read the article

  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows:

      Filesystem              Size  Used Avail Use% Mounted on
      /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
      tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
      udev                    2.0G  140K  2.0G   1% /dev
      tmpfs                   2.0G     0  2.0G   0% /dev/shm
      /dev/sda5               461M   26M  412M   6% /boot
      /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2x 300GB 15k SAS disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I run it now:

      /dev/sda:
       Timing buffered disk reads:  730 MB in  3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions.

    Unencrypted /dev/sda7:

      /dev/sda7:
       Timing buffered disk reads:  540 MB in  3.00 seconds = 179.78 MB/sec

    Unencrypted /dev/sda5:

      /dev/sda5:
       Timing buffered disk reads:  476 MB in  2.55 seconds = 186.86 MB/sec

    Encrypted /dev/mapper/sda1_crypt:

      /dev/mapper/sda1_crypt:
       Timing buffered disk reads:  150 MB in  3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect a drop in performance on the other partitions at all. The other hardware in the server is 2x quad-core Intel(R) Xeon(R) E5405 CPUs @ 2.00GHz and 4GB RAM.

      $ cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 32 Lun: 00
        Vendor: DP       Model: BACKPLANE         Rev: 1.05
        Type:   Enclosure                         ANSI  SCSI revision: 05
      Host: scsi0 Channel: 02 Id: 00 Lun: 00
        Vendor: DELL     Model: PERC 6/i          Rev: 1.11
        Type:   Direct-Access                     ANSI  SCSI revision: 05
      Host: scsi1 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: CD-ROM GCR-8240N  Rev: 1.10
        Type:   CD-ROM                            ANSI  SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how to go about finding them. Thanks
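
    To answer the last point, the parameters the installer chose can be read back from the running system; a hedged sketch, assuming the sda1_crypt mapping sits on /dev/sda1 as its name suggests:

      cryptsetup status sda1_crypt            # cipher, key size and offset of the active dm-crypt mapping
      cryptsetup luksDump /dev/sda1           # LUKS header: cipher spec, hash, key slots
      openssl speed aes-128-cbc aes-256-cbc   # rough single-core software AES throughput for comparison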

    Read the article

  • Adding more drives to a drive array

    - by Mystere Man
    I have a friend who has two servers, a Dell 1800 and an HP ML350 G5; both have SAS drive arrays. The Dell is 3.5" and the HP is 2.5". They currently only have 3 drives in each array. We want to add additional drives, but they do not appear to have caddies, just "fake" covers. I haven't been able to take a good look at them, so I'm not sure what I need to do here. Are the "sockets" just there, so I can buy additional caddies and just stick them in? Or do I have to buy some kind of caddy adapter? Also, I'm thinking of just going 2.5" in the new server, so is there a 2.5" adapter caddy that will fit in the 3.5" chassis for the Dell, so I can use 2.5" drives in the 3.5" chassis? Can I buy 6Gb/s drives and add them to the 3Gb/s controller? The reason is that we're going to replace both computers in a year or so, and we want to bring the drives with us. So rather than buy 3Gb/s drives, we just want to buy 6Gb/s drives so they can be used in the new server.

    Read the article

  • LSI1068E hidden drives after failed raid volume creation

    - by silk
    We are using the LSI 1068E RAID chipset with SAS drives. We had added new drives to the system and tried to create a new RAID volume with lsiutil; unfortunately the creation failed. The problem is that now we do not have the new RAID volume, and the disks have 'disappeared' and are not available as targets for RAID:

      • lsiutil option 8 (scan for devices) does not display these disks at all.
      • lsiutil option 16 (display attached devices) does list them as targets.
      • lsiutil option 21+30 (create raid) does not list these disks.

    Just after inserting them into the enclosure these disks appeared in the system, as expected. During the RAID creation the kernel logged the following for both of them, again as expected:

      Mar  4 08:40:02 kilo kernel: [57555.687946] mptbase: ioc0: RAID STATUS CHANGE for PhysDisk 2 id=0
      Mar  4 08:40:02 kilo kernel: [57555.687978] mptbase: ioc0: PhysDisk has been created
      Mar  4 08:40:02 kilo kernel: [57555.695438] scsi target0:0:2: mptsas: ioc0: RAID Hidding: fw_channel=0, fw_id=0, physdsk 2, sas_addr 0x5000c50008ebe5fd

    Unfortunately they did not appear back even though the volume was not created. The situation is the same in the controller's BIOS after a reboot. Taking the disks out and inserting them in different slots did not help, either. Has anyone seen a similar problem and knows how to 'get back' our disks?

    Read the article

  • centos install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install CentOS 6.2 via KVM / IPMI. I'm going to install cPanel as well, and they recommend this layout:

      /boot (99MB)
      swap  (2x server RAM)
      /     (remainder)

    In the OS install profile within the NOC-PS software, it shows as this:

      part /boot --fstype ext2 --size 250
      part pv.01 --size 1 --grow
      volgroup vg pv.01
      logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root
      logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp
      logvol swap --vgname=vg --recommended --name=swap

    By the time the default partition setup was done installing CentOS, I got this:

      [root@server005 ~]# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/mapper/vg-root   532G  907M  504G   1% /
      tmpfs                 7.8G     0  7.8G   0% /dev/shm
      /dev/sda1             243M   28M  202M  13% /boot
      /dev/mapper/vg-tmp   1008M   34M  924M   4% /tmp

      [root@server005 ~]# cat /etc/fstab
      #
      # /etc/fstab
      # Created by anaconda on Fri Dec  7 18:47:24 2012
      #
      # Accessible filesystems, by reference, are maintained under '/dev/disk'
      # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
      #
      /dev/mapper/vg-root                        /          ext4    discard,noatime  1 1
      UUID=58b31aaf-5072-4fb1-a858-33bc316fa793  /boot      ext2    defaults         1 2
      /dev/mapper/vg-tmp                         /tmp       ext4    discard,noatime  1 2
      /dev/mapper/vg-swap                        swap       swap    defaults         0 0
      tmpfs                                      /dev/shm   tmpfs   defaults         0 0
      devpts                                     /dev/pts   devpts  gid=5,mode=620   0 0
      sysfs                                      /sys       sysfs   defaults         0 0
      proc                                       /proc      proc    defaults         0 0

    My question is: what should the NOC-PS install profile look like to get the recommended cPanel partitioning? The server has 16GB RAM and dual 600GB SAS drives, and will be used for cPanel shared hosting.
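
    For comparison, a hedged kickstart sketch of the cPanel-recommended layout (plain partitions, no LVM; 16GB RAM giving a 32GB swap). Whether NOC-PS accepts these directives unchanged is an assumption to verify:

      part /boot --fstype ext2 --size 250
      part swap  --size 32768
      part /     --fstype ext4 --size 1 --grow --fsoptions=noatime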

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process" etc. I'm slightly confused by this, since logs show that at the time the system has lots of memory available (around 26GB in one case) and is not particularly stressed in any other way. A JVM crash with a similar error, plus the added query "Out of swap space?", made me dig a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it's reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and seemingly undersized swap space?
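
    For reference, a hedged sketch of checking reservation vs. usage and growing swap (run from the global zone; the pool name, zvol path and size are illustrative assumptions for a ZFS root):

      swap -s                              # virtual swap: allocated, reserved, used, available
      swap -l                              # configured swap devices and how full they are
      zfs create -V 32G rpool/swap2        # carve an additional swap zvol
      swap -a /dev/zvol/dsk/rpool/swap2    # add it to the running system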

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi 5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via a USB controller and USB device. I copy a 4GB file from the external USB drive to the server disk. In the VM that takes up to 10 minutes; on a native Win2003 box it takes approx. 3 minutes. I have no explanation for that difference. In either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID 1). If the USB connection in the VM were USB 1.1 and not USB 2.0, it would take much more time. (The disk performance between server partitions on the VM is fine - see update.) Could it be that my native box is extremely fast and the VM is the normal case?

    Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes - still 2 times slower than the native connection. I also did another measurement: the copy between partitions on the same VM takes 3 minutes.

    Read the article
