Search Results

Search found 1864 results on 75 pages for 'raid'.

Page 50 of 75

  • Leeroy Jenkins Reimagined As a Bank Heist [Video]

    - by Jason Fitzpatrick
    In this 3-minute short film, Leeroy Jenkins of World of Warcraft infamy brings his raid-botching power to the real world in a bank heist gone wrong. If you haven’t seen the original Leeroy Jenkins clip, watch it first to properly contextualize this entertaining little film. [via Super Punch]

    Read the article

  • Avoid random disk names

    - by BarsMonster
    Hi! I have Ubuntu Server 10.04 with one system disk and five disks in a RAID-5 configuration. The problem is that the device names of these disks (sda, sdb, sdc, sdd, sde, sdf) get randomly shuffled from boot to boot, so the system disk might show up as sda one time and sdc another. Is there any way to pin the drive names, based on disk UUID or something similar, so that even if a drive is disconnected no other drive can take over its letter?
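
    The usual fix is to stop referring to drives by their sdX letters at all and use persistent identifiers instead; a minimal sketch, with placeholder UUIDs rather than values from this system:

      # list filesystem and array UUIDs (values below are placeholders)
      sudo blkid
      sudo mdadm --detail /dev/md0 | grep UUID

      # /etc/fstab -- mount by UUID so it no longer matters which sdX a disk gets
      UUID=0f3db305-xxxx-xxxx-xxxx-placeholder  /  ext4  defaults  0  1

      # /etc/mdadm/mdadm.conf -- assemble the RAID-5 set by its UUID, not by member names
      ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
      # then regenerate the initramfs so the change takes effect at boot:
      sudo update-initramfs -u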

    Read the article

  • 11g R2 RAC Expert

    - by user763243

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a failover cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something. The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(X) files located on a Cluster Shared Volume. The VRTX has 10 x 900 GB 10K SAS drives in a RAID 6 configuration and the redundant Shared PERC 8 controllers. Both blades have full access to the virtual disks. There are two M520 blades installed, each with 128 GB RAM. MPIO is configured for the PERC 8 controllers. The operating system on the blades is Server 2012 (NOT R2). The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1). An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds of 20 MB/s or so, read speeds of 500 KB/s or so, and an Average Response Time of over 1000 ms, sometimes spiking to 4000-5000 ms. It's the latency that really worries me. Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • Dell 2950 Perc 6/i "physical disk" and "Enclosure(Backplane)" under Connector 1 in OMSA tree- Troubleshoot help

    - by user66357
    Just looking for someone who might know why this could occur... In OMSA, on my Dell 2950, there usually is only one "Physical Disks" child under "Enclosure (Backplane)" in the tree view. Currently, the tree looks like this: Dell PERC 6/i Integrated Connector 1 (RAID) Enclosure (Backplane) Physical Disks (1:04 good, 1:05 removed) Physical Disks (1:33 Ready but unused) Normally it's like this: Connector 1 (RAID) Enclosure (Backplane) Physical Disks (1:04 good, 1:05 good) From the front, 6 of 6 3.5" SAS drives are connected. The server is showing Slot 5 as bad and the disk as removed. It seems that the drive in Slot 5 is being sensed as external to the Enclosure. Any ideas why this would happen? Think I can get away with rebuilding the virtual disk by replacing 1:05 with 1:33? Thanks. UPDATE: The only options on the Physical Disk 1:33 were Assign as Global Hot Spare and Clear... After clearing, I assigned it as the Global Hot Spare. This allowed the rebuilding of the virtual disk. Hopefully it won't fail. I'm still unsure of the reason for this odd behavior. I'm checking the firmware next.

    Read the article

  • Dell Perc 6i with FreeBSD 8.1 errors with mfi0: COMMAND xxxxxxxx TIMEOUT AFTER xxx SECONDS

    - by jDempster
    We've recently bought two Dell PowerEdge R710 servers with Perc 6i controllers and 6x 135GB SAS drives. We'd done some pretty extensive testing on a Dell PowerEdge R510 server with a Perc 6i and 4x 135GB SAS drives running FreeBSD 8.1, for its wonderful ZFS support and mfiutil. We hadn't had any problems with the R510 and had got to a point where we were happy with the performance of ZFS. Since running FreeBSD 8.1 on the R710s we've been getting errors from the RAID controller: mfi0: COMMAND 0xffffff80005d1770 TIMEOUT AFTER 6178 SECONDS This usually brings the system to a standstill, but it doesn't always happen, and the system performs very well up until it does. We've been running the disks as 3 mirrored pairs striped in ZFS. So far we've noticed that configuring the drives as RAID 10 on the RAID controller seems to work without errors (still testing). At first I suspected a hardware error, as we'd been running FreeBSD on the R510 with the same controller without any issues, but both R710s have the same issue. All controllers are running the same firmware.
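
    As a first diagnostic pass, the mfiutil(8) tool mentioned above can pull the controller's own view of things; a hedged sketch using the standard FreeBSD 8.x subcommands, with nothing here taken from the original post:

      mfiutil show adapter     # controller model and settings
      mfiutil show firmware    # firmware packages flashed on the Perc 6i
      mfiutil show volumes     # logical volume state as the controller sees it
      mfiutil show drives      # physical drive state
      mfiutil show events      # controller event log, which often records the
                               # resets behind the mfi0 COMMAND ... TIMEOUT messages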

    Read the article

  • VMWare ESX, storage over 2TB

    - by Phliplip
    Hi. First off, I'm a web developer and my server experience lies in setting up FreeBSD web servers. I'm working on a project for a photographer, and I've been hired to develop a new online photo ordering system, where users of course can view their photos :) They have a massive need for storage, so we have bought an HP G6 and 8x 1TB SATA HDDs. Our plan is to install VMware ESX 4.0, running multiple virtual machines: FreeBSD 8 for the webserver and some Windows servers. Already done that. Then mount one big storage volume on the BSD VM and share it through Samba to the Windows servers. The RAID is set up with one 2x1TB array to hold the VMs, and the rest is set up as three 2x1TB arrays to hold the photo data, giving 2.73TB for photo data (the arrays are RAID 1+0). Now if we add a datastore in ESX and add the 3 LUNs, we can get a datastore of 2.74TB. But I don't see how I can attach this datastore directly to the VM; only the BSD VM needs access to it. The only way seems to be to create a virtual disk, with a maximum of 2TB (8MB block size), because the datastore where we save the virtual disk has a maximum file size of 2TB, and then add it as a hard disk to the BSD VM. In the 'Add Hard Disk' pane for the VM I see an option for Raw Device Mappings, which I think is for accessing the datastore or the RAID directly. The only problem is that it's greyed out! Can I access the storage directly from the BSD, without creating and adding a virtual disk?
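
    One workaround that is often suggested when the RDM option is greyed out for local storage is creating the mapping file by hand with vmkfstools from the service console; this is only a sketch with placeholder device and datastore names, and note that ESX 4 still limits a single RDM to just under 2TB, so each ~1TB photo LUN would be mapped separately and then combined inside FreeBSD:

      # list local devices to find the LUN identifiers (placeholder below)
      esxcfg-scsidevs -l
      # create a physical-mode RDM pointer file on an existing VMFS datastore
      vmkfstools -z /vmfs/devices/disks/naa.PLACEHOLDER \
          /vmfs/volumes/datastore1/bsdvm/photo_lun1_rdm.vmdk
      # repeat per LUN, add each .vmdk to the BSD VM as an existing disk,
      # then stripe or concatenate them inside FreeBSD (e.g. with ZFS)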

    Read the article

  • RAID1 Broken Mirroring

    - by Sanoj
    I have a small server running Windows Small Business Server 2003. I'm using RAID 1 via a HighPoint RocketRAID 1640 card, with two hard drives. This week the server alarmed, and during reboot I got the error message Broken Mirroring (User Manual page 30). I had a few alternatives (see the manual). First I tried Continue, but the server restarted during boot. Next time I chose Power Off, replaced the oldest hard drive with a new one, and on the next boot selected Rebuild, choosing the new hard drive as the rebuild target. The rebuild procedure started and a progress bar at 0% showed up, but after a few seconds I got the message Copy Failed!; then the server booted and Windows Server started. Now it works fine, but I guess that I'm just running on one hard drive now and it's not mirrored. I haven't touched the server since then (two days ago). What should I do now? I have no experience with this situation. Does anyone have some guidance?

    Read the article

  • Bad Blocks Exist in Virtual Device PERC H700 Integrated

    - by neoX
    I have a Dell server with a PERC H700 Integrated controller. I've built a RAID 5 array with 12 hard drives and the virtual device is in Optimal state, but I receive errors like these under Linux: sd 0:2:0:0: [sda] Unhandled error code sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00 sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bd 98 00 00 00 08 00 00 end_request: I/O error, dev sda, sector 30640487832 sd 0:2:0:0: [sda] Unhandled error code sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00 sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bd 98 00 00 00 08 00 00 end_request: I/O error, dev sda, sector 30640487832 sd 0:2:0:0: [sda] Unhandled error code sd 0:2:0:0: [sda] Result: hostbyte=0x07 driverbyte=0x00 sd 0:2:0:0: [sda] CDB: cdb[0]=0x88: 88 00 00 00 00 07 22 50 bc e0 00 00 01 00 00 00 end_request: I/O error, dev sda, sector 30640487648 But all disks are in Firmware state: Online, Spun Up. Also, there is not a single ATA read or write error on any disk in the RAID (I check them with smartctl -a -d sat+megaraid,N -H /dev/sda). The only strange thing is in the output of megacli: megacli -LDInfo -L0 -a0 ... Bad Blocks Exist: Yes How can there be bad blocks in a virtual drive that is in Optimal state, when no disk is failed or even shows a single error? I tried a "Consistency Check", but it finished successfully and the errors are still in dmesg. Could someone help me figure out what is wrong with my RAID?
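
    A couple of MegaCli checks that usually help narrow this down (same lowercase megacli binary as used above; the flags are standard MegaCli syntax, not output from this system):

      # per-drive error counters as the controller itself tracks them
      megacli -PDList -aALL | egrep 'Slot|Media Error|Other Error|Predictive'
      # dump the controller event log; it normally records when the entries in
      # the virtual drive's bad block table were added
      megacli -AdpEventLog -GetEvents -f mfi_events.log -aALL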

    Read the article

  • Hardware for a home server running Windows Server 2008 R2 Hyper-V or Microsoft Hyper-V Server 2008 R2

    - by David Hayes
    Hi, I'm planning to build a server to do the following: act as a file server (videos, pictures, music); run Squeezebox Server; and run the Zune software to allow wireless syncing to Windows Phone 7. I'd also like to aim for: low power usage (I'd settle for less than the 90-100 watts I'm using atm); flexibility (I might want to add a web server or SharePoint or...); something I can learn/test on (work is mainly a Windows shop but I do have Linux experience too, and I'd like to take a look at App-V (application virtualization) as well); a cost of less than $1000; and quiet would be nice but not essential (it'll be in the basement). I'm thinking of getting a TechNet subscription to get access to Windows Server 2008 R2 at a reasonable price ($199). So my plan was this: get a bunch of 2TB Caviar Green drives to RAID up (RAID 1 or 6 probably); get a quad-core CPU (Intel i5/i7 probably); install a hypervisor; install W2K8 R2 Storage Server for a NAS; install Windows 7 Pro to run Zune/Squeezebox; and install any other machines I want to play with. Questions: Can anyone see any issues with this or have any better ideas? Do you think I'd need an i7 over an i5? Is 4 cores enough/too much? Can anyone suggest a nice, reasonably priced case that will hold 6-8 drives and stay cool? Should I wait for Sandy Bridge parts?

    Read the article

  • Is it possible to shrink the size of an HP Smart Array logical drive?

    - by ewwhite
    I know extension is quite possible using the hpacucli utility, but is there an easy way to reduce the size of an existing logical drive (not array)? The controller is a P410i in a ProLiant DL360 G6 server. I'd like to reduce logicaldrive 1 from 72GB to 40GB. => ctrl all show config detail Smart Array P410i in Slot 0 (Embedded) Bus Interface: PCI Slot: 0 Serial Number: 5001438006FD9A50 Cache Serial Number: PAAVP9VYFB8Y RAID 6 (ADG) Status: Disabled Controller Status: OK Chassis Slot: Hardware Revision: Rev C Firmware Version: 3.66 Rebuild Priority: Medium Expand Priority: Medium Surface Scan Delay: 3 secs Surface Scan Mode: Idle Queue Depth: Automatic Monitor and Performance Delay: 60 min Elevator Sort: Enabled Degraded Performance Optimization: Disabled Inconsistency Repair Policy: Disabled Wait for Cache Room: Disabled Surface Analysis Inconsistency Notification: Disabled Post Prompt Timeout: 15 secs Cache Board Present: True Cache Status: OK Accelerator Ratio: 25% Read / 75% Write Drive Write Cache: Enabled Total Cache Size: 512 MB No-Battery Write Cache: Disabled Cache Backup Power Source: Batteries Battery/Capacitor Count: 1 Battery/Capacitor Status: OK SATA NCQ Supported: True Array: A Interface Type: SAS Unused Space: 412476 MB Status: OK Logical Drive: 1 Size: 72.0 GB Fault Tolerance: RAID 1+0 Heads: 255 Sectors Per Track: 32 Cylinders: 18504 Strip Size: 256 KB Status: OK Array Accelerator: Enabled Unique Identifier: 600508B1001C132E4BBDFAA6DAD13DA3 Disk Name: /dev/cciss/c0d0 Mount Points: /boot 196 MB, / 12.0 GB, /usr 8.0 GB, /var 4.0 GB, /tmp 2.0 GB OS Status: LOCKED Logical Drive Label: AE438D6A5001438006FD9A50BE0A Mirror Group 0: physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK) physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK) Mirror Group 1: physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK) physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 146 GB, OK) SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 Device Number: 250 Firmware Version: RevC WWID: 5001438006FD9A5F Vendor ID: PMCSIERA Model: SRC 8x6G

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
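
    A hedged outline of the usual shrink order (filesystem first, then the LUKS mapping, then the LV), run as root and only after a verified backup; the intermediate 90G figure and the sector arithmetic are examples, not values taken from this output:

      umount /srv/backup
      e2fsck -f /dev/mapper/backup
      resize2fs /dev/mapper/backup 90G      # shrink the FS below the final size first
      # shrink the LUKS mapping to 100GiB minus the 3072-sector header offset:
      # 100 * 1024^3 / 512 - 3072 = 209712128 sectors
      cryptsetup resize backup --size 209712128
      lvreduce -L 100G /dev/moab/backup     # now shrink the LV itself
      cryptsetup resize backup              # re-grow the mapping to fill the smaller LV
      resize2fs /dev/mapper/backup          # grow the FS back up to the mapping size
      e2fsck -f /dev/mapper/backup && mount /srv/backup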

    Read the article

  • Configuration for a two machine ESXi cluster using VSA to present local storage to VMs

    - by MDMarra
    I'm designing a little vSphere 5 cluster for one of our remote sites. We have some IBM x3650s that have 6x 300GB 10K RPM drives in them, along with dual quad core CPUs and 24GB RAM. Because we use HP P4500 G2s at our primary site, we have licenses available for HP P4000 VSAs. I thought that this would be the perfect opportunity to use them. Below is a basic drawing of what I want to accomplish: I want to run a P4000 VSA on each server and run them in a Network RAID-10 (Lefthand speak for network mirroring, think of it as RAID 1 across nodes or as an active/active storage cluster). I will then present this storage to guests that will run on this mini-cluster. It will be managed by a vCenter Server on our main site. All connections will be GbE with two dedicated to storage. Management and Data will share a pair of connections, since I don't expect there to be high load. These servers are just there to provide directory services, dhcp, printing, etc. Does anyone see anything potentially wrong with this approach? Is this the best way to do this without adding additional dedicated storage heads? Are there any pitfalls in this design, besides the lack of dedicated Data/Mgmt interfaces?

    Read the article

  • Installing SATA dvd burner on machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard: Tyan Thunder K8WE S2895A2NRF - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio. This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA: Integrated SATA II Generation 1 controllers (from nForce Professional 2200); two integrated dual-port SATA II controllers; four SATA connectors supporting up to four drives; 3 Gb/s per direction per channel; NvRAID v2.0 support; supports RAID 0, 1, 0+1 and JBOD. I just purchased a SATA DVD burner. Here is the page for the product: http://www.amazon.ca/gp/product/B002QGDWLK/ The problem I am facing is that I already have 4 SATA drives installed, and I don't want to remove any of them, but I want the DVD burner installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options, if any? Even if the statements above are true, I suppose I could use one of the motherboard ports currently occupied by a hard drive for the DVD drive, and connect that hard drive to the add-on card instead. Not all 4 hard drives need to be bootable. BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am still fuzzy on the differences between connectors, cables and ports. Thanks in advance.

    Read the article

  • mysqld causes high CPU load

    - by Radu
    My mysqld process goes to 99.9% CPU for a variable time (between 2 and 20 minutes), and then goes back to a normal 0.1% - 5%. Checked the processlist: all is normal, 1 to 20 inserts or updates that last 2 to 5 sec, and about 20 processes that are in Sleep mode (maybe because the scripts don't close the mysql connection, but they are closed in about 5 - 10 secs; I didn't write the scripts :P but the server has been running fine for the last 2 years, since it was made): | 15375 | root | localhost | stoc | Query | 0 | NULL | show processlist | | 79480 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79481 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79482 | pppoe | localhost | pppoe | Sleep | 4 | NULL | NULL | | 79483 | pppoe | localhost | pppoe | Query | 0 | init | UPDATE acc SET InputOctets="0", OutputOctets="0", InputPackets="unknown", OutputPackets="User | | 79484 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL | | 79485 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL | | 79486 | pppoe | localhost | pppoe | Sleep | 5 | NULL | NULL Checked the RAID, seems OK: [root@db2]# cat /proc/mdstat Personalities : [raid5] [raid4] [raid1] md0 : active raid1 sdd1[3] sdc1[2] sdb1[0] sda1[1] 136448 blocks [4/4] [UUUU] md1 : active raid5 sdd2[3] sdc2[2] sdb2[0] sda2[1] 12023808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] md3 : active raid5 sda4[1] sdd4[3] sdc4[2] sdb4[0] 203647488 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] md2 : active raid5 sda3[1] sdd3[3] sdc3[2] sdb3[0] 24024576 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU] unused devices: <none> top shows my mysqld CPU load, but nothing else seems to be wrong: [root@db2]# top top - 17:56:05 up 7 days, 3:55, 3 users, load average: 32.93, 24.72, 22.70 Tasks: 75 total, 4 running, 71 sleeping, 0 stopped, 0 zombie Cpu(s): 63.4% us, 36.6% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si, 0.0% st Mem: 1988824k total, 1304776k used, 684048k free, 99588k buffers Swap: 12023800k total, 0k used, 12023800k free, 951028k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 5754 mysql 19 0 236m 57m 5108 R 99.9 2.9 21:58.76 mysqld 1 root 16 0 7216 700 580 S 0.0 0.0 0:00.39 init 2 root RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 Repaired all mysql databases, reindexed raid ... I'm running out of ideas ... Does anyone have an idea what could be wrong with this server? Thank you
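
    A hedged way to see what mysqld is actually doing during the 99.9% spikes, using only the standard client tools (nothing here is specific to this server):

      # sample status counters every 10 seconds; watch Threads_running and
      # Slow_queries climb while the CPU is pegged
      mysqladmin -u root -p -i 10 extended-status | egrep 'Threads_running|Slow_queries|Created_tmp_disk'
      # capture the full statements instead of the truncated processlist view
      mysql -u root -p -e 'SHOW FULL PROCESSLIST\G'
      # longer term, enabling the slow query log (long_query_time) in my.cnf
      # will record exactly which statements eat the CPU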

    Read the article

  • How do I keep folders synced and backed up between two macs using a Linux NAS (rsync?)

    - by Hultner
    I've got two primary computers, a Mac Pro and a MacBook Pro for when I'm on the go. I've also got a Linux server which also acts as a NAS. Currently I back up the entire computers to an external drive with Time Machine, which is rather useless and doesn't sync anything. What I really want is to keep my important files synced between both computers and my NAS (which is running RAID 5). That way I'm not backing up easily replaceable system files, and I've got all my important files in 3 places, two of which are running RAID, so at least 5 drives would have to crash at the same time before actual data loss occurs. The folders I want to keep synced are basically my photo, documents, development, MAMP and work folders, and I want to keep the user Library folder backed up but not synced. I'm thinking that I'd have to use rsync but don't know how. Before you suggest Dropbox or similar services: I don't want to use them for several reasons, among them security (Dropbox obviously proved this), speed (sometimes I'll sync gigabytes of data, which will be significantly faster locally and probably even through VPN, as I have a gigabit pipe), space (space on my NAS is cheap and practically limited only by my needs), reliability (even if my internet were to go down I still need to be able to keep my files synced in case I need to go somewhere on the fly), and price (I already have all the hardware, and for the amount of gigabytes and bandwidth I'd need I doubt there's any free or cheap service). Those are my main reasons for wanting to keep it local. I'm sorry for any spelling or grammatical mistakes I might have made; I'm writing this on my smartphone from a shaky train and English isn't my mother tongue. I gratefully appreciate any answers, even ones only partly solving my problem.
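
    For the sync itself, a minimal rsync sketch; the folder names, user and host are placeholders, and --delete mirrors deletions, so keep --dry-run until the output looks right:

      # push the important folders from a Mac to the NAS over SSH
      rsync -avz --delete --dry-run \
          ~/Pictures ~/Documents ~/Development \
          user@nas.local:/raid5/sync/
      # drop --dry-run once it looks correct and schedule it via launchd/cron;
      # back up (rather than sync) the user Library with a separate rsync job
      # that omits --delete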

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files over local RAID and local disks, but we didn't touch on SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • Setup of high-end web server and DB server cluster on Amazon EC2: Is this how it's done?

    - by user1086584
    Amazon is so technical, I want to confirm that my understanding is correct. We have a large 500 GB database (OrientDB). We will have it mirrored between instances in the same Availability Zone. We believe the database size will grow rapidly. The plan is: Get 4 large instances of types compatible with Placement Groups (and ideally Enhanced Networking), 2 for web and 2 for DB. We will use EBS-backed instances to store our operating system. Discussion here: http://alestic.com/2012/01/ec2-ebs-boot-recommended We can set up ephemeral SSD instance storage as swap space (but it is lost even after a reboot; I hear it's hard to add ephemeral storage if booting from EBS, but possible). For offsite backup, we will take periodic snapshots and store them on S3. Obviously we need to ensure the database is in a safe state when that snapshot happens, to avoid corruption. (Any hints here, aside from shutting down the DB?) If the database gets too big, we need to create a larger EBS volume; we can use RAID to break the 1 TB limit: http://alestic.com/2009/06/ec2-ebs-raid Static assets on the web servers will be stored on S3. Is that correct? Or am I missing something?
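
    On the snapshot-consistency question, the common pattern is to flush the database and freeze the filesystem just long enough to initiate the snapshot; a sketch with a placeholder volume ID and mount point, using the current AWS CLI (the older ec2-create-snapshot tool works the same way):

      # flush/pause OrientDB writes first, then:
      sudo fsfreeze -f /data
      aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
          --description "orientdb nightly"
      sudo fsfreeze -u /data   # snapshot creation returns immediately, so the
                               # freeze only needs to cover the initiation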

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi4. The OS installation goes fine, VMware tools, adding to domain, updates. All the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media. The server that's working was installed using a trial ISO and license. The one I'm having problems with is a full MAK installation. When I go to add a Role (the last case was Application Server) it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that it is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with Thin Disk provisioning. The second attempt was with regular disk provisioning. In both cases 4GB of RAM, 2 vCPUs. VMware host is an HP ProLiant DL380 G6, RAID-1 OS, RAID-5 data volume. 12 GB RAM. Has anyone else had this problem, or know where I should start poking around?

    Read the article

  • Picking a linux compatible motherboard

    - by Chris
    Last time I bought a new computer (I build them myself) I got a motherboard that had really poor Linux support for a long time, specifically the audio. I had to wait months before the kernel supported the onboard audio chipset. That is exactly the situation I'm trying to avoid this time around. I actually have some specific questions about "server motherboards". I looked at a few models of server motherboards by Intel, and some random models on Newegg. I wasn't able to see much of a difference from regular desktop motherboards, other than most having two sockets and support for much more RAM. These boards seem more popular with Linux users. Why? AMD and Intel both have server CPUs as well; same question: what's the difference? To make this more concrete, I was looking at this motherboard. The main questions about it that I can't answer are: Can I get a motherboard without onboard RAID and audio? I wanted to get a hardware RAID controller and a PCI audio card, and I thought a server motherboard would be cheaper and not have these "extras", since who wants an audio card on a server? Where can I find out about Linux support for the components on this board ("Intel ICH10R", "Realtek ALC889", "Marvell 88E8056")? I'm buying this computer to work as a Linux desktop for a lot of compiling, coding and audio/video work, but I don't want to rule out installing Windows and playing some games at some point (even if the last game I got has been sitting in its box unopened for almost a year). Is it a good idea to buy a "server motherboard" and play games on it, or are desktop boards better value for this? The ultimate solution for me would be a motherboard with GPL drivers for onboard LAN, a single CPU socket, lots of PCI Express and PCI, USB 3.0, and no fancy hard disk controllers, since I'll be getting a separate one.

    Read the article

  • Is basing storage requirements based on IOPS sufficient?

    - by Boden
    The current system in question is running SBS 2003, and is going to be migrated on new hardware to SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of activity is a 6 disk 7200RPM RAID 6 and it struggles to keep up during high traffic times (idle time often only 10-20%; response times peaking 20-50+ ms). Based on some rough calculations this makes sense (avg ~245 IOPS on this array at 70/30 read to write ratio). I'm considering using a much simpler disk configuration using a single RAID 10 array of 10K disks. Using the same parameters for my calculations above, I'm getting 583 average random IOPS / sec. Granted SBS 2008 is not the same beast as 2003, but I'd like to make the assumption that it'll be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA server). Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read so much about recommended disk configurations for various products like Exchange, and they often mention things like dedicating spindles to logs, etc. I understand the reasoning behind this, but if I've got more than enough random I/O overhead, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, good performing array. So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)
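
    For reference, the back-of-envelope formula behind figures like the ~245 and 583 IOPS above; the write penalties and the ~100 random IOPS per 7200 RPM disk are generic rules of thumb, not measurements from this system:

      # frontend IOPS ~= (disks * per-disk IOPS) / (read_frac + write_penalty * write_frac)
      # write penalty: RAID 10 = 2, RAID 5 = 4, RAID 6 = 6
      # e.g. the existing 6-disk RAID 6 array at a 70/30 read/write mix:
      echo "(6 * 100) / (0.7 + 6 * 0.3)" | bc -l    # = 240, close to the ~245 quoted above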

    Read the article

  • Improving IO with FlashCache

    - by Devator
    I have a server with 2 HDDs (2x 1 TB) running in RAID 1 (software RAID). I want to improve IO performance by using flashcache. There are KVM virtual machines running on it, using LVM. Regarding this, I have the following questions: Will this even work? flashcache works on block devices, but these are all virtual machines with their own setup. How much of a performance increase should I expect? Most virtual machines run websites and some host games. How big does the SSD need to be? Would a bigger SSD increase performance, since it's able to cache more files? What happens if the SSD dies? Would flashcache retrieve files from the traditional HDD so I could simply replace the SSD? How much faster would writeback be in comparison with writethrough and writearound? Unfortunately I have no access to a test system, so could I install flashcache on a live server without unmounting the disks? I found a great tutorial here which I would be using.
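
    For reference, creating the cache itself is a single flashcache_create call; the device names below are placeholders (an SSD at /dev/sdc caching the md RAID 1 device), and whether to cache the whole PV or individual LVs is a separate design decision:

      # writeback cache named "cachedev": /dev/sdc is the SSD, /dev/md1 the backing RAID 1
      flashcache_create -p back cachedev /dev/sdc /dev/md1
      # -p thru (writethrough) and -p around (writearound) are the safer modes if
      # a single SSD failing is a concern, since the cache then holds no dirty data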

    Read the article

  • What GPT partition type to use for protecting DRBD metadata?

    - by Carsten Scholtes
    I'm planning to install a DRBD device on a (replicated) disk with two GPT partitions. DRBD requires some space for (preferably "internal") metadata at the end of the underlying device. I'm hesitant to leave this space unpartitioned (or unformatted in a normal partition). I'd like to reserve an extra partition at the end of the underlying disk device for the metadata. (If I understand correctly, DRBD would not care about the partition or its type and could then use that space exclusively.) My question is: Which would be a suitable GPT partition type for such a metadata partition? It should not be a type interpreted while booting (such as EF00 EFI System). It should not be a type prone to be modified accidentally by the booted OS (such as 8200 Linux swap, 8e00 Linux LVM, fd00 Linux raid). (The booted OS will be Ubuntu Linux 12.04.3.) It should not be a type indicating a normal filesystem (such as 0c01 or 8301), prone to be formatted correspondingly. It should not be a type requiring any special content in the partition (since the content is to be handled exclusively by DRBD). It should express the purpose of being reserved for something special (namely DRBD). (The types I listed are as provided by gdisk. I'm thinking about using some type unlikely to be used by the OS (maybe bf0a Solaris Reserved 4) or an invented(?) type such as fd01 (close to fd00 Linux raid…). Would something like this be suitable, too dangerous or even possible?)
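
    Whichever type code is chosen, setting it (plus a descriptive partition name) is straightforward with sgdisk from the same gdisk package; a sketch with a placeholder disk and partition number, using the bf0a candidate mentioned above:

      # create the small metadata partition in the remaining space and tag it
      sgdisk --new=3:0:0 --typecode=3:bf0a --change-name=3:"drbd-meta" /dev/sdX
      sgdisk --print /dev/sdX    # verify the type code and the name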

    Read the article
