Search Results

Search found 7222 results on 289 pages for 'storage cells'.

Page 191/289 | < Previous Page | 187 188 189 190 191 192 193 194 195 196 197 198  | Next Page >

  • Optimal Configuration for five 300 GB 15K SAS Drives

    - by Bob
I recently acquired an HP Z800 workstation that has five 300 GB 15K SAS Drives. This system will be dedicated to running multiple virtual machines under VMware Workstation (note: I'm not using ESXi because I do plan to use the system for other purposes). For the host OS, I plan to install RHEL 5. My number one concern is guest performance. For example, should I create a RAID 10 array for the OS and virtual machine storage with four of the drives and reserve the 5th? Or is there a solution that will provide better performance?

    Read the article

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
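
    One way to test the write-caching theory is to time the same amount of data written two ways to the same share: as one large buffered stream, and as many small writes flushed to the server after each one (roughly the pattern a chatty application produces). A minimal Python sketch, with the share path as a placeholder:

        import os, time

        def timed_write(path, chunk, count, sync_each):
            """Write `chunk` `count` times; return throughput in MB/s."""
            start = time.time()
            with open(path, "wb") as f:
                for _ in range(count):
                    f.write(chunk)
                    if sync_each:              # force each write out to the server
                        f.flush()
                        os.fsync(f.fileno())
            return (len(chunk) * count / 2**20) / (time.time() - start)

        target = "/mnt/share/test.bin"         # placeholder SMB mount point
        print("256 MB as 1 MB buffered writes:", timed_write(target, b"x" * 2**20, 256, False))
        print("256 MB as 4 KB synced writes:  ", timed_write(target, b"x" * 4096, 65536, True))

    A large gap between the two numbers would support the round-trip explanation, since each flushed write has to wait for the server to acknowledge it.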

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read it appears one of the most accurate formulas is the following:

        IO/ms = {seek time} + {rotational latency} + ({block size} / {data transfer rate})

    This is IOs per millisecond, or what the book I've been reading calls "Disk Service Time". Rotational latency is calculated as half of one rotation in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right/wrong? Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model with a block size of 4 KB:

        Seek Average (Write) = 9.5 ms   (I'll be measuring IOPS for writes)
        Spindle speed        = 7200 rpm
        Average Data Rate    = 156 MB/s

    So my variables are:

        Seek Time          = 9.5 ms
        Rotational latency = (0.5 / (7200 rpm / 60)) = 0.004 s = 4 ms
        Data Rate          = 156 MB/s = (0.156 MB/ms / 0.004 MB) = 39

        9.5 ms + 4 ms + 39 = 52.5 IO/ms
        1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
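
    One way to sanity-check the arithmetic above is to keep every term in the same unit (milliseconds) before adding anything. A short Python sketch of the three-term service-time formula, using the data-sheet numbers from the question:

        # Disk service time = seek time + rotational latency + transfer time,
        # with all three terms expressed in milliseconds.
        seek_ms = 9.5                            # average write seek
        rotational_ms = 0.5 * (60_000 / 7200)    # half a rotation at 7200 rpm
        transfer_ms = (4 / 1024) / 156 * 1000    # 4 KB at 156 MB/s

        service_time_ms = seek_ms + rotational_ms + transfer_ms
        print(round(service_time_ms, 2))         # ~13.69 ms per IO
        print(round(1000 / service_time_ms))     # ~73 IOPS

    Note that the transfer term comes out to a fraction of a millisecond, not a dimensionless 39.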

    Read the article

  • Recommending simple appliance for DansGuardian, iptables, snort inline

    - by SRobertJames
I'm currently using a Linksys E2000 with dd-wrt. I'd like to add DansGuardian for content filtering and snort-inline for IPS, but those require a more powerful box (mainly, more storage). Can you recommend a good device to use? I'm open to both overwrite-the-firmware (like dd-wrt) and designed-to-be-customized boxes. Requirements:

    1. 5+ Ethernet ports, pref. GigE
    2. Small form factor
    3. No noise (office environment)
    4. Low power
    5. Not sure about 802.11 wireless

    Budget < $400, pref. less.

    Read the article

  • Are there C# controls that can be used to create a hierarchical list of prioritised items?

    - by Mendokusai
I need to be able to display and edit a hierarchical list of tasks in a C# app. It can either be a Windows Forms app or ASP.NET. Basically, I want similar behaviour to the way Microsoft Project handles tasks. The control would need to:

    1) Maintain a list of items made up of several fields
    2) Each item can have a number of children (at least 3 levels of nesting)
    3) It needs to be very easy to change the parents/children of an item
    4) It needs to be very easy to edit the fields (as fast as changing cells in Excel)
    5) It needs to be very easy to reorder the items by dragging and dropping or cut and paste
    6) If I can easily connect the control to a database, even better

    Before I go and create something manually, I'm wondering if there is something available already?

    Read the article

  • Best way to replicate servers

    - by Matthew
I currently have two servers, both with Linux software RAID 1 configurations. They use Heartbeat and DRBD to create a shared DRBD device that hosts an exported NFS directory. The servers run Ubuntu Server with an LXDE GUI and some IP. These servers are going to be placed on fishing vessels to act as redundant storage for IP cameras. My boss wants me to figure out the most efficient way to create these servers. We might be looking at pushing out several systems a week. Each configuration will be almost identical besides IP addressing. What would be the best method to automate the configuration process? We are trying to cut down on the labor costs to set these up. Imaging and preseeding are both on my mind right now.
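
    A common pattern for a build like this is to image a golden master once, then stamp in the few per-vessel differences afterwards. A minimal Python sketch of the stamping step; the host list, template, and build paths are illustrative assumptions, not from the question:

        import os
        import textwrap

        # Per-vessel differences; everything else comes from the golden image.
        HOSTS = [("vessel01", "10.0.0.11"),
                 ("vessel02", "10.0.0.12")]

        INTERFACES = textwrap.dedent("""\
            auto eth0
            iface eth0 inet static
                address {ip}
                netmask 255.255.255.0
            """)

        for hostname, ip in HOSTS:
            root = os.path.join("build", hostname)    # staged image root (placeholder)
            os.makedirs(os.path.join(root, "etc", "network"), exist_ok=True)
            with open(os.path.join(root, "etc", "hostname"), "w") as f:
                f.write(hostname + "\n")
            with open(os.path.join(root, "etc", "network", "interfaces"), "w") as f:
                f.write(INTERFACES.format(ip=ip))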

    Read the article

  • Variable width columns in a table

    - by Jack
I'm using an HTML table for a calendar, and I want to fill the cells with various events from my database. Usually they will land on weekends, but some will run for a long weekend, bank holidays, or even the odd weekday. How can I get my table's columns to expand and shrink accordingly? I'd like to avoid the use of JavaScript if possible. If this can't be done, I'm going to need a tutorial to help me get my head around how to make divs' positioning behave. cheers

    Read the article

  • Parity Initialization after putting in two new disks

    - by lbanz
All my firmware is up to date on the server and the controllers. Storage crashed over the weekend. I rebooted it and it detected that I put in two new disks last week (I did check that both disks completed the rebuilding process last week). After it booted into the OS, I saw that it gave me an information message:

        785 Background parity initialization is currently queued or in progress on Logical Drive 1 (15.0 TB, RAID 5). If background parity initialization is queued, it will start when I/O is performed on the drive. When background parity initialization completes, the performance of the logical drive will improve.

    After 18 hours it is at 54%, so it is looking healthy. But I need to replace 5 more disks in the MSA. Should I wait for this message to finish before replacing more disks?

    Read the article

  • System has reached the maximum size allowed for the system part of the registry

    - by Bob Denny
To be precise: "System has reached the maximum size allowed for the system part of the registry. Additional storage requests will be ignored." WinXP/64, running fine for 2 years (no /3GB switch); this just started happening. I used ntregopt and the problem went away, at least temporarily. However, looking before and after in Windows\System32\Config, I see that my System file was reduced by only 10% and is still 170+ MB. According to my rather extensive research with Google, this is "huge" and should be more like 10-20 MB. The system runs fine. There is a System.bak that is only 11 MB and has the date when I ran ntregopt. That's what I know. Now my question: is there anything I can do to reduce or rebuild the System registry hive, given the above info?

    Read the article

  • Ways to increase my Ubuntu partition space

    - by Andreas Grech
I am currently running Ubuntu and Windows 7 as dual-boot on a single HD. The problem is that when I installed Ubuntu, I didn't allocate as much space as I thought I would need, and now I need to 'reinstall' Ubuntu so that I can increase the amount of storage space. Now there are two ways to go about this. Either I use gparted to increase my partition space (but I read that it's not really that safe as regards data loss), or I create the new partition with more space and reinstall Ubuntu there. But if I want to reinstall Ubuntu, is there a way I can somehow "save" my current Ubuntu and install that one? What I mean is that I don't want to lose my current installed packages and files that I have on this partition. Is there a way to kind of maybe 'streamline' my current Ubuntu so that I install this one on the new partition? If not, what are your opinions as regards gparted?

    Read the article

  • RHEL raw device (over VMware RDM) performance issues

    - by jifa
I'm running RHEL 5.3 over vSphere 4.0U1. I configured multiple LUNs on my NetApp (Fibre) storage and added the RDMs on two (Linux) VMs, using the paravirtual SCSI adapter. One LUN is 100 GB in size, successfully mapped to /dev/sdb on both VMs; five more are 500 MB in size (mapped to /dev/sd{c-g}). I also created one partition per device. I have encountered two issues. First, writing directly to /dev/sdb1 gives me ~50 MB/s, while any of /dev/sd{c-g}1 gives me ~9 MB/s. There is no difference in the configuration of the LUNs apart from their size. I am wondering what causes this, but this is not my main problem, as I would settle for 9 MB/s. I created raw devices using udev pretty straightforwardly, with one rule like this per device:

        ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"

    Writing to any of the new raw devices dramatically slows performance down to just over 900 KB/s. Can anyone point me in a helpful direction? Thanks in advance, -- jifa

    Read the article

  • NSTableView won't begin dragging rows if the mouseDown happens within the rect of an NSButtonCell.

    - by Joel Day
I currently have an odd case where I need to be able to reorder rows in an NSTableView, but the only column happens to be an NSButtonCell. I'm trying to see how I can override NSButtonCell's mouse tracking so that the NSTableView will begin dragging the row, but am not having much luck. Additional info that might affect the behavior: with this NSTableView, I am not allowing any rows to be selected, but I have forced mouse tracking to always occur for all cells. This is so that the button can still be clicked even though its row can never be selected. Thanks!

    Read the article

  • Win 7 Explorer backup and long paths

    - by user53299
    I use Explorer to do backups because Win 7's backup program asks me to take backups previously done and to put them back in the drive. I am opposed to that idea since I believe backups should remain in storage. With Explorer backups (burn and burn to disc) I have encountered the "destination path too long" error message and it shows the name of a folder "Debug" three times. I have hundreds of folders named "Debug" thanks to Visual Studio. At this moment I'm too angry at Microsoft to write a program to determine my 3 longest paths. (Aside: This is all after coincidentally reading two articles about path junctions earlier this evening which already made me kind of unhappy.) Please, is there an easy way to continue to make backups with Explorer? Edit: I should add that renaming paths wrecks Visual Studio projects so I really need to isolate the small number of problem paths or find a cleaner solution.
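
    For what it's worth, the program being put off here is only a few lines. A Python sketch that prints the three longest paths under a directory tree, with the root path as a placeholder:

        import heapq
        import os

        def longest_paths(root, n=3):
            """Return the n longest full paths under root."""
            all_paths = (os.path.join(dirpath, name)
                         for dirpath, dirnames, filenames in os.walk(root)
                         for name in dirnames + filenames)
            return heapq.nlargest(n, all_paths, key=len)

        for path in longest_paths(r"C:\Users\me\Projects"):   # placeholder root
            print(len(path), path)

    This isolates the handful of problem paths mentioned in the edit without touching every Debug folder Visual Studio created.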

    Read the article

  • Network adapters reliability

    - by casey_miller
Can you help me understand the reliability of network adapters? Most of the time servers have at least 2 NICs bonded to provide a sort of HA, so in case one NIC fails, the second will still do the job. I wonder which factors matter when you use network adapters. I know that the most important and weakest part of any computer system is storage (i.e. HDDs), but how reliable are network adapters, actually? There are more expensive ones, and cheaper adapters. In which cases do they actually fail, and in what circumstances? Might it be intensive usage of them, or the time they are powered on? In your experience, how often have you found yourself changing NICs due to failure? Or just, what's the typical lifetime of commodity NICs? thanks.

    Read the article

  • Implement a Cellular Automaton: "Rule 110"

    - by ZaZu
I was wondering how to use Rule 110, with 55 lines and 14 cells. I then have to display that on an LED matrix display. Anyway, my question is: how can I implement such an automaton? I don't really know where to start; can someone please shed some light on how I can approach this problem? Is there a specific METHOD I must follow? Thanks. --PROGRAM USED IS C. EDIT:

        char array[55][14];                /* was [54][14]: too small for 55 lines */

        for (v = 0; v < 55; v++) {
            for (b = 1; b < 13; b++) {     /* stay in bounds for b-1 and b+1 */
                if (org[v][b-1] == 0 && org[v][b] == 0 && org[v][b+1] == 0) {
                    array[v][b] = 0;
                }
                array[v][b] = org[v][b];   /* note: this overwrites the line above */
            }
        }

    Does that make sense? org stands for original.
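
    As a reference point, here is a minimal sketch of one way to structure the automaton in Python: 55 generations of 14 cells, each row computed from the previous one by looking up the 3-bit neighbourhood in the rule number's bit pattern (cells past the edges read as 0):

        RULE = 110                         # bit n of 110 = next state for
        WIDTH, LINES = 14, 55              # a neighbourhood with value n

        row = [0] * WIDTH
        row[WIDTH - 1] = 1                 # seed: one live cell on the right
        for _ in range(LINES):
            print("".join("#" if c else "." for c in row))
            old = row
            row = []
            for i in range(WIDTH):
                left = old[i - 1] if i > 0 else 0            # edges read as 0
                right = old[i + 1] if i < WIDTH - 1 else 0
                n = left * 4 + old[i] * 2 + right            # 3-bit neighbourhood
                row.append((RULE >> n) & 1)

    The same lookup-table idea carries over directly to C: replace the single hard-coded 000 test with an index into an 8-entry table.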

    Read the article

  • Retrieving a specific value from "df -h" using shell

    - by Diego Dias
When I use df -h, I get the following output:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                                59G  2.2G   54G   4% /
        /dev/sda1              122M   38M   78M  33% /boot
        tmpfs                  1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath   11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath   15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha      5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb      5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc      5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column for a specific storage. I used to use:

        df -k /storage_name | tail -1 | awk '{print $3}'

    But the Filesystem column can have a value or not (a long device name wraps onto its own line), which would change the variable of my script from $3 to $4. How can I get Avail in a single command line even if there are no values in the previous columns?
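
    One robust approach is to count fields from the end of the last output line, since the trailing columns are always present even when a long Filesystem name pushes the rest onto the next line; in awk terms that is $(NF-2) instead of $3. The same idea as a small Python sketch:

        import subprocess

        def avail(mount_point):
            """Return the Avail column (in 1K blocks) for one mount point."""
            out = subprocess.check_output(["df", "-k", mount_point], text=True)
            fields = out.strip().splitlines()[-1].split()
            return int(fields[-3])    # from the end: avail, use%, mounted-on

        print(avail("/storage4"))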

    Read the article

  • Command-line access for Apple Time Machine?

    - by Stefan Lasiewski
We use Apple's Time Machine to back up our workstations at the office. If I want to restore a file, I need to open up the Time Machine GUI and browse files there. The GUI is ugly eye-candy and gets in my way. Is there a way to browse the Time Machine archive using the Mac's command line? I'm used to NetApps and other storage appliances. I use backintime for my Ubuntu workstation. To restore a file with one of those systems, you can use a simple command like:

        cp .snapshot/daily.0/filename.txt .

    or:

        cp /backup/backintime/20100611-000002/backup/etc/shadow /etc/shadow

    Is there an equivalent for Apple's Time Machine?

    Read the article

  • Server location moved and how can I move the files

    - by Bernhard
Hello everyone, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. Now we have a new root server, which is accessible by SSH of course :-) Now I need to move all the data from the old space, but there are a lot of GB of files. Is there a way to fetch all the files directly from the old FTP onto the new server's storage, and not via a third station (my local machine)? I've tried it with FTP but without success; I think I've used the wrong commands. Is there a way to establish something like this, including all files and directories? Thank you in advance, Bernhard
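
    Since the new server has shell access, the fetch can run on the server itself, so nothing passes through a third machine. A minimal recursive FTP mirror sketch using only Python's standard library; the host, credentials, and target path are placeholders:

        import os
        from ftplib import FTP, error_perm

        def mirror(ftp, name, local_dir):
            """Recursively download `name` (file or directory) into local_dir."""
            try:
                ftp.cwd(name)                  # succeeds only for directories
            except error_perm:                 # not a directory: fetch the file
                with open(os.path.join(local_dir, name), "wb") as f:
                    ftp.retrbinary("RETR " + name, f.write)
                return
            subdir = os.path.join(local_dir, name)
            os.makedirs(subdir, exist_ok=True)
            for entry in ftp.nlst():
                if entry not in (".", ".."):
                    mirror(ftp, entry, subdir)
            ftp.cwd("..")                      # back to the parent directory

        ftp = FTP("old-host.example.com")      # placeholder host
        ftp.login("user", "password")          # placeholder credentials
        os.makedirs("/storage/oldsite", exist_ok=True)
        for entry in ftp.nlst():
            if entry not in (".", ".."):
                mirror(ftp, entry, "/storage/oldsite")

    (Tools like wget or lftp can do the same job; the sketch just shows that no intermediate hop is needed.)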

    Read the article

  • Can someone explain the physical architecture of RAID 10 in complete layman's terms?

    - by Hank
    I am a newbie in the world of storage and I am having a hard time digesting the physical architecture of some of the RAID levels. I am particularly interested in RAID 10, and 50. I asked the question specifically about RAID 10, because I feel if I understand that, I'll understand the other. So, I get the definition of RAID 10 - "minimum 4 disks, a striped array whose segments are mirrored". If I've got 4 disks and Disks 1 and 2 are a mirrored pair, and Disks 3 and 4 are a mirrored pair - where does the data get striped? Thanks.
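
    A toy sketch of the layout being asked about may help: with disks 1+2 and 3+4 as mirrored pairs, the stripe runs across the pairs, so consecutive chunks of data alternate between them, and each pair writes both copies of its chunk. The disk naming below just mirrors the question's example:

        # Toy model of RAID 10 block placement: two mirrored pairs,
        # with data striped pair-to-pair.
        PAIRS = [("disk1", "disk2"), ("disk3", "disk4")]

        def place(chunk_no):
            """Return the two disks that receive a given logical chunk."""
            return PAIRS[chunk_no % len(PAIRS)]    # stripe across the pairs

        for n in range(4):
            a, b = place(n)                        # mirror inside the pair
            print(f"chunk {n} -> {a} and {b}")
        # chunk 0 -> disk1 and disk2
        # chunk 1 -> disk3 and disk4
        # chunk 2 -> disk1 and disk2
        # chunk 3 -> disk3 and disk4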

    Read the article

  • Excel VBA to check autofilter for data

    - by cav719
I need help checking for autofiltered rows, not including the header. I want it to give a message box "No records found." and then exit the sub, or continue with the copy/paste if there are rows beyond the header row. I know I need an If/Else entry after the filter to check for data, but I'm having trouble figuring out how to check. This code is run from a UserForm I created. Here is my script:

        Private Sub Searchbycompanyfield_Click()
            If CompanyComboBox1.Value = "" Then
                MsgBox "Please enter a Company to begin search."
                Exit Sub
            End If
            ActiveSheet.Range("$A:$H").AutoFilter Field:=1, _
                Criteria1:=EQDataEntry.CompanyComboBox1.Value, Operator:=xlOr
            Cells.Select
            Selection.Copy
            Sheets("Sheet2").Select
            Range("A5").Select
            ActiveSheet.Paste
            Call MessageBoxYesOrNoMsgBox
        End Sub

    Any help would be greatly appreciated.

    Read the article

  • Speeding up jQuery empty() or replaceWith() Functions When Dealing with Large DOM Elements

    - by Levi Hackwith
Let me start off by apologizing for not giving a code snippet. The project I'm working on is proprietary, and I'm afraid I can't show exactly what I'm working on. However, I'll do my best to be descriptive. Here's a breakdown of what goes on in my application:

    1. The user clicks a button
    2. The server retrieves a list of images in the form of a data table
    3. Each row in the table contains 8 data cells that in turn each contain one hyperlink
    4. Each request by the user can contain up to 50 rows (I can change this number if need be)

    That means the table contains upwards of 800 individual DOM elements. My analysis shows that jQuery("#dataTable").empty() and jQuery("#dataTable").replaceWith(tableCloneObject) take up 97% of my overall processing time and take on average 4-6 seconds to complete. I'm looking for a way to speed up either of the above-mentioned jQuery functions when dealing with massive DOM elements that need to be removed/replaced. I hope my explanation helps.

    Read the article

  • Transfer many Gigabytes between two servers

    - by Bernhard
Hello, I have a big problem. I have to move data from an old webspace which is only accessible by FTP. The new root server is accessible by SSH, of course :-) I need to move all the data from the old space, but the amount is just huge. Is there a way to move all the files directly from the old FTP to the new storage, and not via a third station (my local machine)? I've tried it with FTP but it didn't work; I think I've used the wrong commands. Is there a way to do this? Thank you in advance, Bernhard

    Read the article

  • Why is there so much variance in prices for a 2-bay NAS?

    - by jcollum
I'm considering buying a 2-bay NAS for media storage. I'm perplexed by the variety of prices: they go from about $115 to $1,200. The only things I could see that differentiated the high-end model were encryption and dual gigabit Ethernet ports. I don't understand how that can add up to $800+ more. Clearly I should know why there's this price variance before considering buying a 2-bay NAS. Newegg link to 2 Bay NAS. Should I move this question to serverfault?

    Read the article

  • How do I do multiple assignment in MATLAB?

    - by Benjamin Oakes
Here's an example of what I'm looking for:

        >> foo = [88, 12];
        >> [x, y] = foo;

    I'd expect something like this afterwards:

        >> x
        x = 88
        >> y
        y = 12

    But instead I get errors like:

        ??? Too many output arguments.

    I thought deal() might do it, but it seems to only work on cells:

        >> [x, y] = deal(foo{:});
        ??? Cell contents reference from a non-cell array object.

    How do I solve my problem? Must I constantly index by 1 and 2 if I want to deal with them separately?

    Read the article
