Search Results

Search found 11115 results on 445 pages for 'disk monitoring'.


  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in its exact details, but it always crashes in swiotlb_unmap_sg_attrs().

    I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling.

    The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of IO (the summed length of each scatter-gather segment).
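
    For reference, here is a minimal sketch of the lock-drop pattern described above, assuming the legacy request-function interface (which the use of blk_fetch_request() suggests); my_do_transfer() is a hypothetical stand-in for the driver's actual IO functions:

        /* Sketch only: the queue lock is held on entry to the request
         * function, dropped around the sleeping IO, and retaken before
         * any further queue manipulation. my_do_transfer() is assumed
         * to wait for the MSI IRQ handler to signal completion. */
        static void my_request_fn(struct request_queue *q)
        {
            struct request *req = blk_fetch_request(q);

            while (req) {
                unsigned int bytes = 0;
                int res;

                spin_unlock_irq(q->queue_lock);    /* IO sleeps on the IRQ */
                res = my_do_transfer(req, &bytes); /* hypothetical helper */
                spin_lock_irq(q->queue_lock);      /* retake before queue calls */

                if (!__blk_end_request(req, res, bytes))
                    req = blk_fetch_request(q);
            }
        }

    If the driver follows this shape, one thing worth checking is whether anything else (a timeout handler, a second dispatch of the request function) can touch the same request while the lock is dropped.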

    Read the article

  • Rails: How can I log all requests which take more than 4s to execute?

    - by Fedyashev Nikita
    I have a web app hosted in a cloud environment which can be expanded to multiple web nodes to serve higher load. What I need to do is catch the situation where we get more and more HTTP requests (assets are stored remotely). How can I do that? The problem I see is that if we have more requests than the Mongrel cluster can handle, then the queue will grow. And in our Rails app we can only start counting after Mongrel receives the request from the balancer. Any recommendations?

    Read the article

  • Does anyone know of any good tutorials for using the APIs from Amazon Web Services, namely CloudWatch?

    - by undefined
    Hi, I have been wrestling with Amazon's CloudWatch API with limited success. Does anyone know of any good resources (other than Amazon's API docs) for using the APIs? I have tried to run them using the PHP library for CloudWatch but get nothing but error codes. I am configuring the GetMetricStatisticsSample.php file as follows:

        $request = array();
        $endTime = date("Y-m-d G:i:s");
        $yesterday = mktime(date("H"), date("i"), date("s"), date("m"), date("d")-1, date("Y"));
        $startTime = date("Y-m-d 00:00:00", $yesterday);
        $request["Statistics.member.1"] = "Average";
        $request["EndTime"] = $endTime;
        $request["StartTime"] = $startTime;
        $request["MeasureName"] = "CPUUtilization";
        $request["Unit"] = "Percent";
        invokeGetMetricStatistics($service, $request);

    But this returns "Caught Exception: Internal Error Response Status Code: 400 Error Code: Error Type: Request ID: XML:"

    I have also tried from the command line as follows:

        set JAVA_HOME=C:\Program Files\Java\jre1.6.0_05
        set AWS_CLOUDWATCH_HOME=C:\AmazonWebServices\API_tools\CloudWatch-1.0.0.24
        set PATH=%AWS_CLOUDWATCH_HOME%\bin
        mon-list-metrics

    but get "'C:\Program' is not recognized as an internal or external command..." Any suggestions? Cheers

    Read the article

  • Best data recovery tools?

    - by Nonick
    So due to a recent act of stupidity and bravado, I uttered the words "backups! who needs backups?!" and what followed was the tragic loss of 260 GB of data. This scenario in particular requires me to recover a repartitioned hard disk, but I was wondering what tools people here use in general to recover lost data. I'm sure everyone has been there: accidentally overwriting files, resaving an old version, a computer crash, hard disk death, a user deleting an important document, etc. So I was thinking it might be an interesting point of discussion as to what you use to recover lost data. I apologise if this is considered irrelevant, but considering there have been a few recovery questions, I think this might be interesting.

    Read the article

  • Application to watch what an executable does?

    - by OverTheRainbow
    Hello. I need to find out exactly what files/directories a Lua program uses so I can try to pack only what it needs into a ZIP file, and come up with a simple way to deploy this script. I used SysInternals' Process Monitor, but I'm surprised by the small amount of information it returned while it watched the program (for Lua users out there, it's wsapi.exe, which is the launcher for the lightweight Xavante web server). Does someone know of a good Windows application that can completely monitor what a program does, e.g. something like a live version of the venerable PCMag's InCtrl5? Thank you.

    Read the article

  • MySQL: LOAD DATA reclaim disk space after delete

    - by Michael
    I have a DB schema composed of MyISAM tables, and I am interested in deleting old records from time to time from some of the tables. I know that DELETE does not reclaim the disk space, but as I found in a description of the DELETE command, inserts may reuse the deleted space:

        In MyISAM tables, deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions.

    I am interested in whether the LOAD DATA command also reuses the deleted space. UPDATE: I am also interested in how the index space is reclaimed.

    Read the article

  • Capture IP packets on Dialup connection - Windows 7

    - by Assaf Levy
    Our product utilizes (the wonderful) WinPcap to capture IP packets from all devices with an IP address and analyze them in real time. Unfortunately, we discovered that it does NOT capture any packets on dialup (e.g. PPP) connections on Windows 7, and that there are no near-term plans for enabling this (1). So we need something else. Microsoft Network Monitor and Windows Packet Filter are two options that surfaced during a bit of googling, but before delving into research I wanted to ask the experienced: what are our options, given the following requirements?

        Capture all in/outbound IP packets on the machine.
        Complete background processing - no UI should be involved.
        Support Windows Vista / 7.
        Performance (the user should not feel the difference).

    Thanks in advance.
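
    For context, a minimal capture-loop sketch against the pcap API that WinPcap exposes (the device name "eth0" is an illustration only; on Windows you would enumerate adapters with pcap_findalldevs() first, and per the question, PPP adapters may simply not show up there):

        #include <pcap.h>
        #include <stdio.h>

        /* Called once per captured packet. */
        static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                              const u_char *bytes)
        {
            printf("captured %u bytes\n", h->caplen);
        }

        int main(void)
        {
            char errbuf[PCAP_ERRBUF_SIZE];
            /* snaplen 65535, promiscuous mode, 1000 ms read timeout */
            pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
            if (!p) {
                fprintf(stderr, "pcap_open_live: %s\n", errbuf);
                return 1;
            }
            pcap_loop(p, -1, on_packet, NULL);
            pcap_close(p);
            return 0;
        }

    Any replacement (Network Monitor's API, a WFP callout) would need to offer the equivalent of this open/loop pair for the dialup adapter.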

    Read the article

  • Regex help with Google Page Monitor extension

    - by bibliwho
    I'm trying to monitor a small section of a web page for changes using the Google Page Monitor extension -- https://chrome.google.com/extensions/detail/pemhgklkefakciniebenbfclihhmmfcd

    Under advanced settings I can use either Regex or Selectors to accomplish this, but I need help with this. In the following HTML, I'd like to monitor for changes in either the URL in line 4 or the text in line 5. Any pointers gratefully accepted.

        <div id="rtBtmBox"><div id="sectHead" style="margin-bottom:5px;">
        <h3>SLJ's Pick of the Day</h3></div>
        <p align="center">From the&nbsp;March issue</p>
        <p align="center"><a target="_blank" href="http://www.schoollibraryjournal.com/article/CA6723937.html">
        <font color="#0000ff"><strong><em>The Summer I Turned Pretty</em></strong><br/>
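
    As an illustration of the kind of pattern that could track the URL in line 4 (the pattern is an assumption, not taken from the extension's docs), an expression such as href="([^"]+)" captures whatever the href points at. Here is that same capture exercised in C with POSIX regex, just to show what it matches against the snippet above:

        #include <regex.h>
        #include <stdio.h>

        int main(void)
        {
            const char *html =
                "<a target=\"_blank\" href=\"http://www.schoollibraryjournal.com"
                "/article/CA6723937.html\">";
            regex_t re;
            regmatch_t m[2];

            /* capture everything between href=" and the closing quote */
            regcomp(&re, "href=\"([^\"]+)\"", REG_EXTENDED);
            if (regexec(&re, html, 2, m, 0) == 0)
                printf("url: %.*s\n",
                       (int)(m[1].rm_eo - m[1].rm_so), html + m[1].rm_so);
            regfree(&re);
            return 0;
        }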

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows machine and did the product registration using the application present on it. I have a few questions on how best to use it.

    The drive by default has some files and folders - setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files? It has a dashboard for Windows which allows you to tune power options, test the drive, etc. Will I be able to use the dashboard if I put back all these files and mount it on Windows again?

    I want to partition and format the hard disk. The data I'd like to store is:

        10 to 20 GB files - VirtualBox images
        around 4 GB files - DVD images
        other movies and personal files

    What is the best filesystem for storing very huge files like 10 to 20 GB, so that they are written and accessed fast and the drive's capacity is best used? If I leave one of the partitions as NTFS and the others as different filesystems, will the drive still mount on Windows, and will I be able to use the device's dashboard?

    Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.

    Read the article

  • Package upgrade on Ubuntu raid server and grub setup issue

    - by RecNes
    I have a remote Ubuntu 10.10 server running on a RAID system. I did a package upgrade yesterday night for security reasons. During the upgrade, the grub installation screen appeared and asked me which partition I wanted to install grub on. The options were sda, sdb, md1 and md2. I decided to install it on both sda and sdb. I'm wondering, did I make the right decision? If the machine gets rebooted, will it boot up safely? You can find the fdisk output and fstab mount points below.

    Fstab:

        proc /proc proc defaults 0 0
        none /dev/pts devpts gid=5,mode=620 0 0
        /dev/md0 none swap sw 0 0
        /dev/md1 /boot ext3 defaults 0 0
        /dev/md2 / ext3 defaults 0 0

    Fdisk:

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00029bb5

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1         262     2102562   fd  Linux raid autodetect
        /dev/sda2             263         295      265072+  fd  Linux raid autodetect
        /dev/sda3             296       91201   730202445   fd  Linux raid autodetect

        Disk /dev/md0: 2152 MB, 2152923136 bytes
        2 heads, 4 sectors/track, 525616 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

        Disk /dev/md1: 271 MB, 271319040 bytes
        2 heads, 4 sectors/track, 66240 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md1 doesn't contain a valid partition table

        Disk /dev/md2: 747.7 GB, 747727224832 bytes
        2 heads, 4 sectors/track, 182550592 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00088969

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1               1         262     2102562   fd  Linux raid autodetect
        /dev/sdb2             263         295      265072+  fd  Linux raid autodetect
        /dev/sdb3             296       91201   730202445   fd  Linux raid autodetect

    Read the article

  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?

    Read the article

  • OS Analytics - Deep Dive Into Your OS

    - by Eran_Steiner
    Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes.

    We will have a call to discuss this blog - please join us!
    Date: Thursday, November 1, 2012
    Time: 11:00 am, Eastern Daylight Time (New York, GMT-04:00)
    1. Go to https://oracleconferencing.webex.com/oracleconferencing/j.php?ED=209833067&UID=1512092402&PW=NY2JhMmFjMmFh&RT=MiMxMQ%3D%3D
    2. If requested, enter your name and email address.
    3. If a password is required, enter the meeting password: oracle123
    4. Click "Join".

    To join the teleconference:
    Call-in toll-free number: 1-866-682-4770 (US/Canada)
    Other countries: https://oracle.intercallonline.com/portlets/scheduling/viewNumbers/viewNumber.do?ownerNumber=5931260&audioType=RP&viewGa=true&ga=ON
    Conference Code: 7629343#
    Security code: 7777#

    Here is a quick summary of what you can do with OS Analytics in Ops Center:

        View historical charts and real time values of CPU, memory, network and disk utilization
        Find the top CPU and memory processes in real time or on a certain historical day
        Determine proper monitoring thresholds based on historical data
        View Solaris services status details
        Drill down into a process's details
        View the busiest zones, if applicable

    Where to start

    To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category (click the image to see a larger version). In the above screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things in the detailed view is that you can see the process tree for the process, along with some port bindings and open file descriptors.

    On Solaris machines with zones, you get an extra level of tabs, allowing you to get more information on the different zones. This is a good way to see the busiest zones. For example, one zone may not take a lot of CPU but it can consume a lot of memory, or perhaps network bandwidth. To see the detailed Analytics for each of the zones, simply click each of the zones in the tree and go to its Analytics tab.

    Next, click the "Processes" tab to see real time information on all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    Next, if you view a Solaris machine, you will have a "Services" tab. By default, all services will be displayed, but you can choose to display only certain states, for example those in maintenance or the degraded ones. You can highlight a service and choose to view the details, where you can see the dependencies, dependents and also the location of the service log file (not shown in the picture, as you need to scroll down to see the log file).

    The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values, or you can set your own. The different colors in the graph represent the currently set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter time frames which are more detailed, such as one hour or one day.

    Applying new monitoring values

    When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past

    Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Here's a view into yesterday's data on one of the machines: you can see an interesting CPU spike happening at around 3:30 am, along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood

    The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/

        An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the epoch time stamp at which they were taken.
        If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone with an "os.zip" in it.
        If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.

    The actual script collecting the data can be viewed for debugging purposes as well:

        On Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect
        On Solaris, the location is /opt/SUNWxvmoc/private/os_analytics/collect

    If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it:

        # touch /tmp/.collect.stderr

    The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those for each agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics-specific values.

    I hope you find this helpful! Please post questions in the comments below.

    Eran Steiner

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts that 'failure' means 'this script did not need to fix a problem' (rather than what I would expect, which is that exit 1; would mean 'the process being monitored has failed')?

    My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running and, if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            # take action
        fi

    Seems simple enough even for someone with limited knowledge like me, except for the exit 1; part, which seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule - it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake - but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake?

    Wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
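
    For what it's worth, a tiny self-contained illustration of the convention in question (not the hosting company's code - just a sketch showing how a parent process sees the code a child passes to exit()):

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();
            if (pid == 0)
                exit(1);   /* child: by convention, 0 = OK, non-zero = problem */

            int status;
            waitpid(pid, &status, 0);
            if (WIFEXITED(status))
                printf("child exited with status %d\n", WEXITSTATUS(status));
            return 0;
        }

    Classic cron itself largely ignores the exit status (it mails output rather than acting on the code), which may be why a reversed convention like this goes unnoticed in practice.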

    Read the article

  • Why does OS X insist on spinning up all external drives when loading a file from the local drive?

    - by Phillip Oldham
    Why does OS X insist on spinning up all the attached external drives (FireWire, USB) when loading a file from the local (internal) drive? It's driving me insane that I have to wait for 3 attached drives (1 backup, 2 media) to spin up -- a total of 20s -- to access a file that is located only on my local/internal drive. There is no obvious need to access the other drives; nothing is being read from them and nothing need be written. Examples: QuickTime X opening a file from the local HDD, or starting Caffeine, an app which doesn't access any other files at all. Can I tell OS X to only spin those drives up when actually accessing them?

    Read the article

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up CloneZilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot CloneZilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far.

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following APPEND lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.

    Read the article

  • What is acceptable datastore latency on a VMware ESXi host?

    - by BeowulfNode42
    Looking at the performance figures on our existing VMware ESXi 4.1 host, the Datastore/Real-time performance data shows:

        Write latency: Avg 14 ms, Max 41 ms
        Read latency: Avg 4.5 ms, Max 12 ms

    People don't seem to be complaining too much about it being slow with those numbers, but how much higher could they get before people found it to be a problem? We are reviewing our head office systems due to running low on storage space, and are tossing up between buying a second VM host with DAS, or buying some sort of NAS for SMB file shares in the near term and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office, with 9 smaller branches spread across the country. Head office is running an MS RDS session-based environment with Linux ERP and mail systems - in total 22 VMs on a single host, with DAS made from a RAID 10 of 6x 15k SAS disks.

    Read the article

  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached; the other, in the datacenter of my hosting services vendor, has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

        Seq. writes on MD1220 in my office: 1.1 GB/s - bonnie++, 1.3 GB/s - dd
        Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable. If anything, my good-performing environment is cheaper than the badly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at this and tell me if I'm missing something here and my expectations are too high.

    To summarize: is it reasonable to expect about 100 MB/s of sequential write throughput per each couple of drives in RAID 10 on the MD3200? Is there any trick to enable such performance on the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter?

    More details about the configurations.

    The good one in my office:
        Dell R710, 2 CPUs X5650 @ 2.67GHz, 12 cores, 96GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
        20x 300GB 2.5" SAS 10K in a single RAID 10, 1MB chunk size, on MD1220 + Dell H800 I/O controller with 1GB cache in the host

    The not-so-good one at my vendor's:
        Dell R710, 2 CPUs L5520 @ 2.27GHz, 8 cores, 144GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
        20x 146GB 2.5" SAS 15K in a single RAID 10, 512KB chunk size, Dell MD3200, 2 I/O controllers in the array with 1GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but with different storage: two RAID 10 sets of 14x 146GB 15K RPM drives each, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200, despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x 146GB 15K RPM drives in RAID 1, PERC 6/i), I got about 128 MB/s seq. writes. Just two internal drives gave me about half of 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput.

    Thank you for looking into it.
    Regards
    Igor
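
    As an aside, the dd and bonnie++ figures above essentially measure a loop like the one below. A rough sequential-write sketch (the target path and sizes are assumptions; link with -lrt on RHEL 5 era systems):

        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>
        #include <unistd.h>

        #define CHUNK (1 << 20)                 /* 1 MiB per write, like dd bs=1M */
        #define TOTAL ((long long)1 << 30)      /* 1 GiB in total */

        int main(void)
        {
            char *buf = malloc(CHUNK);
            long long done;
            struct timespec t0, t1;
            double secs;

            memset(buf, 0xAB, CHUNK);
            int fd = open("/mnt/test/seqwrite.bin",
                          O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (done = 0; done < TOTAL; done += CHUNK)
                if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
            fsync(fd);  /* count the cache flush in the elapsed time */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.1f MB/s\n", TOTAL / 1e6 / secs);
            close(fd);
            free(buf);
            return 0;
        }

    The host page cache and the arrays' 1GB controller caches sit between a loop like this and the spindles, which is one reason short tests can look faster than the disks really are; that cuts both ways when comparing the two arrays.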

    Read the article

  • The difference between desktop-series HDD drives and server-series

    - by FractalizeR
    Hello. What are the main differences between desktop-series hard disks and server-series ones? The obvious things I can see are durability (server hardware is generally of higher quality and carries longer warranties) and power consumption (server hardware is focused more on performance than on power economy). Server disks are also usually a little faster, but it seems that is not always the case. Maybe there are some other reasons that would make you choose a server-oriented series (Seagate ES drives, for example) over a desktop-oriented one (the Seagate Barracuda series)? What are they?

    Read the article

  • Migration with SysPrep, ImageX and

    - by Jack Smith
    I know that you can use SysPrep and ImageX to create a prepared image that can be used on several systems, but the question is: how well does it work in a corporate environment when moving machines from old hardware to new hard drives and new hardware? EDIT: The system runs accounting software and databases. So SysPrep would remove all license keys and other information, which would cause problems, right? Would something else be a better option, even though there are heavy costs involved? Currently, when I clone/copy the drive, Windows will black-screen on me. So I need something with support for differing hardware?

    Read the article

  • What usb-bootable utility should I use to copy SATA hard drives?

    - by Steve Brown
    I have a computer that only has two SATA connections, and I need to copy one SATA hard drive to another. Since I have to unplug the CD drive to copy the drives, I need a USB-bootable utility. I have an old-school copy of Norton Ghost (CD-based); Ghost has always worked well for me in the past, and I see there is a new "version 15" out, but I'm not sure if it is worth buying. A friend has Acronis True Image on a USB drive; we tried to use that on the computer, but it was unable to copy both partitions (the restore partition and the main partition). Of course there may be some problem with the drive that is keeping Acronis from working (it just exits with a lame error about not being able to copy the disks, with no error code or detailed information), but I'm interested in knowing if there is a better, more solid, or more widely used solution that I should invest in. What USB-bootable utilities can I use to copy SATA hard drives?

    Read the article

  • Cannot resize OS X partition

    - by David Pearce
    I am trying to resize my existing Mac OS Extended partition on my MacBook to install Windows 7 (using steps similar to these), but whenever I go to apply the changes, I get this error:

        Partition failed
        Partition failed with the error: The partition cannot be resized. Try reducing the amount of change in the size of the partition.

    The total capacity of the hard drive in question is 260 GB, with the entirety taken up by the OS X boot partition. I am aiming to shrink that partition down to 60 GB. How can I fix this problem? I have been reducing the amount of change by 10 GB on each attempt, but it still is not working. I assume the problem is that there is not a large amount of contiguous space on the device. Is there some way I can do a manual defrag that would rectify this problem?

    Read the article

  • What would I need to do to clone a Linux install to another machine?

    - by Marcus Hughes
    Basically, I want to make an image of one Linux install I have and distribute it over similar (if not identical) spec machines on a network. I know it is possible, but I am unsure of what needs to be done before I make the image of the install to ensure it works correctly on all machines. The first thing I know needs to be done is to remove all interface configuration, as I assume the install would otherwise just jump to eth1 (and possibly try to use the same MAC) when it can't locate the network interface it had before. How would I do this so it auto-detects and goes to eth0 again? Is there anything else I would need to do to the install before I make its image?

    Read the article
