Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • BIOS setting: AHCI or RAID (when using SSD + 2x HDD in RAID-0)

    - by nixdagibts
    Hello there, I want to add a new SSD and use it as the system drive, with Win7 x64 installed. For the driver I chose the newest Intel Rapid Storage driver (not MSAHCI). I know that I have to use AHCI as the BIOS setting for optimal SSD read/write performance. But I'm also using two normal HDDs as a separate RAID-0 array:

        SSD: Win7
        HDD: RAID-0
        HDD: RAID-0

    If I set the BIOS on my ASUS P5W DH Deluxe to AHCI, my RAID-0 array isn't recognized, and if I use RAID as the setting, my SSD may not reach its top speed. But I'm not sure about that. In short: AHCI means no RAID-0; RAID possibly means no optimal SSD performance. Now my question: can I use RAID as the BIOS setting and be sure that there is no decrease in SSD performance? Google finds so many articles with similar topics and my head is spinning. Two examples: set AHCI, and after installing the OS switch the BIOS setting to RAID... what? Or: use a diskette and press F6 while installing Win7... really? I thought those days were gone.

    Read the article

  • Optimizing Disk I/O & RAID on Windows SQL Server 2005

    - by David
    I've been monitoring our SQL server for a while and have noticed that I/O hits 100% every so often in Task Manager and Perfmon. I have normally been able to correlate this spike with SUSPENDED processes in SQL Server Management Studio when I execute "exec sp_who2". The RAID controller is managed by LSI MegaRAID Storage Manager. We have the following setup:

        System drive (Windows) on RAID 1 with two 280GB drives
        SQL on RAID 10 (two mirrored 280GB drives in two different spans)

    This is a database that is hammered during the day but is pretty inactive at night. The DB size is currently about 13GB, and it is used by approximately 200 (and growing) users a day. I have a couple of ideas I'm toying around with:

        1. Checking the indexes and reindexing some tables.
        2. Adding an additional RAID 1 (with two new, smaller HDs) and moving SQL's log data file (LDF) onto the new RAID.

    For #2, my question is this: would we really be increasing disk performance (I/O) by moving data off of the RAID 10 onto a RAID 1? RAID 10 obviously has better performance than RAID 1. Furthermore, SQL must write to the transaction log before writing to the database. But on the flip side, we'd be reducing both the amount of data stored on and the amount of data written to the RAID 10, which is where all of the "meat" is, thereby increasing that RAID's performance for read requests. Is there any way to find out what our current limiting factor is (the drives vs. the RAID controller)? If the limiting factor is the drives, then maybe adding the additional RAID 1 makes sense. But if the limiting factor is the controller itself, then I think we're approaching this thing wrong. Finally, are we just wasting our time? Should we instead be focusing our efforts on #1 (reindexing tables, reducing network latency where possible, etc.)?
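
    Before adding spindles, it may help to confirm where the time is actually going: sustained high "Avg. Disk sec/Read" and "Avg. Disk sec/Write" values on the SQL volumes (say, consistently above 15-20 ms) point at the drives, while low latencies despite the 100% I/O spikes point elsewhere. Below is a rough sketch of sampling those counters from a small console app; the one-minute duration, the thresholds and the disk instance names are illustrative only.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class DiskLatencySampler
        {
            static void Main()
            {
                // "_Total" aggregates all physical disks; replace it with a specific instance
                // (for example "0 C:") to watch the RAID 10 volume on its own.
                var readLatency  = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Read",  "_Total");
                var writeLatency = new PerformanceCounter("PhysicalDisk", "Avg. Disk sec/Write", "_Total");
                var queueLength  = new PerformanceCounter("PhysicalDisk", "Current Disk Queue Length", "_Total");

                // The first NextValue() call only primes a counter; later calls return real samples.
                readLatency.NextValue();
                writeLatency.NextValue();
                queueLength.NextValue();

                for (int i = 0; i < 60; i++)   // sample once a second for a minute
                {
                    Thread.Sleep(1000);
                    Console.WriteLine("read {0:F1} ms, write {1:F1} ms, queue {2:F0}",
                        readLatency.NextValue() * 1000,
                        writeLatency.NextValue() * 1000,
                        queueLength.NextValue());
                }
            }
        }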

    Read the article

  • Implications of disabling the AMD Phenom's TLB patch?

    - by DMA57361
    I'm currently running an AMD Phenom X4 9600 processor (yeah, it's aging a bit, but other recent problems mean it's not getting upgraded in the immediate future), which happens to be one of the chips that suffer from the TLB errata. I recall that the first time I played with disabling the TLB patch (probably over a year ago, while playing a game that had a severe performance problem such that it was almost unplayable unless the patch was disabled) I had at least one BSOD, but I can't remember them being particularly frequent. However, because it reduced stability, I stopped disabling the patch once I was done with the game. Now, after some recent hardware changes, I was experiencing much worse performance than expected from the new hardware under some circumstances, and the TLB patch jumped to mind; after testing I found that disabling the patch would improve the performance to expected levels. I'm now wondering if it's worthwhile always having the patch disabled to avoid any potential slowdowns cropping up in the future, or if it is too dangerous. Everything I read states that the bug, when not patched, can cause a system lock-up in "rare circumstances". So, with the TLB patch disabled:

        1. How frequently should system lock-ups be expected?
        2. Do we know what circumstances trigger the lock-ups? (Don't worry too much about being highly technical, but essentially I wonder whether the chip is more vulnerable under heavy load, heavy memory usage, etc.)
        3. Are there any secondary problems I should be aware of? (Don't include things that are characteristic of all lock-ups, please.)

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality out of these is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high-quality (large) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence, combining all of the title sequence's .vob files into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This lets me get a high-quality rip in about 40 minutes. The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC, for that matter) thinks the video files are anywhere from 5 minutes to 0 minutes long. That is not a problem in itself (the video will still play all the way through), but if you pause it, the video starts over from the beginning when you unpause (and fast forward and rewind don't work either). I suspect that something is wrong with the way the timeline is encoded in the video file; various forums on the internet recommended using VirtualDub to fix the errors. But when I try to open the file, VirtualDub says that the file is not in MPEG-1 encoding and may be in MPEG-2. Is there any way to fix this? PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and is dealing more with WMV files.

    Read the article

  • What is a long-term strategy to deal with CPU fan dust in my home office?

    - by PaulG
    There are numerous discussions of CPU overheating and how it can sometimes be corrected by removing the dust from the CPU fan. I have read many of these, but I can't find anyone describing a long-term strategy for dealing with the problem. There are some suggestions here, for example, about how often the inside of the computer should be dusted, but I find them generally unsatisfactory. As it stands, in my rather dusty house (heated by a wood stove, with no central air circulation), I need to vacuum out the CPU fan every 3 to 4 months. At high CPU load, this can make the difference between 65C and 100C. I'm tired of hauling out the vacuum every time I anticipate high CPU load. What steps can I take to deal with this systematically in the long term? Moving my high-CPU-load computing to the cloud is not a realistic option. Neither is vacuuming my home office more than once a week! (Details: my computer is on the floor in a Cooler Master HAF 922 case and uses an Intel CPU fan on an i7 chip.) EDIT: While submerging the motherboard in mineral oil would definitely solve the problem, it is a bit of an expensive solution.

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

        for best write performance
        for use only with embedded Linux
        for better reliability (random power failures may occur)
        using a 64kB cluster size

    I'm using an 8GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I keep noticing that my write performance is always best with the FAT32 pre-installed by Kingston. If I reformat the card, even with FAT32, the performance suffers. After browsing Wikipedia, I stumbled upon the following passage saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64kB cluster size.

        Risks of reformatting
        Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient.

    Read the article

  • Painless deployment of a Django app (port from Drupal). Do I have to switch to a VPS?

    - by Monden
    I'm about to complete porting my Drupal-based community site to Django. My Drupal site has been hosted on shared hosting (DreamHost) for the last 4 years, and stability and performance have been satisfactory. The site gets around 5k unique visitors and 70-80k page views a day. This will be my first deployment of a Django application, and I'm not comfortable with managing my own VPS. I use Ubuntu as a dev server, but I don't have experience with it in a production environment. I have an unrelated internal CRM app (Django) that I host with WebFaction; however, security and performance aren't an issue there as it's only accessed by 5 people. Unfortunately, I don't have much time to learn and maintain a VPS at the moment. I would like to know if I can host a site with this much traffic in WebFaction's shared environment. How would performance differ in comparison to Linode or Slicehost? Google App Engine isn't an option at the moment as I'll be using my current PostgreSQL database.

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Server Fault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list of potential candidates, based mainly on maturity, license and support:

        Ceph
        Lustre
        GlusterFS
        HDFS
        FhGFS
        MooseFS
        XtreemFS

    The key properties that our system should exhibit:

        an open source, liberally licensed, yet production-ready solution, i.e. mature, reliable, community and commercially supported;
        the ability to run on commodity hardware, preferably designed for it;
        high availability of the data, with the main focus on reads;
        high scalability, so operation over multiple data centres, possibly on a global scale;
        removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.

    The sensitivity points that were identified, and which resulted in the following questions, are:

        1. Transparency to the processing layer/application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast, high-performance processing: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
        2. POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here is mainly: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX compliant by design; what are the pros and cons?
        3. What about the difference between synchronous and asynchronous operation of a distributed file system? A synchronous distributed file system is preferred for reliability, but it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?

    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • virtual machines, dual booting and data disks on SSD

    - by stevemarvell
    This is in planning, so if I've got the strategy wrong, please let me know. There are multiple questions here, but I think they all boil down to the same answers. The hardware is a laptop with a single SSD, and I'm trying not to lose the performance of the SSD. I plan a natively dual-booting Windows (plus Cygwin) and Linux machine, which is my BYOD and represents the development environment. I keep the codebase on a shared partition (though sometimes this is an external Thunderbolt SSD) which can be natively "mounted" by whichever OS is in operation. I boot into one environment or the other depending on the task at hand. Sometimes I have to develop with Windows tools, but generally Linux is my preferred development environment. It would be ideal if I could VM the other OS and run either in either. I'm going to assume, because I've not found a sensible VM-based solution, that I have to get Samba involved to share the code partition between VMs. Is this going to blow my SSD performance in the VM? The client also supplies me with a VM for the target environment, usually Linux. This is not often suited to development and is used for testing only. I normally keep two copies of this, one as a sandbox and one which I deploy to using the client's preferred method. I keep these VM snapshots on the shared partition. The latter is interacted with over the network and so has no disk-sharing requirements. However, it would be useful for the sandbox to be able to "mount" the codebase from the natively running OS. Is this Samba or NFS again, depending on the native OS? Am I missing a trick which would allow this all to work smoothly with all four environments running at once, without losing the SSD performance?

    Read the article

  • Curious enigma of a network cable / connection / quality

    - by Foo Bar
    So, the situation is like this: I'm renting an apartment in a large house and sharing the internet with the landlord, who lives downstairs. The internet is (at my best guess) optical 20/20 Mbit. I don't know how it's all wired in his flat (I haven't been there or seen it). Anyway, a cable comes into my flat which seems to be connected directly to the optical-to-Ethernet router (and the password is the default one, so I have access, he he). There was a switch connected to that and to wires that go around the flat, and the wiring is terrible. It even mixes phone and Ethernet, and from what I can see some cables are even interconnected!? Anyway, this cable that comes into my flat is very short. I can barely connect my computer to it, but if I do, I seem to get decent speed and performance. Not great, but decent. If, however, I connect a switch to it (I've tried two different switches and a Wi-Fi switch), it's all blinking but I can't even connect to 192.168.1.1 (the router): DHCP fails, and ping loses 80-100% of replies. So I connected this cable directly to the other cable which goes to my work room, using a coupler that has two female jacks and no electronics. Now when I connect my computer in my room, again, the performance is decent. When I connect a WRT54GL (with Tomato, DHCP disabled) to it and plug a cable into this WRT and into my computer... the performance is gone. Download seems okay on Speedtest, but upload is 0.2 Mbps and connections take forever to establish. So what kind of cable troll do I have here? Any ideas?

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using SQL Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type BACKUPIO. Of course it looks like an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is Gigabit Ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that, surprisingly enough, it's not specifically a wait on I/O. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not, but it doesn't say (or I don't understand) exactly what resource is missing: http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all: http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
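
    One way to narrow it down is to snapshot the server-wide wait statistics just before and just after a backup window and see which backup-related waits actually grew during the job. Below is a rough sketch of doing that from a small console app; the connection string is a placeholder and the wait types listed are just the usual backup-related suspects, not an exhaustive or authoritative set.

        using System;
        using System.Data.SqlClient;

        class BackupWaitSnapshot
        {
            static void Main()
            {
                // Placeholder connection string; point it at the instance running the backups.
                const string connectionString = "Server=.;Database=master;Integrated Security=true";

                // Cumulative waits since the last service restart (or since the stats were cleared).
                // Run once before and once after a backup job and diff the numbers.
                const string query = @"
                    SELECT wait_type, waiting_tasks_count, wait_time_ms
                    FROM sys.dm_os_wait_stats
                    WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD', 'ASYNC_IO_COMPLETION')
                    ORDER BY wait_time_ms DESC;";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(query, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine("{0}: tasks={1}, total wait={2} ms",
                                reader.GetString(0), reader.GetInt64(1), reader.GetInt64(2));
                        }
                    }
                }
            }
        }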

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower-end machines (switching from Path to StreamGeometry, using DrawingVisual instead of UserControl, etc.), the CPU usage still seems very high. Here are some numbers from a few 5-minute sampling periods:

        ~60FPS, 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB RAM, NVIDIA Quadro NVS 140M (128MB), Vista [my dev laptop]
        ~40FPS, 50% average CPU on Pentium D @ 3.4GHz, 1.5GB RAM, Standard VGA Graphics Adapter (unknown), 2003 Server [a crappy desktop]

    I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to profile the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (I've never profiled WPF before).

    WPF Performance Suite:

        Process Launch Error
        Unable to attach to process: CCHearts.exe
        Do you want to kill it?

    This error message occurs when I click cancel after attempting the launch. If I don't click cancel, it sits there idle, I guess waiting to attach.

    Performance Explorer:

        Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application.

    Output window from profiling:

        Profiling started.
        Profiling process ID 5360 (CCHearts).
        Process ID 5360 has exited.
        Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp.
        Profiling finished.
        PRF0025: No data was collected.
        Profiling complete.

    So I'm stuck wanting to improve performance but having no concrete way to determine where the bottleneck is. I've been relatively successful throwing darts at this point, but I'm beyond that now :) PS: The screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts...

        // NOTE: Grasping at straws here ;-)
        RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality);
        RenderOptions.SetCachingHint(newHeart, CachingHint.Cache);
        RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased);

    I threw those in a little while back and didn't see much difference (I'm not sure the bitmap scaling even comes into play). I really wish I could get profiling working so I'd know where to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them into a queue once they completed, or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.
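
    One knob that often cuts WPF animation CPU noticeably, independent of profiling, is lowering the global animation frame rate: WPF targets roughly 60 FPS for timelines by default, and a floating-hearts screensaver rarely needs that. A minimal sketch of overriding the default at application startup; the value 30 is only an example to experiment with, and the App class here is assumed to be the standard WPF Application subclass.

        using System.Windows;
        using System.Windows.Media.Animation;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);

                // Lower the default desired frame rate for every Timeline in the application.
                // 30 FPS is illustrative; tune it until the animation still looks smooth enough.
                Timeline.DesiredFrameRateProperty.OverrideMetadata(
                    typeof(Timeline),
                    new FrameworkPropertyMetadata { DefaultValue = 30 });
            }
        }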

    Read the article

  • Which MacBook(Pro) for running Visual Studio 2010 on VMWare Fusion on a Mac?

    - by Greg
    Hi. Does anyone have experience running Visual Studio 2010 on a MacBook or MacBook Pro (via VMware Fusion)? Any feedback or advice, based on your experience, on what level of MacBook Pro (i.e. CPU type, CPU speed) you would target to get reasonable/good performance from VS2010 on it? (I'm mainly concerned about whether I would be frustrated with the performance if I got a base-level MacBook Pro 13" 2.4GHz Core 2 Duo.)

    Read the article

  • How does ASP.Net MVC differ from Classic ASP (not ASP.Net--the original ASP)

    - by LuftMensch
    I'm trying to get a high-level understanding of ASP.NET MVC, and it has started to occur to me that it looks a lot like the original ASP script. Back in the day, we organized our "model"/business logic code into VBScript classes or into VB COM components. Of course, now we have the additional power of C# and the .NET Framework classes. Besides the high-level OO and other capabilities of C# and .NET, what are the other major differences between the original ASP and ASP.NET MVC?
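
    Beyond the language, the main structural difference is that a request maps to a controller action that hands a model to a separate view, rather than to a single page of interleaved script and markup. A hypothetical sketch of what that looks like in ASP.NET MVC; Product, ProductRepository and ProductController are made-up names used purely for illustration.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web.Mvc;

        // Hypothetical model and repository, standing in for real data access.
        public class Product
        {
            public string Name { get; set; }
        }

        public static class ProductRepository
        {
            public static IEnumerable<Product> GetAll()
            {
                return new[] { new Product { Name = "Widget" }, new Product { Name = "Gadget" } };
            }
        }

        public class ProductController : Controller
        {
            // GET: /Product/
            // The controller gathers the data (the "model") and hands it to a view;
            // no HTML is emitted here, unlike a classic ASP page interleaving VBScript and markup.
            public ActionResult Index()
            {
                var products = ProductRepository.GetAll().OrderBy(p => p.Name).ToList();
                return View(products);   // renders the strongly typed Views/Product/Index view
            }
        }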

    Read the article

  • CGContextDrawPDFPage doesn't seem to persist in CGContext

    - by erichf
    I am trying to access the pixels of a CGContext written to with a PDF, but the bitmap buffer doesn't seem to update. Any help would be appreciated:

        //Get the reference to our current page
        pageRef = CGPDFDocumentGetPage(docRef, iCurrentPage);

        //Start with a media crop, but see if we can shrink to a smaller crop
        CGRect pdfRect1 = CGRectIntegral(CGPDFPageGetBoxRect(pageRef, kCGPDFMediaBox));
        CGRect r1 = CGRectIntegral(CGPDFPageGetBoxRect(pageRef, kCGPDFCropBox));
        if (!CGRectIsEmpty(r1))
            pdfRect1 = r1;

        int wide = pdfRect1.size.width + pdfRect1.origin.x;
        int high = pdfRect1.size.height + pdfRect1.origin.y;

        CGContextRef ctxBuffer = NULL;
        CGColorSpaceRef colorSpace;
        UInt8* bitmapData;
        int bitmapByteCount;
        int bitmapBytesPerRow;

        bitmapBytesPerRow = (wide * 4);
        bitmapByteCount = (bitmapBytesPerRow * high);
        colorSpace = CGColorSpaceCreateDeviceRGB();
        bitmapData = malloc(bitmapByteCount);
        if (bitmapData == NULL) {
            DebugLog(@"Memory not allocated!");
            return;
        }
        ctxBuffer = CGBitmapContextCreate(bitmapData, wide, high,
                                          8, // bits per component
                                          bitmapBytesPerRow, colorSpace,
                                          kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
        if (ctxBuffer == NULL) {
            free(bitmapData);
            DebugLog(@"Context not created!");
            return;
        }
        CGColorSpaceRelease(colorSpace);

        //White out the current context
        CGContextSetRGBFillColor(ctxBuffer, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(ctxBuffer, CGContextGetClipBoundingBox(ctxBuffer));

        CGContextDrawPDFPage(ctxBuffer, pageRef);
        //!!! This displays just fine to the context passed in from
        //- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx.
        //That is, I can see the PDF page rendered, so we know ctxBuffer was created correctly.
        //However, if I view bitmapData in memory, it only shows as 0xFF (or whatever fill color I use).
        CGImageRef img = CGBitmapContextCreateImage(ctxBuffer);
        CGContextDrawImage(ctx, tiledLayer.frame, img);

        void *data = CGBitmapContextGetData(ctx);
        for (int i = 0; i < wide; i++) {
            for (int j = 0; j < high; j++) {
                //All of the bytes show as 0xFF (or whatever fill color I test with)?!
                int byteIndex = (j * 4) + i * 4;
                UInt8 red = bitmapData[byteIndex];
                UInt8 green = bitmapData[byteIndex + 1];
                UInt8 blue = bitmapData[byteIndex + 2];
                UInt8 alpha = m_PixelBuf[byteIndex + 3];
            }
        }

    I have also tried using CGDataProviderCopyData(CGImageGetDataProvider(img)) & CFDataGetBytePtr, but the results are the same?

    Read the article

  • C# memory management: unsafe keyword and pointers

    - by Alerty
    What are the consequences (positive and negative) of using the unsafe keyword in C# to use pointers? For example, what becomes of garbage collection, what are the performance gains/losses, what are the performance gains/losses compared to other languages' manual memory management, what are the dangers, and in which situations is it really justifiable to make use of this language feature?
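
    For concreteness, this is roughly what the feature looks like in practice: the project has to be compiled with /unsafe ("Allow unsafe code"), and managed arrays must be pinned with fixed so the garbage collector cannot move them while a raw pointer into them exists. A small sketch that sums a byte array through a pointer:

        using System;

        class UnsafeSumExample
        {
            // Requires compiling with the /unsafe switch.
            static unsafe long SumBytes(byte[] data)
            {
                long total = 0;

                // fixed pins the array so the GC will not relocate it
                // while we hold a raw pointer into it.
                fixed (byte* start = data)
                {
                    byte* p = start;
                    byte* end = start + data.Length;
                    while (p < end)
                    {
                        total += *p;
                        p++;
                    }
                }
                return total;
            }

            static void Main()
            {
                var buffer = new byte[] { 1, 2, 3, 250 };
                Console.WriteLine(SumBytes(buffer));   // prints 256
            }
        }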

    Read the article

  • Mongoid or MongoMapper?

    - by PanosJee
    I have tried MongoMapper and it is feature-complete (offering almost all ActiveRecord functionality), but I was not very happy with the performance when using large datasets. Has anyone compared it with Mongoid? Any performance gains?

    Read the article

  • How to modify code so that it adheres to the Law of Demeter

    - by guazz
        public class BigPerformance
        {
            public decimal Value { get; set; }
        }

        public class Performance
        {
            public BigPerformance BigPerf { get; set; }
        }

        public class Category
        {
            public Performance Perf { get; set; }
        }

    If I call:

        Category cat = new Category();
        cat.Perf.BigPerf.Value = 1.0m;

    I assume this breaks the LoD? If so, how do I remedy it if I have a large number of inner class properties?
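
    One common remedy, sketched below, is to stop exposing the inner objects and give Category a member that does the work itself, so callers only ever talk to their immediate neighbour. The method names here are made up for illustration, and whether this is worth doing for every property chain is a judgment call rather than a hard rule.

        public class BigPerformance
        {
            public decimal Value { get; set; }
        }

        public class Performance
        {
            private readonly BigPerformance bigPerf = new BigPerformance();

            // Delegate rather than expose BigPerformance itself.
            public void SetBigPerformanceValue(decimal value)
            {
                bigPerf.Value = value;
            }
        }

        public class Category
        {
            private readonly Performance perf = new Performance();

            // Callers of Category never see Performance or BigPerformance.
            public void SetBigPerformanceValue(decimal value)
            {
                perf.SetBigPerformanceValue(value);
            }
        }

        class Demo
        {
            static void Main()
            {
                var cat = new Category();
                cat.SetBigPerformanceValue(1.0m);   // one dot for the caller, no chained properties
            }
        }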

    Read the article

  • Windows RPC vs XML-RPC

    - by Y.Z
    Is there any benchmark of encoding/decoding common typed data in the Microsoft RPC NDR engine (DCE 1.1) in comparison with XML-RPC-C/C++, the de facto C/C++ implementation of XML-RPC? I have to choose between Windows RPC and XML-RPC-C/C++ to implement my own common object infrastructure for high-performance computing on Windows. Any recommendation on which to pick with regard to performance? Thank you. Best regards, Yang

    Read the article

  • StringBuilder vs XmlTextWriter

    - by Wololo
    I am trying to squeeze as much performance as I can from a custom HttpHandler that serves XML content. I'm wondering which is better for performance: using the XmlTextWriter class, or ad-hoc StringBuilder operations like

        StringBuilder sb = new StringBuilder("<?xml version=\"1.0\" encoding=\"UTF-8\" ?>");
        sb.AppendFormat("<element>{0}</element>", SOMEVALUE);

    Does anyone have first-hand experience?
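
    For comparison, here is a minimal sketch of the XmlTextWriter route writing a similar fragment into a string; unlike raw string concatenation it escapes special characters in values for you, which can matter as much as the raw speed difference.

        using System;
        using System.IO;
        using System.Text;
        using System.Xml;

        class XmlWriterSketch
        {
            static void Main()
            {
                var sb = new StringBuilder();
                using (var sw = new StringWriter(sb))
                using (var writer = new XmlTextWriter(sw))
                {
                    writer.WriteStartDocument();                 // emits the XML declaration
                    writer.WriteStartElement("element");
                    writer.WriteString("SOMEVALUE & friends");   // special characters are escaped for us
                    writer.WriteEndElement();
                    writer.WriteEndDocument();
                }
                Console.WriteLine(sb.ToString());
            }
        }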

    Read the article

  • Running virtual machines: Linux vs Windows 7

    - by vikp
    Hi, I have tried running a Windows XP development virtual machine under Windows 7 and the performance was dreadful. I'm considering installing Linux and running the virtual machine from Linux instead, but I'm not sure whether I can expect any performance gains. It's a 2.4GHz Core 2 Duo machine with 4GB RAM and a 5400 RPM HDD. Can somebody please recommend a very cut-down version of Linux that can run VMware Player and isn't resource hungry? Thank you.

    Read the article

  • String literals vs constants for Session[...] dictionary keys

    - by FreshCode
    Session[Constant] vs Session["String Literal"] performance: I'm retrieving user-specific data like

        ViewData["CartItems"] = Session["CartItems"];

    with a string literal key on every request. Should I be using constants for this? If yes, how should I go about implementing the frequently used string literals, and will it significantly affect performance on a high-traffic site? The related question I found does not address ASP.NET MVC or Session.
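
    The performance difference between a const string and an inline literal should be negligible (a const is substituted into the call site at compile time), so the real win is avoiding typos in key names. A minimal sketch of the usual pattern; SessionKeys and CartController are made-up names for illustration.

        using System.Web.Mvc;

        // Central place for session key names; a typo becomes a compile error instead of a silent miss.
        public static class SessionKeys
        {
            public const string CartItems = "CartItems";
        }

        public class CartController : Controller
        {
            public ActionResult Index()
            {
                // const strings are baked in at compile time, so this costs the same as a literal.
                ViewData[SessionKeys.CartItems] = Session[SessionKeys.CartItems];
                return View();
            }
        }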

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support: are huge pages on Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways to look at it.

    Read the article

  • Grouping records from while loop | PHP

    - by Wayne
    I'm trying to group records by their priority levels, e.g.:

        --- Priority: High ---
        Records...
        --- Priority: Medium ---
        Records...
        --- Priority: Low ---
        Records...

    Something like that; how do I do it in PHP? The while loop reads records ordered by the priority column, which holds an int value (high = 3, medium = 2, low = 1), e.g. WHERE priority = '1'. The label "Priority: [priority level]" has to be placed above each group of records according to its level.
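
    The usual approach is a control-break loop: have the query ORDER BY priority DESC, then emit a header line whenever the priority value changes between rows. A sketch of that pattern is below (sketched in C#; the same loop shape carries over directly to a PHP while loop over the fetched rows), with the row data made up for illustration.

        using System;
        using System.Collections.Generic;

        class PriorityGrouping
        {
            static readonly Dictionary<int, string> Labels = new Dictionary<int, string>
            {
                { 3, "High" }, { 2, "Medium" }, { 1, "Low" }
            };

            static void Main()
            {
                // Stand-in for rows already sorted by "ORDER BY priority DESC" in the query.
                var rows = new[]
                {
                    new { Priority = 3, Title = "Server down" },
                    new { Priority = 3, Title = "Data loss" },
                    new { Priority = 2, Title = "Slow report" },
                    new { Priority = 1, Title = "Typo on page" },
                };

                int? lastPriority = null;
                foreach (var row in rows)
                {
                    if (lastPriority != row.Priority)   // priority changed: emit the group header
                    {
                        Console.WriteLine("--- Priority: {0} ---", Labels[row.Priority]);
                        lastPriority = row.Priority;
                    }
                    Console.WriteLine(row.Title);
                }
            }
        }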

    Read the article

  • Should you use a PHP Framework?

    - by christopher-mccann
    I have asked before about selecting a framework, but I have never actually asked whether you should use a framework at all. In the past I have never used one, but I am being drawn more and more to the idea in order to speed up development time. I am really concerned about the performance of the application, though. Does anyone have any views on whether or not it is a good idea to use a framework? The performance of the apps built on top of it will be crucial. Thanks

    Read the article
