Search Results

Search found 13151 results on 527 pages for 'performance counters'.

  • Is there any way to limit the turbo boost speed / intensity on an i7 laptop?

    - by Anonymous
    I've just got a used i7 laptop, one of those overheating HP Pavilions with a quad-core CPU, and I really want to find a compromise between temperature and performance. If I run LINPACK or some other heavy benchmark, the temperature easily reaches 95+ °C, and with a TJunction of 100 °C for the 2630QM, the CPU throttles in a way that no cooling pad or even an industrial fan can fix. I figured out later that this is due to Turbo Boost: if I set my power plan to use 99% of the CPU instead of 100%, Turbo Boost appears to be disabled and the temperature improves, but then it loses quite a bit of performance. The regular clock is 2 GHz and Turbo Boost takes it to 2.6 GHz; if I could limit it to around 2.3 GHz, that would be a really nice thing.

    There is another question I've had a hard time getting an answer to. The clocks seem to boost to maximum very quickly even when not needed: at 0% load the CPU sits at 800 MHz, but at even about 5% load it jumps straight to maximum, and even pops into Turbo, which seems very strange to me. So I wonder if there is any way to adjust the sensitivity of the SpeedStep feature. It would seem more logical to raise the clock only when load hits, say, 50%. I do understand that most of these features are probably hardwired somewhere in the CPU itself or the motherboard, which has no tuning options, as on many laptops. But I would appreciate any recommendations, including software. Thanks.
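
    A hedged sketch of one approach on Windows 7: the power plans have a hidden "Processor performance boost mode" setting, which can be exposed and set to Disabled from an elevated prompt (this assumes the stock power-management stack; PERFBOOSTMODE is powercfg's documented alias for the setting):

        rem Unhide "Processor performance boost mode" in the advanced power plan UI
        powercfg -attributes SUB_PROCESSOR PERFBOOSTMODE -ATTRIB_HIDE
        rem 0 = Disabled; repeat with -setdcvalueindex for battery power
        powercfg -setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PERFBOOSTMODE 0
        powercfg -setactive SCHEME_CURRENT

    That is all-or-nothing, like the 99% trick. Capping Turbo at an intermediate multiplier (e.g. ~2.3 GHz) reportedly requires a third-party tool such as ThrottleStop, which can adjust the turbo ratio limits on Sandy Bridge mobile chips.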

  • UW-IMAP server, high load for one user

    - by Bruce Garlock
    We have been experiencing a very strange anomaly with one specific user on our UW-IMAP server. We have about 75 users on the server, and this particular user, who is around the middle of the pack in storage used, keeps having problems with slow speed. Most of our users run Thunderbird 2 or Thunderbird 3 (mostly 2, because of the performance issues we have had with 3). This user was on 3, and I downgraded him to 2. Performance has gotten better, but according to the imapd processes on the server, his username is still using the most CPU % and CPU time.

    I've already done all the usual troubleshooting: started the profile from scratch, compacted folders, re-indexed, moved him to a newer and faster computer, etc. Still, this user's imapd process always uses the most CPU on the server. For comparison, we set up another user with more usage and more folders than he has, but that user's imapd process does not dominate the CPU. So it almost sounds like a particular email may be the culprit, but how can we find it, if that's the problem? This has been going on for a while, and he is a management person, so his patience is about to run out. Does anyone have any ideas?
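
    One way to chase the "one bad message or folder" theory, assuming a conventional mbox layout under the user's home directory (paths are illustrative): look for unusually large folders, then watch what the busy imapd process is actually doing.

        # Largest mail folders for that user (each mbox folder is one flat file)
        du -sk /home/username/mail/* /var/spool/mail/username | sort -n | tail

        # Attach to the hot imapd process and watch which files it opens and reads
        strace -e trace=open,read -p <imapd-pid>

    A single folder that dwarfs the rest, or an strace loop over the same mbox file, would point at the message store rather than the client.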

  • High memory utilization with Firebird + Windows Server 2008 R2

    - by chesterman
    I have Windows Server 2008 R2 (64-bit) running a 64-bit installation of Firebird 2.1.4.18393_0 on a physical server with 4 GB of RAM. After a while, Task Manager shows that all memory is in use, but the memory of all processes combined does not add up to even half of it. Unfortunately, the server then starts swapping. Using RAMMap, I can see that my entire database file is mapped into memory. This only occurs on Windows Server 2008 R2 and Windows 7 64-bit; whether I use the 32-bit or 64-bit Firebird installation makes no difference. How can I prevent this? Why does it only occur on W2K8R2 and Win7? Thanks in advance.

    UPDATE: Apparently this is caused by the file system cache consuming all memory. Microsoft's documentation explains that this WAS an issue on Windows XP, 2003, Vista and 2008, but says it was solved in Windows 7 and 2008 R2, and adds that the issue is more common on 64-bit hosts (http://support.microsoft.com/kb/976618). There are some tools (DynCache, SetCache, and the Get/SetSystemFileCacheSize calls in the Windows API) that let me set an upper limit on the memory used by the file system cache, but the documentation argues that I should not do this on W2K8R2 because it will severely impact overall system performance. I tried anyway: performance remained just as poor and the page file was still being used, although there is now more than 1 GB of free memory.
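
    For what it's worth, the Get/SetSystemFileCacheSize pair mentioned above can be reached from PowerShell via P/Invoke. A hedged sketch that only reads the current limits (the Set call is the mirror image, but it additionally requires the SeIncreaseQuotaPrivilege to be enabled, and it is the part Microsoft warns about on R2):

        $sig = '[DllImport("kernel32.dll", SetLastError = true)] public static extern bool GetSystemFileCacheSize(ref UIntPtr min, ref UIntPtr max, ref uint flags);'
        $k32 = Add-Type -MemberDefinition $sig -Name FsCache -Namespace Win32 -PassThru
        $min = [UIntPtr]::Zero; $max = [UIntPtr]::Zero; $flags = [uint32]0
        # Flags bit 0x1 (FILE_CACHE_MAX_HARD_ENABLE) in the output means a hard cap is in force
        [void]$k32::GetSystemFileCacheSize([ref]$min, [ref]$max, [ref]$flags)
        "file system cache: min=$min max=$max flags=$flags"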

  • VMware Workstation 8 Disk I/O & Hard Faults

    - by Scott
    I have VMware Workstation 8 installed on a host machine with the following specs: Intel i5 2500K CPU, 16 GB DDR3-1600 RAM, and a 1 TB Western Digital Caviar Black hard drive.

    I have two Windows 7 virtual machines configured (currently running one at a time, but I will be operating both at once when my 32 GB RAM kit arrives in a couple of days). Each is configured with 8 GB of RAM, with no tweaks or performance customizations; all VMware settings are the defaults. When I boot these machines and run various programs (Visual Studio, Outlook, etc.), I can hear the disk thrashing quite a bit, and Resource Monitor shows anywhere between 300-800 hard faults per second. From the host machine, they appear to come from the VMware image; inside the virtual machine, whatever app I'm currently loading is the image causing the hard faults.

    As I understand it, a hard fault is (simply put) when a page of memory has been swapped out to the page file and has to be read back from the page file instead of from memory. I don't understand why this is happening, though. With 8 GB of RAM on the guest machine and 6.5 GB available, what could be causing this? I know Windows 7 supposedly improved page file management over XP, but this kind of slowdown, disk thrashing and high hard fault count seems excessive when I have that much free RAM. Is there anything I can do to improve performance in my guest machines? On the host machine I can open and run any applications at all, and hard faults stay around 0 with low disk I/O.
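
    One commonly cited set of Workstation tweaks, as a hedged sketch (added to each VM's .vmx file while it is powered off; verify each option against your version's documentation): mainMem.useNamedFile = "FALSE" stops backing guest RAM with a file in the VM's directory, MemTrimRate = "0" stops periodic reclamation of guest memory, and sched.mem.pshare.enable = "FALSE" skips page-sharing scans.

        mainMem.useNamedFile = "FALSE"
        MemTrimRate = "0"
        sched.mem.pshare.enable = "FALSE"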

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. We are deploying servers that will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per array. However, our plan is to use RAID 10 (for both performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has any opinions I haven't thought of; a sketch of both layouts follows below.

    Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The concern is whether to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
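
    For comparison, a hedged Linux software-RAID sketch of the two layouts being weighed (device names are placeholders; a hardware controller would express the same choice through its own BIOS/CLI):

        # Option A: one 14-disk RAID 10
        mdadm --create /dev/md0 --level=10 --raid-devices=14 /dev/sd[b-o]

        # Option B: seven RAID 1 pairs, striped together with LVM
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        # ...repeat for /dev/md1 through /dev/md6 with the remaining pairs...
        pvcreate /dev/md[0-6]
        vgcreate vg_dw /dev/md[0-6]
        lvcreate --stripes 7 --stripesize 256 --extents 100%FREE --name lv_dw vg_dw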

  • How to verify the system is using the right GPU, after a system reset [duplicate]

    - by Antoros
    This question already has an answer here: "Is my mobile AMD card being used?" (2 answers)

    OS: Windows 8. CPU: Intel Core i7-3635QM. GPU 1: Intel HD Graphics 4000. GPU 2: AMD Radeon HD 8870M.

    Problem: I'm unsure whether Catalyst Control Center is using the AMD card instead of Intel's. I have encountered several issues since updating to 8.1 and I don't know what to do.

    What happened: I installed the 8.1 update on day one. After one minute of use I got a BSOD, and Windows never loaded again. System Restore wouldn't recognize the 8.0 restore points, so I did a system reset to Windows 8, since the laptop was only 3 weeks old. The reset broke things: it did restore to factory BUT kept the registry almost intact, so I had to install almost everything again, the factory drivers were working against the updated drivers' registry entries, there were several other problems, and CCC broke too.

    What I've already done: Installing new drivers on top of the old ones didn't work, so I used the AMD uninstaller first, then uninstalled and re-installed Intel's HD Graphics driver. I tried to install the AMD mobile driver, but the installer told me it wasn't compatible (even though that's the only driver AMD provides on their page). I tried AMD's Auto-Detect, but it couldn't install the driver because the card was disabled, because it didn't have a driver (see what they did there?). I had to use a workaround with Samsung Update; the driver didn't appear as a download, so I had to search for it and download it manually.

    Now the graphics card appears in Device Manager and in Catalyst, but as "8800 series" (not the exact model), and I can't check the card with dxdiag, GPU-Z or HWMonitor. When right-clicking in CCC, only the Intel card appears. Launching a game set to "high performance" does speed it up a little, but I can't be sure. How can I verify it's working properly? HWMonitor won't show the AMD card even when set to high performance, the latest GPU-Z won't work because of a problem with Intel's driver, and the legacy versions won't either. What can I do now? I don't even know if I fixed my problem or not. I also want to use Adobe Premiere with it, and the option to run it with the AMD card instead of Intel's is locked.

    Edit: it seems to work now, but I still can't change the setting for Adobe Premiere and the other programs that need it.

  • Explanation of RAM specs, and what do I need for a gaming rig?

    - by ewok
    I am looking into upgrading my custom-built PC's RAM. I use the machine mostly for gaming, but I don't really know a ton about RAM, so I wanted to ask a few questions. The research I've done tells me there is a negligible increase in speed for anything above 1600 MHz; is this true, or is it worth the extra money to go higher? Other than drawing more power from the PSU, is there any real difference in performance between voltages (1.5 V vs 1.65 V)? Most of the kits I've found in the 2x4 GB 1600 MHz range have a CAS latency of 9 and timings of 9-9-9-24. For a significant increase in price (usually about 1.5x), I can get CAS 8 or 7 and tighter timings. Is that worth the cost?

    What I am looking for here is a good explanation of what the different specs represent and how they relate to the performance of the machine. Specifically, I'm looking for which specs to focus on for a good gaming rig. I am NOT looking for a "buy this, it's the best RAM" without an explanation of why; the information will be much more valuable if it lets me make my own informed decision. As they say: give a man a fish and he'll eat for a day; teach a man to fish and he'll eat for the rest of his life.
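
    A hedged back-of-envelope way to relate those specs: CAS latency is counted in memory-clock cycles, and the DDR3 memory clock is half the rated transfer rate, so first-word latency in nanoseconds is roughly CL divided by half the rated speed:

        DDR3-1600 CL9:   9 cycles /  800 MHz = 11.25 ns
        DDR3-1600 CL7:   7 cycles /  800 MHz =  8.75 ns
        DDR3-2133 CL11: 11 cycles / 1066 MHz ≈ 10.3 ns

    In other words, faster kits usually carry a higher CL, so real latency barely moves; for gaming, the GPU tends to dominate long before RAM speed does.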

  • Capabilities of business and SOHO routers

    - by Q8Y
    I'm currently studying for the CCNA certifications (especially Cisco routers and configuration). I know that business routers provide more features than SOHO routers, along with more processing speed and RAM. Assume I need to connect a number of users through a network (internet access, file and printer sharing, ...), I have a high-speed connection to the internet, and I have already applied QoS. How can I find out how many users such a single (SOHO) router can handle? In my case I'd attach multiple switches to it until I have the number of ports needed. Would everything work well and smoothly with 50 users? What about 300? At which point would I need a business router instead? If I implemented VLANs here, would that make any difference to performance?

    When do I really need to use more than one router (SOHO or business)? I'm thinking that I may need multiple routers only to increase performance (instead of replacing the existing one), or if I have multiple locations. Put differently: is there any need for another router if my business is all in one place?
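
    There is no published "users per router" number to look up; the practical way to get one is to load-test the forwarding/NAT path. A hedged sketch using iperf (hostnames are placeholders, and iperf is a third-party tool you would install on both ends):

        # On a host on the WAN side of the router under test
        iperf -s

        # On a LAN client: 50 parallel TCP streams to roughly approximate 50 busy users
        iperf -c wan-test-host -P 50 -t 60

    While it runs, watch the router's CPU and NAT/session-table usage; SOHO units typically fall over on connection count and CPU long before raw bandwidth.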

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so.

    Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when they are both at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz. So it doesn't seem like a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only this kind of full-screen glitches. CPU stressing software isn't reporting any issues either.

    I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance!

    Specs:
        ASUS P8Z77-M
        Intel Core i5-3570K (with Intel HD 4000 Graphics)
        2x4 GB AMD Performance Edition memory
        Corsair Force 3 Series Rev. B 120 GB SSD
        Maxtor 200 GB HD
        Lite-On DVD-RW
        Antec 350 Watt PSU
        64-bit Windows 7 Professional
        2x Iiyama ProLite E2208HDS displays

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (an HP ProLiant DL585 G1), which hosts Windows 2003 R2 x64 with SQL Server 2005 x64 and a host of other apps. It currently has 6 GB of RAM. We are currently very memory-constrained and it's clear that we need more. 8 GB will probably do the trick; however, we are not sure which memory configuration will give us the biggest performance bang for the buck. Currently all 8 memory slots are filled (4 slots have 1 GB modules, while the other 4 have 512 MB modules). Should we discard the 512 MB sticks and replace them all with 1 GB sticks? If we decided to go with a higher total (e.g. 10 GB, 12 GB or 16 GB), is it advisable to keep all sticks the same size, or does it not matter? I was once told that interleaved memory requires, for better performance, that modules be installed in matched multiples (e.g. 2, 4, 8, 16, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

  • Memory overcommitment on VMware ESXi 5.0

    - by Tibor
    I would like to better understand the possibilities of VMware ESXi memory overcommitment. I've read VMware's paper on the subject, so I am familiar with the general concepts, such as hypervisor swapping, memory ballooning, and page sharing. It seems that a combination of these techniques allows for quite a large degree of overcommitment. However, I am not sure.

    I am deploying a virtual test lab comprising 4 identical sets of virtual servers and workstations and a couple of virtual router instances. Overall, I expect to be running around 20 virtual machines, with Windows XP, Windows 7 and Ubuntu for the workstation hosts, and CentOS and Windows Server 2008 instances for the servers. The problem, however, is that the host machine only has 12 GB of RAM and I don't have the option to stuff in more. I would like to know the best way to configure the hosts to achieve reasonable performance within these constraints. I have two options: allocate as little RAM as possible to each virtual machine, or allocate an extraordinary amount (such as 4 GB per instance) and let the balloon driver do the rest. Or something else? Which would work better? The machines will mostly be idle, so I don't have any major performance expectations, but they should run reasonably smoothly nevertheless.
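
    A hedged middle path between those two options, sketched in PowerCLI (VM name pattern and sizes are examples, not recommendations): give each VM a comfortable allocation but only a small reservation, and let ballooning and page sharing reclaim the slack. This assumes VMware Tools is installed in every guest, since the balloon driver lives there.

        # Allocate 2 GB per lab VM, but reserve only 512 MB of host RAM for each
        Get-VM -Name "lab-*" | Set-VM -MemoryMB 2048 -Confirm:$false
        Get-VM -Name "lab-*" | Get-VMResourceConfiguration |
            Set-VMResourceConfiguration -MemReservationMB 512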

  • Ruby installed on Ubuntu 10.10 slow on one machine but not the other

    - by Aaron Jensen
    I have a machine that was provisioned several months ago. RVM was used to install Ruby 1.9.3-p125 as well as 1.9.3-p125-perf. When I compared raw Ruby performance against another, identical machine, the older machine smoked the new one. For example:

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect     3.790000   0.000000   3.790000 (  3.800895)
        each       2.410000   0.000000   2.410000 (  2.420860)
        any        3.960000   0.000000   3.960000 (  3.972099)
        include    1.440000   0.000000   1.440000 (  1.442862)
        ------------------------------------ total: 11.600000sec

    versus:

        ================================================================================
        With in-block needle calculation
        ================================================================================
        Rehearsal ----------------------------------------------
        detect    10.740000   0.000000  10.740000 ( 10.769366)
        each       6.080000   0.010000   6.090000 (  6.106323)
        any       10.600000   0.000000  10.600000 ( 10.641606)
        include    4.160000   0.000000   4.160000 (  4.171530)
        ------------------------------------ total: 31.590000sec

    I attempted to reinstall 1.9.3-p125 with rvm on the fast machine, and that Ruby is now slow too. It's as if something changed in RVM, or I installed some package that makes compiled versions of Ruby perform significantly worse. I know this is a tough question to answer, but what should I look into in order to track down why performance has suffered so much?

    Edit: I just attempted an install with ruby-build, and the version it produced was fast. So it is something rvm does when building in my environment that is slow.
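
    One concrete thing to compare between a fast and a slow install is the set of compiler flags each Ruby was built with, since an rvm build that ends up at -O0 (or with debugging options) would explain a roughly 3x gap. A hedged check, run under each installation:

        ruby -rrbconfig -e 'puts RbConfig::CONFIG["CFLAGS"]'
        ruby -rrbconfig -e 'puts RbConfig::CONFIG["optflags"]'

    If the slow machine shows weaker optimization flags, the culprit is rvm's compile environment (e.g. an exported CFLAGS or an rvm hook) rather than the OS.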

  • Is there a utility to visualise / isolate and watch application calls

    - by MyStream
    Note: I'm not sure what to search for, so guidance on that may be just as valuable as an answer.

    I'm looking for a way to visually compare the activity of two applications (in this case a web server with PHP communicating with the system, MySQL, network devices, etc.) such that I can compare performance at a glance. I know there are tools to generate data dumps from benchmarks for Apache, and tracing tools for PHP that you can dump and analyse, but what I'm looking for is something that reports performance metrics visually from call data (what called what, how long it took, how much memory it consumed, how that can be represented visually as a call stack) and presents it graphically, as a topology or layered view where different elements of the system occupy different layers. A typical view (using swim-lane diagrams as just one analogy) might stack the layers like this, with requests flowing down and responses flowing back up, and per-layer diagnostics alongside:

        Network  (details relevant to network diagnostics)
        Linux    (details related to firewall/routing diagnostics)
        Apache   (details related to the web request)
        PHP      (plus other accesses to PHP files/resources)
        MySQL    (total time; each call listed + time + tables hit/records returned)

    My aim would be to be able to 'inspect' a request, or a range of requests over a period of time, to see what constituted the activity at that point in time and trace it from beginning to end as a diagnostic tool. Is there any such work in this direction? I realise it would be intensive on the server, but the intention is to benchmark and analyse processes against each other for both educational and professional reasons, and a visual aid is a great eye-opener compared to raw statistics or dozens of discrete activity-vs-time graphs. It's hard to show the full cycle. Any pointers welcome. Thanks!

    FROM COMMENTS: try XHProf in conjunction with other programs such as the Percona Toolkit (percona.com/doc/percona-toolkit/2.0/pt-pmp.html) for MySQL; run Apache with "httpd -X &" (single-threaded debug mode, in the background), attach with strace, and view profiles in KCachegrind.
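
    As a stop-gap, the strace route from the comments will at least quantify where a single request spends its time, even though the output is tabular rather than visual. A hedged sketch (the binary name varies by distro):

        # Run Apache as a single foreground process, tracing it and its children;
        # -c writes a per-syscall count/time summary to the output file
        strace -f -c -o /tmp/apache-syscalls.txt httpd -X
        # Issue one request against it, then Ctrl-C and read the summary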

  • Serious 64-bit laptop

    - by Daniel Gehriger
    For the past couple of years, I have been using an IBM ThinkPad T60p for daily work (software development, desktop and embedded). I am extremely satisfied with this machine due to its robustness, and it has a few features I depend on: a high-resolution 15.0" TFT FlexView display at 1600x1200 (UXGA), an excellent keyboard, and decent graphics and CPU performance.

    Some of the software I develop benefits from larger amounts of RAM, and 3 GB (Windows 7 32-bit) or 4 GB (Windows 7 64-bit on the T60p) are no longer sufficient. My customers run desktop computers with 20 GB and more, and I need at least 8 GB just to be able to run reasonable test cases. So I'm shopping around for a new laptop, but I'm struggling to find anything that matches my requirements: it must run Windows 7 64-bit Pro or higher; it must support at least 8 GB of RAM (more is better); it needs a high screen resolution (while I prefer 4:3 I can live with widescreen, but I really hope to find something with a vertical resolution similar to what I have now); it should be portable, so smaller than 16" but at least 14"; I realise FlexView isn't available anymore, but I'd like to avoid a glossy screen if possible; decent (not more) graphics performance, ideally hybrid (I'm doing a lot of CAD, never games); a good keyboard; and a reasonable CPU (I'm still fine with my current Core 2 Duo, so that shouldn't be too complicated).

    The T60p fits all those requirements except the 8 GB of RAM. Can you help me find a current notebook that matches most of them? I don't mind changing brand. Thanks!

  • Cisco configuration for public library internet

    - by AlternateZ
    I'm a C/C++ computer programmer turned IT support guy working for a public library. My day is usually spent helping random grandparents learn how to use email, so my networking knowledge is limited to what I can glean from Google.

    Here's the situation. We have a public library with 20 PCs on a LAN, plus public wifi access. Previously we ran all of this on one ADSL connection and people complained about low speeds, so we hired a networking company to set up a Cisco dual-WAN router for us and purchased an additional ADSL connection. The intention was to give the LAN PCs a guaranteed amount of bandwidth each, and then let the wifi users split the rest. The results were far worse than we expected, and all we got from the company was excuses; they've since washed their hands of us. During busy periods, network performance on the LAN PCs is so poor that attaching files to Gmail etc. often times out and fails, which is far from the "guaranteed amount of bandwidth each" we hoped for. Sometimes it feels like performance is worse than before, when we had one ADSL link and an unconfigured router.

    Anyways, surely this is a problem encountered a million times over across the world (sharing internet across many users effectively)? What are the standard solutions for something like this? I admit to not even knowing the right jargon to google for (load balancing?). I'd appreciate any links to resources or guides that might help me better understand the problem and its solutions, and perhaps some stories of your own experience solving similar problems; this will help us evaluate and negotiate with network consultants in the future. If it's relevant, our router config contains a "policy-map" section with a "bandwidth percent" for each class of user (LAN, wifi), and "fair-queue".
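
    For reference when evaluating the next consultant, the IOS constructs mentioned above fit together roughly like this hedged sketch (class names, ACL number, interface and percentages are placeholders, not your running config):

        class-map match-any LAN-PCS
         match access-group 101
        policy-map WAN-OUT
         class LAN-PCS
          bandwidth percent 60
         class class-default
          fair-queue
        interface ATM0/0
         service-policy output WAN-OUT

    One caveat worth knowing: "bandwidth percent" is a minimum guarantee that only kicks in during congestion, not a cap, so if you were promised a hard per-group limit you would expect to see "police" or "shape" statements instead.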

  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like Gmail) that clearing my history should not affect. Is this a website problem or a Firefox problem, and what can I do to fix it without clearing my history? As a web developer I'm also interested in how to make this happen (or not happen). I'm using Firefox 8, and I confirmed the problem by copying my profile to Firefox 11 (portable).

    To reproduce: go to gmail.com and sign in, with Task Manager open. Once you click Sign in or hit Enter, Gmail brings up your emails; keep an eye on the CPU usage. I checked, and right now on this machine it uses all my CPU for 22 seconds. Yes, 22 seconds. Once I cleared my "browsing & download history" it's under 6 seconds. I have no idea why or how the size of the history and the CPU usage when loading Gmail are correlated. I have Firefox set up so it never clears the history, but 22 seconds is a disaster. Can someone explain why this is happening, or offer a fix that isn't clearing my history?

    I tried visiting a few other websites, and only Gmail eats up that much CPU; most websites take under 5 seconds of max CPU. So maybe this is a Gmail problem, or a Firefox problem that Gmail happens to hit? I still don't understand why it happens.

    Edit: I forgot to mention that places.sqlite is 90 MB. I don't think that matters; I have another SQLite file of 400 MB which is pretty much two large tables and has no performance issues.
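
    One low-risk experiment before giving up the history, assuming the slowdown tracks places.sqlite: with Firefox fully closed, check and compact the database in place (the sqlite3 command-line tool is a separate download; run it from inside your profile directory):

        sqlite3 places.sqlite "PRAGMA integrity_check;"
        sqlite3 places.sqlite "VACUUM; REINDEX;"

    A heavily fragmented places.sqlite can behave far worse than its 90 MB size suggests, and compacting keeps the history while testing whether database shape, rather than history size itself, is the real variable.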

  • Network switches for LAN party

    - by guywhoneedsahand
    I am working on setting up the network for a small LAN party (fewer than 16 people). Most of them do not have wireless cards in their rigs, so I need to set up some way for everyone to a) play LAN games and b) access the internet. The LAN party will probably take place in my basement, where I have enough space; however, the basement is not wired to the router, which is actually on the floor above. I made a cantenna a while back that can boost my computer's wireless performance significantly. How can I use this to provide internet and LAN to guests? My hope was to use a switch like this one for the LAN: http://www.newegg.com/Product/Product.aspx?Item=N82E16833181166. But how can I also give people access to the internet? Is there such a thing as a combined network extender / 16-port switch? Obviously the internet performance doesn't need to be super stellar, because the games will be using the LAN; I'm just looking to provide usable internet for web browsing and very high speed LAN for games. Thanks!

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application, not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drives and a load balancer in front of them. Easy but expensive.

    I've been reading about RAM disks and the I/O benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the Linux kernel to cache the appropriate files in memory by itself?
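
    If you want to test the theory cheaply, a "RAM drive" on Linux is just a tmpfs mount populated at boot; a hedged sketch (paths and size are examples):

        # RAM-backed mount for the document root, then copy the code into it
        mount -t tmpfs -o size=512m tmpfs /var/www/app
        cp -a /srv/app-release/. /var/www/app/

    That said, the kernel's page cache will already keep hot .php files in RAM after the first read, and an opcode cache such as APC removes most of the repeated parse/compile cost, so it is worth benchmarking those first before re-architecting around a RAM disk.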

  • How to align partitions in Linux + NetApp

    - by santisaez
    NetApp support has suggested that we align our partitions to improve performance; in short, the starting sector must be divisible by 8. How can I move the start point of a misaligned partition (in production, with ext3) under Linux? A screenshot with a misaligned (start=63s) and an aligned (start=64s) partition is available at http://filesocial.com/lkwvvn2. (If anyone is interested in this topic, NetApp has a good document explaining the performance issues of misaligned partitions; search for "tr-3747": Best Practices for File System Alignment in Virtual Environments.)

    I have tried parted's "resize" and "move" commands, but when moving the start point I get this error:

        (parted) resize
        Partition number? 1
        Start? [64s]?
        End? [419425019s]? 419425018
        (parted) move
        Partition number? 1
        Start? 65
        End? [419425019s]? 419425019
        Error: Can't move a partition onto itself. Try using resize, perhaps?

    Using fdisk's 'b' command in expert mode ('move beginning of data in a partition') works, but it doesn't move the file system. Thanks!!
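
    A hedged sketch of the checking side, and of the usual (destructive) realignment route, since parted cannot slide a partition's start in place (device names and sector numbers are examples; take a verified backup first):

        # Print starts/ends in sectors; a Start evenly divisible by 8 is aligned
        parted /dev/sdb unit s print

        # After backing up and deleting the old partition: recreate it starting at
        # an aligned sector, then restore the file system contents from backup
        parted /dev/sdb mkpart primary 64s 419425019s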

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. As I understand it, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital (one with a 128 MB/s read rate, the other 150 MB/s, plus various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128 MB/s drives? Or does RAID not really care, and give you an essentially optimal result (e.g. up to 278 MB/s total read speed for RAID 0, and similarly for other RAID levels with more "unmatched" drives; 5 and 1+0 come to mind as possibilities)?

    Also, I couldn't find much information on how this differs between RAID setups, e.g. RAID 0 vs RAID 1, software vs hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

  • Permissions Issue with Files Generated by PerfMon

    - by SvrGuy
    We are trying to implement data logging to CSV files using a Data Collector Set in PerfMon (on a Windows Server 2008 R2 system). The issue we are running into is that we (seemingly) can't control the permissions set on the log files PerfMon creates. What we want is for the log files to have Everyone:F permissions (Full Control for Everyone).

    We have a directory structure where all logs go into a folder per machine, c:\vms\PerfMonLogs\%MACHINENAME% (e.g. c:\vms\PerfMonLogs\EvaluationG2). In this example, c:\vms\PerfMonLogs\EvaluationG2 has Everyone:F permissions; its icacls output is:

        EVALUATIONG2/ Everyone:(OI)(CI)(F)
                      NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    When the data collector set runs, it creates new subfolders and files within c:\vms\PerfMonLogs\EvaluationG2 (e.g. C:\vms\PerfMonLogs\EVALUATIONG2\M11d26y2012N3), and each of these has the following permissions:

        M11d26y2012N3 NT AUTHORITY\SYSTEM:(OI)(CI)(F)
                      BUILTIN\Administrators:(OI)(CI)(F)
                      BUILTIN\Performance Log Users:(OI)(R)

    So the new folders are not simply inheriting permissions from the parent folder (I don't know why). We tried adding Everyone:F via the security tab on the collector set: no dice. Any ideas? How do we control the permissions on the log files generated by a PerfMon data collector set?
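
    If the collector set itself can't be persuaded to set ACLs, one workaround is to re-grant them after each run. A data collector set can launch a scheduled task when it stops (the "Task" field in its properties), which could run something like this hedged one-liner (path as in the example above):

        icacls "C:\vms\PerfMonLogs\EvaluationG2" /grant "Everyone:(OI)(CI)F" /T /C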

  • Is this hardware compatible?

    - by Tom Kaufmann
    I am trying to put together my new machine, and I want to do it myself; this is my first attempt at building a system. After carefully reviewing feedback and my budget, I have selected the components listed below. Can anybody tell me whether they are compatible with each other?

        Transcend 64 GB 2.5" SATA solid state drive
        Asus GeForce GTX 550 Ti 1 GB DDR5 (ENGTX550 TI DI/1GD5) graphics card
        Seagate Barracuda 1 TB internal hard drive
        Cooler Master eXtreme Power Pro 600 power supply
        Intel Core i5 2500K Sandy Bridge 3.30 GHz 95 W quad-core desktop processor
        Intel DX79TO motherboard
        Corsair CMZ8GX3M2A1600C9 8 GB DDR3 1600 MHz dual-channel desktop memory kit
        Sony AD-7260S-ZS internal DVD writer, black
        Cooler Master Hyper TX3 EVO Intel CPU cooler
        Cooler Master Elite 335U case
        LG E2051T 20.1 inch SuperSlim monitor

    Is any of these hardware components incompatible with the i5 2500K? If you have suggestions for other hardware that would boost performance or lower cost at the same performance, please share, but my primary question is whether these are compatible. Any help is appreciated. Thank you.

  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware?

    Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor config was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server which had four cores presented to it.
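
    The usual argument involves co-scheduling: with strict co-scheduling, a 4-vCPU VM has to wait until enough physical cores are free at the same moment, so on a busy 4-core host its vCPUs can spend a lot of time "co-stopped". A hedged way to check whether that is actually happening, from the host:

        esxtop    # press 'c' for the CPU panel; a persistently high %CSTP value
                  # for the VM means its vCPUs are stalling while waiting to be
                  # co-scheduled, which is exactly the wide-VM penalty described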

  • How to upgrade iPhone 3G to iOS 5 now that iOS 6 is out? [closed]

    - by mmmshuddup
    My friend has an old iPhone 3G, and I wanted to upgrade it to iOS 5 because I had read that performance on that phone is actually better with iOS 5, in spite of the difference in processor power. The problem is that now that iOS 6 is out, iTunes only gives me the option to upgrade to that. I am a little more leery of iOS 6, knowing that it was launched mostly for the iPhone 5; since that phone has a much faster processor, I would assume performance would be an issue on the iPhone 3G. How can I, if possible, upgrade just to iOS 5?

    Note: I would like to avoid jailbreaking the phone by all means possible. (It's not mine, and I don't want to take risks with a phone that doesn't belong to me.)

    Edit: If anyone knows of a good site to get help on questions like this, let me know. Apparently you're allowed to ask questions about iPhones here, just not this one, which seems completely asinine, but oh well. Anyone with helpful ideas, please post in the comments.

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model works like this: the admin enables write-back cache with a BBU; writes are cached in the RAID controller's RAM (a major performance benefit); and the battery preserves uncommitted cached data in the event of a power loss (reliability). If I lose power and come back within a day or so, my data should be both complete and uncorrupted.

    The downside is that if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance suffers. What's more, the relearn cycles are usually automated on a schedule which may or may not fall in the middle of heavy traffic, so if that's a concern it has to be manually disabled and manually scheduled for off-hours. Annoying either way.

    NV caches instead have capacitors with a sufficient charge to commit any uncommitted cached data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great is the prospect of that flash module having an issue. What if it's completely hosed? What if it's only partially hosed, a bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, and the card itself can fail, but that's common territory. In case you didn't guess: yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)
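
    Whether a self-test exists is controller-specific, but most vendor CLIs will at least report the health of the flash/supercap module alongside (or instead of) a BBU. A hedged example for LSI MegaRAID-based controllers (command spelling varies across MegaCli versions, and the fields reported for a flash-backed module differ from a battery's):

        MegaCli -AdpBbuCmd -GetBbuStatus -aALL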
