Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.


  • Defragment / Performance Monitor without Task Scheduler

    - by mjaggard
    My organisation has a policy of disabling Task Scheduler on all servers and workstations (don't ask, I tried once to wrestle the pig). I need to collect performance stats using Data Collector Sets in Windows 7 or Windows 2008, but the Performance Monitor interface requires Task Scheduler to be running. Is this possible? I'm not trying to schedule anything (except the collection of WMI information every 15 seconds, though I doubt it hands that task off to the Task Scheduler). Is there any way to trick it into thinking Task Scheduler is running? If not, is there any way to temporarily override the group policy to allow Task Scheduler to run? I've found that most group policy can be overridden in this way by an Administrator by editing the registry. In exactly the same vein, I want to defragment a hard disk on one of my workstations, but I can't get it to start because of the dependency on Task Scheduler - is it possible to overcome this?
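
    If the policy is enforced by disabling the Schedule service itself, an Administrator can sometimes re-enable it straight from the registry. A minimal sketch, assuming an elevated prompt, with the caveat that Group Policy may simply reapply the setting on its next refresh (Start=2 means Automatic; a reboot may be needed before the service will start):

        sc query Schedule
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Schedule" /v Start /t REG_DWORD /d 2 /f
        sc start Schedule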

    Read the article

  • Any reason not to disable the Windows pagefile given enough physical RAM?

    - by Evgeny
    The question of disabling the Windows pagefile has already been discussed quite a bit, for example here and here and here. People continue to upvote answers that say "you should not disable your pagefile even if you have plenty of RAM", but I have yet to see any concrete, verifiable reasons given for this advice. As far as I can see, if you never need to read from the pagefile (because you have enough RAM), then performance could only be worse with it enabled, due to Windows pre-emptively writing to it; at best, performance would be the same. I can't see how it could possibly be improved by writing data you never need to read.

    So my question is: assuming that I have enough physical RAM for everything I do, is there any reason I should not disable the pagefile? Let's say the version of Windows is Windows XP x64 SP2 or Windows Server 2003 x64 SP2 (same thing). If it's different for Windows Server 2008 x64, I'd be interested to hear an answer for that as well. I'm looking for specific, objective reasons from good sources, not just opinions. Something like "here are the benchmarks done with and without a pagefile and the results were better with a pagefile, even with enough RAM" or "according to this MS KB article, problem X occurs if you disable the pagefile". So far the only reasons I've seen mentioned are:

    - Even if you think you have enough RAM you might run out. OK, but for the purposes of this question, let's just take it as a given that I have enough. Maybe I only ever read my email and I have 16GB RAM. Or 128GB. Or 1TB. Whatever - it's enough for 100% of what I do, 100% of the time. Another way to think of it: if I have x MB physical RAM and y MB pagefile and I never run out of RAM in that configuration, would I not be better off, performance-wise, with x+y MB physical RAM and no pagefile?

    - Windows is "used to" having a paging file and it might not function as reliably (from Understanding the Impact of RAM on Overall System Performance). That's rather vague and I find it hard to believe, given that MS has provided the option to disable the pagefile.

    - Windows knows what it's doing better than you. No - it doesn't know that I won't run more programs or load more data, but I do.

    Read the article

  • Performing an Audit for SQL Server 2008

    - by Nai
    Hi all, do you guys have any good step-by-step links for performing a SQL Server 2008 performance audit? I know Brad McGehee has written extensively on this, but for SQL Server 2005, over at http://www.sql-server-performance.com. Are there any such articles for SQL Server 2008? Thanks!
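
    Not a step-by-step audit, but a common starting point on SQL Server 2008 is the wait statistics DMV, which shows what the instance has spent its time waiting on since the last restart. A minimal sketch (the excluded wait types are illustrative, not an exhaustive benign-waits list):

        -- top waits since last restart; reset with DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)
        SELECT TOP (10) wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;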

    Read the article

  • Increase performance of virtual machines on the 27" iMac

    - by evan
    I'm using a 27" iMac (i7, 8GB RAM) at work and normally run two or three virtual machines at the same time, which hurts the performance of each virtual machine. I've learned on these forums the best way to increase virtual machine performance (aside from RAM) is to have them running on a separate hard drive from the one the OS is on. Of course with the iMac you can only have one hard drive and not even an SAS or solid state drive (well you could probably take it apart and put one in yourself but I wouldn't be permitted to do that). That being said, do you think it would help to run one or more virtual machines from a firewire external drive (or a usb 2.0)? Thanks for your input!

    Read the article

  • IIS httpTracing setting has no effect

    - by digahill
    I'm trying to troubleshoot some performance issues we are having on a specific ASP.NET page with Microsoft's Perfecto Tool on IIS 7.5. Perfecto uses the ETW hooks built into IIS to report on specific HTTP requests, and is working quite well. However, I only want IIS to emit traces for one specific page, say "Default.aspx" in my TestApp web application. Following the instructions on the httpTracing documentation page, I should be able to add the traceUrls element to the root web.config file for TestApp. This doesn't seem to affect tracing whatsoever. For example, I've used the following settings in the web.config file (in the system.webServer section), and every request that hits the IIS server still sends tracing messages that are in turn picked up by Perfecto:

        <httpTracing>
          <traceUrls>
            <add value="/Default.aspx" />
          </traceUrls>
        </httpTracing>

    I then found that the applicationHost.config file on the server had an empty element. I tried removing this element, as well as the httpTracing element in the web.config. After a machine reboot, I was still getting tracing messages! My understanding is that the presence of the httpTracing element is what controls whether ETW tracing is on or not. I ensured there was no reference to httpTracing in the machine.config, too. At a loss, I decided to remove the IIS Tracing feature with Server Manager. After a reboot, I no longer got ETW tracing. I then reinstalled the IIS Tracing feature with Server Manager. As expected, the httpTracing element reappeared in the applicationHost.config file, and tracing messages began sending again for all sites and pages. I then tried to use the traceUrls element at the applicationHost.config level. This also didn't filter out any traces. I must be misunderstanding something key about how httpTracing works, and there aren't many resources on the web to help me. Can anyone tell me if what I'm trying should work? Has anyone else had success filtering tracing messages per page with traceUrls? I should note that I also tried changing the following setting in applicationHost.config to "Allow"; it didn't seem to help:

        <section name="httpTracing" overrideModeDefault="Allow" />
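
    One hedged way to see which configuration level is actually winning is to dump the effective httpTracing section with the stock appcmd tool (site and application names are taken from the question):

        %windir%\system32\inetsrv\appcmd.exe list config "Default Web Site/TestApp" /section:system.webServer/httpTracing

    If the traceUrls entries you added don't appear in the effective configuration, an override or lock at the applicationHost.config level is likely swallowing them.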

    Read the article

  • MD3200i Slow Performance and Queue Depth

    - by Caleb_S
    Read performance on our SAN is slow under certain workloads. When we compare it to some local storage, we find the local storage performing 2x as fast. The SAN performs well with a high queue depth and poorly with a low queue depth, whereas the local storage performs well with a low queue depth. I'd like to know the reason for this and find out what the specific limiting factor is in this situation.

    MD3200i iSCSI SAN ($15,000)
    6 x 600GB 15k SAS RAID5
    6 x 2TB 7.2k NLS RAID5

    XCOPY /j benchmark (slow):
    15k array - 71MB/s (queue depth 1)
    7.2k array - 71MB/s (queue depth 1)

    Robocopy /MT:32 benchmark (fast):
    15k array - 171MB/s (queue depth ~12)
    7.2k array - 128MB/s (queue depth ~12)

    Read performance on a local controller is fast under the workload the SAN is slow at:

    HighPoint 2230 RAID Controller ($600)
    4 x 1TB 7.2k SATA RAID5

    XCOPY /j benchmark:
    7.2k array - 145MB/s (queue depth 1) (appears to max out the SATA bus)

    Read the article

  • Cassandra on heterogeneous servers

    - by happy-coding
    I am currently running 4 Cassandra nodes in an Apache Cassandra cluster, each with the following hardware: AMD Athlon 64 X2 6000+, 8GB RAM, 750GB hard disk. The cluster shows mediocre write performance and really bad read performance, sometimes with timeouts. I was wondering if it makes sense to add 2 nodes with different hardware (8 CPUs and more RAM) to improve this, or does a Cassandra cluster work best with the same hardware in every node? Thanks & best regards

    Read the article

  • TCP/IP performance tuning under KVM/Qemu

    - by vpetersson
    With more and more companies switching to public cloud services, I'm curious what you guys' thoughts are on TCP/IP tuning in the cloud. Is it worth bothering with? Given that you don't have access to the host server, you're somewhat limited, I presume. Let's say for the sake of argument that you're running three MongoDB servers in a replica set on FreeBSD or Linux that all sync over an internal network. I'd also be curious if anyone has made actual performance benchmarks to back up their arguments. I benchmarked the various network drivers available for KVM/Qemu here, but I'm curious what the gurus here suggest to tune further. I started playing around a bit with the tuning recommendations suggested over here, but interestingly enough I saw a decrease in performance rather than an increase; perhaps I didn't fully understand the tweaks. Update: I did a few more benchmarks and posted the result here. Unfortunately the result wasn't really what I expected.
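
    For the Linux guests, a hedged sketch of the socket-buffer knobs most tuning guides start with (the values are illustrative, FreeBSD uses different sysctls, and as the benchmarks above suggest, the effect needs to be measured rather than assumed):

        # /etc/sysctl.conf - apply with sysctl -p
        net.core.rmem_max = 16777216
        net.core.wmem_max = 16777216
        net.ipv4.tcp_rmem = 4096 87380 16777216
        net.ipv4.tcp_wmem = 4096 65536 16777216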

    Read the article

  • A load balancing scenario using HAProxy and keepalived shows no performance advantage

    - by chakoshi
    Hi, I am trying to set up a load-balanced web server scenario using two HAProxy load balancers and two Debian web servers, following this guide: http://www.howtoforge.com/setting-up-a-high-availability-load-balancer-with-haproxy-keepalived-on-debian-lenny. The setup is working, but the results of simple performance benchmarking are not what I expected. I used the Apache benchmark tool to send lots of requests to the servers (one time directly against one of the web servers, the other time through the load balancer) with the command "ab -n 1000000 -c 500 http://IP/index.html", but the test results show better performance for the single server without the load balancer. Can anyone tell me if I'm doing something wrong?
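
    One thing worth noting: ab against a small static file mostly measures the extra network hop and proxy overhead, so a single backend can legitimately look faster; the balancer pays off in capacity and failover, not single-file latency. For reference, a minimal sketch of the relevant HAProxy section (IPs are hypothetical, and a defaults section with mode http is assumed):

        listen web
            bind *:80
            balance roundrobin
            server web1 192.168.0.101:80 check
            server web2 192.168.0.102:80 check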

    Read the article

  • iSCSI SAN RAID 10 Performance -- Poor Read, Good Write

    - by Litzner
    I have an EqualLogic PS4000 SAN unit with the latest firmware, set up in RAID 10. I have 3 2TB volumes on the SAN, shared out via iSCSI on 2 Ethernet ports on two different subnets. I have moved a test server over to this newly set up SAN, and my testing is showing a problem: I am getting dismal read performance in everything except a test with a queue depth of 32 (see attached image). Write performance seems to be right about where it should be. I have tried MPIO on and off; on was slightly better, but not by much.

    Read the article

  • Addons which actually make Firefox run faster?

    - by Zombies
    I would like to know of addons that actually enhance Firefox's performance, whether intentionally or unintentionally. I find that Firefox tends to have major performance issues with certain websites. These websites tend to have a fair amount of JavaScript and CSS, and probably a large DOM tree that may even be growing dynamically through JavaScript. The worst offenders are those with heavy JavaScript, heavy Facebook integration, non-performant or excessive JavaScript, and too many advertisements.

    Read the article

  • What is the best way to secure MySQL data on a laptop *without* whole-disk-encryption?

    - by GJ
    I need to have the MySQL data on my laptop stored in an encrypted state, so that in case the laptop is lost or stolen it will be extremely difficult to recover the data without the password. I don't wish to use whole-disk encryption, due to the performance impact it would have on other disk-intensive programs. What could be the ideal solution for me, balancing security and performance? Thanks!
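
    One option is to encrypt only the MySQL data directory, so that only MySQL's I/O pays the encryption cost. A sketch assuming a Linux laptop and a spare partition (device names are illustrative; an encrypted file container works the same way):

        cryptsetup luksFormat /dev/sda3           # one-time: create the encrypted volume
        cryptsetup luksOpen /dev/sda3 mysqldata   # prompts for the passphrase
        mkfs.ext4 /dev/mapper/mysqldata           # one-time: create a filesystem
        mount /dev/mapper/mysqldata /var/lib/mysql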

    Read the article

  • Improving Windows Authentication performance on IIS

    - by flalar
    We're struggling with performance issues on an ASP.NET MVC site that is using Windows Authentication. Response time is very slow on the first request to the site, when the user is being authenticated. Further, every time the Authorization header is sent from the browser, the response time increases by many seconds. The same issue occurs for both executed files and static content like CSS and JS. Access to the application is restricted to users within a certain role, and we are now planning to allow access to static files for all authenticated users to see if that helps. The authentication method in use is NTLM. How should we go forward in pinpointing why authentication decreases performance so drastically?
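
    For the "static files for all authenticated users" experiment, a hedged sketch of a web.config fragment (the folder name is hypothetical; under the IIS 7 integrated pipeline, authorization for static files may also need the system.webServer equivalent):

        <location path="Content">
          <system.web>
            <authorization>
              <deny users="?" />
              <allow users="*" />
            </authorization>
          </system.web>
        </location>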

    Read the article

  • nginx: URL rewrites and performance

    - by j0nes
    I have a website where I need to change the URL structure. The old URLs look like /olddir/part1_de.htm; the new ones will look like /newdir/sub/category/anotherpage.htm. There are a lot of URL rewrites I need to do; I assume about 500 distinct rewrites in the end. As my website gets quite a lot of traffic, my main concern at the moment is performance. My questions are: I assume that for each request, the rewrites block will be parsed and the regex evaluated. Am I right? Will there be a performance penalty if I use these rewrites? Can nginx handle this? Are there any "best practices" to follow when doing a lot of rewrites?
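
    For a large, fixed set of one-to-one rewrites, a common nginx pattern is a map, which is a single hash lookup per request instead of hundreds of regexes evaluated in order. A sketch using the URLs from the question (the map block belongs in the http context):

        map $uri $new_uri {
            default               "";
            /olddir/part1_de.htm  /newdir/sub/category/anotherpage.htm;
            # ...remaining ~500 entries...
        }

        server {
            # listen, root, etc.
            if ($new_uri) {
                return 301 $new_uri;
            }
        }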

    Read the article

  • SSD Performance for PHP?

    - by Andrew Fashion
    My programmer just built an application with PHP using the Doctrine ORM (it will be a high-traffic social networking website), and it's very heavy on PHP/Apache and CPU. The queries are wonderfully fast and MySQL is barely using any CPU; it's just Apache. I was curious whether an SSD would help speed up PHP/Apache, because I know the bottleneck is PHP reading multiple files, class files, and loading up a bunch of data. So common sense makes me think that if PHP is reading multiple PHP files, an SSD would only help as far as read/write goes? I was thinking of using a high-performance SSD for the PHP application, but for user image uploads I would just continue using a 15k SAS drive. Are there any performance issues with using an SSD in this kind of situation? And would it prove to help speed up PHP/Apache and help with the CPU problem?
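
    Before buying hardware, it may be worth checking whether an opcode cache removes the file-reading bottleneck entirely, since cached bytecode is served from memory rather than disk. A sketch of php.ini settings, assuming APC is installed (apc.stat = 0 means the server must be restarted after code deploys):

        ; php.ini - a hedged starting point, not tuned values
        apc.enabled = 1
        apc.stat = 0                ; skip per-request stat() calls on cached files
        realpath_cache_size = 256k
        realpath_cache_ttl = 600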

    Read the article

  • How to Check the Performance of a VPS?

    - by Ngu Soon Hui
    I have subscribed to a VPS offered by a hosting provider. The guaranteed specs are 1GB RAM and 1M bandwidth. But I found that from time to time the websites can be very slow, so slow that it can take more than 30 seconds to load a simple Joomla website. However, the website resumes its usual speed after a few minutes. This creates a problem when I want to report the performance problem to the hosting provider: they say "see, no problem", and of course there is no problem, because the problem is only there for a few minutes and then everything is back to normal. This occasionally-slow problem will bug me again a few days later, and the cycle repeats. Any idea how to solve this?
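
    One way to turn "it was slow a few minutes ago" into evidence the provider can't wave away is to log system stats continuously and extract the slow windows afterwards. A minimal sketch assuming a Linux VPS and cron (the sysstat package's sar does the same job more neatly):

        # crontab -e: append a timestamped snapshot every minute
        * * * * * (date; uptime; vmstat 1 5; free -m) >> /var/log/vps-perf.log 2>&1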

    Read the article

  • Are there any known issues with Windows 8 installed on a VHD?

    - by Richard
    I installed the Windows 8 preview on a VHD image, and it seemed to work until I actually started using it. I'm seeing terrible performance: installing anything makes everything else "stutter" or freeze for up to a couple of seconds at a time. I looked up hard disk performance in the Task Manager and this is what I found: it doesn't seem right that it has a 2500ms response time while reading/writing at those speeds. Is this an issue with my drive, my installation, or VHDs in general?

    Read the article

  • Which type of motherboard should I buy and why?

    - by metal gear solid
    If budget is not a concern, I just need the best performance with the least power consumption. I can purchase any case, power supply and motherboard. Does the power supply have any relation to the form factor? Are the size of the motherboard and the number of slots the only differences between the form factors? Is there any difference related to the performance of the motherboard? Is a bigger (ATX) motherboard always better?

    Read the article

  • SOAP client call has slow performance

    - by Alon_A
    The OS is CentOS 6.2 with PHP 5.3.15. We have a Facebook application that uses PHP SOAP web services. We sometimes experience slow performance when connecting to these services, but we can't understand exactly what is causing the problem. We've tried to analyse the behavior using the profiling tool KCachegrind. Here is a call graph from the index.php page that took 21 seconds to load; you can clearly see that calling the SOAP client is the bottleneck. I've also noticed that exactly before the page finishes loading, this file is created in our server's /tmp folder: wsdl-apache-d1032d85dfd16c0d91a6b70facc70e43. These are the permissions of /tmp: drwxrwxrwt 6 root root 40960 Aug 30 10:39 tmp. I know it's not the most specific question, but if anyone has had similar performance issues with a SOAP client, we would love some ideas about what can cause this kind of problem, what we can do to investigate more accurately, or how to overcome it. Thanks.
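
    The wsdl-apache-* file in /tmp is PHP's on-disk WSDL cache, so one hedged guess is that the remote WSDL is being re-fetched on the slow loads. A sketch of forcing disk caching and bounding the connect time (the service URL is illustrative):

        <?php
        // ensure the WSDL cache is actually used, and keep entries for a day
        ini_set('soap.wsdl_cache_enabled', '1');
        ini_set('soap.wsdl_cache_ttl', '86400');
        $client = new SoapClient('http://example.com/service?wsdl', array(
            'cache_wsdl'         => WSDL_CACHE_DISK,  // cache the parsed WSDL on disk
            'connection_timeout' => 5,                // fail fast instead of hanging
        ));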

    Read the article

  • RPC for Java/Python with REST support, HTML monitoring and goodies

    - by Ran
    Here's my set of requirements. I'm looking for an RPC framework such as Thrift, Avro or Protobuf (when adding services to it) which supports:

    - An easy and intuitive IDL. No serial numbers, no manual versioning, simple... Avro is a good example of this.
    - Works with Java and Python.
    - Supports both a fast binary protocol and an HTTP-based RESTful style. I'd like to be able to use it for backend-to-backend communication (Java-Java or Python-Java) as well as frontend-to-backend communication (JavaScript to Java). The REST support needs to include &param=value input as GET/POST requests (configurable per request) and output in three possible formats: JSON, JSONP, XML.
    - Compact, fast, backward compatible, easy to upgrade, etc.
    - Provides some nice monitoring interfaces, such as JMX or web page status reports (e.g. packets in, packets out, error rate, etc.)
    - Ops friendly... no need to take the whole site down to release new versions.
    - Both sync and async communication.
    - ...other goodies are welcome.

    Is there something out there? So far I've looked at Thrift and Avro, and they are both nice in some ways, but don't check everything on my list. Thanks
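
    To illustrate the "no serial numbers" point from the list above, a minimal Avro IDL sketch (the service and record are hypothetical) - fields are matched by name during schema resolution rather than by hand-assigned tag numbers:

        @namespace("com.example")
        protocol UserService {
          record User {
            long id;
            string name;
          }
          User getUser(long id);
        }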

    Read the article

  • JavaCard monitoring folder

    - by GxG
    I want to write a two-way application: an applet for JavaCard and an application in C#. I've got the C# covered, but I want to know whether with JavaCard I can monitor a folder on the memory, and how I would go about doing that. I have a shared folder, let's call it temp, in which I want to store buffer information between the simulated smartcard and the C# application. The C# application will only read from that folder and display the information, but it will also write requests towards the smartcard. For example, I simulate entering the PIN for the card. The applet will write a file containing the available options, and the C# application will read that file and display those options; from the C# app I will choose an option and write a request file in the same folder. This is when the smartcard, which is monitoring that folder, will read the request and issue a response. Can I make the smartcard monitor that folder? I was thinking of using encrypted XML files for the request/response operations, but simple .txt files are good too. I am limited to using JavaCard v2.2.1, and every operation has to be encrypted/decrypted (with the ciphering I have no problem).

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan-caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for spills to tempdb and hence poor performance. Common memory-allocating queries are those that perform a Sort or a Hash Match operation such as Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples; it is recommended to read Plan Caching and Query Memory Part I first, which covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill to tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation and overestimation of memory for Hash Match operations; Plan Caching and Query Memory Part I covers the same for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to spills to tempdb and hence negatively impacts performance, while overestimation affects the memory available to other concurrently executing queries. In addition, with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote, click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warnings, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which selects 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB of memory based on an estimate of 8000 rows. The estimated number of rows, 8000, matches the actual number of rows, 8000, so the memory estimation is fine. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'NY', which selects 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete, again granted 6704 KB based on an estimate of 8000 rows. The estimated number of rows, 8000, is far from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which was 'WA' in our case. This underestimation leads to a spill to tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle), because of the locking issues involved with sp_recompile - refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms, having been granted 146752 KB of memory. The estimated number of rows, 792000, matches the actual number of rows, 792000. The better performance of this execution is due to the better memory estimate, which avoids the spill to tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. It was granted more memory (146752 KB) than necessary (6704 KB), because the estimate (792000 rows instead of 8000) was based on the first set of parameter values supplied after the recompile, which was 'NY' in this case. This overestimation hurts the performance of the Hash Match operation itself, and it can also affect other concurrently executing queries that need memory, so overestimation is not recommended. The estimated number of rows, 792000, is much larger than the actual number of rows, 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another option is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has a good estimate as before: 8000 estimated rows against 8000 actual rows. The execution with parameter value 'NY' also has a good estimate and memory grant, because the stored procedure was recompiled with the current set of parameter values: 792000 estimated rows against 792000 actual rows. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with an optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
            declare @CustomerID int
            select @CustomerID = e.CustomerID
            from Customers e
            inner join CustomersState es on (e.CustomerID = es.CustomerID)
            where es.State = @State
            option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA', much slower than the optimal execution time of 294 ms we observed previously, because of the overestimation of memory: the optimize for hint value of 'NY' causes 792000 rows to be estimated where only 8000 are actual, so more memory than necessary is granted. The execution with parameter value 'NY' has the optimal execution time as before, with a good estimate (792000 estimated against 792000 actual rows) and an optimal memory grant, again because of the optimize for hint value of 'NY'. There was no Hash Warning in SQL Profiler.

    Summary: a cached plan can lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this with the recompile hint, at the cost of compilation overhead; however, in most cases it is better to pay for compilation than to spill to tempdb, which can be far more expensive. The other option is the optimize for hint, but if a query processes more data than the hint implies, it will still spill; on the other side, overestimation can create unnecessary memory pressure for other concurrently executing queries and, for Hash Match operations, poor performance. When the values used in the optimize for hint have since been archived from the database, the estimate will be wrong and performance will be at its worst, so exercise caution before using the optimize for hint; the recompile hint is better in that case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3-Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article
