Search Results

Search found 18148 results on 726 pages for 'performance monitor'.

Page 5 of 726

  • Intel graphics Ubuntu 12.04.3 LTS: does not detect second monitor

    - by user206551
    I'm having trouble getting a second monitor working on my Ubuntu 12.04.3 LTS: clicking the Detect button does nothing. Info about my system:

        $ uname -a
        Linux LabTop2 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:22:28 UTC 2013 i686 i686 i386 GNU/Linux

        $ cat /etc/*-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.3 LTS"
        NAME="Ubuntu"
        VERSION="12.04.3 LTS, Precise Pangolin"
        ID=ubuntu
        ID_LIKE=debian
        PRETTY_NAME="Ubuntu precise (12.04.3 LTS)"
        VERSION_ID="12.04"

        $ lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 3rd Gen Core processor Graphics Controller (rev 09)

        $ lsmod | grep video
        uvcvideo               72250  0
        videobuf2_core         39385  1 uvcvideo
        videodev               96131  2 uvcvideo,videobuf2_core
        videobuf2_vmalloc      12920  1 uvcvideo
        videobuf2_memops       13042  1 videobuf2_vmalloc
        video                  19116  1 i915

        $ xrandr -q
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 1366 x 768, current 1368 x 768, maximum 1368 x 768
        default connected 1368x768+0+0 0mm x 0mm
           1366x768       0.0
           1368x768       0.0*

    Before upgrading the system, xrandr -q showed many more resolution options and the other monitor. I have tried to install intel-linux-graphics-installer, but this version of Ubuntu is not supported. Any help will be appreciated!
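
    A plausible first check (an editor's sketch, not from the original thread; the package name and log path are the Ubuntu defaults): a lone "default" output in xrandr usually means X fell back to an unaccelerated driver rather than the Intel one, which would also explain the missing resolutions.

        # Which display driver did X actually load?
        grep -E '(II|EE).*(intel|fbdev|vesa)' /var/log/Xorg.0.log

        # Reinstall the Intel DDX driver, then restart X
        sudo apt-get install --reinstall xserver-xorg-video-intel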

    Read the article

  • Monitor blinking

    - by Shark
    Hi, I have Ubuntu 10.04 on an ASUS K50IJ laptop: Intel Duo T3000 CPU, 2GB RAM, 320GB HDD, Intel GMA 4500M video card. My problem is that a couple of days ago my monitor started blinking for a few seconds at a time; it stops, then shows the same behaviour again, say, two hours later. What am I missing here? Another odd symptom: when I turn the monitor brightness up or down, a pop-up window appears saying "Power information: Laptop battery is charged", and the lower the brightness, the more the blinking increases (and vice versa). Thanks for the help.

    Read the article

  • 12.04 boots fine, with graphical splash screen, but then Monitor "out of range"

    - by Jim Bednar
    I see dozens of posts from people whose monitors say "out of range" under Ubuntu; it seems like there are some serious problems in Ubuntu with autodetection of monitor capabilities. :-( But none of the many, many suggestions I found has solved my problem, and right now I can't use anything graphical on this machine! History: I installed Ubuntu 12.04 on my HP ProLiant MicroServer N40L, which worked reasonably at the default resolution across several reboots. At some point I noticed that the proprietary video driver was not in use, and tried to install one to get better window-drawing speeds, but it failed with some sort of error and I gave up on that. A few weeks later, when I next rebooted, it showed the usual BIOS screen and the various boot loading screens (including GRUB), and then the usual purple Ubuntu splash screen with the dots showing that things were loading; but when it finished booting, the monitor went black and eventually showed "Out of range" (with no other information). Given that there were several weeks between reboots (it's a server, after all), I've no idea whether it was a system update, the attempt to install the proprietary drivers, or something else that caused the problem. Anyway, the system has booted fine, as I can do Ctrl-Alt-F1 to get a text prompt and can log in there, but Ctrl-Alt-F7 goes back to the out-of-range error. Some posters said to try Ctrl-Alt-- (minus) to cycle through resolutions until one works, but that had no visible effect. Many others said it was a GRUB problem, which seems unlikely given that GRUB's screen looks fine, but I tried editing /etc/default/grub to set a particular resolution (trying many of them) and running update-grub, with no apparent effect. Rebooting into failsafe mode works the same as regular mode. Replacing xorg.conf with xorg.conf.failsafe works the same too. I'm at my wits' end! Isn't there anything I can do to convince Ubuntu to choose a mode the monitor supports, e.g. the one it is using for the splash screen? I don't need great resolution on this machine, just anything that works! Help, please!
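
    One avenue worth checking (an editor's sketch, not advice from the original thread): a half-installed proprietary driver can leave packages behind that break mode selection even in failsafe mode. From the Ctrl-Alt-F1 console:

        # See which modes X validated or rejected on its last start
        grep -iE 'out of range|modeline|EDID' /var/log/Xorg.0.log

        # Purge whatever the failed proprietary-driver install left behind
        # ('fglrx*' is an example -- substitute 'nvidia*' if that was the attempt)
        sudo apt-get purge 'fglrx*'
        sudo service lightdm restart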

    Read the article

  • Monitor displays "No VGA signal" "Check DVI cable" after installing motherboard drivers

    - by user1604220
    I bought a computer with all of the needed parts and got everything sorted out, installing Windows 7, but my CD-ROM doesn't fit my motherboard, so I had to download the drivers manually and install them from a USB disk. My motherboard is an ECS Elitegroup H61H2-M2. I downloaded its drivers and installed the BIOS map or something, and the computer forced a reboot once that completed. After the reboot, my monitor just stopped working and displayed "No VGA signal", "No DVI cable". I think I've just installed the wrong driver, but how can I sort it out? This is the driver I installed. The manual tells me to install the drivers; without them it won't see the Internet connection cable. I'm 100% sure there is nothing wrong with the monitor or its cables: it stopped working exactly after the post-installation reboot.

    Read the article

  • Dual monitor setup with Intel graphics and Nvidia Geforce GT 425M on 12.04

    - by fo_x86
    I have a Dell XPS L401x and just installed 12.04. I have a Mini DisplayPort and an HDMI output, and under Windows 7 I could hook a Dell UltraSharp monitor up to each port and get a triple-monitor setup. Doing a bit of research, it seems the Mini DisplayPort is wired to the integrated graphics card, whereas the HDMI port is hooked up to the Nvidia graphics card. I've also installed Bumblebee, as that seemed to be the proper way of installing Nvidia drivers on Ubuntu. At the moment neither of my external monitors is recognized by Ubuntu. Is it even possible to have a triple-monitor (laptop display + 2 external) setup? Has anyone successfully done this?
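
    A quick sanity check (an editor's sketch, not from the original post): confirm both GPUs are visible and that Bumblebee's on-demand rendering works at all. Note that Bumblebee of this era renders on the Nvidia card but generally cannot drive displays attached to it, which may be why outputs wired to it stay dark.

        # Both GPUs should show up on the bus
        lspci | grep -E 'VGA|3D'

        # Verify on-demand rendering on the Nvidia card (needs mesa-utils)
        optirun glxgears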

    Read the article

  • 13.04 - External monitor plugged into laptop HDMI shows image but lags a lot

    - by Thiago Fassina
    My laptop uses an Nvidia GeForce GTS 250M. When I plug in the monitor, it shows an image, but with a bad frame rate and a lot of lag: for example, when I move the mouse pointer, it leaves a trail that vanishes from time to time. I didn't install the Nvidia drivers, because every time I did that on past versions I ended up with an unbootable system. On past versions I had just given up trying to use the monitor, because it never showed anything regardless of which driver I installed; but since 13.04 ALMOST made it work, I'll give it another try. P.S.: It works perfectly on Windows 7/8.
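
    A low-risk first check (an editor's sketch, not from the original post): confirm which driver is actually driving the GPU, since the open-source nouveau driver is the usual default here and unaccelerated rendering would explain the mouse trails.

        # Which kernel driver is bound to the GPU?
        lspci -k | grep -iA3 vga

        # Which OpenGL renderer is in use? (needs mesa-utils)
        glxinfo | grep -i 'renderer string'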

    Read the article

  • How do I set the correct monitor resolution with Nvidia drivers for a monitor that does not send EDID?

    - by Torben Gundtofte-Bruun
    I keep having trouble getting the correct monitor resolution - every time I reinstall, I happen to use a newer Ubuntu release and the old tricks I used to know no longer work. Instead of leaving a long trail of questions for every new release, I am looking for a more universal and timeless solution. What's the correct way to set the correct monitor resolution with an Nvidia GPU for a screen that does not send EDID values? Note: This is a "dummy" question -- with the help from the chat, I already found the answer, and I am now going to add my own answer to document a solution that is hopefully universal.
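
    For reference, the usual shape of the answer (a sketch under stated assumptions: the identifiers are arbitrary, and the sync ranges below are placeholders that must be replaced with the values from the monitor's data sheet) is to tell the Nvidia driver to stop requiring EDID and to supply the monitor's limits by hand in /etc/X11/xorg.conf:

        Section "Monitor"
            Identifier  "Monitor0"
            HorizSync   30.0 - 83.0     # example range -- use your monitor's specs
            VertRefresh 56.0 - 76.0     # example range -- use your monitor's specs
        EndSection

        Section "Device"
            Identifier  "Device0"
            Driver      "nvidia"
            Option      "UseEDID" "False"
            Option      "ModeValidation" "NoEdidModes"
        EndSection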

    Read the article

  • Monitor not turning off when screen saver is on

    - by Elad92
    I installed Ubuntu for the first time 3 weeks ago and I really love it. I have Ubuntu 12.04. My only problem is that when the screen saver is on, it just shows a blank screen without turning off the monitor, so my monitor is always on. I have a Samsung E2220 22" monitor. Looking around the web, I found this command:

        sudo xset dpms force off

    This command shuts down my screen, so I don't see a reason why the screen saver can't shut it down too. Thanks a lot! Elad
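
    Since xset can force the screen off, the DPMS timeouts are the natural thing to inspect (a sketch; the three timeout values are examples, in seconds):

        # Show the current DPMS state and timeouts
        xset q

        # Enable DPMS and turn the display to standby/suspend/off
        # after 5, 10, and 15 minutes of idle time
        xset +dpms
        xset dpms 300 600 900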

    Read the article

  • Monitor not recognised after apt-get upgrade on 11.10 - maximum 1024x768 resolution

    - by Josh
    I did an apt-get upgrade through Update Manager on 11.10. After restarting, my VGA monitor is no longer recognised and I can only get a maximum of 1024x768 on a 1440x900 monitor. I can't find xorg.conf, so I am pretty lost! xrandr output:

        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768       60.0*
           800x600        60.3     56.2
           848x480        60.0
           640x480        59.9
        DVI-0 disconnected (normal left inverted right x axis y axis)
        S-video disconnected (normal left inverted right x axis y axis)
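
    When EDID detection fails like this, the usual workaround is to add the missing mode by hand (a hedged sketch: the modeline is what cvt generates for 1440x900 at 60 Hz, and VGA-0 is the output name from the xrandr output above):

        # Generate a modeline for the panel's native resolution
        cvt 1440 900 60

        # Register the mode, attach it to the VGA output, and switch to it
        xrandr --newmode "1440x900_60.00" 106.50 1440 1528 1672 1904 900 903 909 934 -hsync +vsync
        xrandr --addmode VGA-0 "1440x900_60.00"
        xrandr --output VGA-0 --mode "1440x900_60.00"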

    Read the article

  • Application Performance: The Best of the Web

    - by Michaela Murray
    Wisdom: "A deep understanding and realization […] resulting in the ability to apply perceptions, judgements and actions. It is also the comprehension of what is true coupled with optimum judgment as to action." - Wikipedia

    We're writing a book for ASP.NET developers, and we want you to be a part of it. We know that there's a huge amount of web developer wisdom that never gets shared, and we want to find those golden nuggets of knowledge and experience, and make sure everyone can learn from them. Right now, we want to find out about your top tips, hard-won lessons, and sage advice for avoiding, finding, and fixing application performance problems. If you work with .NET and SQL, even better – a lot of application performance relies on the interaction with the database, so we want to hear from you!

    "How Do You Want Me To Be Involved?" Right! Details! We want you, our most excellent readers, to email us with the best advice you would give to other developers for getting the best performance out of their applications. It doesn't matter if your advice is for newbies or veterans, .NET or SQL – so long as it's about application performance, we want to hear from you. (And if you think that there's developer wisdom out there that "everyone knows": a) I'm willing to bet you could find someone who doesn't know it, and b) it probably bears repeating anyway!)

    "I'm Interested. What Can You Do For Me?" Excellent question. For starters, there's a chance to win a Microsoft Surface (the tablet, not the table-top). Once all the ASP.NET wisdom has been collected, tallied, and labelled, it will be weighed and measured by a team of expert judges (whose identities are still a closely-guarded secret). The top tip in each of the SQL and .NET categories will win its author their very own MS Surface. But that's not all! We can also give you… immortality! More details? OK. We'll be collecting all of the tips sent in by our readers (and we can't wait to learn from you all), and with the help of our Simple-Talk editors, we will publish and distribute your combined and documented knowledge as a free, community-created, professionally typeset eBook. You will naturally be credited by name / pseudonym / twitter handle / GitHub username / StackOverflow profile / whatever, as the clearly ingenious author of hot performance tips.

    The Not-Very-Fine Print. Here's the breakdown:
    - We want to bring together the best application performance knowledge from ASP.NET developers.
    - Closing date for submissions will be 9am GMT, December 4th.
    - Submissions should be made by email – [email protected]
    - Submissions will be judged by a panel of expert judges (who will be revealed soon).
    - The top submission in each of the SQL & .NET categories will win a Microsoft Surface.
    - ALL the tips which make it through the judging process will be polished by Simple-Talk editors and turned into a professionally typeset eBook, freely available and promoted alongside the ANTS Performance Profiler tool.
    - Anyone whose entry makes it into the book will be clearly and profusely credited in the method of their choice (or can remain anonymous).

    The really REALLY short version: share what you know about ASP.NET application performance for a chance to win a Microsoft Surface, and then get your name credited in a slick eBook with top-notch production values. For more details, see above. We can't wait to learn from you!

    Read the article

  • Server Performance

    - by sb12
    I know very little about performance tuning of servers etc., so I thought I'd put this up here as I start some research on it, just to get some direction. I am in the process of migrating from my old server to a new one; both are 64-bit machines. One is a few years old, the other brand new (a PowerEdge R410). The old server's spec: 2 CPUs, 3.4GHz Pentiums, 8G of RAM, Fedora 11 currently installed. The new server's spec: 16 CPUs, 3.2GHz Xeons, 16G of RAM, CentOS 6.2 installed. The new server also has RAID10; there is no RAID on the old one. Both servers currently have the same MySQL database with the same data migrated. I wrote a Perl script that simply steps through each row of a table (about 18000 rows) and updates a value in that row; every row in the table is updated. Out of curiosity I ran this Perl script on both machines, just to see how the new server would perform vs. the old one, and it produced an interesting result: the old server was twice as fast as the new one. Looking at the database, both are configured exactly the same (the new one being a dump of the old one…). Anyone have any ideas why this would be, given the hardware gap between the two? As I said, I'm about to start some digging, but thought I'd put this up here to maybe get some good direction… Many thanks in advance.
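
    A first diagnostic worth trying (an editor's sketch, not from the original post): watch whether the new box is I/O-bound during the run. Row-by-row updates typically commit one transaction each (with autocommit on), so they are dominated by write flushes, and a RAID10 controller without a battery-backed write cache can easily lose to an old single disk there.

        # While the update script runs, watch the 'wa' (I/O wait) column
        vmstat 5

        # Per-device utilisation and write latency (from the sysstat package)
        iostat -x 5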

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hope that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions – and amazingly, 533 developers took the time to respond, which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you.

    First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith, Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a third-party performance profiler, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements.

    Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time-consuming. Three respondents in particular summarised this perfectly:
    - "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem"
    - "they are simple to use, but the results are hard to understand"
    - "Complex to find the more advanced things, easy to find some low hanging fruit"

    These results confirmed my suspicions: profilers are seen as designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of developers for whom performance is an important part of the dev cycle with those for whom performance is only looked at in times of crisis, and those for whom performance is not important as long as the app works. See the three graphs below:
    - Sample of developers to whom performance is an important part of the dev cycle
    - Sample of developers to whom performance is important only in times of crisis
    - Sample of developers to whom performance is not important, as long as the app works

    As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third-party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics differ dramatically from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be…

    The rules in C# when optimizing code are very different than in C or C++. Often, they're exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations can often have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;
        // ...
        return bar;

    Normally, if DataStructure was a simple data type, I could just allocate it on the stack. However, its constructor internally allocated its own memory using new, so this wouldn't eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit. In this case, after profiling the new version, I found that it increased the overall performance dramatically – my main test case went from 35 minutes runtime down to 21 minutes. This was such a significant improvement, I felt it was worth the reduction in maintainability.

    In C and C++, it's generally a good idea (for performance) to:
    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory

    I've seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable – passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it's much more likely that object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them – deep inside of the call graph, inside of the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to:
    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this tends to serve some of the same goals (reducing unnecessary allocations, etc.), the goal here is a bit different – it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable – objects are used and "released" as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now – nowhere here am I suggesting that these rules are hard, fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind – interop changes the rules.

    In this case, I was dealing with an API that, internally, used some COM objects. These COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph. Even though I was writing nice, clean managed code, the normal managed-code rules for performance no longer applied. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration. This required going back to a "native programming" mindset for optimization. Lifting these variables and reusing them took a 1:10 routine down to 0:20 – again, a very worthwhile improvement.

    Overall, the lessons here are:
    - Always profile if you suspect a performance problem – don't assume any rule is correct, or any code is efficient just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently than "normal" managed code
    - Native code can be hidden very cleverly inside of managed wrappers

    Read the article

  • How often is software speed evident in the eyes of customers?

    - by rwong
    In theory, customers should be able to feel software performance improvements from first-hand experience. In practice, the improvements are sometimes not noticeable enough, so that in order to monetize them it becomes necessary to quote performance figures in marketing to attract customers. We already know the difference between perceived performance (GUI latency, etc.) and server-side performance (machines, networks, infrastructure, etc.). How often do programmers need to go the extra length to "write up" performance analyses whose audience is not fellow programmers, but managers and customers?

    Read the article

  • No-Weld Multi-Monitor Stand Crafted From Sturdy Metal Framing

    - by Jason Fitzpatrick
    As far as DIY stands for multiple monitors go, this design has to be the sturdiest and least difficult to construct model we've seen in some time. Read on to see how one DIYer cleverly crafted a solid metal triple monitor stand with no welding involved. Tinker and gamer Opteced wanted a new stand for his Eyefinity setup but wasn't in a hurry to spend a pile of cash on a custom stand. His DIY solution is just as sturdy as a commercial metal stand but is made out of inexpensive hardware store parts – the main supports and base are made from Unistrut, a simple metal framing material. Unlike many DIY stands made from metal rods and piping, this build doesn't require any sort of welding or custom pipe threading. In fact, the metal struts are so overengineered for the task of holding up flat-panel monitors that he was able to simply partially saw through them and bend them to the shape he wanted. Hit up the link below for additional pictures of the build. Unistrut Monitor Stand [via Hack A Day]

    Read the article

  • Ubuntu 12.04 LTS, Dell D410 output to external monitor/TV with docking

    - by Gck
    I am not able to see the video output on my external monitor/TV after installing 12.04 LTS. I had the same issue with the 11.x versions too; it does not happen on 10.04 LTS. This is my setup: a Dell Latitude D410 with an Intel Mobile 915GM/910GML Express controller, in a Dell docking station, connected to my 42-inch TV with a DVI (from the dock) to HDMI (on the TV) cable. If I close the lid, sometimes I am able to see the video output on the TV, but the resolution makes the fonts so small I cannot even read them, and when I open the Displays panel to change the resolution, the output goes off completely and is shown on the laptop screen instead. One time I got the output on both screens, but I have not been able to replicate that even using the Fn+F8 key on my laptop. Back when I ran version 11.x of Ubuntu, I was able to get the output on both screens (basically mirrored) with a large-font resolution on my TV using the Monitor Test tool, as mentioned here: http://www.bigfatostrich.com/2011/05/solved-ubuntu-11-04-broken-external-display/ The problem with this approach is that I have to do it every time I restart the machine, and moreover the video output is present on both screens (mirrored), which is of no use. As stated before, this does not happen with 10.04 LTS: there, my laptop screen turns off automatically and the output is shown on the TV/external display. Can anyone share any options I can try? How can we achieve the 10.04 LTS behaviour for external displays on 12.04 LTS? Thank you very much in advance for your help.
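
    To reproduce the 10.04 behaviour (external display on, laptop panel off) by hand, xrandr is the usual tool (a sketch: the output names below are examples – run plain xrandr first to see the real names the 915GM driver assigns):

        # List outputs and the modes the TV advertises over DVI
        xrandr

        # Send the desktop to the external output only
        # (names vary: LVDS1, VGA1, TMDS-1, DVI1, ...)
        xrandr --output LVDS1 --off --output TMDS-1 --mode 1280x720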

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison

    - by pinaldave
    Earlier, I wrote about SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative. I got many emails asking for a performance analysis of paging, so here is a quick one. The real challenge of paging is all the unnecessary IO reads from the database. Network traffic is one of the reasons why paging has become a very expensive operation; I have seen many legacy applications where the complete resultset is brought back to the application and paging is done there. As you read earlier, SQL Server 2011 offers a better alternative to this age-old solution. This article has been divided into two parts:
    - Test 1: Performance comparison of two different pages using the SQL Server 2011 method. In this test, we will analyze the performance of two different pages, where one is at the beginning of the table and the other is at its end.
    - Test 2: Performance comparison of the two different pages using a CTE (the earlier solution from SQL Server 2005/2008) and the new method of SQL Server 2011. We will explore this in the next article.

    This article tackles Test 1 first.

    Test 1: Retrieving a page from two different locations in the table. Run the following T-SQL script and compare the performance:

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    You will notice that when the page is read from the beginning of the table, the database pages read are much fewer than when the page is read from the end of the table. This is very interesting: as the OFFSET changes, page IO increases or decreases accordingly. In the usual search-engine pattern, people mostly read the first few pages, which means IO grows only as users navigate to the higher page numbers. I am really impressed, because with the new method in SQL Server 2011, page IO is much lower when the first few pages are fetched.

    Test 2: Retrieving a page from two different locations in the table and comparing to earlier versions. In this test, we compare the queries of Test 1 with the earlier solution via Common Table Expression (CTE), which we used in SQL Server 2005 and SQL Server 2008.

    Test 2A: Page early in the table

        -- Test with pages early in table
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        ;WITH CTE_SalesOrderDetail AS (
            SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
            FROM Sales.SalesOrderDetail PC)
        SELECT *
        FROM CTE_SalesOrderDetail
        WHERE RowNumber >= @PageNumber*@RowsPerPage+1
        AND RowNumber <= (@PageNumber+1)*@RowsPerPage
        ORDER BY SalesOrderDetailID
        GO

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 5
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    Test 2B: Page later in the table

        -- Test with pages later in table
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        ;WITH CTE_SalesOrderDetail AS (
            SELECT *, ROW_NUMBER() OVER (ORDER BY SalesOrderDetailID) AS RowNumber
            FROM Sales.SalesOrderDetail PC)
        SELECT *
        FROM CTE_SalesOrderDetail
        WHERE RowNumber >= @PageNumber*@RowsPerPage+1
        AND RowNumber <= (@PageNumber+1)*@RowsPerPage
        ORDER BY SalesOrderDetailID
        GO

        SET STATISTICS IO ON;
        USE AdventureWorks2008R2
        GO
        DECLARE @RowsPerPage INT = 10, @PageNumber INT = 12100
        SELECT *
        FROM Sales.SalesOrderDetail
        ORDER BY SalesOrderDetailID
        OFFSET @PageNumber*@RowsPerPage ROWS
        FETCH NEXT 10 ROWS ONLY
        GO

    From the resultset it is very clear that the pages read by the earlier solution are always much higher than with the new technique introduced in SQL Server 2011, even though we don't retrieve all the data to the screen. If you look carefully at both comparisons, page IO is much lower with the new technique whether the page is read from the beginning of the table or from the end. I consider this a big improvement, as paging is one of the most-used features in most applications. The solution introduced in SQL Server 2011 is elegant because it improves the performance of the query and, at large, of the database.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Free Online Performance Tuning Event

    - by Andrew Kelly
    On June 9th 2010 I will be presenting several sessions related to performance tuning for SQL Server, and they are the best kind because they are free :). So mark your calendars. Here is the event info and URL: June 29, 2010 - 10:00 am - 3:00 pm Eastern. SQL Server is the platform for business. In this day-long free virtual event, well-known SQL Server performance expert Andrew Kelly will provide you with the tools and knowledge you need to stay on top of three key areas related to peak performance... (read more)

    Read the article

  • SQL Server – Learning SQL Server Performance: Indexing Basics – Interview of Vinod Kumar by Pinal Dave

    - by pinaldave
    Recently I wrote a blog post about Learning SQL Server Performance: Indexing Basics, and I received lots of requests to share some insight into the course. Every single time performance is discussed, indexes are mentioned along with it. In recent times, data and application complexity has been growing continuously; organizations' demands for faster query response, performance, and scalability are increasing, and developers and DBAs now need to write efficient code to achieve this. When we developed the course, we made sure it remained practical and demo-heavy instead of just theories on the subject. Vinod Kumar and I often thought about this and realized that a practical understanding of indexes is very important. One cannot master every single aspect of indexes, but there is some minimum expertise one should gain if performance is a concern. Here is a 200-second interview of Vinod Kumar I took right after completing the course. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology, Video

    Read the article

  • Understanding Performance Profiling Targets

    In this sample chapter from his upcoming book, Paul Glavich explains performance metrics and walks us through the steps needed to establish meaningful performance targets. He covers many metrics such as "time to first byte" and explains why you should add some contingency into your estimated performance requirements.

    Read the article

  • System Wide Performance Sanity Check Procedures

    - by user702295
    Do you need to boost your overall implementation performance? Do you need direction to pinpoint possible performance opportunities? Are you looking for a general performance guide? Try MOS note 69565.1. This paper describes a holistic methodology that defines a systematic approach to resolving complex application performance problems, and it has been used successfully on many critical accounts. The 'end-to-end' tuning approach encompasses the client, network, and database, and has proven far more effective than isolated tuning exercises. It has been used to define and measure targets to ensure success. Even though it was last checked for relevance on 13-Oct-2008, the procedure is still very valuable. Regards!

    Read the article

  • Building Performance Metrics into ASP.NET MVC Applications

    When you're instrumenting an ASP.NET MVC or Web API application to monitor its performance while it is running, it makes sense to use custom performance counters. There are plenty of tools available that read performance counter data, report on it, and create alerts based on it. You can then plot application metrics against all sorts of server and workstation metrics. This way, there will always be the right data to guide your tuning efforts.
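
    As a concrete example of reading such counters (a hedged sketch using a stock ASP.NET counter on the Windows command line – substitute your own custom category and counter names once they are registered):

        rem Sample the aggregate ASP.NET request rate once per second (Ctrl+C to stop)
        typeperf "\ASP.NET Applications(__Total__)\Requests/Sec" -si 1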

    Read the article

  • How can I tell which page is creating a high-CPU-load httpd process?

    - by Greg
    I have a LAMP server (a CentOS-based MediaTemple (DV) Extreme with 2GB RAM) running a customized WordPress + bbPress combination. At about 30k pageviews per day the server is starting to groan; it stumbled earlier today for about 5 minutes when there was an influx of traffic. Even under normal conditions I can see that the virtual server is sometimes at 90%+ CPU load. Using top I can often see 5-7 httpd processes that are each using 15-30% (and sometimes even 50%) CPU. Before we do a big optimization pass (our use of MySQL is probably the culprit), I would love to find the pages that are the main offenders and deal with them first. Is there a way to find out which specific requests were responsible for the most CPU-hungry httpd processes? I have found a lot of info on optimization in general, but nothing on this specific question. Secondly, I know there are a million variables, but if you have any insight on whether a site of this size should be at the limits of a single dedicated virtual server, I would love to hear your opinion: should we be thinking about moving to a more powerful server, or should we stay focused on optimizing the current one?
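
    One workable approach (an editor's sketch; the paths and status-page location are stock Apache/CentOS defaults, and mod_status must be enabled with "ExtendedStatus On" in httpd.conf):

        # 1. Catch a hungry worker: run top with full command lines and note its PID
        top -c

        # 2. The /server-status page lists, per worker PID, the request being served
        curl -s http://localhost/server-status

        # 3. As a first pass, tally the most-requested URLs from the access log
        awk '{ print $7 }' /var/log/httpd/access_log | sort | uniq -c | sort -rn | head -20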

    Read the article
