Search Results

Search found 12988 results on 520 pages for 'performance'.

  • How to improve Unity Performance?

    - by Wolter Hellmund
    I installed Ubuntu Netbook Edition on my netbook* expecting to get the best performance out of it, but that didn't turn out to be the case. Unity is a bit slow on it, and when I click on Files & Folders it takes a while to load the respective interface; the bar at the top disappears and then reloads. Is this expected? Is there anything I can do to improve the performance? Is this problem specific to my netbook? *Netbook info: Acer Aspire One, 1.6 GHz Intel Atom processor, 1 GB RAM, Intel GMA 950 graphics.

    Read the article

  • Optimus Bumblebee Performance

    - by Chance
    After installing Ubuntu 12.04 and Bumblebee, I tested the performance of the Nvidia card using Minecraft and a few other games, and it is noticeably slower than it was on Windows 7. Is this normal? I'm using the GT 630M. From what I've read online, nobody else has reported it being slower than on Windows. I'm asking because I would much rather use Linux than Windows, but the performance gap makes me hesitate. The Nvidia card is still faster than my Intel graphics on Ubuntu, but it's not as fast as it was for me on Windows: I get 60-80 fps in Minecraft on Windows, but only 28-48 fps on Ubuntu. Any ideas why? Thanks so much!

    Read the article

  • Multiple database accesses or one massive access?

    - by DudeOnRock
    Which approach is better for performance and optimal resource utilization: accessing the database multiple times through AJAX to fetch exactly the information needed, when it is needed, or performing one access that retrieves an object holding all the information that might be needed, with a high probability that much of it will go unused? I know how to benchmark the individual queries, but I don't know how to determine which approach performs better when thousands of users are accessing the database simultaneously, or how connection pooling comes into play.
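    One way to get a first-order feel for the trade-off before involving a real database is to model each AJAX call as a round trip: many small requests pay the fixed per-request cost many times, while one batched request pays it once but moves more data. The following is a minimal, hypothetical C++ sketch; the latency and transfer-cost numbers are invented for illustration, not measured, so it only demonstrates the reasoning and is no substitute for benchmarking against the actual database and connection pool.

        #include <iostream>

        int main() {
            // Hypothetical numbers, purely for illustration -- measure your own system.
            const double round_trip_ms = 5.0;   // fixed cost per request (network, parsing, pool checkout)
            const double ms_per_kb     = 0.05;  // incremental cost of transferring one more kilobyte

            const int    small_requests     = 20;   // "fetch exactly what is needed" pattern
            const double small_payload_kb   = 2.0;  // payload of each small request
            const double batched_payload_kb = 80.0; // one big object, much of it unused

            const double many_small = small_requests * (round_trip_ms + small_payload_kb * ms_per_kb);
            const double one_batch  = round_trip_ms + batched_payload_kb * ms_per_kb;

            std::cout << "many small requests: " << many_small << " ms\n";  // 20 * (5 + 0.1) = 102 ms
            std::cout << "one batched request: " << one_batch  << " ms\n";  // 5 + 4 = 9 ms
            return 0;
        }

    Under these made-up numbers the fixed per-request cost dominates once the number of round trips grows, which is why batching often wins unless the extra payload is very large or expensive to compute; only a test with concurrent users and your real pool settings can show where the crossover lies for your application.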

    Read the article

  • Design Patterns for SSIS Performance (Presentation)

    Here are the slides from my session (Design Patterns for SSIS Performance) presented at SQLBits VI in London last Friday. Slides - Design Patterns for SSIS Performance - Darren Green.pptx (86KB). It was an interesting session, with some very kind feedback, especially considering I woke up on Friday without a voice. That was the remnants of a near-fatal case of man flu rather than any overindulgence the night before, I assure you. With much coughing, attempts to turn off the radio mike during the worst of it, and an interesting vocal range, we got through it and it seemed to be well received. Thanks to all those who attended.

    Read the article

  • Ubuntu 12.10 beta 2 has awful 3D performance and video tearing; do something about this [closed]

    - by digitalcrow
    Ubuntu 12.10 beta 2 has awful 3D performance and video tearing; do something about this. Tearing is a common issue on Linux and makes Linux suck. Ubuntu is also bloated and uses Compiz; I hope the Ubuntu developers will be more serious, because even a noob should know that Compiz sucks now. I have no screen tearing and full OpenGL 3D performance on Cinnamon, and that's why I like Linux Mint 13. Ubuntu is so slow: application launch times are longer compared to Linux Mint or Xubuntu, and many, many times longer compared to Arch Linux. I was satisfied before the 12.04 version; now I'm just disappointed... (Downvoters are simply noobs or silly fanboys, and they deserve Ubuntu.)

    Read the article

  • The Path to Best-In-Class Service Business Performance

    - by Charles Knapp
    How much would it matter to offer your customers best-in-class service and support experiences? According to a new study, best-in-class companies enjoy margins that are nearly double the average, retain almost all of their customers each year, deliver annual revenue growth that is six times greater than average, and realize cost decreases rather than increases! What does it take to become best in class? Some of the keys are: engage customers effectively and consistently across all channels; focus on mobility to improve reactive service performance; continue the transition from primarily reactive to proactive and predictive service performance; build the support structure for new services and service contracts; and construct an engaged service delivery team. Join the Aberdeen Group, Oracle, Infosys, and Hyundai Capital as we highlight the key stages in the service transformation journey and reveal how best-in-class organizations are equipping themselves to thrive in this new era of service. Please join us for "Service Excellence and the Path to Business Transformation" this Thursday, October 25, at 8:00 AM PDT | 11:00 AM EDT | 3:00 PM GMT | 4:00 PM BST.

    Read the article

  • Oracle EBS R12 on Exadata V2, MAA and High Performance

    - by longchun.zhu
    Hands-on agenda: 1. Oracle EBS R12 on Database Machine MAA & Performance Architecture. 2. Oracle EBS R12 Single Instance Node Deployment Procedure. 3. Start Rapid Install Wizard. 4. Oracle EBS R12 Single Instance Node Chinese Patch Update. 5. Applying Patches. 6. Upgrade Application Database Version to 11g Release 2. 7. Database Upgrade. 8. Deploy Clone Application Database to Sun Oracle Database Machine. 9. Migrate Application Database File System to Exadata ASM Storage. 10. Convert Application Database Single Instance to RAC. 11. Configure High Availability & High Performance Architecture with Exadata.

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When Apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current Apache performance? My developer friends tell me to "just use top, dude," but top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator of how Apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and what's causing it to be so bad. Why does PHP memory matter? My site apparently has a 30 MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

    Read the article

  • Performance of browser plug-in based RIA vs. JavaScript-based RIA

    - by Kabeer
    Hello. For my data-intensive web application (heavy forms and complex reports), from a performance standpoint, which is better: a browser plug-in based RIA (say, Silverlight) or a JavaScript-based RIA (say, ExtJS)? For the moment, we can set aside the discussion of plug-in availability, etc. My only focus is performance. Reasoning will be appreciated.

    Read the article

  • Is there a performance hit when running obfuscated code?

    - by nvivek
    All, I am proposing the addition of code obfuscation to the standard build process at my organization. One of the questions being asked is whether there is a performance hit to running obfuscated code vs. running unobfuscated code. What is your experience? Have you seen a reduction in performance at runtime because you obfuscated your Java or C# code? Thanks, VI

    Read the article

  • Biggest performance improvement you've had with the smallest change?

    - by JoelFan
    What's the biggest performance improvement you've had with the smallest change? For example, I once improved the performance of a certain page on a high-profile web app by a factor of 10, just by moving "where customerID = ?" to a different place inside a complicated SQL statement (before my change it had been selecting all customers in a join, then later selecting out the desired customer).
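    The general principle behind that change is to apply the filter as early as possible, so the expensive work only touches the rows you actually need. As a language-neutral illustration of the same idea (a hypothetical C++ sketch, not the SQL from the anecdote), compare filtering after an expensive per-row step with filtering before it:

        #include <string>
        #include <vector>

        struct Customer { int id; std::string name; };
        struct Enriched { Customer c; std::string report; };

        // Stand-in for an expensive per-row operation (a join, a lookup, a computation).
        Enriched enrich(const Customer& c) { return {c, "report for " + c.name}; }

        // Slow shape: enrich every customer, then keep the one that was wanted.
        std::vector<Enriched> filterLate(const std::vector<Customer>& all, int wantedId) {
            std::vector<Enriched> enriched;
            for (const auto& c : all) enriched.push_back(enrich(c));   // cost paid for every row
            std::vector<Enriched> kept;
            for (const auto& e : enriched) if (e.c.id == wantedId) kept.push_back(e);
            return kept;
        }

        // Fast shape: apply the predicate first, enrich only the survivors.
        std::vector<Enriched> filterEarly(const std::vector<Customer>& all, int wantedId) {
            std::vector<Enriched> kept;
            for (const auto& c : all)
                if (c.id == wantedId) kept.push_back(enrich(c));       // cost paid once per match
            return kept;
        }

        int main() {
            std::vector<Customer> all = {{1, "Ada"}, {2, "Grace"}, {3, "Edsger"}};
            return filterEarly(all, 2).size() == 1 ? 0 : 1;            // only one row is enriched
        }

    A query optimizer can often push the predicate down for you, but as the anecdote shows it does not always manage it in a complicated statement, so writing the filter where it logically belongs keeps the expensive work proportional to the rows you actually need.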

    Read the article

  • Do fluent interfaces significantly impact runtime performance of a .NET application?

    - by stakx
    I'm currently working on implementing a fluent interface for an existing technology, which would allow code similar to the following snippet: using (var directory = Open.Directory(@"path\to\some\directory")) { using (var file = Open.File("foobar.html").In(directory)) { // ... } } In order to implement such constructs, classes are needed that accumulate arguments and pass them on to other objects. For example, to implement the Open.File(...).In(...) construct, you would need two classes: // handles 'Open.XXX': public static class OpenPhrase { // handles 'Open.File(XXX)': public static OpenFilePhrase File(string filename) { return new OpenFilePhrase(filename); } // handles 'Open.Directory(XXX)': public static DirectoryObject Directory(string path) { // ... } } // handles 'Open.File(XXX).XXX': public class OpenFilePhrase { internal OpenFilePhrase(string filename) { _filename = filename; } // handles 'Open.File(XXX).In(XXX)': public FileObject In(DirectoryObject directory) { // ... } private readonly string _filename; } That is, the more constituent parts a statement such as the initial example has, the more objects need to be created just to pass arguments along to subsequent objects in the chain until the actual statement can finally execute. Question: I am interested in some opinions. Does a fluent interface implemented using the above technique significantly impact the runtime performance of an application that uses it? By runtime performance, I mean both speed and memory usage. Bear in mind that a potentially large number of temporary, argument-saving objects would have to be created for only very brief timespans, which I assume may put a certain amount of pressure on the garbage collector. If you think there is a significant performance impact, do you know of a better way to implement fluent interfaces?

    Read the article

  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow; I mean 5-10 second load times kind of slow. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company, but the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. I could then look back at the logs and see exactly what my website was doing at the time of the extreme slowness. Is there a .NET logging system that'll allow me to make calls into it from code while simultaneously tracking performance? What would you recommend?

    Read the article

  • How do I improve the performance of this singly linked list struct within my program?

    - by Jesus
    Hey guys, I have a program that does operations on sets of strings. We have to implement functions such as addition and subtraction of two sets of strings. We are supposed to get it down to the point where performance is O(N+M), where N and M are the sizes of the two sets. Right now, I believe my performance is O(N*M), since for each element of N, I go through every element of M. I'm particularly focused on getting the subtraction to the proper performance; if I can get that down, I believe I can carry that knowledge over to the rest of what I have to implement. The '-' operator is supposed to work like this, for example. Declare set1 to be an empty set. Declare set2 to be a set with elements { a b c }. Declare set3 to be a set with elements { b c d }. After set1 = set2 - set3, set1 is supposed to equal { a }. So basically, remove from set2 any element that is also in set3. For the addition implementation (the overloaded '+' operator), I also sort the strings (since we have to). All the functions work correctly right now, by the way. So I was wondering if anyone could a) confirm that I currently have O(N*M) performance, and b) give me some ideas/implementations on how to improve the performance to O(N+M). Note: I cannot add any member variables or functions to the class strSet or to the node structure. The implementation of the main program isn't very important, but I will post the code for my class definition and the implementation of the member functions: strSet2.h (Implementation of my class and struct) // Class to implement sets of strings // Implements operators for union, intersection, subtraction, // etc. for sets of strings // V1.1 15 Feb 2011 Added guard (#ifndef), deleted using namespace RCH #ifndef _STRSET_ #define _STRSET_ #include <iostream> #include <vector> #include <string> // Deleted: using namespace std; 15 Feb 2011 RCH struct node { std::string s1; node * next; }; class strSet { private: node * first; public: strSet (); // Create empty set strSet (std::string s); // Create singleton set strSet (const strSet &copy); // Copy constructor ~strSet (); // Destructor int SIZE() const; bool isMember (std::string s) const; strSet operator + (const strSet& rtSide); // Union strSet operator - (const strSet& rtSide); // Set subtraction strSet& operator = (const strSet& rtSide); // Assignment }; // End of strSet class #endif // _STRSET_ strSet2.cpp (implementation of member functions) #include <iostream> #include <vector> #include <string> #include "strset2.h" using namespace std; strSet::strSet() { first = NULL; } strSet::strSet(string s) { node *temp; temp = new node; temp->s1 = s; temp->next = NULL; first = temp; } strSet::strSet(const strSet& copy) { if(copy.first == NULL) { first = NULL; } else { node *n = copy.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } strSet::~strSet() { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } } int strSet::SIZE() const { int size = 0; node *temp = first; while(temp!=NULL) { size++; temp=temp->next; } return size; } bool strSet::isMember(string s) const { node *temp = first; while(temp != NULL) { if(temp->s1 == s) { return true; } temp = temp->next; } return false; } strSet strSet::operator + (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) {
string newEle = temp->s1; if(!isMember(newEle)) { if(newSet.first==NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first = newNode; } else if(newSet.SIZE() == 1) { if(newEle < newSet.first->s1) { node *tempNext = newSet.first; node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = tempNext; newSet.first = newNode; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first->next = newNode; } } else { node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if(newEle < curr->s1) { if(prev == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; newSet.first = newNode; break; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; prev->next = newNode; break; } } if(curr->next == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; curr->next = newNode; break; } prev = curr; curr = curr->next; } } } temp = temp->next; } return newSet; } strSet strSet::operator - (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) { string element = temp->s1; node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if( element < curr->s1 ) break; if( curr->s1 == element ) { if( prev == NULL) { node *duplicate = curr; newSet.first = newSet.first->next; delete duplicate; break; } else { node *duplicate = curr; prev->next = curr->next; delete duplicate; break; } } prev = curr; curr = curr->next; } temp = temp->next; } return newSet; } strSet& strSet::operator = (const strSet& rtSide) { if(this != &rtSide) { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } if(rtSide.first == NULL) { first = NULL; } else { node *n = rtSide.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } return *this; }
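    Since both lists are kept sorted (the overloaded '+' operator inserts elements in order), the subtraction can be done with a single merge-style pass over the two lists rather than rescanning one set for every element of the other, which is exactly the O(N+M) bound being asked for. Below is a standalone sketch of that idea using std::vector<std::string> instead of the node-based strSet (the class itself cannot be modified per the assignment constraints); the same two-pointer walk can be performed directly on the node* chains inside operator-.

        #include <string>
        #include <vector>

        // O(N+M) subtraction of two sorted, duplicate-free string sets: returns lhs - rhs.
        std::vector<std::string> subtractSorted(const std::vector<std::string>& lhs,
                                                const std::vector<std::string>& rhs) {
            std::vector<std::string> result;
            std::size_t i = 0, j = 0;
            while (i < lhs.size()) {
                if (j == rhs.size() || lhs[i] < rhs[j]) {
                    result.push_back(lhs[i]);   // nothing left in rhs can equal lhs[i]
                    ++i;
                } else if (rhs[j] < lhs[i]) {
                    ++j;                        // rhs[j] is smaller than everything remaining in lhs
                } else {
                    ++i; ++j;                   // equal element: drop it from the result
                }
            }
            return result;
        }

        int main() {
            std::vector<std::string> set2 = {"a", "b", "c"};
            std::vector<std::string> set3 = {"b", "c", "d"};
            std::vector<std::string> expected = {"a"};
            return subtractSorted(set2, set3) == expected ? 0 : 1;   // {a b c} - {b c d} == {a}
        }

    Each element of lhs and rhs is examined at most once, so the pass is O(N+M); the standard library's std::set_difference implements the same algorithm over iterator ranges. The union in operator+ can be rewritten the same way, merging the two sorted lists in one pass instead of calling isMember and re-walking the list for every insertion.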

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who share this experience with me: having to give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program runs on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one parallelized the code for a Tesla C2075 with NVIDIA CUDA, given that all applicable parallel optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume that you don't know the algorithm at all. Or perhaps it's safer to assume that finishing the job in that situation is impossible.) Therefore, I'm wondering: what would most likely be the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capabilities, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please demonstrate your method with a corresponding problem description and algorithm, along with the target hardware's specifications. Or does a tool already exist to (roughly) estimate the result of porting code? (Please don't answer: 'kill yourself is the fastest way.')
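    One common back-of-the-envelope method is a roofline-style estimate: bound the runtime by whichever hardware limit the kernel hits first,

        time  >=  max( total floating-point operations / peak FLOP rate ,
                       total bytes moved / peak memory bandwidth )

    As a purely illustrative worked example (all numbers are assumptions, not measurements): if the hot loop performs about 10^9 floating-point operations and streams roughly 4 GB of data, then on a card in the Tesla C2075 class, with on the order of 500 double-precision GFLOPS and 140 GB/s of memory bandwidth, the compute bound is 10^9 / (500 x 10^9) = 2 ms and the memory bound is 4 / 140 = roughly 29 ms, so the kernel would be memory-bound and about 29 ms is the best case before kernel-launch overhead and PCIe transfers are added. An estimate like this is only a lower bound and says nothing about how well the algorithm actually parallelizes, so any report built on it should state those assumptions explicitly.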

    Read the article

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
    This is a follow-up to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk; after reading all the comments I have received, I felt I could write more on the same subject to clear a few things up. First, let us run the following four queries; all of them return exactly the same result set. USE AdventureWorks GO -- use of = SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID = ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of IN SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID IN ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of EXISTS SELECT * FROM HumanResources.Employee E WHERE EXISTS ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA WHERE EA.EmployeeID = E.EmployeeID) GO -- use of JOIN SELECT * FROM HumanResources.Employee E INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID GO Now let us compare the execution plans of the queries listed above. It is quite clear from the execution plans that in the case of IN, EXISTS, and JOIN, the SQL Server engine is smart enough to figure out that a Merge Join is the optimal plan for the query and executes it. However, with the Equal (=) operator, SQL Server is forced to use a Nested Loop, testing each result of the inner query against the outer query, which hurts performance. Please note that I am by no means suggesting that Nested Loop is bad or that Merge Join is better; this can very well vary with your machine and the resources available on it. When I see the Equal (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS, or JOIN can be used instead. As I said, this can vary from system to system. What is your take on the queries above? I believe the SQL Server engine is usually smart enough to figure out the ideal execution plan and use it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL SERVER – Server Side Paging in SQL Server 2011 – Part2

    - by pinaldave
    The best part of having a blog is that the SQL community helps keep it running with new ideas. Earlier I wrote SQL SERVER – Server Side Paging in SQL Server 2011 – A Better Alternative, a very popular article on that subject, in which I used variables for the "number of rows" and "number of pages". A blog reader sent me an email saying that in their organization these values are stored in a table, and asking whether the new syntax can read the data from that table. Absolutely YES! Here is the quick script: USE AdventureWorks2008R2 GO CREATE TABLE PagingSetting (RowsPerPage INT, PageNumber INT) INSERT INTO PagingSetting (RowsPerPage, PageNumber) VALUES(10,5) GO SELECT * FROM Sales.SalesOrderDetail ORDER BY SalesOrderDetailID OFFSET (SELECT RowsPerPage*PageNumber FROM PagingSetting) ROWS FETCH NEXT (SELECT RowsPerPage FROM PagingSetting) ROWS ONLY GO This is really an easy trick. I also wrote a blog post comparing the performance here: SQL SERVER – Server Side Paging in SQL Server 2011 Performance Comparison. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology Tagged: SQL Paging

    Read the article

  • SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – SQL and 10-11 May SharePoint

    - by pinaldave
    There were lots of requests for more details about the blog post, sent to the email address specified in the article SQLAuthority News – Public Training Classes In Hyderabad 12-14 May – Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning. Here is the complete brochure for the course. Two different courses are offered by Solid Quality Mentors: 1) Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning – Pinal Dave. Date: May 12-14, 2010. Price: Rs. 14,000/person for 3 days. Discount Code: 'SQLAuthority.com'. Effective Price: Rs. 11,000/person for 3 days. 2) SharePoint 2010 – Joy Rathnayake. Date: May 10-11, 2010. Price: Rs. 11,000/person for 2 days. Discount Code: 'SQLAuthority.com'. Effective Price: Rs. 8,000/person for 2 days. Download the complete PDF brochure. Additionally, there is a special program, SolidQ India Insider; I will provide the details for it very soon. Please do send me an email if you need any additional details. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Technology

    Read the article

  • Oracle Linux Delivers Top CPU Benchmark Results on Sun Blades

    - by sergio.leunissen
    From the Performance and Best Practices blog: Fresh SPEC CPU2006 results for Sun Blade X6275 M2 Server Modules running Oracle Linux 5.5. The highlights: The dual-node Sun Blade X6275 M2 server module, equipped with two Intel Xeon X5670 2.93 GHz processors per node and running the Oracle Enterprise Linux 5.5 operating system delivered the best SPECint_rate2006 and SPECfp_rate2006 benchmark results for all systems with Intel Xeon processor 5000 sequence. With a SPECint_rate2006 benchmark result of 679, the Sun Blade X6275 M2 server module, with two compute nodes per blade, delivers maximum performance for space constrained environments. Comparing Oracle's dual-node blade to HP's dual-node blade server, based on their single node performance, the Sun Blade X6275 M2 server module SPECfp_rate2006 score of 241 outperforms the best published HP ProLiant BL2X220c G5 server score by 3.2x. A single node of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered 37% improvement in SPECint_rate2006 benchmark results and 22% improvement in SPECfp_rate2006 benchmark results compared to the previous generation Sun Blade X6275 server module. Both nodes of a Sun Blade X6275 M2 server module using 2.93 GHz Intel Xeon X5670 processors delivered 59% improvement on the SPECint_rate2006 benchmark and 40% improvement on the SPECfp_rate2006 benchmark compared to the previous generation Sun Blade X6275 server module.

    Read the article
