Search Results

Search found 2714 results on 109 pages for 'extremely frustrated'.


  • TypeError: Error #1009 Actionscript 3 help.

    - by matt_t
    I am extremely frustrated. I'm following a tutorial and mimicking it on my own. I've been able to sort out most of the errors so far, but this one has me stumped. I have tried replacing all of the class files with the tutorial specimen ones, but I still get the error:

    TypeError: Error #1009: Cannot access a property or method of a null object reference.
    at com.senocular.utils::KeyObject/construct()
    at com.senocular.utils::KeyObject()
    at com.asgamer.basics1::Ship()
    at com.asgamer.basics1::Engine()

    Not really understanding the error properly, I paste-dumped the files online for you to look at. Ship class: textbin.com/78z35, Engine class: textbin.com/32b24, KeyObject class: textbin.com/p2725. As the error still occurred when using the specimen class files, I really have no idea where to begin. I will gladly try out any suggestions (although later on, I'm tired now ;). Thanks very much.
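
    A hedged guess at the likely cause, based on the stack trace: KeyObject's constructor dereferences stage, which is null until the Ship instance has been added to the display list. The standard workaround is to defer anything that touches stage until Event.ADDED_TO_STAGE fires; a minimal ActionScript 3 sketch (field and method names here are illustrative, not taken from the tutorial):

        // At the top of Ship.as:
        import flash.events.Event;

        private var key:KeyObject;

        public function Ship()
        {
            if (stage)
                init();                                     // already on the display list
            else
                addEventListener(Event.ADDED_TO_STAGE, init);
        }

        private function init(e:Event = null):void
        {
            removeEventListener(Event.ADDED_TO_STAGE, init);
            key = new KeyObject(stage);                     // stage is guaranteed non-null here
        }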


  • Are there any context-sensitive code search tools?

    - by Vicky
    I have been getting very frustrated recently in dealing with a massive bulk of legacy code which I am trying to get familiar with. Say I search for a particular function call: I get loads of results that turn out to be completely irrelevant. Some of them are easy to spot, e.g. a comment saying:

    // Fixed functionality in foo() so don't need to handle this here any more

    But others are much harder to spot manually, because they turn out to be calls from other functions in modules that are only compiled in certain cases, or are part of a much larger block of code that is #if 0'd out in its entirety. What I'd like is a search tool that would allow me to search for a term and give me the choice to include or exclude commented-out or #if 0'd out code. Then the search results would be displayed alongside a list of #defines that are required in order for that snippet of code to be relevant. I'm working in C / C++, but other than the specific comment syntax I guess the techniques should be more generally applicable. Does such a tool exist?
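
    Not a full answer to the tool question, but a hedged stopgap that approximates "exclude commented-out and #if 0'd code": run the preprocessor with the build's defines and search its output, since comments and dead #if regions drop out. This sketch assumes GCC, and the define and paths are illustrative; note that the line numbers grep reports refer to the preprocessed text, not the source:

        # Expand one translation unit with the build's defines, then search it.
        # -E: preprocess only; -P: omit linemarker noise in the output.
        gcc -E -P -DFEATURE_X=1 -Iinclude module.c | grep -n 'foo('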


  • Confusion with cookie session token and oauth2.0 don't know where to go anymore

    - by byte_slave
    Hi guys, I'm completely confused and frustrated, and nothing seems to make sense or work any more. I'm developing an iframe FB app and I've been using the JavaScript SDK (FB.Init()) to get the access_token, but it doesn't always work; sometimes I'm already logged into FB and it still doesn't work. Did some reading, and read also that there are problems using cookies in iframes in Opera and IE, so I was thinking of using OAuth 2.0, but I'm not sure how via the Facebook C# SDK, and now I'm completely lost; I don't know if I still need to use the JavaScript FB.Init(). Documentation out there is poor and unclear, a lot of stuff refers to old code, and after hours of reading and jumping between examples, I'm completely messed up and confused. Can someone, please, point/explain/enlighten me about this? Thanks a lot guys, appreciated! Merry Christmas!


  • Returning multiple datasets from a stored proc in VB

    - by Ryan Stephens
    I have an encrypted SQL Server stored proc that I run with (or the VB.NET equivalent code of):

    declare @p4 nvarchar(100)
    set @p4=NULL
    declare @p5 bigint
    set @p5=NULL
    exec AA_PAY_BACS_EXPORT_RETRIEVE_S @PS_UserId=N'ADMN', @PS_Department=N'', @PS_PayFrequency=2, @PS_ErrorDescription=@p4 output
    select @p4, @p5

    This returns 2 datasets and the output parameters for the results. The datasets are made up of various table joins etc.; one holds the header record and one the detail records. I need to get the 2 datasets into a structure in VB.NET (e.g. LINQ, SqlDataReader, typed DataSets) so that I can code with them. I don't know what tables any of this comes from, and there are a lot of them. Whooopeee!!! I came close using LINQ to SQL and IMultipleResults, but got frustrated when I had to recode it every time I made a change to the designer file. My feeling is that there must be a simple way to do this; any ideas?
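
    A hedged sketch of the plain ADO.NET route, which sidesteps the designer-file problem entirely: SqlDataAdapter.Fill loads every result set the proc returns into one DataSet (Tables(0) = header, Tables(1) = details), and the output parameter is read afterwards. The connection string and parameter size below are assumptions:

        Imports System.Data
        Imports System.Data.SqlClient

        Using conn As New SqlConnection("Data Source=.;Initial Catalog=Payroll;Integrated Security=True")
            Dim cmd As New SqlCommand("AA_PAY_BACS_EXPORT_RETRIEVE_S", conn)
            cmd.CommandType = CommandType.StoredProcedure
            cmd.Parameters.AddWithValue("@PS_UserId", "ADMN")
            cmd.Parameters.AddWithValue("@PS_Department", "")
            cmd.Parameters.AddWithValue("@PS_PayFrequency", 2)
            Dim errParam As SqlParameter = cmd.Parameters.Add("@PS_ErrorDescription", SqlDbType.NVarChar, 100)
            errParam.Direction = ParameterDirection.Output

            Dim ds As New DataSet()
            Using da As New SqlDataAdapter(cmd)
                da.Fill(ds)   ' fills ds.Tables(0) and ds.Tables(1) in one call
            End Using

            Dim headerRows As DataTable = ds.Tables(0)
            Dim detailRows As DataTable = ds.Tables(1)
            Dim errorDescription As String = TryCast(errParam.Value, String)
        End Using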


  • How to Design Programs: An Introduction to Programming and Computing -- teacher guide access

    - by user295683
    Hello -- I'm a biologist switching careers, and trying to learn programming as a result. I stumbled upon the aforementioned book on Amazon, which jibed with my liberal arts background. Despite my great satisfaction with the didactic approach, I was frustrated to see that the answers to the exercises are restricted to teachers only. As I am pursuing this endeavor on my own, this restriction dramatically cripples the value of the book. My request to the author's website for access to the answers has not been answered, and I would desperately like to continue with this book. Does anyone have any experience dealing with the book's website, or at the very least a torrent of the answers? Otherwise, I suspect I will be relegated to using JavaScript for everything! Thanks!


  • MySQL Query involving column names containing math operators

    - by devil_fingers
    I'm a MySQL scrub, and I have asked around and checked around the internet for what I'm sure will turn out to be something obvious, but I'm very frustrated with what I thought would be a very, very simple query not working. So here goes. Please be gentle. Basically, in a large database, some of the column names contain mathematical operators like "/" and "+". (Don't ask, it's not my database, I can't do anything about it.) Here is the "essence" of my query (I took out the irrelevant stuff for the sake of this question):

    SELECT PlayerId, Season, WPA/LI AS WPALI
    FROM tht.stats_batting_master
    WHERE Season = "2010" AND teamid > 0 AND PA >= 502
    GROUP BY playerid
    ORDER BY WPALI DESC

    When I run this, it returns "Unknown column 'LI' in 'field list'", I assume because it sees the "/" in WPA/LI as a division sign. Like I said, I'm sure this is easy enough to work around (it must be, given how much this database is used), but I haven't been able to figure out how. Thanks in advance for any help.
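
    The standard MySQL workaround, for the record: backtick-quote the identifier so the parser reads WPA/LI as one column name instead of WPA divided by LI. A sketch against the query above:

        SELECT PlayerId, Season, `WPA/LI` AS WPALI
        FROM tht.stats_batting_master
        WHERE Season = '2010' AND teamid > 0 AND PA >= 502
        GROUP BY playerid
        ORDER BY WPALI DESC;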


  • javascript not working on localhost

    - by Adam McMahon
    OK, so I'm lost here, frustrated and pulling my hair out. Plus probably about to be fired or take a pay cut. I moved files from a development server to my local machine. The files are consistent (used a diff tool), and all the dependencies are there. It works for the most part. The problem is that some of the JavaScript (not all) is just not working. We're using jQuery and a lot of plugins for it. I've checked with the Web Developer plugin in Firefox and all the JS files are loading. I cleared the cache in both Firefox and Chrome multiple times to no avail. The development server is a Windows server running WAMP. My local machine is running Ubuntu. Somebody tell me what I missed.


  • Transitioning from C++ to C#

    - by Jim
    I am very experienced in C and C++ but am having problems finding current opportunities. I decided to switch to C# but noticed that while the syntax is very similar, the programming styles are very different (we don't have IEnumerable etc. in C). So the problem I am having now is having 20 years of experience but being only a junior when it comes to interviewing for a C# job. Most of the problems are me not being able to answer questions like: 1) How do you do XYZ in SQL Server? 2) What's the use of [...] in Spring.NET? In other words, the phone interviewers don't care if you know DB programming or not - they want to know if you know SQL Server. I am frustrated because the guys who are doing the interviews seem to be so narrow-sighted in knowing ONLY C#, .NET and SQL Server, that I don't know how to get past them.


  • visual studio 2005 problem with windows flying open

    - by Kevin
    I'm going through the process of setting up a new computer and I'm having a problem with VS 2005. Whenever I start debugging, all the windows (Properties, Watch, Error List, Call Stack, ...) pop up all over the place, undocked. When I stop debugging, more windows pop up, all undocked. This keeps happening over and over: I've tried closing and docking them, but they keep popping out. I wasn't sure how to google this, and my patience has grown thin with this whole process of moving to a new computer. Sincerely, Frustrated


  • NSInvalidArgumentException – App terminates while i'm trying to copy a captured video to documents f

    - by ChrNis
    I'd like to try the wisdom of the crowds, because I'm frustrated right now. Thanks in advance. So here is my code:

    - (void)imagePickerController:(UIImagePickerController *)ipc didFinishPickingMediaWithInfo:(NSDictionary *)info {
        NSLog(@"info: %@", info);
        NSString *newFilename = [NSString stringWithFormat:@"%@/%@.mov", [NSHomeDirectory() stringByAppendingPathComponent:@"Documents"], [NSString stringWithFormat:@"%d", (long)[[NSDate date] timeIntervalSince1970]]];
        NSLog(@"newFilename: %@", newFilename);
        NSFileManager *filemgr = [NSFileManager defaultManager];
        NSError *err;
        if ([filemgr copyItemAtPath:[info objectForKey:@"UIImagePickerControllerMediaURL"] toPath:newFilename error:&err] == YES)
            NSLog(@"Move successful");
        else
            NSLog(@"Move failed");
    }

    And this is the log:

    2010-05-16 18:19:01.975 erlkoenig[7099:307] info: {
        UIImagePickerControllerMediaType = "public.movie";
        UIImagePickerControllerMediaURL = "file://localhost/private/var/mobile/Applications/BE25F9B5-2D08-4B59-8B62-D04DF7BB7E5B/tmp/-Tmp-/capture-T0x108cb0.tmp.8M81HU/capturedvideo.MOV";
    }
    newFilename: /var/mobile/Applications/BE25F9B5-2D08-4B59-8B62-D04DF7BB7E5B/Documents/1274026741.mov
    [NSURL fileSystemRepresentation]: unrecognized selector sent to instance 0x1c1f90
    *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSURL fileSystemRepresentation]: unrecognized selector sent to instance 0x1c1f90
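
    A hedged reading of that log: -copyItemAtPath:toPath:error: expects NSString paths, but the object stored under UIImagePickerControllerMediaURL is an NSURL, so NSFileManager ends up sending a string-only selector (fileSystemRepresentation) to a URL and throws. A minimal sketch of the two usual fixes:

        // Fix 1: convert the URL to a filesystem path before copying.
        NSURL *mediaURL = [info objectForKey:UIImagePickerControllerMediaURL];
        if ([filemgr copyItemAtPath:[mediaURL path] toPath:newFilename error:&err])
            NSLog(@"Move successful");
        else
            NSLog(@"Move failed: %@", err);

        // Fix 2 (iOS 4+): stay in URL space on both sides.
        // [filemgr copyItemAtURL:mediaURL
        //                  toURL:[NSURL fileURLWithPath:newFilename]
        //                  error:&err];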


  • Distributing cpu-bound compression jobs to multiple computers?

    - by barnaby
    The other day I needed to archive a lot of data on our network, and I was frustrated that I had no immediate way to harness the power of multiple machines to speed up the process. I understand that creating a distributed job management system is a leap from a command-line archiving tool. I'm now wondering what the simplest solution to this type of distributed performance scenario could be. Would a custom tool always be a requirement, or are there ways to use standard utilities and somehow distribute their load transparently at a higher level? Thanks for any suggestions.
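
    One hedged possibility using a standard-ish utility: GNU parallel can spread CPU-bound jobs across machines over ssh. This sketch assumes parallel is installed on every host, passwordless ssh to host1..host3 works, and the filenames are illustrative; --trc transfers each input file to the remote host, returns the named result, and cleans up the temporary copies:

        parallel -S host1,host2,host3 --trc {}.gz gzip -9 {} ::: /data/archive/*.log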


  • C++ vector insights

    - by Sunscreen
    Hi, I am a little bit frustrated about how to use vectors in C++. I use them widely, though I am not exactly certain of how I use them. Below are the questions:

    1. If I have a vector, let's say std::vector<CString> v_strMyVector, with (int)v_strMyVector.size() > i, can I access the i-th member: v_strMyVector[i] == "xxxx"; ? (It works, though why?)
    2. Do I always need to define an iterator to access the beginning of the vector and loop over its members?
    3. What is the purpose of an iterator if I have access to all members of the vector directly (see 1)?

    Thanks in advance, Sun
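
    A short C++ sketch of the distinction, under the usual contract: operator[] is random access and is only defined for indices 0 <= i < size() (that's why question 1 works), while iterators generalize traversal so the same loop shape also works for containers and algorithms that have no operator[] (std::string stands in for CString here):

        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::vector<std::string> v;
            v.push_back("a");
            v.push_back("b");

            // 1) Index access: fine as long as i < v.size().
            for (std::vector<std::string>::size_type i = 0; i < v.size(); ++i)
                std::cout << v[i] << '\n';

            // 2) Iterator access: same traversal, no indices required; this is
            //    the form std::list, std::map and <algorithm> understand too.
            for (std::vector<std::string>::iterator it = v.begin(); it != v.end(); ++it)
                std::cout << *it << '\n';
        }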


  • Is there a TextWriter interface to the System.Diagnostics.Debug class?

    - by John Källén
    I'm often frustrated by the System.Diagnostics.Debug.Write/WriteLine methods. I would like to use the Write/WriteLine methods familiar from the TextWriter class, so I often write

    Debug.WriteLine("# entries {0} for connection {1}", countOfEntries, connection);

    which causes a compiler error. I end up writing

    Debug.WriteLine(string.Format("# entries {0} for connection {1}", countOfEntries, connection));

    which is really awkward. Does the CLR have a class deriving from TextWriter that "wraps" System.Diagnostics.Debug, or should I roll my own?
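
    To my knowledge the base class library doesn't ship such a wrapper, but rolling your own is only a few lines, since TextWriter's formatted WriteLine overloads funnel into Write(string)/WriteLine(string). A minimal C# sketch:

        using System.Diagnostics;
        using System.IO;
        using System.Text;

        // A TextWriter that forwards everything to System.Diagnostics.Debug,
        // giving you the familiar formatted Write/WriteLine overloads.
        public class DebugTextWriter : TextWriter
        {
            public override Encoding Encoding
            {
                get { return Encoding.Unicode; }
            }

            public override void Write(char value)       { Debug.Write(value.ToString()); }
            public override void Write(string value)     { Debug.Write(value); }
            public override void WriteLine(string value) { Debug.WriteLine(value); }
        }

        // Usage:
        // TextWriter log = new DebugTextWriter();
        // log.WriteLine("# entries {0} for connection {1}", countOfEntries, connection);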


  • Possible conflict with jquery libraries

    - by TooCooL
    http://www.pro-marketing-invest.de/golz-racing/reservierungen

    I made a WP plugin which handles online reservations for a rent-a-car site. I tested it locally with a different WP theme and it worked fine, but when I installed it on this web site, for some reason the jQuery functions don't work: when I press the "Fortsetzen" ("continue") button, it won't open the next step (it's a form with 4 steps). I think it is because of the other jQuery functions or libraries that the theme uses. I am frustrated; I don't know what is causing this! Any ideas?
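
    A hedged first suspect, given the symptoms: WordPress loads jQuery in noConflict mode, and themes often enqueue a second jQuery copy, so plugin code written against a bare $ silently breaks. Wrapping the plugin's code so $ is bound locally usually survives both problems (the selector below is a made-up stand-in for the real button):

        // Sketch: receive jQuery as $ inside a ready-callback closure, so the
        // code works no matter what the theme did to the global $.
        jQuery(document).ready(function ($) {
            $('.fortsetzen-button').click(function () {
                // ...advance the 4-step reservation form here...
            });
        });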


  • Where does Eclipse save the list of files to open on startup?

    - by Grundlefleck
    Question: where does Eclipse store the list of files it opens on startup? Background: Having installed a plugin into Eclipse which promptly crashed, my Eclipse workspace is in a bit of a state. When started, the building workspace task pauses indefinitely at 20%. Before I uninstall the plugin I want to give it another chance. I have a feeling that the reason Eclipse is pausing is because of a file which was opened when it crashed, which it tries to reopen on startup. If I can stop this file from opening on startup there's a chance I may be able to coax the plugin to behave. The problem is I have no idea where that list of files is persisted between runs of Eclipse. ...a second before I posted this question, I realised I could just delete the file causing the problem (duh). However, the search has frustrated me enough to want to find the answer.
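
    For the record, the answer as it stood in Eclipse 3.x (a hedged assumption for other versions): the open-editor list, along with the rest of the window layout, is persisted per workspace in

        <workspace>/.metadata/.plugins/org.eclipse.ui.workbench/workbench.xml

    so removing the offending editor entry (or that XML file entirely) resets what gets reopened on startup without touching any project data.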


  • Problem in building a tab bar inside navigation controller

    - by Heba
    Hello everyone! I am really frustrated and I really hope that you will help me solve this problem! I'm trying to build a tab bar inside a navigation controller, using the template provided by WiredBob. My problem is that I want to add more bar items to the tab bar, but I keep getting a crash. From the log:

    2010-05-24 00:15:43.469 NavTab[9315:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '-[UITableViewController loadView] loaded the "AnnView" nib but didn't get a UITableView.'

    Also, I tried to fix the size of a view in IB to fit in with the tab bar, but I couldn't! It was unchangeable. Thanks in advance :-)


  • Tracking Useful Information

    - by Steve M
    What do the clever programmers here do to keep track of handy programming tricks and useful information they pick up over their many years of experience? Things like useful compiler arguments, IDE short-cuts, clever code snippets, etc. I sometimes find myself frustrated when looking up something that I used to know a year or two ago. My IE favourites probably represent a good chunk of the Internet in the late 1990s, so clearly that isn't effective (at least for me). Or am I just getting old? So.. what do you do?


  • My company is a Rackspace Cloud client (provided to us for free) and I'm trying to find some way to set up version control

    - by Nick S.
    As the title says, my (small) business is provided a free Rackspace Cloud client account. We receive a decent amount of traffic but I haven't been able to put together a business case to move to our own server yet. However, we are developing some complex apps and I'm frustrated with not having the ability to even ssh into the remote server. Ultimately, I'd like to set up some sort of version control (at this point, I'll take anything, git or otherwise). I have control over databases, can FTP, set up cron jobs, and perform a few other basic functions. I can't think of any way to set up git or something similar without ssh access. A thought went through my mind that maybe some sort of PHP version control exists that I might be able to set up, but I haven't had any luck finding it yet. Do you guys have any ideas, thoughts, or advice?


  • Writing out to a file in scheme

    - by Ceelos
    The goal of this is to check whether the character taken into account is a number or an operand, and then output it into a list which will be written out to a txt file. I'm wondering which process would be more efficient: doing it as I stated above (writing it to a list and then writing that list out to a file) or writing out to a txt file directly from the procedure. I'm new to Scheme, so I apologize if I am not using the correct terminology.

    (define input '("3" "+" "4"))
    (define check
      (if (number? (car input))
          (write this out to a list or directly to a file)
          (check the rest of file)))

    Another question I had in mind: how can I make the check procedure recursive? I know it's a lot to ask, but I've been getting a little frustrated with checking out the methods that I have found on other sites. I really appreciate the help!
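
    A hedged sketch of one way to do both things at once (recursion and direct file output). One wrinkle worth noting: with input holding strings, (number? (car input)) is always #f; string->number is the usual test, returning the number for "3" and #f for "+". The filename below is an illustrative assumption:

        (define input '("3" "+" "4"))

        ;; Recurse down the list, writing only the numeric tokens to the port.
        (define (check tokens out)
          (cond ((null? tokens) 'done)
                ((string->number (car tokens))
                 (write (string->number (car tokens)) out)
                 (newline out)
                 (check (cdr tokens) out))
                (else (check (cdr tokens) out))))

        ;; call-with-output-file opens "out.txt" and hands the port to check.
        ;; (Note: many Schemes signal an error if the file already exists.)
        (call-with-output-file "out.txt"
          (lambda (out) (check input out)))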


  • So my girlfriend wants to learn to program [closed]

    - by vanstee
    Possible Duplicate: What programming language should be taught in Computer Science 101? My girlfriend hates feeling completely out of the loop when my friends and I talk about anything related to computers, so she asked me to teach her how to program. I'm pretty happy she asked, but I want to be able to teach her enough to know the basics without her completely losing interest or getting too frustrated. She is a very smart girl, probably smarter than me, but her computer related skills are pretty minimal. What language should I teach her and why?


  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'.

    I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks - there always is - but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. If I solely look at the Linq query, the code consuming the resultset of the 10 rows, and the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of that same total time T. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get plus you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, web service, Windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a web page with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Note what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet; we're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool for getting a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools we need: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code; they thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone. Remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most of the data is produced by code in the area under investigation. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you a good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), and they work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships one. For example, for SQL Server the profiler is shipped with SQL Server, and for Oracle it's built into the RDBMS; there are also 3rd party tools.

    Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important, because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low-level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate both support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important.
    The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to look at the .NET profiler results first, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

    Interpreting means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, in the future, when someone else looks at the application and starts asking questions, you can answer them properly; new analysis is only necessary if the situation changes.

    Fix

    After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications, it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results.

    After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.

    Performance tuning is about dealing with compromises and making choices: to use one feature over another, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data access and databases in general. In most cases it's not a big issue: alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

    - SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each Order row. This means you'll get, for the single list, not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders (see the sketch at the end of this post). SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    - Prefetch paths using many path nodes, or sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    - Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    - Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasource control used; paging on the grid alone is not enough, as that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging on the datareader, so it won't fetch all rows even if the query suggests it does.

    - Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    - Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse the data in memory to calculate a value.

    - Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    - Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly, to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and can thus be removed from the cache.

    - Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly, with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look, and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
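
    As a concrete illustration of the SELECT N+1 item above, here is a framework-agnostic C# sketch in plain ADO.NET rather than any O/R mapper API; the Orders/Customers schema and the connection string are illustrative assumptions:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        class SelectNPlusOneDemo
        {
            static void Main()
            {
                using (var conn = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
                {
                    conn.Open();

                    // The N+1 shape: one query for the orders...
                    var customerIds = new List<int>();
                    using (var rd = new SqlCommand("SELECT CustomerId FROM Orders", conn).ExecuteReader())
                        while (rd.Read())
                            customerIds.Add(rd.GetInt32(0));

                    // ...then one extra round trip per order row for its customer.
                    foreach (var id in customerIds)
                    {
                        var cmd = new SqlCommand("SELECT CompanyName FROM Customers WHERE CustomerId = @id", conn);
                        cmd.Parameters.AddWithValue("@id", id);
                        cmd.ExecuteScalar();
                    }

                    // The eager-loading idea: fetch related data in a constant
                    // number of queries - here, a single join in one round trip.
                    var joined = new SqlCommand(
                        "SELECT o.OrderId, c.CompanyName " +
                        "FROM Orders o JOIN Customers c ON c.CustomerId = o.CustomerId", conn);
                    using (var rd = joined.ExecuteReader())
                        while (rd.Read())
                            Console.WriteLine("{0}: {1}", rd.GetInt32(0), rd.GetString(1));
                }
            }
        }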


  • Emoti-phrases

    - by Tony Davis
    Surely the next radical step in the development of user-interface design is for applications to react appropriately to the rising tide of bewilderment, frustration or antipathy of their users. When an application understands that rapport is lost, it should respond accordingly. When we, for example, become confused by an unforgiving interface, shouldn't there be a way of signalling our bewilderment and having the application respond appropriately? There is surprisingly little in the current interface standards that would help.

    If we're getting frustrated with an unresponsive application, perhaps we could let it know of our increasing irritation by means of an "I'm getting angry and exasperated" slider. Although, by 'responding appropriately', I don't include playing a "we are experiencing unusually heavy traffic: your application usage is important to us" message, accompanied by calming muzak. When confronted with a tide of wizards, 'are you sure?' messages, or page after page of tiresome and barely-relevant options, how one yearns for a handy 'JFDI' (Just Flaming Do It) button. One click and the application miraculously desists with its annoying questions and just gets on with the job, using the defaults, or whatever we selected last time.

    Much more satisfying, and more direct to most developers and DBAs, however, would be the facility to communicate with the application via a twitter-style input field, or via parameters to command-line applications ("I don't want a wide-ranging debate with you; just open the bl**dy PDF!", or "Don't forget which of us has the close button"). Although, to avoid too much cultural dependence, perhaps we should take the lead from emoticons, and use a set of standardized emoti-phrases such as 'sez you', 'huh?', 'Pshaw!', or 'meh', which could be used to vent a range of feelings in any given application, whether it be SQL Server stubbornly refusing to give us the result we are expecting, or an online survey getting too personal. Or a 'Lingua Glaswegia' perhaps:

    'Atsabelter' ("very good")
    'Atspish' ("must try harder")
    'AnThenYerArsFellAff' ("I don't quite trust these results")
    'BileYerHeid' or 'ShutYerGub' ("please stop these inane questions")

    There would, of course, have to be an ANSI standards body to define the phrases that were acceptable. Presumably, there would be a tussle amongst the different international standards organizations. Meanwhile Oracle, Microsoft and Apple would each release non-standard extensions. Time then, surely, to plant emoti-phrases on the lot of them and develop a user-driven consensus. Send us your suggestions! The best one will win an iPod nano! Cheers! Tony.


  • Speed Up the Help Dialog in Windows and Office

    - by Matthew Guay
    When you click Help, you don't want to wait for your computer to bring it to you. Here's how you can speed up the Help dialog in Windows and Office.

    If you have a slow internet connection, chances are you've been frustrated by the Help dialog in Windows and Office trying to download fresh content every time you open it. This can be great if the updated help files contain better content, but sometimes you just want to find what you were looking for without waiting. Here's how you can turn off the automatic online help.

    Use Local Help in Windows

    Windows 7 and Vista's Help dialog usually tries to load the latest content from the net, but this can take a long time on slow connections. If you're seeing this a lot, you may want to switch to offline help. Click the "Online Help" button at the bottom, and select "Get offline Help". Now your computer will just load the pre-installed help files. And don't worry; if there's a major update to your help files, Windows will download and install it through Windows Update.

    Stupid Geek Tip: an easy way to open Windows Help is to click on your desktop or Start Menu and press F1 on your keyboard.

    Use Local Help in Office

    This same trick works in Office 2007 and 2010. We've actually had more problems with Office's help being tardy. Solve this the same way as with Windows help: click on the "Connected to Office.com" or "Connected to Office Online" button, depending on your version of Office, and select "Show content only from this computer". This will automatically change the settings for Help in all of your Office applications.

    While this may not be a major trick, it can be helpful, especially if you have a slow internet connection and want to get things done quickly.


  • Understanding the JSF Lifecycle and ADF Optimized Lifecycle

    - by Steven Davelaar
    While coaching ADF development teams over the years, I have noticed that many developers lack a basic understanding of Java Server Faces, in particular the JSF lifecycle and how ADF optimizes this lifecycle in specific situations. As a result, ADF developers who are tasked with building a seemingly simple ADF page can get extremely frustrated by the - in their eyes - unexpected or illogical behavior of ADF. They start to play with the immediate property and the partialTriggers property in a trial-and-error manner. Often, they play with these properties until their specific issue is solved, unaware of other, more severe bugs that might be introduced by the values they choose for these properties.

    So, I decided to submit a presentation for the UKOUG entitled "What you need to know about JSF to be successful with ADF". The abstract was accepted, and I started putting together the presentation and demo application. I built up the demo application step by step, trying to cover the JSF-related top issues and challenges I had encountered over the years in a simple "Hello World" demo. This turned out to be both a very time-consuming and very interesting journey. I had never thought I would learn so much myself in preparing this presentation. I never thought I would end up with potentially controversial conclusions like "Never set immediate=true on an editable component". I did not realize beforehand the sometimes immense implications of the ADF optimized lifecycle. I never thought that "Hello World" demos could get so complex. But as I went on, I was confident this was valuable material, even for experienced ADF developers with a good understanding of JSF.

    When I finished, I realized the original title and abstract were misleading, as was the target audience. Yes, it was covering the JSF lifecycle, but no other aspects of JSF you need to know for ADF development. Yes, it was covering some JSF basics as mentioned in the abstract, but all in all it had become a pretty advanced presentation. At the same time, the issues discussed are very common; novice ADF developers might easily run into them while building their first pages. I ran out of time, so I decided to just present what I had, apologizing at the beginning for the misleading title and showing a second slide with a better title: "18 invaluable lessons about ADF-JSF interaction". I think the presentation was well received overall, although people who don't like it or don't understand it usually don't come and tell you afterwards...

    I am still struggling with the title; for this blog post I used yet another title. Anyway, you can download the presentation-that-still-lacks-a-good-title here. The finished JDev 11.1.1.6 demo app can be downloaded here. The 18 lessons mentioned in the presentation are summarized here. As mentioned on the last slide: print out the lessons and learn them by heart, I am pretty sure it will save you lots of time and frustration!


  • SQL SERVER – Creating All New Database with Full Recovery Model

    - by pinaldave
    Sometimes, complex problems have very simple solutions. Let us see the following email which I received recently.

    "Hi Pinal,

    In our system when we create a new database, by default, it is created with the Simple Recovery Model. We have to manually change the recovery model after we create the database. We used the following simple T-SQL code: CREATE DATABASE dbname. We are very frustrated with this situation. We want all our databases to have the Full Recovery Model option by default. We are considering the following methods; please suggest the most efficient one among them.

    1) Creating a Policy; when it is violated, the database model can be fixed
    2) Triggers at Server Level
    3) Automated Job which goes through all the databases and checks their recovery model; if the DBA has not changed the model, then the job will list the Databases and change their recovery model

    Also, we have a situation where we need a database in the Simple Recovery Model as well - how to white-list them? Please suggest the best method."

    Indeed, an interesting email! The answer to their question, i.e., which is the best method to fit their needs (white list, default, etc.)? It will be NONE of the above. Here is the solution in one line and also the easiest way: just go to your Model database: Path in SSMS >> Databases >> System Databases >> model >> Right Click >> Properties >> Options >> Recovery Model - select Full from the dropdown.

    Every newly created database takes its base template from the Model Database. If you create a custom SP in the Model Database, when you create a new database, it will automatically exist in that database. Any database that was created before making changes in the Model Database will not be affected at all.

    Creating a Policy is also a good method, and I will blog about this in a separate blog post, but looking at the current specifications of the reader, I think the Model Database should be modified to have the Full Recovery option. While writing this blog post, I remembered another blog post of mine where the model database log file was growing drastically even though there were no transactions: SQL SERVER - Log File Growing for Model Database - model Database Log File Grew Too Big.

    NOTE: Please do not touch the Model Database unnecessarily. It is a strict "No." If you want to create an object that you need in all the databases, then instead of creating it in the model database, I suggest that you create a new database called maintenance and create the object there.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
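
    For completeness, the same fix as T-SQL rather than SSMS clicks (standard syntax on SQL Server 2005 and later):

        -- Make every future CREATE DATABASE inherit the Full recovery model.
        ALTER DATABASE model SET RECOVERY FULL;

        -- Verify, and spot any existing databases still running in Simple.
        SELECT name, recovery_model_desc FROM sys.databases;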

