Search Results

Search found 21592 results on 864 pages for 'custom task'.


  • Parallel.For Batching

    - by chibacity
    Is there built-in support in the TPL for batching operations? I was recently playing with a routine to carry out character replacement on a character array which required a lookup table i.e. transliteration: for (int i = 0; i < chars.Length; i++) { char replaceChar; if (lookup.TryGetValue(chars[i], out replaceChar)) { chars[i] = replaceChar; } } I could see that this could be trivially parallelized, so jumped in with a first stab which I knew would perform worse as the tasks were too fine-grained: Parallel.For(0, chars.Length, i => { char replaceChar; if (lookup.TryGetValue(chars[i], out replaceChar)) { chars[i] = replaceChar; } }); I then reworked the algorithm to use batching so that the work could be chunked onto different threads in less fine-grained batches. This made use of threads as expected and I got some near linear speed up. I'm sure that there must be built-in support for batching in the TPL. What is the syntax, and how do I use it? const int CharBatch = 100; int charLen = chars.Length; Parallel.For(0, ((charLen / CharBatch) + 1), i => { int batchUpper = ((i + 1) * CharBatch); for (int j = i * CharBatch; j < batchUpper && j < charLen; j++) { char replaceChar; if (lookup.TryGetValue(chars[j], out replaceChar)) { chars[j] = replaceChar; } } });
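
    For reference, a minimal sketch (not from the original post) of the built-in range partitioning in System.Collections.Concurrent, reusing chars and lookup from the question. Partitioner.Create hands each worker a contiguous (fromInclusive, toExclusive) range, so no hand-rolled batch arithmetic is needed:

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // chars (char[]) and lookup (e.g. Dictionary<char, char>) are the variables from the question.
    Parallel.ForEach(Partitioner.Create(0, chars.Length), range =>
    {
        for (int i = range.Item1; i < range.Item2; i++)
        {
            char replaceChar;
            if (lookup.TryGetValue(chars[i], out replaceChar))
            {
                chars[i] = replaceChar;
            }
        }
    });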

    Read the article

  • How to add default value on django save form?

    - by Ignacio
    I have an object Task and a form that saves it. I want to automatically assign the created_by field to the currently logged-in user. So, my view is this: def new_task(request, task_id=None): message = None if task_id is not None: task = Task.objects.get(pk=task_id) message = 'TaskOK' submit = 'Update' else: task = Task(created_by = GPUser(user=request.user)) submit = 'Create' if request.method == 'POST': # If the form has been submitted... form = TaskForm(request.POST, instance=task) if form.is_valid(): task = form.save(commit=False); task.created_by = GPUser(user=request.user) task.save() if message == None: message = 'taskOK' return tasks(request, message) else: form = TaskForm(instance=task) return custom_render('user/new_task.html', {'form': form, 'submit': submit, 'task_id':task.id}, request) The problem is, you guessed it, the created_by field doesn't get saved. Any ideas? Thanks

    Read the article

  • Does my GetEnumerator cause a deadlock?

    - by Scott Chamberlain
    I am starting to write my first parallel applications. This partitioner will enumerate over a IDataReader pulling chunkSize records at a time from the data-source. TLDR; version private object _Lock = new object(); public IEnumerator GetEnumerator() { var infoSource = myInforSource.GetEnumerator(); //Will this cause a deadlock if two threads lock (_Lock) //use the enumator at the same time? { while (infoSource.MoveNext()) { yield return infoSource.Current; } } } full code protected class DataSourcePartitioner<object[]> : System.Collections.Concurrent.Partitioner<object[]> { private readonly System.Data.IDataReader _Input; private readonly int _ChunkSize; public DataSourcePartitioner(System.Data.IDataReader input, int chunkSize = 10000) : base() { if (chunkSize < 1) throw new ArgumentOutOfRangeException("chunkSize"); _Input = input; _ChunkSize = chunkSize; } public override bool SupportsDynamicPartitions { get { return true; } } public override IList<IEnumerator<object[]>> GetPartitions(int partitionCount) { var dynamicPartitions = GetDynamicPartitions(); var partitions = new IEnumerator<object[]>[partitionCount]; for (int i = 0; i < partitionCount; i++) { partitions[i] = dynamicPartitions.GetEnumerator(); } return partitions; } public override IEnumerable<object[]> GetDynamicPartitions() { return new ListDynamicPartitions(_Input, _ChunkSize); } private class ListDynamicPartitions : IEnumerable<object[]> { private System.Data.IDataReader _Input; int _ChunkSize; private object _ChunkLock = new object(); public ListDynamicPartitions(System.Data.IDataReader input, int chunkSize) { _Input = input; _ChunkSize = chunkSize; } public IEnumerator<object[]> GetEnumerator() { while (true) { List<object[]> chunk = new List<object[]>(_ChunkSize); lock(_Input) { for (int i = 0; i < _ChunkSize; ++i) { if (!_Input.Read()) break; var values = new object[_Input.FieldCount]; _Input.GetValues(values); chunk.Add(values); } if (chunk.Count == 0) yield break; } var chunkEnumerator = chunk.GetEnumerator(); lock(_ChunkLock) //Will this cause a deadlock? { while (chunkEnumerator.MoveNext()) { yield return chunkEnumerator.Current; } } } } IEnumerator IEnumerable.GetEnumerator() { return ((IEnumerable<object[]>)this).GetEnumerator(); } } } I wanted IEnumerable object it passed back to be thread safe (the MSDN example was so I am assuming PLINQ and TPL could need it) will the lock on _ChunkLock near the bottom help provide thread safety or will it cause a deadlock? From the documentation I could not tell if the lock would be released on the yeld return. Also if there is built in functionality to .net that will do what I am trying to do I would much rather use that. And if you find any other problems with the code I would appreciate it.
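
    As an aside, a minimal sketch (illustrative names, not from the original post) of the usual thread-safe pattern: hold the lock only while copying the shared state, then yield outside it. A lock statement compiles to try/finally, and the finally does not run at a yield return, so a lock held across the yield stays held for as long as the consumer keeps that enumerator alive, which typically serializes or stalls other consumers rather than deadlocking outright.

    using System.Collections;
    using System.Collections.Generic;

    class SnapshotSequence : IEnumerable<int>
    {
        private readonly object _lock = new object();
        private readonly List<int> _shared = new List<int>();

        public void Add(int value) { lock (_lock) { _shared.Add(value); } }

        public IEnumerator<int> GetEnumerator()
        {
            int[] snapshot;
            lock (_lock)
            {
                snapshot = _shared.ToArray(); // short critical section: copy, then release
            }
            foreach (var item in snapshot)    // no lock is held while the consumer iterates
            {
                yield return item;
            }
        }

        IEnumerator IEnumerable.GetEnumerator()
        {
            return GetEnumerator();
        }
    }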

    Read the article

  • Exposing some live data on a website - New to ASP.NET, need guidelines

    - by Carlos
    I have a large .NET based system running within the company intranet, which allows winforms users to see some live calculated numbers and a custom winforms control drawn live (but not real-time). The Forms users can also affect the operation of the system. I would like to just show the live numbers on a website, along with the custom control. Nothing needs to come back from the web user, as the web app is meant to be just for monitoring. All the numbers can be calculated at the server. It's been a long time since I touched ASP.NET, and I need to know how to proceed. What are the steps in building and deploying such a website? Any caveats I need to look out for?

    Read the article

  • Import orders file on magento enterprise or community (product with customs options)

    - by wil
    Hello, we need to import an orders file into Magento Enterprise. In our file, products contain custom options. We tried to make an extension, but we have some problems importing custom options: the import of standard products is successful, but not for products with custom options. For custom options, the "info_buyRequest" value is missing in the database. Magento's technical support responded, "the import process currently can't handle importing products with custom options". Magento uses custom options when ordering a product with custom options on the website. What features does Magento use to fill in the fields "info_buyRequest" and "Product_options" when ordering? Have you seen an extension pack for importing order files with products that contain custom options? Thanks for your help.

    Read the article

  • Android: How to make launcher always open the main activity instead of child activity? (or otherwise

    - by yuku
    I have activities A and B. A is the one with the LAUNCHER intent-filter (i.e. the activity that is started when we click the app icon on the home screen). A launches B using startActivity(new Intent(A.this, B.class)). When the user has the B activity open, then puts my application into the background, and my application's process is later killed, B is opened instead of A when the user starts my application again. This causes a force close in my app, because A is the activity that initializes the resources my app needs, and when B tries to access the uninitialized resources, B crashes. Do you have any suggestions on what I should do in this situation?

    Read the article

  • Improve performance writing 10 million records to text file using windows service

    - by user1039583
    I'm fetching more than 10 million records from a database and writing them to a text file. It takes hours to complete this operation. Is there any option to use TPL features here? It would be great if someone could get me started implementing this with the TPL. using (FileStream fStream = new FileStream("d:\\file.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite)) { BufferedStream bStream = new BufferedStream(fStream); TextWriter writer = new StreamWriter(bStream); for (int i = 0; i < 100000000; i++) { writer.WriteLine(i); } bStream.Flush(); writer.Flush(); // empty buffer; fStream.Flush(); }
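
    As a hedged sketch (the file path comes from the question; everything else is illustrative): writing a single file is largely disk-bound, so the TPL mainly helps when the per-record work is expensive. This version keeps one task that owns the file and feeds it through a BlockingCollection, which is where any parallel record production (or the database read) could be plugged in.

    using System.Collections.Concurrent;
    using System.IO;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            // Bounded so the producer cannot run arbitrarily far ahead of the disk.
            var lines = new BlockingCollection<string>(boundedCapacity: 100000);

            // A single consumer owns the file handle, so writes stay ordered.
            Task writerTask = Task.Factory.StartNew(() =>
            {
                using (var writer = new StreamWriter(@"d:\file.txt"))
                {
                    foreach (string line in lines.GetConsumingEnumerable())
                        writer.WriteLine(line);
                }
            }, TaskCreationOptions.LongRunning);

            // Producer: trivial here, but expensive per-record formatting could be
            // done by several tasks that all call lines.Add(...).
            for (int i = 0; i < 10000000; i++)
                lines.Add(i.ToString());

            lines.CompleteAdding();
            writerTask.Wait();
        }
    }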

    Read the article

  • How to copy a folder recursively with out overwriting the previous one

    What I need: I have linked my project with CruiseControl, so whenever a build happens I want to copy the bin folder to a separate destination folder with a version number. That is, when the project build happens for the second time I don't want to replace the bin folder of the first build; I want to save it with another version number. How can I do that? Right now I have worked out how to copy the folder, but it overwrites the previous one. I don't want that to happen; please help me implement the concept of versioning.
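
    A hedged C# sketch (the paths and version string are placeholders, not from the original post) of copying a bin folder recursively into a fresh, version-stamped destination so earlier builds are never overwritten:

    using System;
    using System.IO;

    class CopyBinWithVersion
    {
        static void Main(string[] args)
        {
            // Placeholder paths/version; in a CruiseControl.NET setup these would come
            // from build properties or command-line arguments.
            string source = @"C:\project\bin";
            string version = args.Length > 0 ? args[0] : DateTime.Now.ToString("yyyyMMdd.HHmmss");
            string destination = Path.Combine(@"D:\builds", "bin_" + version);

            CopyDirectory(source, destination);
        }

        static void CopyDirectory(string sourceDir, string destDir)
        {
            Directory.CreateDirectory(destDir);

            foreach (string file in Directory.GetFiles(sourceDir))
                File.Copy(file, Path.Combine(destDir, Path.GetFileName(file)), overwrite: false);

            foreach (string dir in Directory.GetDirectories(sourceDir))
                CopyDirectory(dir, Path.Combine(destDir, Path.GetFileName(dir)));
        }
    }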

    Read the article

  • Home based business would like customers to schedule via website the time, day and date they want to take a class.

    - by Alessandro Machi
    I'm using Google Blogger. I want to add thumbnail images of different classes I will be offering in my home film/video/sound/lighting studio. The idea is that the prospective student visits my website, sees a class they want to take, and clicks the thumbnail to first read a descriptive article about the class, at which point they can schedule the class for the time, day, and date of their choosing between the hours of 5am and 9pm, 365 days a year. As soon as the student has input the time, day and date of the class they want, they would go to a checkout page to purchase the class time. The student would then be sent an email confirmation along with the exact location, the class name, and the time and date they selected. I was thinking of using Dwolla for the checkout page because Dwolla offers either no fee or 25 cents per payment transaction, but I'm not sure I can hook up to them easily enough. My blog site is not finished by a long shot. I still have to actually input all of the class thumbnail images along with descriptions, but if you need to see what the page looks like, the web address is http://www.myalexlogic.com Google Blogger allows third-party code to be added within movable gadgets.

    Read the article

  • What method of UIView gets called when instantiated from a NIB?

    - by retailevolved
    I have a simple custom view that is connected via an outlet to a NIB. For this particular view, there are actions that I would like to perform on the view when it is initialized, no matter what NIB it is on. Trouble is, neither the (id)init nor the (id)initWithFrame:(CGRect)frame methods are getting called on the custom view. Which method gets called on a UIView when it is instantiated from a NIB? I would just use the view controller's viewDidLoad method except that this particular view appears on a lot of different NIBs.

    Read the article

  • MySQL UpdateXML with automatic node inserting?

    - by JimPsr
    I am writing an application that supports custom fields. Currently I store all custom fields in an XML-formatted text field (e.g. '<root><field1>val1</field1><field2>val2</field2></root>' in cust_field). I am able to use updateXML(cust_field, '/root/field1', '<field1>new value</field1>') to update those values, however if I use updateXML(cust_field, '/root/field3', '<field3>new value</field3>') then it does not work since field3 is not in the old value. Is there a way to let MySQL automatically insert the new field3 node and its value into cust_field? I am thinking about a stored procedure or even a stored function but am not familiar with either; can anyone point me in the right direction?

    Read the article

  • Wordpress meta data is written on top of page instead of the loop

    - by Fruxelot
    I'm building a WordPress webpage based on the Skeleton WordPress theme. I have 2 posts showing on a page and each of these posts has custom field values (meta data). I'm using the shortcode from the Skeleton theme to get a post feed from a specific category, and in that loop I have inserted this tag that displays the custom field data: <?php the_meta(); ?> I am getting the data - but the problem is, the data is shown on TOP of the page instead of inside the post. What could I possibly have done wrong? Or is it something with Skeleton I am doing wrong? Webpage: http://visbyfangelse.se.preview.binero.se/rum-priser-preview/ As you can see, two posts are shown - and the meta data is shown on the top of the page. Code for the loop: http://pastebin.com/mRQY5GNz As you can see, I want the meta displayed in the div to which I assigned the class "my_room_meta".

    Read the article

  • How to detect invalid user input in a Batch File?

    - by user2975367
    I want to use a batch file to ask for a password to continue; I have very simple code that works. @echo off :Begin cls echo. echo Enter Password set /p pass= if %pass%==Password goto Start :Start cls echo What would you like me to do? (Date/Chrome/Lock/Shutdown/Close) set /p task= if %task%==Date goto Task=Date if %task%==Chrome goto Task=Chrome if %task%==Lock goto Task=Lock if %task%==Shutdown goto Task=Shutdown if %task%==Close goto Task=Close I need to detect when the user has entered an invalid password; I have spent an hour researching but found nothing. I'm not advanced in any way, so try and keep it very simple like the code above. Please help me.

    Read the article

  • how to use q.js promises to work with multiple asynchronous operations

    - by kimsia
    Note: This question is also cross-posted to the Q.js mailing list over here. I had a situation with multiple asynchronous operations and the answer I accepted pointed out that using Promises with a library such as q.js would be more beneficial. I am convinced to refactor my code to use Promises, but because the code is pretty long, I have trimmed the irrelevant portions and exported the crucial parts into a separate repo. The repo is here and the most important file is this. The requirement is that I want pageSizes to be non-empty after traversing all the dragged-and-dropped files. The problem is that the FileAPI operations inside the getSizeSettingsFromPage function cause getSizeSettingsFromPage to be async. So I cannot place checkWhenReady(); like this. function traverseFiles() { for (var i=0, l=pages.length; i<l; i++) { getSizeSettingsFromPage(pages[i], calculateRatio); } checkWhenReady(); // this always returns 0. } This works, but it is not ideal. I prefer to call checkWhenReady just ONCE after all the pages have undergone the calculateRatio function successfully. function calculateRatio(width, height, filename) { // .... code pageSizes.add(filename, object); checkWhenReady(); // this works but it is not ideal. I prefer to call this method AFTER all the `pages` have undergone calculateRatio // ..... more code... } How do I refactor the code to make use of Promises in Q.js?

    Read the article

  • Not really a question... but I need help

    - by Dan F.
    I have to write a process in Oracle/PL/SQL. I have to verify that the time interval between start_date and end_date of a new row that I create does not intersect the start_dates and end_dates of other rows. I need to check each row for that condition, and if it is violated, the loop should stop and display a message such as "The interval of time given is not correct". I don't know how to write loops in Oracle/PL/SQL and I would appreciate it if you would help me.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
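
    To make the .ToList()-inside-a-Linq-query pitfall from the list above concrete, here is a minimal sketch (metaData.Customers and the Customer type stand for an IQueryable<Customer> exposed by whatever O/R mapper is in use; the names are taken from the article's own example):

    // In-memory filtering: ToList() materializes every customer first, and the
    // where clause then runs over IEnumerable<T> on the client.
    var slow = from c in metaData.Customers.ToList()
               where c.Country == "Norway"
               select c;

    // Database-side filtering: the predicate stays in the IQueryable<T> expression
    // tree, so the O/R mapper can translate it into a WHERE clause in the SQL query.
    var fast = from c in metaData.Customers
               where c.Country == "Norway"
               select c;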

    Read the article

  • Using PHP Redirect Script together with Custom Fields (WordPress)? [on hold]

    - by Alex Scherer
    I am currently trying to make Yoast's link cloaking script (Yoast.com script manual // Github Script files) work together with the WordPress plugin Advanced Custom Fields. The script fetches 2 values (redirect id, redirect url) via GET and then redirects to the particular URL that is defined in a .txt file called redirects.txt. I would like to change the script so that I can define both the id and the redirection URL via custom fields on each post in my WP dashboard. I would be really happy if someone could help me code something that does the same as the script above, but gets those values from custom fields instead of saving them in a redirects.txt file. Best regards! Alex

    Read the article

  • Android: Layouts and views or a single full screen custom view?

    - by futlib
    I'm developing an Android game, and I'm making it so that it can run on low end devices without GPU, so I'm using the 2D API. I have so far tried to use Android's mechanisms such as layouts and activities where possible, but I'm beginning to wonder if it's not easier to just create a single custom view (or one per activity) and do all the work there. Here's an example of how I currently do things: I'm using a layout to display the game's background as an image view and the square game area, which is a custom view, centered in the middle. What would you say? Should I continue to use layouts where possible or is it more common/reasonable to just use a large custom view? I'm thinking that this would probably also make it easier to port my code to other platforms.

    Read the article

  • Using Microsoft.Reporting.WebForms.ReportViewer in a custom SharePoint WebPart

    - by iHeartDucks
    I have a requirement where I have to display some data (from a custom db) and let the user export it to an excel file. I decided to use the ReportViewer control present in Microsoft.Reporting.WebForms.ReportViewer. I added the required assembly to the project references and I add the following code to test it out protected override void CreateChildControls() { base.CreateChildControls(); objRV = new Microsoft.Reporting.WebForms.ReportViewer(); objRV.ID = "objRV"; this.Controls.Add(objRV); } The first error asked me to add this line in the web.config which I did and the next error says The type 'Microsoft.SharePoint.Portal.Analytics.UI.ReportViewerMessages, Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' does not implement IReportViewerMessages Is it possible to use ReportViewer in my custom Web Part? I rather not bind a repeater and write my own export to excel code. I want to use something which is already built by Microsoft? Any ideas on what I can reuse? Edit I commented the following line <add key="ReportViewerMessages"... and now my code looks like this after I added a data source to it protected override void CreateChildControls() { base.CreateChildControls(); objRV = new Microsoft.Reporting.WebForms.ReportViewer(); objRV.ID = "objRV"; objRV.Visible = true; Microsoft.Reporting.WebForms.ReportDataSource datasource = new Microsoft.Reporting.WebForms.ReportDataSource("test", GroupM.Common.DB.GetAllClientCodes()); objRV.LocalReport.DataSources.Clear(); objRV.LocalReport.DataSources.Add(datasource); objRV.LocalReport.Refresh(); this.Controls.Add(objRV); } but now I do not see any data on the page. I did check my db call and it does return a data table with 15 rows. Any ideas why I don't see anything on the page?

    Read the article

  • ASP.NET MVC2 Implementing Custom RoleManager problem

    - by ile
    To create a custom membership provider I followed these instructions: http://stackoverflow.com/questions/2771094/asp-net-mvc2-custom-membership and these: http://mattwrock.com/post/2009/10/14/Implementing-custom-Membership-Provider-and-Role-Provider-for-Authinticating-ASPNET-MVC-Applications.aspx So far, I've managed to implement custom membership provider and that part works fine. RoleManager still needs some modifications... Project structure: SAMembershipProvider.cs: public class SAMembershipProvider : MembershipProvider { #region - Properties - private int NewPasswordLength { get; set; } private string ConnectionString { get; set; } public bool enablePasswordReset { get; set; } public bool enablePasswordRetrieval { get; set; } public bool requiresQuestionAndAnswer { get; set; } public bool requiresUniqueEmail { get; set; } public int maxInvalidPasswordAttempts { get; set; } public int passwordAttemptWindow { get; set; } public MembershipPasswordFormat passwordFormat { get; set; } public int minRequiredNonAlphanumericCharacters { get; set; } public int minRequiredPasswordLength { get; set; } public string passwordStrengthRegularExpression { get; set; } public override string ApplicationName { get; set; } public override bool EnablePasswordRetrieval { get { return enablePasswordRetrieval; } } public override bool EnablePasswordReset { get { return enablePasswordReset; } } public override bool RequiresQuestionAndAnswer { get { return requiresQuestionAndAnswer; } } public override int MaxInvalidPasswordAttempts { get { return maxInvalidPasswordAttempts; } } public override int PasswordAttemptWindow { get { return passwordAttemptWindow; } } public override bool RequiresUniqueEmail { get { return requiresUniqueEmail; } } public override MembershipPasswordFormat PasswordFormat { get { return passwordFormat; } } public override int MinRequiredPasswordLength { get { return minRequiredPasswordLength; } } public override int MinRequiredNonAlphanumericCharacters { get { return minRequiredNonAlphanumericCharacters; } } public override string PasswordStrengthRegularExpression { get { return passwordStrengthRegularExpression; } } #endregion #region - Methods - public override void Initialize(string name, NameValueCollection config) { throw new NotImplementedException(); } public override bool ChangePassword(string username, string oldPassword, string newPassword) { throw new NotImplementedException(); } public override bool ChangePasswordQuestionAndAnswer(string username, string password, string newPasswordQuestion, string newPasswordAnswer) { throw new NotImplementedException(); } public override MembershipUser CreateUser(string username, string password, string email, string passwordQuestion, string passwordAnswer, bool isApproved, object providerUserKey, out MembershipCreateStatus status) { throw new NotImplementedException(); } public override bool DeleteUser(string username, bool deleteAllRelatedData) { throw new NotImplementedException(); } public override MembershipUserCollection FindUsersByEmail(string emailToMatch, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } public override MembershipUserCollection FindUsersByName(string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } public override MembershipUserCollection GetAllUsers(int pageIndex, int pageSize, out int totalRecords) { throw new NotImplementedException(); } public override int GetNumberOfUsersOnline() { throw new 
NotImplementedException(); } public override string GetPassword(string username, string answer) { throw new NotImplementedException(); } public override MembershipUser GetUser(object providerUserKey, bool userIsOnline) { throw new NotImplementedException(); } public override MembershipUser GetUser(string username, bool userIsOnline) { throw new NotImplementedException(); } public override string GetUserNameByEmail(string email) { throw new NotImplementedException(); } protected override void OnValidatingPassword(ValidatePasswordEventArgs e) { base.OnValidatingPassword(e); } public override string ResetPassword(string username, string answer) { throw new NotImplementedException(); } public override bool UnlockUser(string userName) { throw new NotImplementedException(); } public override void UpdateUser(MembershipUser user) { throw new NotImplementedException(); } public override bool ValidateUser(string username, string password) { AccountRepository accountRepository = new AccountRepository(); var user = accountRepository.GetUser(username); if (string.IsNullOrEmpty(password.Trim())) return false; if (user == null) return false; //string hash = EncryptPassword(password); var email = user.Email; var pass = user.Password; if (user == null) return false; if (pass == password) { //User = user; return true; } return false; } #endregion protected string EncryptPassword(string password) { //we use codepage 1252 because that is what sql server uses byte[] pwdBytes = Encoding.GetEncoding(1252).GetBytes(password); byte[] hashBytes = System.Security.Cryptography.MD5.Create().ComputeHash(pwdBytes); return Encoding.GetEncoding(1252).GetString(hashBytes); } } SARoleProvider.cs public class SARoleProvider : RoleProvider { AccountRepository accountRepository = new AccountRepository(); public override bool IsUserInRole(string username, string roleName) { return true; } public override string ApplicationName { get { throw new NotImplementedException(); } set { throw new NotImplementedException(); } } public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); } public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotImplementedException(); } public override void CreateRole(string roleName) { throw new NotImplementedException(); } public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotImplementedException(); } public override bool RoleExists(string roleName) { throw new NotImplementedException(); } public override string[] GetRolesForUser(string username) { int rolesCount = 0; IQueryable<RoleViewModel> rolesNames; try { // get roles for this user from DB... 
rolesNames = accountRepository.GetRolesForUser(username); rolesCount = rolesNames.Count(); } catch (Exception ex) { throw ex; } string[] roles = new string[rolesCount]; int counter = 0; foreach (var item in rolesNames) { roles[counter] = item.RoleName.ToString(); counter++; } return roles; } public override string[] GetUsersInRole(string roleName) { throw new NotImplementedException(); } public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotImplementedException(); } public override string[] GetAllRoles() { throw new NotImplementedException(); } } AccountRepository.cs public class RoleViewModel { public string RoleName { get; set; } } public class AccountRepository { private DB db = new DB(); public User GetUser(string email) { return db.Users.SingleOrDefault(d => d.Email == email); } public IQueryable<RoleViewModel> GetRolesForUser(string email) { var result = ( from role in db.Roles join user in db.Users on role.RoleID equals user.RoleID where user.Email == email select new RoleViewModel { RoleName = role.Name }); return result; } } webconfig <membership defaultProvider="SAMembershipProvider" userIsOnlineTimeWindow="15"> <providers> <clear/> <add name="SAMembershipProvider" type="SA_Contacts.Membership.SAMembershipProvider, SA_Contacts" connectionStringName ="ShinyAntConnectionString" /> </providers> </membership> <roleManager defaultProvider="SARoleProvider" enabled="true" cacheRolesInCookie="true"> <providers> <clear/> <add name="SARoleProvider" type="SA_Contacts.Membership.SARoleProvider" connectionStringName ="ShinyAntConnectionString" /> </providers> </roleManager> AccountController.cs: public class AccountController : Controller { SAMembershipProvider provider = new SAMembershipProvider(); AccountRepository accountRepository = new AccountRepository(); public AccountController() { } public ActionResult LogOn() { return View(); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult LogOn(string userName, string password, string returnUrl) { if (!ValidateLogOn(userName, password)) { return View(); } var user = accountRepository.GetUser(userName); var userFullName = user.FirstName + " " + user.LastName; FormsAuthentication.SetAuthCookie(userFullName, false); if (!String.IsNullOrEmpty(returnUrl) && returnUrl != "/") { return Redirect(returnUrl); } else { return RedirectToAction("Index", "Home"); } } public ActionResult LogOff() { FormsAuthentication.SignOut(); return RedirectToAction("Index", "Home"); } private bool ValidateLogOn(string userName, string password) { if (String.IsNullOrEmpty(userName)) { ModelState.AddModelError("username", "You must specify a username."); } if (String.IsNullOrEmpty(password)) { ModelState.AddModelError("password", "You must specify a password."); } if (!provider.ValidateUser(userName, password)) { ModelState.AddModelError("_FORM", "The username or password provided is incorrect."); } return ModelState.IsValid; } } In some testing controller I have following: [Authorize] public class ContactsController : Controller { SAMembershipProvider saMembershipProvider = new SAMembershipProvider(); SARoleProvider saRoleProvider = new SARoleProvider(); // // GET: /Contact/ public ActionResult Index() { string[] roleNames = Roles.GetRolesForUser("[email protected]"); // Outputs admin ViewData["r1"] = roleNames[0].ToString(); // Outputs True // I'm not even sure if this method is the same as the one below ViewData["r2"] = Roles.IsUserInRole("[email protected]", roleNames[0].ToString()); // Outputs True ViewData["r3"] = 
    saRoleProvider.IsUserInRole("[email protected]", "admin"); return View(); } If I use the attribute [Authorize] then everything works OK, but if I use [Authorize(Roles="admin")] then the user is always rejected, as if he is not in the role. Any idea what could be wrong here? Thanks in advance, Ile

    Read the article

  • How to add custom SOAP-Header element to the generated WSDL in Spring-WS

    - by Petr Macek
    Hi, we are migrating from WebLogic web services to Spring-WS (1.5.X). There is currently one issue we are facing: from the Spring-WS powered service we need to pass a context object (on WLS it is passed as a SOAP-Header element) to other services that are still running on WLS. The header element is still formulated on the client side, and the newly created WS (Spring-WS) should just pass it on to the other services. I can imagine how the custom element would be passed: override the doWithMessage(WebServiceMessage message) method... Is there a way to generate the WSDL with the help of DefaultWsdl11Definition so that it contains that custom header element? See the example: <wsdl:operation name="GetSomeInformation"> <soap:operation soapAction="http://www.dummyservice.com/InformationService/GetSomeInformation" /> <wsdl:input> <soap:body use="literal" /> <soap:header message="ctx:ServiceContextMessage" part="serviceContext" use="literal" /> </wsdl:input> <wsdl:output> <soap:body use="literal" /> </wsdl:output> <wsdl:fault name="Error"> <soap:fault name="Error" use="literal" /> </wsdl:fault> </wsdl:operation> Thanks for your help

    Read the article
