Search Results

Search found 10838 results on 434 pages for 'adf task flow'.


  • Home-based business would like customers to schedule via website the time, day, and date they want to take a class.

    - by Alessandro Machi
    I'm using Google Blogger. I want to add thumbnail images of the different classes I will be offering in my home film/video/sound/lighting studio. The idea is that a prospective student visits my website, sees a class they want to take, and clicks the thumbnail to first read a descriptive article about the class, at which point they can schedule the class for the time, day, and date of their choosing between the hours of 5am and 9pm, 365 days a year. As soon as the student has input the time, day, and date of the class they want, they would go to a checkout page to purchase the class time. The student would then be sent an email confirmation along with the exact location, the class name, and the time and date they selected. I was thinking of using Dwolla for the checkout page because Dwolla charges either no fee or 25 cents per payment transaction, but I'm not sure I can hook up to them easily enough. My blog site is not finished by a long shot. I still have to actually input all of the class thumbnail images along with descriptions, but if you need to see what the page looks like, the web address is http://www.myalexlogic.com. Google Blogger allows third-party code to be added within movable gadgets.

    Read the article

  • How to detect invalid user input in a Batch File?

    - by user2975367
    I want to use a batch file to ask for a password before continuing; I have very simple code that works. @echo off :Begin cls echo. echo Enter Password set /p pass= if %pass%==Password goto Start :Start cls echo What would you like me to do? (Date/Chrome/Lock/Shutdown/Close) set /p task= if %task%==Date goto Task=Date if %task%==Chrome goto Task=Chrome if %task%==Lock goto Task=Lock if %task%==Shutdown goto Task=Shutdown if %task%==Close goto Task=Close I need to detect when the user enters an invalid password; I have spent an hour researching but found nothing. I'm not advanced in any way, so please keep it very simple like the code above. Please help me.

    Read the article

  • How to use Q.js promises to work with multiple asynchronous operations

    - by kimsia
    Note: This question is also cross-posted in the Q.js mailing list over here. I had a situation with multiple asynchronous operations, and the answer I accepted pointed out that using promises with a library such as Q.js would be more beneficial. I am convinced to refactor my code to use promises, but because the code is pretty long, I have trimmed the irrelevant portions and exported the crucial parts into a separate repo. The repo is here and the most important file is this. The requirement is that I want pageSizes to be non-empty after traversing all the dragged-and-dropped files. The problem is that the FileAPI operations inside the getSizeSettingsFromPage function cause getSizeSettingsFromPage to be async, so I cannot place checkWhenReady(); like this. function traverseFiles() { for (var i=0, l=pages.length; i<l; i++) { getSizeSettingsFromPage(pages[i], calculateRatio); } checkWhenReady(); // this always returns 0. } The following works, but it is not ideal: I prefer to call checkWhenReady just ONCE after all the pages have undergone calculateRatio successfully. function calculateRatio(width, height, filename) { // .... code pageSizes.add(filename, object); checkWhenReady(); // this works but it is not ideal. I prefer to call this method AFTER all the `pages` have undergone calculateRatio // ..... more code... } How do I refactor the code to make use of promises in Q.js?

    Read the article

  • Not really a question... but I need help

    - by Dan F.
    I have to write a procedure in Oracle PL/SQL. I have to verify that the interval of time between start_date and end_date in a new row that I create does not intersect the start_date/end_date intervals of other rows. Now I need to check each row for that condition, and if it doesn't hold, the loop should stop and then display a message such as "The interval of time given is not correct". I don't know how to write loops in Oracle PL/SQL and I would appreciate it if you would help me.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same way as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example, for SQL Server the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take place, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e.
don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it takes to complete the task depends on how long it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which causes the query to run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T> (a corrected version of this query is sketched at the end of this post). Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as well as knowing what's going on inside the application you built.
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
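As an appendix to the .ToList() point above, here is a minimal sketch (not taken from the original post) contrasting a filter applied after .ToList(), which runs in memory via Linq to Objects, with the same filter kept on the IQueryable so the O/R mapper can translate it to SQL. The Customer type and the customers parameter are placeholder assumptions standing in for whatever entity and Linq-enabled context your mapper exposes.

    using System;
    using System.Linq;

    // Sketch only: Customer and the customers parameter are assumptions standing in
    // for whatever entity type and Linq-enabled context the O/R mapper exposes.
    public class Customer
    {
        public string Country { get; set; }
        public string CompanyName { get; set; }
    }

    public static class ToListPitfall
    {
        public static void Compare(IQueryable<Customer> customers)
        {
            // Bad: ToList() executes the query immediately, pulling every customer into
            // memory; the Where clause then runs client-side as Linq to Objects.
            var inMemory = customers.ToList().Where(c => c.Country == "Norway").ToList();

            // Better: keep the filter on the IQueryable so the provider translates it to
            // SQL and only the matching rows are fetched and materialized.
            var inDatabase = customers.Where(c => c.Country == "Norway").ToList();

            Console.WriteLine("{0} vs {1} customers materialized", inMemory.Count, inDatabase.Count);
        }
    }

In a profiler run, the first form typically shows up as one large fetch plus client-side CPU time, while the second produces a single, much smaller query.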

    Read the article

  • InSync12 and Australia Visits: UX is Global, Regional, Everywhere!

    - by ultan o'broin
    I attended the Australian Oracle User Group (AUSOUG) and Quest International User Group's InSync12 event in Melbourne, Australia: the user group conference for Oracle products in the ANZ region. I demoed Oracle Fusion Applications and then presented how Oracle crafted the world-class Fusion Apps user experience (UX). I explained the Oracle user experience design pattern strategy of uptake for all apps, not just Fusion, and what our UX pattern externalization strategy means for customers, partners, and ADF developers. A great conference with lots of energy; the InSync12 highlights for me were Oracle's Senior Vice President Cliff Godwin's fast-moving Oracle E-Business Suite (EBS) roadshow with the killer Oracle Endeca user experience uptake, and Oracle ADF product outreachmeister Chris Muir's (@chriscmuir) session on the Oracle ADF Mobile solution and his hands-on mobile app development showing how existing ADF/JDev skills can build a secure, code once-deploy-to-many-device hybrid app solution in minutes. Cliff Godwin shows off the Oracle Endeca integration with Oracle E-Business Suite. Chris Muir talked the talk and then walked the walk with Oracle ADF Mobile. Applications UX was mixing it up with the crowd at InSync12 too, showing off cool mobile UX solutions, gathering data for future innovations, and engaging with EBS, JD Edwards, and PeopleSoft apps customers and partners. User conferences such as InSync12 are an important part of our Oracle Applications UX user-centered design process, giving real apps users the opportunity to make real inputs and giving us a way to watch and listen to their needs and wants and get views on current and emerging UX too. Eric Stilan (@icondaddy) of Applications UX uses an iPad to gather feedback on the latest UX designs from conference attendees. While in Melbourne, I also visited impressive Oracle partner Callista for a major ADF and UX pow-wow, and was the, er, star of a very proactive event hosted by another partner, Park Lane Information Technology (coordinated by Bambi Price (@bambiprice) of ODTUG), where I explained what UX is about and how partners and customers can engage, participate, and deploy that Applications UX scientific insight to advantage for their entire business. I also paired up with Oracle Australia in Sydney to visit key customers while there, and back at Oracle in Melbourne I spoke with sales consultants and account managers about regional opportunities and UX strategy, and came away with an understanding of what makes the Oracle market tick in Australia. Mobile worker solution development and user experience are hot news in Australia, and this was a great opportunity to team up with Chris Muir and show how the alignment of the twin stars of UX design patterns and ADF technology enables developers to make great-looking, usable apps that really sparkle. Our UX design patterns--or functional (UI) patterns, to use the developer world language--mean that developers now have not only a great tool set to build apps on Oracle ADF/FMW but proven, tested usability solutions to solve common problems that they can apply in the IDE too. In all, a whirlwind UX visit, packed with events and delivery opportunities, and all too short a time in the wonderful city of Melbourne. I need to get back there soon! For those who need a reminder, there's a website explaining how to get involved with, and participate in, Applications User Experience (including the Oracle Usability Advisory Board) events and programs.
Thank you to AUSOUG, Quest, InSync, Callista, Park Lane IT, everyone at Oracle Australia, Chris Muir, and all the other people who came together to make this a productive visit. Stay tuned for more UX developments and engagements in the region on the Oracle VoX blog and Usable Apps website too!

    Read the article

  • How to run an async task every x mins in Android?

    - by Shan
    How do I run the async task at a specific time? (I want to run it every 2 mins.) I tried using postDelayed but it's not working: tvData.postDelayed(new Runnable(){ @Override public void run() { readWebpage(); }}, 100); In the above code, readWebpage is the function which calls the async task for me. Right now, below is the method which I am using: public void onCreate(Bundle savedInstanceState) { readWebpage(); } public void readWebpage() { DownloadWebPageTask task = new DownloadWebPageTask(); task.execute("http://www.google.com"); } private class DownloadWebPageTask extends AsyncTask<String, Void, String> { @Override protected String doInBackground(String... urls) { String response1 = ""; response1=read(); //read is another function which does the real work response1=read(); super.onPostExecute(response1); return response1; } protected void onPostExecute(String result) { try { Thread.sleep(100); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } TextView tvData = (TextView) findViewById(R.id.TextView01); tvData.setText(result); DownloadWebPageTask task = new DownloadWebPageTask(); task.execute(new String[] { "http://www.google.com" }); } } This is what my code is and it works perfectly fine, but the big problem is that it drains my battery.

    Read the article

  • Importing CSV file to SQL Server...

    - by sam
    HI guys, I am trying to import CSV file to SQL server database, no success, I am still newbie to sql server, thanks Operation stopped... Initializing Data Flow Task (Success) Initializing Connections (Success) Setting SQL Command (Success) Setting Source Connection (Success) Setting Destination Connection (Success) Validating (Success) Messages Warning 0x80049304: Data Flow Task 1: Warning: Could not open global shared memory to communicate with performance DLL; data flow performance counters are not available. To resolve, run this package as an administrator, or on the system's console. (SQL Server Import and Export Wizard) Prepare for Execute (Success) Pre-execute (Success) Messages Information 0x402090dc: Data Flow Task 1: The processing of file "D:\test.csv" has started. (SQL Server Import and Export Wizard) Executing (Error) Messages Error 0xc002f210: Drop table(s) SQL Task 1: Executing the query "drop table [dbo].[test] " failed with the following error: "Cannot drop the table 'dbo.test', because it does not exist or you do not have permission.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (SQL Server Import and Export Wizard) Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column ""Code"" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard) Error 0xc020902a: Data Flow Task 1: The "output column ""Code"" (38)" failed because truncation occurred, and the truncation row disposition on "output column ""Code"" (38)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard) Error 0xc0202092: Data Flow Task 1: An error occurred while processing file "D:\test.csv" on data row 21. (SQL Server Import and Export Wizard) Error 0xc0047038: Data Flow Task 1: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on component "Source - test_csv" (1) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure. (SQL Server Import and Export Wizard) Copying to [dbo].[test] (Stopped) Post-execute (Success) Messages Information 0x402090dd: Data Flow Task 1: The processing of file "D:\test.csv" has ended. (SQL Server Import and Export Wizard) Information 0x402090df: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has started. (SQL Server Import and Export Wizard) Information 0x402090e0: Data Flow Task 1: The final commit for the data insertion in "component "Destination - test" (70)" has ended. (SQL Server Import and Export Wizard) Information 0x4004300b: Data Flow Task 1: "component "Destination - test" (70)" wrote 0 rows. (SQL Server Import and Export Wizard)

    Read the article

  • How to implement login page using Spring Security so that it works with Spring Web Flow?

    - by simon
    I have a web application using Spring 2.5.6 and Spring Security 2.0.4. I have implemented a working login page, which authenticates the user against a web service. The authentication is done by defining a custom authentication manager, like this: <beans:bean id="customizedFormLoginFilter" class="org.springframework.security.ui.webapp.AuthenticationProcessingFilter"> <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" /> <beans:property name="defaultTargetUrl" value="/index.do" /> <beans:property name="authenticationFailureUrl" value="/login.do?error=true" /> <beans:property name="authenticationManager" ref="customAuthenticationManager" /> <beans:property name="allowSessionCreation" value="true" /> </beans:bean> <beans:bean id="customAuthenticationManager" class="com.sevenp.mobile.samplemgmt.web.security.CustomAuthenticationManager"> <beans:property name="authenticateUrlWs" value="${WS_ENDPOINT_ADDRESS}" /> </beans:bean> The authentication manager class: public class CustomAuthenticationManager implements AuthenticationManager, ApplicationContextAware { @Transactional @Override public Authentication authenticate(Authentication authentication) throws AuthenticationException { //authentication logic return new UsernamePasswordAuthenticationToken(principal, authentication.getCredentials(), grantedAuthorityArray); } The essential part of the login JSP looks like this: <c:url value="/j_spring_security_check" var="formUrlSecurityCheck"/> <form method="post" action="${formUrlSecurityCheck}"> <div id="errorArea" class="errorBox"> <c:if test="${not empty param.error}"> ${sessionScope["SPRING_SECURITY_LAST_EXCEPTION"].message} </c:if> </div> <label for="loginName"> Username: <input style="width:125px;" tabindex="1" id="login" name="j_username" /> </label> <label for="password"> Password: <input style="width:125px;" tabindex="2" id="password" name="j_password" type="password" /> </label> <input type="submit" tabindex="3" name="login" class="formButton" value="Login" /> </form> Now the problem is that the application should use Spring Web Flow. After the application was configured to use Spring Web Flow, the login does not work anymore: the form action to "/j_spring_security_check" results in a blank page without an error message. What is the best way to adapt the existing login process so that it works with Spring Web Flow?

    Read the article

  • Is there a free (as in beer) Flow chart generator for COBOL Code?

    - by btelles
    Hi, I've never read COBOL in my life and have been tasked with rewriting the old COBOL code in a new language. Are there any free or free-to-try software packages out there that will generate a flow chart for a COBOL program? I've looked at "Visustin" and "Code Visual to Flowchart". Visustin blanks out part of the code and does random rotations in the demo version, which causes the demo to be less accurate. I couldn't get Code Visual to Flowchart to work correctly with our code. Do you know of any other packages I might try?

    Read the article

  • When using TCP load balancing with HAProxy, does all outbound traffic flow through the LB?

    - by user122875
    I am setting up an app to be hosted using VMs (probably Amazon, but that is not set in stone) which will require both HTTP load balancing and load balancing a large number (50k or so, if possible) of persistent TCP connections. The amount of data is not all that high, but updates are frequent. Right now I am evaluating load balancers and am a bit confused about the architecture of HAProxy. If I use HAProxy to balance the TCP connections, will all the resulting traffic have to flow through the load balancer? If so, would another solution (such as LVS or even nginx_tcp_proxy_module) be a better fit?

    Read the article

  • How do you reach a "flow" state while programming?

    - by acrosman
    I'm not talking about program flow, but about the state of working called flow, the state where you can get great work done most effectively. I find that my current work environment, while good in many ways, does not allow me to get into a good state of mind for writing code most of the time (my job includes many other functions). If it's critical to get something done, I'll often put on headphones with classical music and try to drown out the office noise around me (and discourage co-workers from asking me questions). I am best able to get work done late in the evening when the house is quiet and I've been thinking about the project for most of the day. What tricks have you found when working in less than perfect office environments?

    Read the article

  • Is it possible to transfer data between HTML pages driven by Spring Web Flow?

    - by Easwaramoorthy Kanagaraj
    I am aware of passing data between JSPs in Spring Web Flow. Is it possible to transfer data between HTML pages driven by Spring Web Flow? I don't want to use the HTML5 local storage capabilities. Example: Page 1: Search box for an employee id. Page 2: Search result for the employee details. Two ways that I could think of: Get the employee details in page 1 by Ajax and pass the result to page two. Pass the employee id to page 2 and get the result by Ajax in onload. In both cases I need to pass a variable/data. I am confused about how to do this. Is there anything in Spring Web Flow that I could use to do this? Thanks in advance, Easwar

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations.  However, different mechanisms for supporting exist throughout the framework.  While there are at least three separate asynchronous patterns used through the framework, only the latest is directly usable with the new Visual Studio Async CTP.  Before delving into details on the new features, I will talk about existing asynchronous code, and demonstrate how to adapt it for use with the new pattern. The first asynchronous pattern used in the .NET framework was the Asynchronous Programming Model (APM).  This pattern was based around callbacks.  A method is used to start the operation.  It typically is named as BeginSomeOperation.  This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult.  Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally directly returned from the synchronous version of the operation.  Often, the EndSomeOperation call would be called from the callback function passed, which allows you to write code that never blocks. While this pattern works perfectly to prevent blocking, it can make quite confusing code, and be difficult to implement.  For example, the sample code provided for FileStream’s BeginRead/EndRead methods is not simple to understand.  In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult. Given the complexity of the APM, other options have been introduced in later versions of the framework.  The next major pattern introduced was the Event-based Asynchronous Pattern (EAP).  This provides a simpler pattern for asynchronous operations.  It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted. The EAP provides a simpler model for asynchronous programming.  It is much easier to understand and use, and far simpler to implement.  Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly.  For example, the WebClient class uses this extensively.  A method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event. While the EAP is far simpler to understand and use than the APM, it is still not ideal.  By separating your code into method calls and event handlers, the logic of your program gets more complex.  It also typically loses the ability to block until the result is received, which is often useful.  Blocking often requires writing the code to block by hand, which is error prone and adds complexity. As a result, .NET 4 introduced a third major pattern for asynchronous programming.  The Task<T> class introduced a new, simpler concept for asynchrony.  Task and Task<T> effectively represent an operation that will complete at some point in the future.  This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward.  Task and Task<T> provide all of the advantages of both the APM and the EAP models – you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of Task Continuations.  In addition, the Task class provides a new model for task composition and error and cancelation handling.  This is a far superior option to the previous asynchronous patterns. 
The Visual Studio Async CTP extends the Task-based asynchronous model, allowing it to be used in a much simpler manner.  However, it requires the use of Task and Task<T> for all operations.
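To make that adaptation concrete, here is a minimal sketch (not from the original post) of wrapping both older patterns as Task<T> using APIs that ship with .NET 4: Task.Factory.FromAsync for an APM Begin/End pair, and TaskCompletionSource<T> to bridge an EAP completed event. The FileStream and WebClient members used are standard framework APIs; the helper names and arguments here are illustrative only.

    using System;
    using System.IO;
    using System.Net;
    using System.Threading.Tasks;

    // Sketch: exposing APM and EAP operations as Task<T> so they can be composed
    // with continuations, or awaited once the Async CTP is in play.
    public static class AsyncAdapters
    {
        // APM -> Task: Task.Factory.FromAsync pairs a Begin/End method couple.
        public static Task<int> ReadAsTask(FileStream stream, byte[] buffer)
        {
            return Task<int>.Factory.FromAsync(
                stream.BeginRead, stream.EndRead, buffer, 0, buffer.Length, null);
        }

        // EAP -> Task: a TaskCompletionSource bridges the Completed event.
        public static Task<byte[]> DownloadDataAsTask(Uri address)
        {
            var tcs = new TaskCompletionSource<byte[]>();
            var client = new WebClient();
            client.DownloadDataCompleted += (sender, e) =>
            {
                if (e.Error != null) tcs.TrySetException(e.Error);
                else if (e.Cancelled) tcs.TrySetCanceled();
                else tcs.TrySetResult(e.Result);
            };
            client.DownloadDataAsync(address);
            return tcs.Task;
        }
    }

Once an operation is surfaced as a Task, it can be composed with ContinueWith today and consumed directly by the await support described in this series.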

    Read the article

  • Cloning a WebCenter Portal Managed Server

    - by Maiko Rocha
    I had to run some tests on a WebCenter Portal application deployed in a cluster. I've got a development VM with WebCenter PS4 (this also works on PS5) and I was trying to figure out how could I easily add a new managed server to my single-node domain, and make it a cluster. Creating the machine and cluster are a piece of cake, you can do it pretty quick through WLS Console. Now, you'd guess that using the clone option on WLS Console would do the magic of cloning an existing instance, right? Well, it does, but all you get is an "empty" managed server: with no target libraries.  It was a good surprise to find that WebCenter provides a way of cloning an existing WebCenter Portal managed server through a simple WLST command: cloneWebCenterManagedServer  This is a screenshot of my starting point. I want to clone WC_CustomPortal managed server: These are the steps to clone my WC_CustomPortal managed server: 1. In the command line, invoke WLST. It should be on <ORACLE_HOME_for_component>/common/bin/wlst.sh. In my case, it is ./product/Middleware/WebCenterPortal/common/bin/wlst.sh 2. Connect to the Admin Server:  connect ('<wls_admin_username>','<password>','t3://<server>:<port>') 3. Execute the following command: wls:/webcenter/serverConfig> cloneWebCenterManagedServer(baseManagedServer='WC_CustomPortal', newManagedServer='WC_CustomPortal2', newManagedServerPort=8893, verbose=1) I've turned on verbose output on purpose so I could see what the script was doing while executing. This is the output:  [...] Creating the Managed Server "WC_CustomPortal2" MBean type Server with name WC_CustomPortal2 has been created successfully. Targeting the library "oracle.bi.adf.model.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.view.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.adf.webcenter.slib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.wsm.seedpolicies#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jsp.next#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.dconfig-infra#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "orai18n-adf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.dconfigbeans#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.pwdgen#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.jrf.system.filter" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.businesseditor#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.management#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "adf.oracle.domain.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jsf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jstl#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "UIX#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-rcf#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "ohw-uix#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the 
library "oracle.adf.desktopintegration.model#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.adf.desktopintegration#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.jbips#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.bi.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.skin#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.composer#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.core#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.sdp.client#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.workflow.wc#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.soa.worklist.webapp#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.ucm.ridc.app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-app-lib-base#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "p13n-core-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jaxrs-framework-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "jersey-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-util-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "wcps-services-client-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "content-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.framework.view#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.forum.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.jive.dependency#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.spaces.fwk#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the library "oracle.webcenter.activitygraph.lib#[email protected]" to the Managed Server "WC_CustomPortal2" Targeting the datasource "mds-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "WebCenter-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the datasource "Activities-CustomPortalDS" to the Managed Server "WC_CustomPortal2" Targeting the application "wsil-wls" to the Managed Server "WC_CustomPortal2" Targeting the application "DMS Application#11.1.1.1.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_webapp1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the application "ViewHandlerOverride_application1#V2.0" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JRF Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JPS Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "ODL-Startup" 
to the Managed Server "WC_CustomPortal2" Targeting the startup class "Audit Loader Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "AWT Application Context Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JMX Framework Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "Web Services Startup Class" to the Managed Server "WC_CustomPortal2" Targeting the startup class "JOC-Startup" to the Managed Server "WC_CustomPortal2" Targeting the startup class "DMS-Startup" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "JOC-Shutdown" to the Managed Server "WC_CustomPortal2" Targeting the shutdown class "DMSShutdown" to the Managed Server "WC_CustomPortal2" Validating changes ... Validated the changes successfully [...] And this is the newly created WC_CustomPortal2 managed server showing up on Weblogic console:  Here is the full reference to WebCenter Portal Custom WLST Commands. Special thanks to Todd Vender for pointing this one out! :-)

    Read the article

  • Using TPL and PLINQ to raise performance of feed aggregator

    - by DigiMortal
    In this posting I will show you how to use Task Parallel Library (TPL) and PLINQ features to boost performance of simple RSS-feed aggregator. I will use here only very basic .NET classes that almost every developer starts from when learning parallel programming. Of course, we will also measure how every optimization affects performance of feed aggregator. Feed aggregator Our feed aggregator works as follows: Load list of blogs Download RSS-feed Parse feed XML Add new posts to database Our feed aggregator is run by task scheduler after every 15 minutes by example. We will start our journey with serial implementation of feed aggregator. Second step is to use task parallelism and parallelize feeds downloading and parsing. And our last step is to use data parallelism to parallelize database operations. We will use Stopwatch class to measure how much time it takes for aggregator to download and insert all posts from all registered blogs. After every run we empty posts table in database. Serial aggregation Before doing parallel stuff let’s take a look at serial implementation of feed aggregator. All tasks happen one after other. internal class FeedClient {     private readonly INewsService _newsService;     private const int FeedItemContentMaxLength = 255;       public FeedClient()     {          ObjectFactory.Initialize(container =>          {              container.PullConfigurationFromAppConfig = true;          });           _newsService = ObjectFactory.GetInstance<INewsService>();     }       public void Execute()     {         var blogs = _newsService.ListPublishedBlogs();           for (var index = 0; index <blogs.Count; index++)         {              ImportFeed(blogs[index]);         }     }       private void ImportFeed(BlogDto blog)     {         if(blog == null)             return;         if (string.IsNullOrEmpty(blog.RssUrl))             return;           var uri = new Uri(blog.RssUrl);         SyndicationContentFormat feedFormat;           feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);           if (feedFormat == SyndicationContentFormat.Rss)             ImportRssFeed(blog);         if (feedFormat == SyndicationContentFormat.Atom)             ImportAtomFeed(blog);                 }       private void ImportRssFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = RssFeed.Create(uri);           foreach (var item in feed.Channel.Items)         {             SaveRssFeedItem(item, blog.Id, blog.CreatedById);         }     }       private void ImportAtomFeed(BlogDto blog)     {         var uri = new Uri(blog.RssUrl);         var feed = AtomFeed.Create(uri);           foreach (var item in feed.Entries)         {             SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);         }     } } Serial implementation of feed aggregator downloads and inserts all posts with 25.46 seconds. Task parallelism Task parallelism means that separate tasks are run in parallel. You can find out more about task parallelism from MSDN page Task Parallelism (Task Parallel Library) and Wikipedia page Task parallelism. Although finding parts of code that can run safely in parallel without synchronization issues is not easy task we are lucky this time. Feeds import and parsing is perfect candidate for parallel tasks. We can safely parallelize feeds import because importing tasks doesn’t share any resources and therefore they don’t also need any synchronization. 
    After getting the list of blogs we iterate through the collection and start a new TPL task for each blog's feed aggregation.

    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            foreach (var item in feed.Channel.Items)
            {
                SaveRssFeedItem(item, blog.Id, blog.CreatedById);
            }
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            foreach (var item in feed.Entries)
            {
                SaveAtomFeedEntry(item, blog.Id, blog.CreatedById);
            }
        }
    }

    You should notice the first signs of the power of TPL: we made only minor changes to our code to parallelize the blog feed aggregation. On my machine this modification gives a performance boost – time is now 17.57 seconds.

    Data parallelism
    There is one more way to parallelize activities. The previous section introduced task (operation) based parallelism; this section introduces data based parallelism. According to the MSDN page Data Parallelism (Task Parallel Library), data parallelism refers to scenarios in which the same operation is performed concurrently on the elements of a source collection or array. In our code we have independent collections we can process in parallel – the imported feed entries. As checking whether a feed entry exists and inserting it if it is missing from the database doesn't affect other entries, the imported feed entries collection is an ideal candidate for parallelization.
    internal class FeedClient
    {
        private readonly INewsService _newsService;
        private const int FeedItemContentMaxLength = 255;

        public FeedClient()
        {
            ObjectFactory.Initialize(container =>
            {
                container.PullConfigurationFromAppConfig = true;
            });

            _newsService = ObjectFactory.GetInstance<INewsService>();
        }

        public void Execute()
        {
            var blogs = _newsService.ListPublishedBlogs();

            var tasks = new Task[blogs.Count];

            for (var index = 0; index < blogs.Count; index++)
            {
                tasks[index] = new Task(ImportFeed, blogs[index]);
                tasks[index].Start();
            }

            Task.WaitAll(tasks);
        }

        private void ImportFeed(object blogObject)
        {
            if (blogObject == null)
                return;
            var blog = (BlogDto)blogObject;
            if (string.IsNullOrEmpty(blog.RssUrl))
                return;

            var uri = new Uri(blog.RssUrl);
            SyndicationContentFormat feedFormat;

            feedFormat = SyndicationDiscoveryUtility.SyndicationContentFormatGet(uri);

            if (feedFormat == SyndicationContentFormat.Rss)
                ImportRssFeed(blog);
            if (feedFormat == SyndicationContentFormat.Atom)
                ImportAtomFeed(blog);
        }

        private void ImportRssFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = RssFeed.Create(uri);

            feed.Channel.Items.AsParallel().ForAll(a =>
            {
                SaveRssFeedItem(a, blog.Id, blog.CreatedById);
            });
        }

        private void ImportAtomFeed(BlogDto blog)
        {
            var uri = new Uri(blog.RssUrl);
            var feed = AtomFeed.Create(uri);

            feed.Entries.AsParallel().ForAll(a =>
            {
                SaveAtomFeedEntry(a, blog.Id, blog.CreatedById);
            });
        }
    }

    We made a small change again, and as a result we parallelized the checking and saving of feed items. This change was data centric, as we applied the same operation to all elements in a collection. On my machine I got better performance again: time is now 11.22 seconds.

    Results
    Let's summarize our measurement results (numbers are in seconds): serial implementation 25.46, task parallelism 17.57, task plus data parallelism 11.22. As we can see, with task parallelism the feed aggregation takes about 25% less time than in the original case. When we add data parallelism on top of task parallelism, the aggregation takes about 2.3 times less time than in the original case.

    More about TPL and PLINQ
    Adding parallelism to your application can be a very challenging task. You have to carefully find the parts of your code where you can safely go parallel, and even then you have to measure the effects of parallel processing to find out whether the parallel code actually performs better. If you are not careful, the troubles you will face later are worse than the ones you have seen before (imagine an error that occurs on average only once per 10,000 runs). Parallel programming is something that is hard to ignore: effective programs are able to use multiple processor cores. Using TPL you can also set the degree of parallelism so your application doesn't use all computing cores and leaves one or more of them free for the host system and other processes. And there are many more things in TPL that make it easier for you to start and go on with parallel programming. In the next major version all .NET languages will have built-in support for parallel programming, and there will also be new language constructs that support it.
    Currently you can download Visual Studio Async to get some idea of what is coming.

    Conclusion
    Parallel programming is very challenging, but the good tools offered by Visual Studio and the .NET Framework make it much easier for us. In this posting we started with a feed aggregator that imports feed items in serial mode. In two steps we parallelized the feed importing and the entry inserting, gaining a 2.3x improvement in performance. Although this number is specific to my test environment, it shows clearly that parallel programming may raise the performance of your application significantly.
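    As a footnote to the degree-of-parallelism remark above, here is a minimal sketch (not from the original post) of how the number of cores used could be capped. The work items are stand-ins, but ParallelOptions, Parallel.ForEach and PLINQ's WithDegreeOfParallelism are standard TPL/PLINQ APIs.

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    internal static class DegreeOfParallelismSample
    {
        public static void Run(int[] workItems)
        {
            // Leave one core free for the host system and other processes.
            var maxDegree = Math.Max(1, Environment.ProcessorCount - 1);

            var options = new ParallelOptions { MaxDegreeOfParallelism = maxDegree };

            // Task-style parallel loop with a bounded degree of parallelism.
            Parallel.ForEach(workItems, options, item =>
            {
                Console.WriteLine("Processing {0} on thread {1}",
                    item, Thread.CurrentThread.ManagedThreadId);
            });

            // PLINQ equivalent: bound the number of concurrently executing tasks.
            workItems
                .AsParallel()
                .WithDegreeOfParallelism(maxDegree)
                .ForAll(item => Console.WriteLine("PLINQ processed {0}", item));
        }
    }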

    Read the article

  • Registration Open Now! Virtual Developer Day: Oracle ADF Development

    - by Greg Jensen
    Is your organization looking at developing Web or Mobile applications based on the Oracle platform? Oracle is offering a virtual event for Developer Leads, Managers and Architects to learn more about developing Web, Mobile and beyond on Oracle platforms. This event provides sessions ranging from introductory to deep dive, covering Oracle's strategic framework for developing multi-channel enterprise applications for the Oracle platforms. Multiple tracks cover every interest and every level, and include live online Q&A chats with Oracle's technical staff. For registration and information, please follow the link HERE. Sign up for one of the following events below:
    Americas - Tuesday, November 19th / 9am to 1pm PDT / 12pm to 4pm EDT / 1pm to 5pm BRT
    APAC - Thursday, November 21st / 10am - 1:30pm IST (India) / 12:30pm - 4pm SGT (Singapore) / 3:30pm - 7pm AESDT
    EMEA - Tuesday, November 26th / 9am - 1pm GMT / 1pm - 5pm GST / 2:30pm - 6:30pm IST

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow. Instead you can use page-level parameters defined for your page in the adfc-config.xml file. Below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view object level service method and how to invoke it from your page. P.S. You might wonder - why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department the param.amount value is gone. However, the viewScope value is still there as long as you stay on this page.

    Read the article

  • How can I create a separate toolbar from the Task Bar?

    - by Iszi
    In Windows XP, you could separate toolbars from the Task Bar by dragging them to the desktop. They could then be left lying about anywhere on your screen or, my preferred option, docked to any side of the screen. I found this particularly useful for keeping a handy list of common phone numbers quickly accessible. I'd create a new toolbar pointing to a custom folder, and put a bunch of dead shortcuts in the folder that had names and numbers as their file names. I'd then dock the toolbar to the left side, set it to auto-hide and always on top (options which could be set separately from the Task Bar as well), and it would be readily available no matter what else I was doing on my system. However, on my Windows 7 system, I seem unable to perform the crucial step of pulling the new toolbar off of the Task Bar. This is of course with the Task Bar "unlocked" so that I can move all my toolbars around. Is there something I'm missing here, or is this a feature that's been disabled in Windows 7? Is there any way to re-enable it, or otherwise achieve similar functionality? I'd rather be able to do this without additional software, if possible.

    Read the article

  • Accessing Attributes in a Many-to-Many

    - by tshauck
    Hi, I have a Rails app and I'd like to be able to do something like task.labels.first.label_name to get the label name of a task. However, I get an undefined method 'label_name' error. I ran t = Task.first; t.labels.first.label_name in the console, and that worked, so I'm not sure what's going on. Here are the models, followed by the location of the error:

    class Categorization < ActiveRecord::Base
      belongs_to :label
      belongs_to :task
    end

    class Label < ActiveRecord::Base
      attr_accessible :label_name
      has_many :categorizations
      has_many :tasks, :through => :categorizations
    end

    class Task < ActiveRecord::Base
      attr_accessible :task
      has_many :categorizations
      has_many :labels, :through => :categorizations
    end

    The error is in the index view:

    <% for task in @tasks %>
      <tr>
        <td><%= task.task %></td>
        <td><%= task.labels.first.label_name %></td>
        <td><%= link_to "Show", task %></td>
        <td><%= link_to "Edit", edit_task_path(task) %></td>
        <td><%= link_to "Destroy", task, :confirm => 'Are you sure?', :method => :delete %></td>
      </tr>
    <% end %>

    Read the article

  • How do I wait for all other threads to finish their tasks?

    - by Mike
    I have several threads consuming tasks from a queue using something similar to the code below. The problem is that there is one type of task which cannot run while any other tasks are being processed. Here is what I have:

    while (true) // Threaded code
    {
        while (true)
        {
            lock (locker)
            {
                if (close_thread)
                    return;

                task = GetNextTask(); // Get the next task from the queue
            }

            if (task != null)
                break;

            wh.WaitOne(); // Wait until a task is added to the queue
        }

        task.Run();
    }

    And this is roughly what I need:

    while (true)
    {
        while (true)
        {
            lock (locker)
            {
                if (close_thread)
                    return;

                if (disable_new_tasks)
                {
                    task = null;
                }
                else
                {
                    task = GetNextTask();
                }
            }

            if (task != null)
                break;

            wh.WaitOne();
        }

        if (!task.IsThreadSafe())
        {
            // I would set this to false inside task.Run(),
            // at the end of the non-thread-safe task
            disable_new_tasks = true;
            Wait_for_all_threads_to_finish_their_current_tasks();
        }

        task.Run();
    }

    The problem is I don't know how to achieve this without creating a mess.
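    One possible shape for the "wait for everyone" part (a sketch under assumptions, not the poster's code): let ordinary tasks run under a read lock and give the exclusive task the write lock, which by definition waits for all read locks to be released. ExclusiveTaskRunner and the Action-based task are illustrative placeholders; ReaderWriterLockSlim itself is a standard .NET type.

    using System;
    using System.Threading;

    internal class ExclusiveTaskRunner
    {
        // Normal tasks run under a read lock, so many of them can run concurrently.
        // An exclusive task takes the write lock, which cannot be acquired until
        // every currently held read lock has been released.
        private readonly ReaderWriterLockSlim _gate = new ReaderWriterLockSlim();

        public void Run(Action task, bool isExclusive)
        {
            if (isExclusive)
            {
                _gate.EnterWriteLock(); // Blocks until all in-flight tasks finish.
                try { task(); }
                finally { _gate.ExitWriteLock(); }
            }
            else
            {
                _gate.EnterReadLock();
                try { task(); }
                finally { _gate.ExitReadLock(); }
            }
        }
    }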

    Read the article

  • Mixing together Connect by, inner join and sum with Oracle

    - by François
    Hey there, I need help with an Oracle query. Excuse my English in advance. Here is my setup: I have 2 tables called respectively "tasks" and "timesheets". The "tasks" table is recursive, so each task can have multiple subtasks. Each timesheet is associated with a task (not necessarily the "root" task) and contains the number of hours worked on it. Example:

    Tasks
    id: 1 | name: Task A    | parent_id: NULL
    id: 2 | name: Task A1   | parent_id: 1
    id: 3 | name: Task A1.1 | parent_id: 2
    id: 4 | name: Task B    | parent_id: NULL
    id: 5 | name: Task B1   | parent_id: 4

    Timesheets
    id: 1 | task_id: 1 | hours: 1
    id: 2 | task_id: 2 | hours: 3
    id: 3 | task_id: 3 | hours: 1
    id: 5 | task_id: 5 | hours: 1
    ...

    What I want to do: I want a query that will return the sum of all the hours worked on a "task hierarchy". Taking the previous example, I would like to get the following result: task A - 5 hour(s) | task B - 1 hour(s). At first I tried this:

    SELECT TaskName, Sum(Hours) "TotalHours"
    FROM (
        SELECT replace(sys_connect_by_path(decode(level, 1, t.name), '~'), '~') As TaskName,
               ts.hours as hours
        FROM tasks t
        INNER JOIN timesheets ts ON t.id = ts.task_id
        START WITH PARENTOID = -1
        CONNECT BY PRIOR t.id = t.parent_id
    )
    GROUP BY TaskName
    HAVING Sum(Hours) > 0
    ORDER BY TaskName

    And it almost works. The only problem is that if there is no timesheet for a root task, the query skips the whole hierarchy... but there might be timesheets for the child rows, and that is exactly what happens with Task B1. I know it is the "inner join" part that is causing my problem, but I'm not sure how I can get rid of it. Any idea how to solve this problem? Thank you

    Read the article

< Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >