Search Results

Search found 9667 results on 387 pages for 'parallel loop'.

Page 90/387 | < Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >

  • How to call shared_ptr<boost::signal> from a vector in a loop?

    - by BTR
    I've got a working callback system that uses boost::signal. I'm extending it into a more flexible and efficient callback manager which uses a vector of shared_ptr's to my signals. I've been able to successfully create and add callbacks to the list, but I'm unclear as to how to actually execute the signals. ... // Signal aliases typedef boost::signal<void (float *, int32_t)> Callback; typedef std::shared_ptr<Callback> CallbackRef; // The callback list std::vector<CallbackRef> mCallbacks; // Adds a callback to the list template<typename T> void addCallback(void (T::* callbackFunction)(float * data, int32_t size), T * callbackObject) { CallbackRef mCallback = CallbackRef(new Callback()); mCallback->connect(boost::function<void (float *, int32_t)>(boost::bind(callbackFunction, callbackObject, _1, _2))); mCallbacks.push_back(mCallback); } // Pass the float array and its size to the callbacks void execute(float * data, int32_t size) { // Iterate through the callback list for (vector<CallbackRef>::iterator i = mCallbacks.begin(); i != mCallbacks.end(); ++i) { // What do I do here? // (* i)(data, size); // <-- Dereferencing doesn't work } } ... All of this code works. I'm just not sure how to run the call from within a shared_ptr from with a vector. Any help would be neat-o. Thanks, in advance.
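
    A minimal sketch of one way to invoke the stored signals, reusing the question's Callback/CallbackRef typedefs and mCallbacks member: dereference the iterator to get the shared_ptr, dereference once more to get the signal, and call it.

        void execute(float * data, int32_t size)
        {
            for (std::vector<CallbackRef>::iterator i = mCallbacks.begin(); i != mCallbacks.end(); ++i) {
                (**i)(data, size);                 // *i is the shared_ptr, **i is the boost::signal itself
                // equivalently: (*i)->operator()(data, size);
            }
        }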

    Read the article

  • lock statement not working when there is a loop inside it?

    - by Ngu Soon Hui
    See this code: public class multiply { public Thread myThread; public int Counter { get; private set; } public string name { get; private set; } public void RunConsolePrint() { lock(this) { RunLockCode("lock"); } } private void RunLockCode(string lockCode) { Console.WriteLine("Now thread "+lockCode+" " + name + " has started"); for (int i = 1; i <= Counter; i++) { Console.WriteLine(lockCode+" "+name + ": count has reached " + i + ": total count is " + Counter); } Console.WriteLine("Thread " + lockCode + " " + name + " has finished"); } public multiply(string pname, int pCounter) { name = pname; Counter = pCounter; myThread = new Thread(new ThreadStart(RunConsolePrint)); } } And this is the test run code: static void Main(string[] args) { int counter = 50; multiply m2 = new multiply("Second", counter); multiply m1 = new multiply("First", counter); m1.myThread.Start(); m2.myThread.Start(); Console.ReadLine(); } I would expect m2 to execute from start to finish before m1 starts executing, or vice versa, because of the lock statement. But in the output I found that the First and Second threads were interleaved, i.e., something like this: Now thread lock First has started Now thread lock Second has started lock First: Count has reached 1: total count is 50 lock First: Count has reached 2: total count is 50 lock Second: Count has reached 1: total count is 50 What did I do wrong?
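
    A likely explanation, with a sketch of a fix: lock(this) locks on the individual multiply instance, and m1 and m2 are different instances, so the two threads never contend for the same lock. Sharing one lock object serializes them; the _sharedLock field below is an illustrative name, not part of the original code.

        // Sketch: both instances must contend for the same lock object.
        private static readonly object _sharedLock = new object();

        public void RunConsolePrint()
        {
            lock (_sharedLock)   // instead of lock(this)
            {
                RunLockCode("lock");
            }
        }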

    Read the article

  • What happens if I continuously create byte arrays of different sizes in a while loop and read a stream into them?

    - by SajidKhan
    I want to read an audio file into multiple byte arrays of different sizes, and then add them to a shared memory. What will happen if I use the code below? Does the byte array get overwritten? I understand it will create multiple byte arrays; how do I erase those byte arrays after my code does what it needs to do? int TotalBuffer = 10; while (TotalBuffer !=0){ bufferData = new byte[AClipTextFileHandler.BufferSize.get(j)]; input.read(bufferData); Sharedbuffer.put(bufferData); i++; j++; TotalBuffer--; }
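
    A rough sketch of what the loop does, with illustrative stand-ins for the question's AClipTextFileHandler.BufferSize and Sharedbuffer (here an int array of sizes and a BlockingQueue): each new byte[...] allocates a fresh, independent array, so earlier arrays are never overwritten; once nothing references an array any more (for example after the consumer has drained it from the queue), the garbage collector reclaims it, so there is nothing to erase explicitly. Note that read() may return fewer bytes than the array holds, so its return value should be checked.

        import java.io.InputStream;
        import java.util.concurrent.BlockingQueue;

        void fillBuffers(InputStream input, int[] sizes, BlockingQueue<byte[]> queue) throws Exception {
            for (int size : sizes) {
                byte[] bufferData = new byte[size];       // a new, independent array each pass
                int read = 0;
                while (read < size) {                     // read() may not fill the whole array at once
                    int n = input.read(bufferData, read, size - read);
                    if (n < 0) break;                     // end of stream reached
                    read += n;
                }
                queue.put(bufferData);                    // the queue now holds the reference
            }
        }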

    Read the article

  • Looping in FileMaker using a local variable

    - by Mike Davis
    I've been programming in C# for a little bit - now I'm trying to use FileMaker and I can't believe it, but I can't even get a simple loop to work. What I want to do is simple: loop for the number of entries in my "amountOfRooms" field and create an entry in a table for each room. Sounds so simple, but I can't get it to work. Right now I have this: Set Variable[$cnt[Customers::AmountOfRooms]; value:1] Go to Layout["rooms"(Rooms)] Loop Exit Loop If[$cnt = Customers::AmountOfRooms] New Record / Request Set Variable[$cnt; Value: $cnt + 1] End Loop Exit Script No new records are created. I know the script is running because it does go to my layout, but it doesn't create any new records. There is a "Repetition" field for my local variable - I'm not sure how to use that or what it means. Any thoughts on how to do this simple loop? Thanks.
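
    A guess at the cause, with a sketch of a corrected script (the exact step layout below is illustrative): the first Set Variable step has Customers::AmountOfRooms in its Repetition box, so repetition 1 of $cnt is never set to 1, and once the script is on the Rooms layout the Customers field may no longer evaluate, which can make the Exit Loop If condition true immediately. Capturing the room count before changing layouts and leaving Repetition at its default of 1 avoids both problems.

        # Capture the count while still on the Customers layout
        Set Variable [ $total ; Value: Customers::AmountOfRooms ]
        Set Variable [ $cnt ; Value: 1 ]
        Go to Layout [ "rooms" (Rooms) ]
        Loop
            Exit Loop If [ $cnt > $total ]
            New Record/Request
            Set Variable [ $cnt ; Value: $cnt + 1 ]
        End Loop
        Exit Script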

    Read the article

  • Why is Python 3.1 throwing a SyntaxError when printing after loop?

    - by bubersson
    Hi, I'm trying to run this snippet in Python 3.1 console and I'm getting SyntaxError: >>> while True: ... a=5 ... if a<6: ... break ... print("hello") File "<stdin>", line 5 print("hello") ^ SyntaxError: invalid syntax >>> (This is just shortened code to make a point.) Am I missing something? Is there some other Magic I don't know about? Thanks for your help (since this is my first StackOverflow question and I'm not a native English speaker)
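
    A sketch of why this fails only in the interactive console, assuming the snippet itself is as intended: the REPL accepts one top-level statement at a time, so the while-block has to be closed with a blank line (press Enter at the ... prompt) before print can be typed; saved to a file, the same code runs and prints "hello".

        # As a script (e.g. example.py) this is valid Python 3:
        while True:
            a = 5
            if a < 6:
                break
        print("hello")

        # In the console, close the block first:
        # >>> while True:
        # ...     a = 5
        # ...     if a < 6:
        # ...         break
        # ...                    <- press Enter here to end the block
        # >>> print("hello")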

    Read the article

  • Is it alright to call len() in a loop's conditional statement?

    - by DormoTheNord
    In C, it is considered bad practice to call strlen like this: for ( i = 0; strlen ( str ) != foo; i++ ) { // stuff } The reason, of course, is that it is inefficient since it "counts" the characters in a string multiple times. However, in Python, I see code like this quite often: for i in range ( 0, len ( list ) ): # stuff Is this bad practice? Should I store the result of len() in a variable and use that?
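
    For what it is worth, a short sketch of the usual reasoning: len() on a Python list is O(1) because the length is stored with the object, and range(len(lst)) is evaluated once before the loop starts, so nothing is re-counted per iteration; the more idiomatic question is whether the index is needed at all.

        items = ["a", "b", "c"]

        # range(len(items)) computes the length exactly once, before the loop runs:
        for i in range(len(items)):
            print(i, items[i])

        # Usually more idiomatic: iterate directly, or use enumerate() when the index matters.
        for i, item in enumerate(items):
            print(i, item)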

    Read the article

  • How can I speed up this loop (in C)?

    - by splicer
    Hi! I'm trying to parallelize a convolution function in C. Here's the original function which convolves two arrays of 64-bit floats: void convolve(const Float64 *in1, UInt32 in1Len, const Float64 *in2, UInt32 in2Len, Float64 *results) { UInt32 i, j; for (i = 0; i < in1Len; i++) { for (j = 0; j < in2Len; j++) { results[i+j] += in1[i] * in2[j]; } } } In order to allow for concurrency (without semaphores), I created a function that computes the result for a particular position in the results array: void convolveHelper(const Float64 *in1, UInt32 in1Len, const Float64 *in2, UInt32 in2Len, Float64 *result, UInt32 outPosition) { UInt32 i, j; for (i = 0; i < in1Len; i++) { if (i > outPosition) break; j = outPosition - i; if (j >= in2Len) continue; *result += in1[i] * in2[j]; } } The problem is, using convolveHelper slows down the code about 3.5 times (when running on a single thread). Any ideas on how I can speed-up convolveHelper, while maintaining thread safety?
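
    One possible speed-up, sketched on the same signature (the Float64/UInt32 typedefs are assumed to come from the same headers as the original): compute the valid range of i up front instead of testing and skipping inside the loop, and accumulate into a local before writing through the pointer.

        void convolveHelper(const Float64 *in1, UInt32 in1Len,
                            const Float64 *in2, UInt32 in2Len,
                            Float64 *result, UInt32 outPosition)
        {
            /* Only i with 0 <= i < in1Len and 0 <= outPosition - i < in2Len contribute. */
            UInt32 i    = (outPosition >= in2Len) ? (outPosition - in2Len + 1) : 0;
            UInt32 iEnd = (outPosition < in1Len) ? outPosition : (in1Len - 1);
            Float64 sum = 0.0;

            for (; i <= iEnd; i++) {
                sum += in1[i] * in2[outPosition - i];
            }
            *result += sum;
        }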

    Read the article

  • How to change the text in a JLabel inside a while or for loop?

    - by guilgamos
    I want to create a simple clock using Java. The code is very simple; I will give an example, shown below: for(int i=0;i<=60;i++) jLabel11.setText( Integer.toString(i) ); The problem is that while my program is running, the result doesn't update in sequence. I mean it only shows the digit 60 immediately; it doesn't show the change from 1 to 2 to 3 ... something like this. How can I fix this problem? Thank you in advance.
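
    A likely cause, with a sketch of a fix: if the loop runs on the Swing event dispatch thread, the label cannot repaint until the loop has finished, so only the final value 60 ever appears. A javax.swing.Timer fires on the EDT at a fixed interval and lets the UI repaint between ticks (jLabel11 is the label from the question; place this, for example, in the frame's constructor).

        import javax.swing.Timer;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;

        // Update the label once per second instead of in a tight loop.
        Timer clockTimer = new Timer(1000, new ActionListener() {
            private int i = 0;
            public void actionPerformed(ActionEvent e) {
                jLabel11.setText(Integer.toString(i));
                if (i++ >= 60) {
                    ((Timer) e.getSource()).stop();
                }
            }
        });
        clockTimer.start();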

    Read the article

  • How does Array.ForEach() compare to a standard for loop in C#?

    - by DaveN59
    I pine for the days when, as a C programmer, I could type: memset( byte_array, '0xFF' ); and get a byte array filled with 'FF' characters. So, I have been looking for a replacement for this: for (int i=0; i < byteArray.Length; i++) { byteArray[i] = 0xFF; } Lately, I have been using some of the new C# features and have been using this approach instead: Array.ForEach<byte>(byteArray, b => b = 0xFF); Granted, the second approach seems cleaner and is easier on the eye, but how does the performance compare to using the first approach? Am I introducing needless overhead by using Linq and generics? Thanks, Dave
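
    One caveat worth sketching before comparing performance: Array.ForEach passes each element to the lambda by value, so b = 0xFF assigns to a copy and the array is never modified (and Array.ForEach is a plain static helper, not LINQ). A minimal check:

        using System;

        class FillDemo
        {
            static void Main()
            {
                byte[] byteArray = new byte[4];

                // The lambda parameter is a copy of each element; the array stays all zeros.
                Array.ForEach(byteArray, b => b = 0xFF);
                Console.WriteLine(byteArray[0]);      // prints 0

                // The plain loop really fills the array.
                for (int i = 0; i < byteArray.Length; i++)
                {
                    byteArray[i] = 0xFF;
                }
                Console.WriteLine(byteArray[0]);      // prints 255
            }
        }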

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
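
    For readers who want the shape of the approach without opening the linked project, here is a heavily simplified sketch of the blocked/parallel upload described above. It assumes the 1.x storage client library's CloudBlockBlob.PutBlock/PutBlockList methods and omits the MD5 calculation, streaming of large files, error handling and instrumentation that the real test harness includes.

        // Requires System, System.IO, System.Threading.Tasks and Microsoft.WindowsAzure.StorageClient.
        void UploadBlocked(CloudBlockBlob blob, string path, int blockSize)
        {
            byte[] data = File.ReadAllBytes(path);                    // simplified: the real harness streams the file
            int blockCount = (data.Length + blockSize - 1) / blockSize;
            string[] blockIds = new string[blockCount];

            Parallel.For(0, blockCount, i =>
            {
                int offset = i * blockSize;
                int length = Math.Min(blockSize, data.Length - offset);
                blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes(i));   // equal-length base64 ids

                using (var block = new MemoryStream(data, offset, length))
                {
                    blob.PutBlock(blockIds[i], block, null);          // third argument is the optional content MD5
                }
            });

            blob.PutBlockList(blockIds);                              // assemble/commit the blob from its blocks
        }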

    Read the article

  • Exploring packages in code

    In my previous post Searching for tasks with code you can see how to explore the control flow side of packages, drilling down through containers, task, and event handlers, but it didn’t cover the data flow. I recently saw a post on the MSDN forum asking how to edit an existing package programmatically, and the sticking point was how to find the the data flow and the components inside. This post builds on some of the previous code and shows how you can explore all objects inside a package. I took the sample Task Search application I’d written previously, and came up with a totally pointless little console application that just walks through the package and writes out the basic type and name of every object it finds, starting with the package itself e.g. Package – MyPackage . The sample package we used last time showed nested objects as well an event handler; a OnPreExecute event tucked away on the task SQL In FEL. The output of this sample tool would look like this: PackageObjects v1.0.0.0 (1.0.0.26627) Copyright (C) 2009 Konesans Ltd Processing File - Z:\Users\Darren Green\Documents\Visual Studio 2005\Projects\SSISTestProject\EventsAndContainersWithExe cSQLForSearch.dtsx Package - EventsAndContainersWithExecSQLForSearch For Loop - FOR Counter Loop Task - SQL In Counter Loop Sequence Container - SEQ For Each Loop Wrapper For Each Loop - FEL Simple Loop Task - SQL In FEL Task - SQL On Pre Execute for FEL SQL Task Sequence Container - SEQ Top Level Sequence Container - SEQ Nested Lvl 1 Sequence Container - SEQ Nested Lvl 2 Task - SQL In Nested Lvl 2 Task - SQL In Nested Lvl 1 #1 Task - SQL In Nested Lvl 1 #2 Connection Manager – LocalHost The code is very similar to what we had previously, but there are a couple of extra bits to deal with connections and to look more closely at a task and see if it is a Data Flow task. For connections your just examine the package's Connections collection as shown in the abridged snippets below. First you can see the call to the ProcessConnections method, followed by the method itself. // Load the package file Application application = new Application(); using (Package package = application.LoadPackage(filename, null)) { // Write out the package name Console.WriteLine("Package - {0}", package.Name); ... More ... // Look and the connections ProcessConnections(package.Connections); } private static void ProcessConnections(Connections connections) { foreach (ConnectionManager connectionManager in connections) { Console.WriteLine("Connection Manager - {0}", connectionManager.Name); } } What we didn’t see in the sample output above was anything to do with the Data Flow, but rest assured the code now handles it too. The following snippet shows how each task is examined to see if it is a Data Flow task, and if so we can then loop through all of the components inside the data flow. private static void ProcessTaskHost(TaskHost taskHost) { if (taskHost == null) { return; } Console.WriteLine("Task - {0}", taskHost.Name); // Check if the task is a Data Flow task MainPipe pipeline = taskHost.InnerObject as MainPipe; if (pipeline != null) { ProcessPipeline(pipeline); } } private static void ProcessPipeline(MainPipe pipeline) { foreach (IDTSComponentMetaData90 componentMetadata in pipeline.ComponentMetaDataCollection) { Console.WriteLine("Pipeline Component - {0}", componentMetadata.Name); // If you wish to make changes to the component then you should really use the managed wrapper. 
// CManagedComponentWrapper wrapper = componentMetadata.Instantiate(); // wrapper.SetComponentProperty("PropertyName", "Value"); } } Hopefully you can see how we get a reference to the Data Flow task, and then use the ComponentMetaDataCollection to find out what components we have inside the pipeline. If you wanted to know more about the component you could look at the ObjectType or ComponentClassID properties. After that it gets a bit harder and you should get a reference to the wrapper object as the comment suggest and start using the properties, just like you would in the create packages samples, see our Code Development category for some for these examples. Download Sample code project PackageObjects.zip (5KB)
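
    For completeness, a rough sketch of the container walk this post refers back to, based on the pattern from the earlier Task Search article (the method name ProcessSequence is illustrative): containers such as the package, Sequence, For Loop and Foreach Loop implement IDTSSequence and expose an Executables collection, so a simple recursion covers the whole control flow and hands each TaskHost to the ProcessTaskHost method shown above.

        private static void ProcessSequence(IDTSSequence sequence)
        {
            foreach (Executable executable in sequence.Executables)
            {
                // Tasks are wrapped in a TaskHost; pass them to the method shown above.
                TaskHost taskHost = executable as TaskHost;
                if (taskHost != null)
                {
                    ProcessTaskHost(taskHost);
                    continue;
                }

                // Containers implement IDTSSequence, so recurse into them.
                IDTSSequence childSequence = executable as IDTSSequence;
                if (childSequence != null)
                {
                    Console.WriteLine("Container - {0}", ((DtsContainer)executable).Name);
                    ProcessSequence(childSequence);
                }
            }
        }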

    Read the article

  • Semi-fixed timestep ported to JavaScript

    - by abernier
    In Gaffer's "Fix Your Timestep!" article, the author explains how to decouple your physics loop from the rendering one. Here is the final code, written in C: double t = 0.0; const double dt = 0.01; double currentTime = hires_time_in_seconds(); double accumulator = 0.0; State previous; State current; while ( !quit ) { double newTime = time(); double frameTime = newTime - currentTime; if ( frameTime > 0.25 ) frameTime = 0.25; // note: max frame time to avoid spiral of death currentTime = newTime; accumulator += frameTime; while ( accumulator >= dt ) { previousState = currentState; integrate( currentState, t, dt ); t += dt; accumulator -= dt; } const double alpha = accumulator / dt; State state = currentState*alpha + previousState * ( 1.0 - alpha ); render( state ); } I'm trying to implement this in JavaScript but I'm quite confused about the second while loop... Here is what I have for now (simplified): ... (function animLoop(){ ... while (accumulator >= dt) { // While? In a requestAnimation loop? Maybe if? ... } ... // render requestAnimationFrame(animLoop); // stands for the 1st while loop [OK] }()) As you can see, I'm not sure about the while loop inside the requestAnimation one... I thought about replacing it with an if, but I'm not sure it would be equivalent... Maybe someone can help me.
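
    A sketch of one common answer, assuming requestAnimationFrame and performance.now() are available (integrate, render and interpolate are placeholders): keep the inner while loop exactly as in the C version, because it drains the accumulator in fixed dt steps within a single frame, and let requestAnimationFrame replace only the outer while (!quit) loop. Swapping the inner while for an if is not equivalent; on a slow frame the simulation would fall behind real time.

        var dt = 0.01;                         // fixed simulation step, in seconds
        var accumulator = 0.0;
        var currentTime = performance.now() / 1000;
        var previousState = {}, currentState = {};

        function animLoop() {
            var newTime = performance.now() / 1000;
            var frameTime = Math.min(newTime - currentTime, 0.25);   // avoid the spiral of death
            currentTime = newTime;
            accumulator += frameTime;

            while (accumulator >= dt) {        // same inner while as the C version
                previousState = currentState;
                currentState = integrate(currentState, dt);
                accumulator -= dt;
            }

            render(interpolate(previousState, currentState, accumulator / dt));
            requestAnimationFrame(animLoop);   // stands in for the outer while (!quit) loop
        }
        requestAnimationFrame(animLoop);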

    Read the article

  • SSIS Reporting Pack v0.4 – Execution Report updated

    - by jamiet
    SSIS Reporting Pack is a suite of reports that I maintain at http://ssisreportingpack.codeplex.com/ that provide visualisation over the SSIS Catalog in SQL Server 2012 and attempt to add value over the reports that ship in the box. Work on the reports has stalled (my last SSIS Reporting Pack blog post was on 4th September 2011) as I’ve had rather more important things going on my life of late however I have recently checked-in a fix that couldn’t really be delayed. I discovered a problem with the Execution report that was causing the report to effectively hang, it was caused by this bit of SQL hidden away in the report definition: [generated_executables] AS (   SELECT  [new_executable].[execution_path],[new_executable].[parent_execution_path]   FROM    (           SELECT  [execution_path] = SUBSTRING([loop_iteration].[execution_path] ,1, [loop_iteration].length_exec_path - [loop_iteration].[char_index_close_square] + 1)           ,       [parent_execution_path] = SUBSTRING([loop_iteration].[execution_path] ,1, [loop_iteration].length_exec_path - [loop_iteration].[char_index_open_square])           FROM    (                   SELECT  [execution_path]                   ,       [char_index_open_square] = CHARINDEX('[',REVERSE([execution_path]),1)                   ,       [char_index_close_square] = CHARINDEX(']',REVERSE([execution_path]),1)                   ,       [length_exec_path] = LEN([execution_path])                   FROM    [exec_stats] es                   WHERE   execution_path LIKE '%\[%]%'  ESCAPE '\'                   )AS [loop_iteration]           ) AS [new_executable]   GROUP   BY [new_executable].[execution_path],[new_executable].[parent_execution_path]) It was there because SSIS does not currently treat a loop iteration as an executable yet I figured there was still value in being able to view it as such – this SQL essentially “invents” new executables for those loop iterations; its what enabled the following visualisation: where each of the three iterations of a For Each Loop called “FEL Loop over top performing regions” appear in the report. Unfortunately, as I alluded, this could under certain circumstances (most likely when there were many loop iterations) cause the report to hang as it waited for the results to be constructed and returned. The change that I have made eradicates this generation of “fake” executables and thus produces this visualisation instead: Notice that the three “children” of the For Each Loop are no longer the three iterations but actually the task (“EPT Call Data Export Package”) contained within that For Each Loop. The problem here is of course that there is no longer a visual distinction between those three iterations; I have instead made the full execution path viewable via a tooltip:   If you preferred the “old” way of presenting this information and are happy to put up with the performance degradation then I have kept the old version of the report hanging around in the reporting pack as “execution loop with iterations” however none of the other reports link to it so you will have to browse to it manually if you want to use it. Please let me know if you ARE using it – I would be very interested to hear about your experiences.   The last change to make you aware of in the execution report is that by default I no longer show OnPreValidate or OnPostValidate messages as I consider them to be superfluous and only serve to clutter up the results. If you want to put them back, well, its open source so go right ahead!   
The latest release of SSIS Reporting Pack that contains all of these changes is v0.4 and can be downloaded from http://ssisreportingpack.codeplex.com/releases/view/88178   Feedback on all of the above changes would be very much appreciated. @Jamiet

    Read the article

  • designing classes with a similar goal but widely different decisional cores

    - by Stefano Borini
    I am puzzled about how to model this situation. Suppose you have an algorithm operating in a loop. At every iteration, a procedure P must take place, whose role is to transform input data I into output data O, such that O = P(I). In reality, there are different flavors of P, say P1, P2, P3 and so on. The choice of which P to run is user dependent, but all Ps have the same goal: produce O from I. This called for a base class PBase with a method PBase::apply, with specific reimplementations P1::apply(I), P2::apply(I), and P3::apply(I). The actual P class gets instantiated in a factory method, and the loop stays simple. Now I have a case, P4, which follows the same principle but this time needs additional data from the loop (such as the current iteration, and the average value of O over the previous iterations). How would you redesign for this case?
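
    One possible sketch, in the same spirit as the PBase::apply design already described (LoopContext and its members are illustrative names, and I and O stand for the question's input and output types): pass a small context object describing the loop state to apply(); P1-P3 simply ignore it, while P4 reads it.

        struct LoopContext {
            int    iteration;   // current iteration of the surrounding loop
            double averageO;    // average value of O over the previous iterations
        };

        class PBase {
        public:
            virtual ~PBase() {}
            virtual O apply(const I& input, const LoopContext& ctx) = 0;
        };

        class P1 : public PBase {
        public:
            O apply(const I& input, const LoopContext& /*ctx*/) {
                O out;
                // ... produce out from input exactly as before, ignoring the context ...
                return out;
            }
        };

        class P4 : public PBase {
        public:
            O apply(const I& input, const LoopContext& ctx) {
                O out;
                // ... produce out from input, ctx.iteration and ctx.averageO ...
                return out;
            }
        };

        // In the loop: output = p->apply(input, ctx);  // the factory and loop code stay unchanged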

    Read the article

  • await, WhenAll, WaitAll, oh my!!

    - by cibrax
    If you are dealing with asynchronous work in .NET, you might know that the Task class has become the main driver for wrapping asynchronous calls. Although this class was officially introduced in .NET 4.0, the programming model for consuming tasks was much more simplified in C# 5.0 in .NET 4.5 with the addition of the new async/await keywords. In a nutshell, you can use these keywords to make asynchronous calls as if they were sequential, and avoiding in that way any fork or callback in the code. The compiler takes care of the rest. I was yesterday writing some code for making multiple asynchronous calls to backend services in parallel. The code looked as follow, var allResults = new List<Result>(); foreach(var provider in providers) { var results = await provider.GetResults(); allResults.AddRange(results); } return allResults; You see, I was using the await keyword to make multiple calls in parallel. Something I did not consider was the overhead this code implied after being compiled. I started an interesting discussion with some smart folks in twitter. One of them, Tugberk Ugurlu, had the brilliant idea of actually write some code to make a performance comparison with another approach using Task.WhenAll. There are two additional methods you can use to wait for the results of multiple calls in parallel, WhenAll and WaitAll. WhenAll creates a new task and waits for results in that new task, so it does not block the calling thread. WaitAll, on the other hand, blocks the calling thread. This is the code Tugberk initially wrote, and I modified afterwards to also show the results of WaitAll. class Program { private static Func<Stopwatch, Task>[] funcs = new Func<Stopwatch, Task>[] { async (watch) => { watch.Start(); await Task.Delay(1000); Console.WriteLine("1000 one has been completed."); }, async (watch) => { await Task.Delay(1500); Console.WriteLine("1500 one has been completed."); }, async (watch) => { await Task.Delay(2000); Console.WriteLine("2000 one has been completed."); watch.Stop(); Console.WriteLine(watch.ElapsedMilliseconds + "ms has been elapsed."); } }; static void Main(string[] args) { Console.WriteLine("Await in loop work starts..."); DoWorkAsync().ContinueWith(task => { Console.WriteLine("Parallel work starts..."); DoWorkInParallelAsync().ContinueWith(t => { Console.WriteLine("WaitAll work starts..."); WaitForAll(); }); }); Console.ReadLine(); } static async Task DoWorkAsync() { Stopwatch watch = new Stopwatch(); foreach (var func in funcs) { await func(watch); } } static async Task DoWorkInParallelAsync() { Stopwatch watch = new Stopwatch(); await Task.WhenAll(funcs[0](watch), funcs[1](watch), funcs[2](watch)); } static void WaitForAll() { Stopwatch watch = new Stopwatch(); Task.WaitAll(funcs[0](watch), funcs[1](watch), funcs[2](watch)); } } After running this code, the results were very concluding. Await in loop work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 4532ms has been elapsed. Parallel work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 2007ms has been elapsed. WaitAll work starts... 1000 one has been completed. 1500 one has been completed. 2000 one has been completed. 2009ms has been elapsed. The await keyword in a loop does not really make the calls in parallel.
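
    Applied back to the original loop over providers, a sketch of the WhenAll version (provider.GetResults() and the Result type are as in the snippet above; System.Linq is assumed to be imported):

        // Start every call first, then await them all; the total time is roughly the
        // slowest single call rather than the sum of all calls.
        var tasks = providers.Select(provider => provider.GetResults()).ToList();
        var resultsPerProvider = await Task.WhenAll(tasks);

        var allResults = resultsPerProvider.SelectMany(results => results).ToList();
        return allResults;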

    Read the article

  • Using position function for accessing particular node when using While Activity in SOA 11.1.1.5

    - by AJ
    Hi, if you are using the While activity in SOA Suite 11.1.1.5 and within the loop you need to access a repeating node of an XML document, you might need to use the XPath expression below to access the node. Here is the XML that I am using for this example: <?xml version='1.0' encoding='UTF-8'?> David DemoJob 1 2012-04-15 40000 0 10 Steve TestJob 1 2012-04-15 40000 0 10 Here you can notice that the Emp node is repeating, i.e. the EmpCollection node will contain multiple employees. Now, in one of the Assign activities inside the loop, you need to access a particular node: for example, the first time the loop runs you want to access the first node, the second time the second node, and so on. You need to make use of the position() function, like bpws:getVariableData('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp[position()=$loopCounter]/ns4:job') Please note: here loopCounter is a variable that we have created of type xsd:int and initialized to 1 prior to the loop. The loop will run depending on the number of Emp nodes present at runtime. For that, in the While activity you can use the XPath expression below: ora:countNodes('Receive1_Read_InputVariable','body','/ns4:EmpCollection/ns4:Emp')>=bpws:getVariableData('loopCounter') Do let me know in case of any issues or concerns. Cheers AJ

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >