Search Results

Search found 1157 results on 47 pages for 'recursive descent'.

Page 16/47 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Why can't I define a recursive variable in a code block?

    - by senia
    Why can't I define a variable recursively in a code block?

        scala> {
             |   val fibs: Stream[Int] = 1 #:: fibs.scanLeft(1){_ + _}
             | }
        <console>:9: error: forward reference extends over definition of value fibs
               val fibs: Stream[Int] = 1 #:: fibs.scanLeft(1){_ + _}
                                             ^

        scala> val fibs: Stream[Int] = 1 #:: fibs.scanLeft(1){_ + _}
        fibs: Stream[Int] = Stream(1, ?)

    The lazy keyword solves this problem, but I can't understand why it works without a code block yet throws a compilation error inside one.

    Read the article

  • Can somebody please explain this recursive function for me?

    - by capncoolio
        #include <stdio.h>
        #include <stdlib.h>

        void reprint(char *a[]) {
            if (*a) {
                printf("%d ", a);
                reprint(a + 1);
                printf("%s ", *a);
            }
        }

        int main() {
            char *coll[] = {"C", "Objective", "like", "don't", "I", NULL};
            reprint(coll);
            printf("\n");
            return EXIT_SUCCESS;
        }

    As the more experienced will know, this prints the array in reverse. I don't quite understand how! I need help understanding what reprint(char *a[]) does. I understand pointer arithmetic to a degree, and from inserting printf's here and there, I've determined that the function increments up to the array's end and then back down to the start, only printing on the way down. However, I do not understand how it does this; all I've managed to understand by looking at the actual code is that if *a isn't NULL, then reprint is called again at the next index. Thanks, guys!
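
    The key detail is that the second printf comes after the recursive call, so nothing is printed until the recursion has reached the NULL terminator and begun unwinding. A minimal Python sketch of the same recurse-first, print-on-the-way-back pattern (an illustration, not the original C):

        def reprint(items, i=0):
            if i < len(items):
                reprint(items, i + 1)       # walk to the end of the list first
                print(items[i], end=" ")    # print only while unwinding

        reprint(["C", "Objective", "like", "don't", "I"])
        # prints: I don't like Objective C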

    Read the article

  • How to make a recursive onClickListener for expanding and collapsing?

    - by hunterp
    In plain English: I have a TextView, and when I click on it I want it to expand, and when I click on it again, I want it to collapse. How can I do this? I've tried the code below, but on the final line, holderFinal.text.setOnClickListener(expander), it warns that expander might not have been initialized. So now the code:

        final View.OnClickListener expander = new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                holderFinal.text.setText(textData);
                holderFinal.text.setOnClickListener(new View.OnClickListener() {
                    @Override
                    public void onClick(View v) {
                        holderFinal.text.setText(shortText);
                        holderFinal.text.setOnClickListener(expander);
                    }
                });
            }
        };

    Read the article

  • Prolog: recursive function returning multiple values

    - by obtur
    I am new to Prolog. I need to declare a predicate which looks at a list like [[1, 0, 0], [1, 0, 0], [1, 0, 0]] and, where the value is 0, returns its address (considering the list as a double array). I wrote a predicate basically of the format:

        function(..., X) :- function(called with other values).

    How can I write a predicate that returns a value each time it is called (recursively)? May I get them (in the above question) as alternative X's?
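
    In Prolog the usual answer is that the predicate does not "return" anything; it succeeds once per solution, and backtracking (or findall/3) enumerates the alternative bindings for X. For readers coming from imperative languages, a rough Python analogy (a hypothetical sketch, not Prolog semantics) is a recursive generator that yields one address per zero:

        def zero_addresses(grid, row=0):
            # yields (row, col) for each 0, one solution at a time,
            # much as backtracking produces alternative X's
            if row == len(grid):
                return
            for col, value in enumerate(grid[row]):
                if value == 0:
                    yield (row, col)
            yield from zero_addresses(grid, row + 1)

        print(list(zero_addresses([[1, 0, 0], [1, 0, 0], [1, 0, 0]])))
        # [(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)]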

    Read the article

  • Is it good to create a UserControl for recursive code in XAML?

    - by user281947
        <Border BorderBrush="#C4C8CC" BorderThickness="0,0,0,1">
            <TextBlock x:Name="SectionTitle"
                       FontFamily="Trebuchet MS"
                       FontSize="14"
                       FontWeight="Bold"
                       Foreground="#3D3D3D" />
        </Border>

    I have to use the same format as above in many places in a single XAML page, so I created a UserControl and defined the above code inside it. So my question is: is this the right approach? Will it make the page load slower than if the code were used as-is, without defining it in a new UserControl?

    Read the article

  • Compiling Gnucash 2.6.3 in Ubuntu 14.04

    - by wolveryn
    Downloaded the Debian file from SourceForge and followed the instructions, but these errors appear; I re-downloaded the file several times with the same error. I want to install the latest GnuCash, not the one available in the Software Center. Thank you for your support.

        /qof/gnc-date/qof print date dmy buff:
        There are some differences between distros in the way they name locales,
        and this can cause trouble with the locale-based formatting. If you get
        the assert in this function, run locale -a and make sure that en_US,
        en_GB, and fr_FR are installed and that if a suffix is needed it's in
        the suffixes array.
        ** ERROR:test-gnc-date.c:465:test_gnc_setlocale: code should not be reached
        FAIL
        GTester: last random seed: R02Sd8d3d0e67be954baa8ec75d81a14c0e3
        /bin/bash: line 1: 18889 Terminated  MALLOC_CHECK_=2 MALLOC_PERTURB_=$((${RANDOM:-256} % 256)) gtester --verbose test-qof
        make[5]: *** [test-nonrecursive] Error 143
        make[5]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
        make[4]: *** [check-am] Error 2
        make[4]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof/test'
        make[3]: *** [check-recursive] Error 1
        make[3]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof/qof'
        make[2]: *** [check-recursive] Error 1
        make[2]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src/libqof'
        make[1]: *** [check-recursive] Error 1
        make[1]: Leaving directory `/home/ahmed/gnucash/gnucash-2.6.3/src'
        make: *** [check-recursive] Error 1

    Read the article

  • Title case in Notepad++?

    - by recursive
    Is there a way to convert a block of text to title case in Notepad++? It should turn

        asdf ASDF aSdF

    into

        Asdf Asdf Asdf

    I see upper case and lower case on the Edit menu, but those aren't quite what I'm looking for.
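
    The TextFX plugin is often suggested for this inside Notepad++, but the transform itself is trivial to script outside the editor; a minimal Python sketch (assuming "title case" means capitalizing each whitespace-separated word):

        text = "asdf ASDF aSdF"
        print(" ".join(word.capitalize() for word in text.split()))
        # Asdf Asdf Asdf

    Note that str.capitalize lowercases the rest of each word, which is exactly the aSdF -> Asdf behavior asked for here.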

    Read the article

  • Resources for improving your comprehension of recursion?

    - by Andrew M
    I know what recursion is (when a pattern recurs within itself; typically a function that calls itself on one of its lines, after a breakout conditional... right?), and I can understand recursive functions if I study them closely. My problem is, when I see new examples, I'm always initially confused. If I see a loop, or a mapping, zipping, nesting, polymorphic calling, and so on, I know what's going on just by looking at it. When I see recursive code, my thought process is usually "wtf is this?" followed by "oh, it's recursive" followed by "I guess it must work, if they say it does." So do you have any tips/plans/resources for building up your skills in this area? Recursion is kind of a weird concept, so I'm thinking the way to tackle it may be equally weird and non-obvious.

    Read the article

  • Loops, Recursion and Memoization in JavaScript

    - by Ken Dason
    Originally posted on: http://geekswithblogs.net/kdason/archive/2013/07/25/loops-recursion-and-memoization-in-javascript.aspx

    According to Wikipedia, the factorial of a positive integer n (denoted by n!) is the product of all positive integers less than or equal to n. For example, 5! = 5 x 4 x 3 x 2 x 1 = 120. The value of 0! is 1. We can use factorials to demonstrate iterative loops and recursive functions in JavaScript. Here is a function that computes the factorial using a for loop: Output: Time Taken: 51 ms. Here is the factorial function coded to be called recursively: Output: Time Taken: 165 ms. We can speed up the recursive function with the use of memoization: if the value has previously been computed, it is simply returned and the recursive call ends. Output: Time Taken: 17 ms.
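
    The post's code samples survive only as their timing outputs above. A Python rendering of the three approaches (an assumed reconstruction, not the author's JavaScript) might look like:

        from functools import lru_cache

        def factorial_loop(n):
            result = 1
            for i in range(2, n + 1):
                result *= i
            return result

        def factorial_recursive(n):
            return 1 if n <= 1 else n * factorial_recursive(n - 1)

        @lru_cache(maxsize=None)  # memoization: cached values end the recursion early
        def factorial_memoized(n):
            return 1 if n <= 1 else n * factorial_memoized(n - 1)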

    Read the article

  • Python, web log data mining for frequent patterns

    - by descent
    Hello! I need to develop a tool for web log data mining. Given many sequences of URLs requested in particular user sessions (retrieved from web-application logs), I need to figure out the usage patterns and the groups (clusters) of the website's users. I am new to data mining, and am now searching Google a lot. I have found some useful info; for example, searching for "frequent pattern mining in web log data" seems to point to almost exactly similar studies. So my questions are: Are there any Python-based tools that do what I need, or at least something similar? Can the Orange toolkit be of any help? Can reading the book Programming Collective Intelligence be of any help? What should I Google for, what should I read, and which relatively simple algorithms are best to use? I am very limited in time (to around a week), so any help would be extremely precious. What I need is a pointer in the right direction and advice on how to accomplish the task in the shortest time. Thanks in advance!
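
    As a starting point before reaching for a full toolkit (a minimal sketch, assuming sessions are already parsed into lists of URLs), frequent consecutive sub-sequences can be counted with the standard library alone:

        from collections import Counter

        sessions = [
            ["/home", "/search", "/item/42", "/cart"],
            ["/home", "/search", "/item/7"],
            ["/home", "/search", "/item/42", "/checkout"],
        ]

        def ngrams(seq, n):
            # all length-n runs of consecutive requests in one session
            return zip(*(seq[i:] for i in range(n)))

        counts = Counter(g for s in sessions for g in ngrams(s, 2))
        print(counts.most_common(2))
        # [(('/home', '/search'), 3), (('/search', '/item/42'), 2)]

    Clustering users is a separate step; Orange does cover the standard clustering algorithms once sessions are reduced to feature vectors.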

    Read the article

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu on which there is a View History item. Selecting this gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as is provided by the VS TFS Source Control plug-in. But I cannot work out how to get tf.exe history to provide this across-all-Team-Projects history. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar, and Baz as Team Projects, I can do

        C:\> tf history Foo /recursive /stopafter:200

    to get the last 200 changesets that affected Team Project Foo; or from within a Team Project folder

        C:\Bar> tf history *.* /recursive /stopafter:200

    which does the same thing for Team Project Bar - note that the wildcard *.* is allowed here. However, none of these work (each gives the error message shown):

        C:\> tf history /recursive /stopafter:200
        The history command takes exactly one item

        C:\> tf history *.* /recursive /stopafter:200
        Unable to determine the source control server

        C:\> tf history *.* /server:servername /recursive /stopafter:200
        Unable to determine the workspace

    I don't see an option in the docs for tf for specifying a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally doing a history on each Team Project in turn and then sticking the results together? Note also that I have tried the Power Tools; tfpt history from the command line gives exactly the same error messages seen here.

    Read the article

  • Recursion Question: Revision

    - by stan
    My slides say that:

    1. A recursive call should always be on a smaller data structure than the current one.
    2. There must be a non-recursive option if the data structure is too small.
    3. You need a wrapper method to make the recursive method accessible.

    Just reading this from the slides makes no sense, especially since it was a topic from before Christmas! Could anyone try to clear up what it means, please? Thank you.
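
    A minimal sketch of what the three rules mean in practice (illustrative Python, not from the slides):

        def sum_list(items):
            # rule 3: wrapper method -- callers use this simple entry point
            return _sum_range(items, 0, len(items))

        def _sum_range(items, lo, hi):
            if hi - lo <= 1:
                # rule 2: non-recursive option once the data is too small
                return items[lo] if hi > lo else 0
            mid = (lo + hi) // 2
            # rule 1: each recursive call gets a smaller piece of the structure
            return _sum_range(items, lo, mid) + _sum_range(items, mid, hi)

        print(sum_list([1, 2, 3, 4]))  # 10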

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control, we will continue to use CruiseControl for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we came across.

    Problem

    After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code.

        System.IO.IOException: The directory is not empty.
           at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
           at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

        Project: Bupa.BPI.Documents
        Date of build: 2011-01-28 14:54:21
        Running time: 00:00:05
        Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

    Solution

    The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder, then complains that that's a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past.

    To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:

        <prebuild>
          <exec>
            <executable>cmd.exe</executable>
            <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
          </exec>
        </prebuild>

    Read the article

  • ASP:DropDownList in ItemTemplate: Why is SelectedValue attribute allowed?

    - by recursive
    This piece of code

        <asp:DropDownList runat="server" ID="testdropdown" SelectedValue="2">
            <asp:ListItem Text="1" Value="1"></asp:ListItem>
            <asp:ListItem Text="2" Value="2"></asp:ListItem>
            <asp:ListItem Text="3" Value="3"></asp:ListItem>
        </asp:DropDownList>

    yields this error:

        The 'SelectedValue' property cannot be set declaratively.

    Yet, this is a legal and commonly used edit template for databound GridViews. The SelectedValue attribute certainly appears to be declaratively set here.

        <EditItemTemplate>
            <asp:DropDownList runat="server" ID="GenreDropDownList"
                              DataSourceID="GenreDataSource"
                              DataValueField="GenreId"
                              DataTextField="Name"
                              SelectedValue='<%# Bind("Genre.GenreId") %>'>
            </asp:DropDownList>
        </EditItemTemplate>

    The question is: what is the difference between the cases when you are allowed to set it declaratively and those in which you are not? The error message implies that it's never allowed.

    Read the article

  • Extended slice that goes to beginning of sequence with negative stride

    - by recursive
    Bear with me while I explain my question. Skip down to "My question" if you already understand extended slice list indexing.

    In Python, you can index lists using slice notation. Here's an example:

        >>> A = list(range(10))
        >>> A[0:5]
        [0, 1, 2, 3, 4]

    You can also include a stride, which acts like a "step":

        >>> A[0:5:2]
        [0, 2, 4]

    The stride is also allowed to be negative, meaning the elements are retrieved in reverse order:

        >>> A[5:0:-1]
        [5, 4, 3, 2, 1]

    But wait! I wanted to see [4, 3, 2, 1, 0]. Oh, I see, I need to decrement the start and end indices:

        >>> A[4:-1:-1]
        []

    What happened? It's interpreting -1 as being at the end of the array, not the beginning. I know you can achieve this as follows:

        >>> A[4::-1]
        [4, 3, 2, 1, 0]

    But you can't use this in all cases. For example, in a method that's been passed indices.

    My question is: Is there any good pythonic way of using extended slices with negative strides and explicit start and end indices that include the first element of a sequence? This is what I've come up with so far, but it seems unsatisfying.

        >>> A[0:5][::-1]
        [4, 3, 2, 1, 0]
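
    One detail that may help when indices are passed around (a suggestion, not necessarily the canonical answer): A[4::-1] is shorthand for an end index of None, and None is a perfectly ordinary value to pass or store, e.g. via a slice object:

        A = list(range(10))

        s = slice(4, None, -1)   # stop=None means "continue past index 0"
        print(A[s])              # [4, 3, 2, 1, 0]

        def reversed_prefix(seq, start):
            # hypothetical helper: reverse slice that still includes element 0
            return seq[slice(start, None, -1)]

        print(reversed_prefix(A, 4))  # [4, 3, 2, 1, 0]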

    Read the article

  • BackupPC - why does it use rsync --sender --server ... ?

    - by Jakobud
    I'm in the process of experimenting with BackupPC on a CentOS 5.5 server. I have everything pretty much set up with default values. I tried setting up a basic backup for a host's /www directory. The backup fails with the following errors:

        full backup started for directory /www
        Running: /usr/bin/ssh -q -x -l root target /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Xfer PIDs are now 30395
        Read EOF: Connection reset by peer
        Tried again: got 0 bytes
        Done: 0 files, 0 bytes
        Got fatal error during xfer (Unable to read 4 bytes)
        Backup aborted (Unable to read 4 bytes)
        Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0)

    First of all, yes, I have my SSH keys set up to allow me to ssh to the target server without requiring a password. In the process of troubleshooting, I tried the above ssh command directly from the command line, and it hangs. Looking at the end of the debug messages for SSH I get:

        debug1: Sending subsystem: /usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/
        Request for subsystem '/usr/bin/rsync --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . /www/' failed on channel 0

    Next I started looking at the rsync flags. I did not recognize --server and --sender. Looking at the rsync man pages, sure enough, I don't see anything about --server or --sender in there. What are those in there for? Looking at the BackupPC config I have this:

        RsyncClientPath = /usr/bin/rsync
        RsyncClientCmd = $sshPath -q -x -l root $host $rsyncPath $argList+

    And for the arguments, I have the following listed:

        --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive

    Notice there is no --server, --sender or --ignore-times. Why are these things getting added in? Is this part of the problem?

    Read the article

  • SQL SERVER – SQL in Sixty Seconds – 5 Videos from Joes 2 Pros Series – SQL Exam Prep Series 70-433

    - by pinaldave
    The Joes 2 Pros SQL Server Learning series is indeed fun. The Joes 2 Pros series is written for beginners and for those who want to build expertise in SQL Server programming and development from the fundamentals. At the beginning of the series, author Rick Morelan is not shy about explaining the simplest concepts, such as how to open SQL Server Management Studio. Honestly, the book starts that basic, but as it progresses Rick discusses various advanced concepts, from query tuning to core architecture. This five-part series is written with SQL Server Exam 70-433 in mind. Instead of just focusing on what will be in the exam, the series focuses on learning the important concepts thoroughly. The books take no shortcuts in explaining any concept and, at times, will go beyond the topic at length. The best part is that all the books have many companion videos explaining the concepts. Every Wednesday I like to post a video which explains something in a quick few seconds. Today we will go over five videos which I posted in my earlier posts related to the Joes 2 Pros series.

    Introduction to XML Data Type Methods – SQL in Sixty Seconds #015

    The XML data type was first introduced with SQL Server 2005. This data type continues with SQL Server 2008, where expanded XML features are available, most notably the power of the XQuery language to analyze and query the values contained in your XML instance. There are five XML data type methods available in SQL Server 2008:

        query() – Used to extract XML fragments from an XML data type.
        value() – Used to extract a single value from an XML document.
        exist() – Used to determine if a specified node exists. Returns 1 if yes and 0 if no.
        modify() – Updates XML data in an XML data type.
        nodes() – Shreds XML data into multiple rows (not covered in this blog post).

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Error Actions – SQL in Sixty Seconds #014

    Most people believe that when SQL Server encounters an error of severity level 11 or higher, the remaining SQL statements will not get executed. In addition, people also believe that if any error of severity level 11 or higher is hit inside an explicit transaction, then the whole statement will fail as a unit. While both of these beliefs are true 99% of the time, they are not true in all cases. It is these outlying cases that frequently cause unexpected results in your SQL code. To understand how to achieve consistent results you need to know the four ways SQL Error Actions can react to error severity levels 11-16:

        Statement Termination – The statement within the procedure fails but the code keeps on running to the next statement. Transactions are not affected.
        Scope Abortion – The current procedure, function or batch is aborted and the next calling scope keeps running. That is, if Stored Procedure A calls B and C, and B fails, then nothing in B runs but A continues to call C. @@Error is set but the procedure does not have a return value.
        Batch Termination – The entire client call is terminated.
        XACT_ABORT – (ON = The entire client call is terminated.) or (OFF = SQL Server will choose how to handle all errors.)

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Basics of a Query Hint – SQL in Sixty Seconds #013

    Query hints specify that the indicated hints should be used throughout the query. Query hints affect all operators in the statement and are implemented using the OPTION clause.

    Cautionary Note: Because the SQL Server Query Optimizer typically selects the best execution plan for a query, it is highly recommended that hints be used as a last resort, and then only by experienced developers and database administrators, to achieve the desired results.

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to Hierarchical Query – SQL in Sixty Seconds #012

    A CTE can be thought of as a temporary result set and is similar to a derived table in that it is not stored as an object and lasts only for the duration of the query. A CTE is generally considered to be more readable than a derived table and does not require the extra effort of declaring a temp table while providing the same benefits to the user. However, a CTE is more powerful than a derived table as it can also be self-referencing, or even referenced multiple times in the same query. A recursive CTE requires four elements in order to work properly:

        Anchor query (runs once and the results 'seed' the recursive query)
        Recursive query (runs multiple times and is the criteria for the remaining results)
        UNION ALL statement to bind the anchor and recursive queries together
        INNER JOIN statement to bind the recursive query to the results of the CTE

    [Detailed Blog Post] | [Quiz with Answer]

    Introduction to SQL Server Security – SQL in Sixty Seconds #011

    Let's get some basic definitions down first. Take the workplace example where "Tom" needs "Read" access to the "Financial Folder". What are the Securable, Principal, and Permissions from that last sentence?

        A Securable is a resource that someone might want to access (like the Financial Folder).
        A Principal is anything that might want to gain access to the securable (like Tom).
        A Permission is the level of access a principal has to a securable (like Read).

    [Detailed Blog Post] | [Quiz with Answer]

    Please leave a comment explaining which video was your favorite, as that will help me understand what works and what needs improvement.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • Neural Network with softmax activation

    - by Cambium
    This is more or less a research project for a course, and my understanding of NNs is very/fairly limited, so please be patient :)

    I am currently in the process of building a neural network that attempts to examine an input dataset and output the probability/likelihood of each classification (there are 5 different classifications). Naturally, the sum of all output nodes should add up to 1. Currently, I have two layers, and I set the hidden layer to contain 10 nodes. I came up with two different types of implementations:

    1) Logistic sigmoid for hidden layer activation, softmax for output activation
    2) Softmax for both hidden layer and output activation

    I am using gradient descent to find local maximums in order to adjust the hidden nodes' weights and the output nodes' weights. I am certain that I have this correct for sigmoid. I am less certain with softmax (or whether I can use gradient descent at all). After a bit of researching, I couldn't find the answer, so I decided to compute the derivative myself and obtained softmax'(x) = softmax(x) - softmax(x)^2 (this returns a column vector of size n). I have also looked into the MATLAB NN toolkit; the derivative of softmax provided by the toolkit returned a square matrix of size n x n, where the diagonal coincides with the softmax'(x) that I calculated by hand, and I am not sure how to interpret the output matrix.

    I ran each implementation with a learning rate of 0.001 and 1000 iterations of back propagation. However, my NN returns 0.2 (an even distribution) for all five output nodes, for any subset of the input dataset.

    My conclusions:

    - I am fairly certain that my gradient descent is done incorrectly, but I have no idea how to fix it.
    - Perhaps I am not using enough hidden nodes.
    - Perhaps I should increase the number of layers.

    Any help would be greatly appreciated! The dataset I am working with can be found here (processed Cleveland): http://archive.ics.uci.edu/ml/datasets/Heart+Disease
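
    For reference, the n x n matrix from the MATLAB toolkit is the standard Jacobian of softmax, and the hand-computed column vector is exactly its diagonal. Writing s = softmax(x):

        \[
        \frac{\partial s_i}{\partial x_j} = s_i\,(\delta_{ij} - s_j),
        \qquad
        J = \operatorname{diag}(s) - s\,s^{\top}
        \]

    The diagonal entries are s_i(1 - s_i) = s_i - s_i^2, matching softmax(x) - softmax(x)^2, while the off-diagonal entries -s_i s_j are cross-terms that a correct gradient computation also has to propagate.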

    Read the article

  • Parallelism in .NET – Part 11, Divide and Conquer via Parallel.Invoke

    - by Reed
    Many algorithms are easily written to work via recursion. For example, most data-oriented tasks where a tree of data must be processed are much more easily handled by starting at the root, and recursively "walking" the tree. Some algorithms work this way on flat data structures, such as arrays, as well. This is a form of divide and conquer: an algorithm design which is based around breaking up a set of work recursively, "dividing" the total work in each recursive step, and "conquering" the work when the remaining work is small enough to be solved easily.

    Recursive algorithms, especially ones based on a form of divide and conquer, are often very good candidates for parallelization. This is apparent from a common sense standpoint. Since we're dividing up the total work in the algorithm, we have an obvious, built-in partitioning scheme. Once partitioned, the data can be worked upon independently, so there is good, clean isolation of data.

    Implementing this type of algorithm is fairly simple. The Parallel class in .NET 4 includes a method suited for this type of operation: Parallel.Invoke. This method works by taking any number of delegates defined as an Action, and operating them all in parallel. The method returns when every delegate has completed:

        Parallel.Invoke(
            () =>
            {
                Console.WriteLine("Action 1 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 2 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            },
            () =>
            {
                Console.WriteLine("Action 3 executing in thread {0}",
                    Thread.CurrentThread.ManagedThreadId);
            });

    Running this simple example demonstrates the ease of using this method. For example, on my system, I get three separate thread IDs when running the above code. By allowing any number of delegates to be executed directly, concurrently, the Parallel.Invoke method provides us an easy way to parallelize any algorithm based on divide and conquer. We can divide our work in each step, and execute each task in parallel, recursively.

    For example, suppose we wanted to implement our own quicksort routine. The quicksort algorithm can be designed based on divide and conquer. In each iteration, we pick a pivot point, and use that to partition the total array. We swap the elements around the pivot, then recursively sort the lists on each side of the pivot. For example, let's look at this simple, sequential implementation of quicksort:

        public static void QuickSort<T>(T[] array) where T : IComparable<T>
        {
            QuickSortInternal(array, 0, array.Length - 1);
        }

        private static void QuickSortInternal<T>(T[] array, int left, int right)
            where T : IComparable<T>
        {
            if (left >= right)
            {
                return;
            }
            SwapElements(array, left, (left + right) / 2);
            int last = left;
            for (int current = left + 1; current <= right; ++current)
            {
                if (array[current].CompareTo(array[left]) < 0)
                {
                    ++last;
                    SwapElements(array, last, current);
                }
            }
            SwapElements(array, left, last);
            QuickSortInternal(array, left, last - 1);
            QuickSortInternal(array, last + 1, right);
        }

        static void SwapElements<T>(T[] array, int i, int j)
        {
            T temp = array[i];
            array[i] = array[j];
            array[j] = temp;
        }

    Here, we implement the quicksort algorithm in a very common, divide and conquer approach. Running this against the built-in Array.Sort routine shows that we get the exact same answers (although the framework's sort routine is slightly faster). On my system, for example, I can use the framework's sort to sort ten million random doubles in about 7.3s, and this implementation takes about 9.3s on average.

    Looking at this routine, though, there is a clear opportunity to parallelize. At the end of QuickSortInternal, we recursively call into QuickSortInternal with each partition of the array after the pivot is chosen. This can be rewritten to use Parallel.Invoke by simply changing it to:

        // Code above is unchanged...
        SwapElements(array, left, last);
        Parallel.Invoke(
            () => QuickSortInternal(array, left, last - 1),
            () => QuickSortInternal(array, last + 1, right));

    This routine will now run in parallel. When executing, we now see the CPU usage across all cores spike while it executes. However, there is a significant problem here – by parallelizing this routine, we took it from an execution time of 9.3s to an execution time of approximately 14 seconds! We're using more resources as seen in the CPU usage, but the overall result is a dramatic slowdown in overall processing time.

    This occurs because parallelization adds overhead. Each time we split this array, we spawn two new tasks to parallelize this algorithm! This is far, far too many tasks for our cores to operate upon at a single time. In effect, we're "over-parallelizing" this routine. This is a common problem when working with divide and conquer algorithms, and leads to an important observation: When parallelizing a recursive routine, take special care not to add more tasks than necessary to fully utilize your system.

    This can be done with a few different approaches, in this case. Typically, the way to handle this is to stop parallelizing the routine at a certain point, and revert back to the serial approach. Since the first few recursions will all still be parallelized, our "deeper" recursive tasks will be running in parallel, and can take full advantage of the machine. This also dramatically reduces the overhead added by parallelizing, since we're only adding overhead for the first few recursive calls. There are two basic approaches we can take here. The first approach would be to look at the total work size, and if it's smaller than a specific threshold, revert to our serial implementation. In this case, we could just check right - left, and if it's under a threshold, call the methods directly instead of using Parallel.Invoke.

    The second approach is to track how "deep" in the "tree" we are currently at, and if we are below some number of levels, stop parallelizing. This approach is a more general-purpose approach, since it works on routines which parse trees as well as routines working off of a single array, but may not work as well if a poor partitioning strategy is chosen or the tree is not balanced evenly.

    This can be written very easily. If we pass a maxDepth parameter into our internal routine, we can restrict the amount of times we parallelize by changing the recursive call to:

        // Code above is unchanged...
        SwapElements(array, left, last);
        if (maxDepth < 1)
        {
            QuickSortInternal(array, left, last - 1, maxDepth);
            QuickSortInternal(array, last + 1, right, maxDepth);
        }
        else
        {
            --maxDepth;
            Parallel.Invoke(
                () => QuickSortInternal(array, left, last - 1, maxDepth),
                () => QuickSortInternal(array, last + 1, right, maxDepth));
        }

    We no longer allow this to parallelize indefinitely – only to a specific depth, at which time we revert to a serial implementation. By starting the routine with a maxDepth equal to Environment.ProcessorCount, we can restrict the total amount of parallel operations significantly, but still provide adequate work for each processing core.

    With this final change, my timings are much better. On average, I get the following timings:

        Framework via Array.Sort:                  7.3 seconds
        Serial Quicksort Implementation:           9.3 seconds
        Naive Parallel Implementation:             14 seconds
        Parallel Implementation Restricting Depth: 4.7 seconds

    Finally, we are now faster than the framework's Array.Sort implementation.

    Read the article

  • Is recursion really bad?

    - by dotneteer
    After my previous post about stack space, it appears from the feedback that there is a perception that recursion is bad and we should avoid deep recursion. After writing a compiler, I know that the modern computer and compiler are complex enough that one cannot automatically assume hand-crafted code will out-perform the compiler's optimization. The only way to know is to prototype and find out.

    So why might recursive code not perform as well? Compilers place frames on a stack. In addition to arguments and local variables, compilers also need to place frame and program pointers on the frame, resulting in overhead.

    So why might hand-crafted code not perform as well? The stack used by a compiler is a simpler data structure that can grow and shrink cleanly. To replace recursion with our own stack, our stack must be allocated on the heap, which is far more complicated to manage. There could be overhead as well if the compiler needs to mark objects for garbage collection. The compiler also needs to worry about memory fragmentation.

    Then there is additional complexity: CPUs have registers and multiple levels of cache. Register access is a few times faster than in-CPU cache access, which in turn is a few tens of times faster than on-board memory access. So it is up to the OS and compiler to maximize the use of registers and in-CPU cache.

    For my particular problem, I did an experiment to rewrite my C# version of the recursive code with a loop-and-stack approach. Here are the outcomes of the two approaches:

                                          Recursive call   Loop and stack
        Lines of code for the algorithm   17               46
        Speed                             Baseline         3% faster
        Readability                       Clean            Far more complex

    So in the end, I was able to achieve 3% better performance, with other drawbacks. My message is: never assume your sophisticated approach will automatically work out better than a simpler approach on a modern computer and compiler. Gauge carefully before committing to a more complex approach.
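
    To make the trade-off concrete, here is the same traversal written both ways in an illustrative Python sketch (not the author's C# code); the explicit heap-allocated stack is where the extra lines and complexity come from:

        def sum_tree_recursive(node):
            if node is None:
                return 0
            return (node["value"]
                    + sum_tree_recursive(node["left"])
                    + sum_tree_recursive(node["right"]))

        def sum_tree_with_stack(root):
            total, stack = 0, [root]   # a stack we allocate and manage ourselves
            while stack:
                node = stack.pop()
                if node is None:
                    continue
                total += node["value"]
                stack.append(node["left"])
                stack.append(node["right"])
            return total

        tree = {"value": 1,
                "left":  {"value": 2, "left": None, "right": None},
                "right": {"value": 3, "left": None, "right": None}}
        assert sum_tree_recursive(tree) == sum_tree_with_stack(tree) == 6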

    Read the article

  • What happens to C# 4 optional parameters when compiling against 3.5?

    - by Bertrand Le Roy
    Here’s a method declaration that uses optional parameters:

        public Path Copy(
            Path destination,
            bool overwrite = false,
            bool recursive = false)

    Something you may not know is that Visual Studio 2010 will let you compile this against .NET 3.5, with no error or warning. You may be wondering (as I was) how it does that. Well, it takes the easy and rather obvious way of not trying to be too smart, and just ignores the optional parameters. So if you’re compiling against 3.5 from Visual Studio 2010, the above code is equivalent to:

        public Path Copy(
            Path destination,
            bool overwrite,
            bool recursive)

    The parameters are not optional (no such thing in C# 3), and no overload gets magically created for you. If you’re building a library that is going to have both 3.5 and 4.0 versions, and you want 3.5 users to have reasonable overloads of your methods, you’ll have to provide those yourself, which means that providing a version with optional parameters for the benefit of 4.0 users is not going to provide that much value, except for the ability to provide named parameters out of order. I guess that’s not so bad… Providing all of the following overloads will compile against both 3.5 and 4.0:

        public Path Copy(Path destination)
        public Path Copy(Path destination, bool overwrite)
        public Path Copy(
            Path destination,
            bool overwrite = false,
            bool recursive = false)

    Read the article

  • Is it just me or is this a baffling tech interview question

    - by Matthew Patrick Cashatt
    Background

    I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B, B is equal to C, and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was, but apparently there wasn't one, just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "Am I searching for a value?" Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"? To melt my processor, maybe? The answer according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends?

    My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real-world problems. I have been successfully coding for a long time, but this tech interview process makes me feel like I don't know anything.

    Final answer: CLOWN TRAVERSAL!!! (See @Matt's answer below.)

    Thanks! Matt

    Read the article

< Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >