Search Results

Search found 9667 results on 387 pages for 'parallel loop'.

  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD), so I made two quick video tutorials showing how BDD can be done from the early requirement-collection stage through to the late integration tests. The first explains breaking user stories into behaviors, then has developers and test engineers take the behavior specs and write a WCF service and its unit tests in parallel, before eventually integrating the WCF service and running the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows a way to write a test once and run it as both a unit test and an integration test at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit and SubSpec on a WCF Service

    Warning: you might hear some noise in the audio in places; something went wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and resume play; that eliminates the noise.

    The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take the behaviors and write tests against a prototype UI in isolation (just like a service contract), to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and confirm the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for the expected behaviors: Doing BDD with xUnit and WatiN on an ASP.NET WebForm

    Hope you like it!
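    For readers who want a concrete picture of the mocking step, here is a minimal, hypothetical sketch (not taken from the screencasts) of a behavior-style xUnit test that uses Moq to test a service in isolation from its dependency; ICustomerRepository and CustomerService are invented names for illustration:

        using Moq;
        using Xunit;

        // Hypothetical service contract and dependency, for illustration only.
        public interface ICustomerRepository
        {
            string GetCustomerName(int id);
        }

        public class CustomerService
        {
            private readonly ICustomerRepository _repository;

            public CustomerService(ICustomerRepository repository)
            {
                _repository = repository;
            }

            public string Greet(int customerId)
            {
                return "Hello, " + _repository.GetCustomerName(customerId);
            }
        }

        public class CustomerServiceBehaviors
        {
            [Fact]
            public void Given_an_existing_customer_When_greeted_Then_the_name_is_used()
            {
                // Arrange: mock the dependency so the behavior is tested in isolation.
                var repository = new Mock<ICustomerRepository>();
                repository.Setup(r => r.GetCustomerName(42)).Returns("Ada");

                var service = new CustomerService(repository.Object);

                // Act
                var greeting = service.Greet(42);

                // Assert: the behavior, not the implementation, is verified.
                Assert.Equal("Hello, Ada", greeting);
            }
        }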

  • What to use for simple cross-platform games instead of Flash?

    - by jmh_gr
    In short, for simple games: Is Flash still a good option for browser-based PC clients? It still has 90%+ penetration. And what is a good alternative for mobile devices? Is HTML5 + JavaScript the choice for mobile, or does one have to learn a new native language for each target platform (Android, Apple, Windows Phone)?

    If you want further background: there are more blogs about the official demise of mobile Flash than I can count, along with endless useless and vitriolic comments. I'm actually trying to do something practical: build simple games that can be served across multiple platforms. Several months ago I plopped down $1100 for CS5.5 Web and am wading into Flash. Bummer. My question to people who actually develop simple games and apps: what platform should I use instead? Is Flash still a sensible platform for web-served PC users?

    For example, say I build a simple arcade game that I would like to serve as an app to mobile users and as a browser-based game to PC users. Should I still invest the time and effort to learn and develop in Flash for the PC users, while building a parallel code set in some other language for mobile users? My games are simple enough that it would be annoying, but not inconceivable, to maintain parallel code sets.

  • Dev Lead Job opening on my team

    My product unit (Parallel Developer Tools) is hiring a developer lead here in Redmond. This position is specifically on the debugger feature team that I "Program Manage". So, if you have what it takes and don't mind working with me every single day, click on the link below to read more and apply. You can also send me your resume and I'll make sure it gets to the right place and that you get a prompt response.

    There is a very long job description on the Microsoft careers site under job id 707388. Here is an excerpt from the middle (emphasis mine):

    "...We are in search of a talented and innovative senior lead software design engineer to own development of the debugging tools for data parallelism (including GP-GPU) and HPC Clusters being built by our team. To be successful, you need to be able to guide careers, design and architect well, communicate and share the best development practices, collaborate with your peers, contribute to the vision, and code significant portions of the solution. We want to hear from you if you're passionate about making your mark in the parallel development space, improving people, and building world-class tools. Responsibilities include: managing a team of senior and junior developers; designing and coding high-quality software..."

    For the full background story, requirements, qualifications and responsibilities, please visit the official page. Comments about this post are welcome at the original blog.

  • Problem with SANE and CardScan

    - by TiuTalk
    I have a CardScan 60 II device and installed SANE on my Ubuntu 10.10 laptop. The problem is I can't make scanimage find the device:

        $ sudo sane-find-scanner
        # sane-find-scanner will now attempt to detect your scanner. If the
        # result is different from what you expected, first make sure your
        # scanner is powered up and properly connected to your computer.
        # No SCSI scanners found. If you expected something different, make sure that
        # you have loaded a kernel SCSI driver for your SCSI adapter.
        found USB scanner (vendor=0x08f0 [Corex Technologies Corporation], product=0x1000 [Corex CardScan 60], chip=LM9832/3) at libusb:006:002
        # Your USB scanner was (probably) detected. It may or may not be supported by
        # SANE. Try scanimage -L and read the backend's manpage.
        # Not checking for parallel port scanners.
        # Most Scanners connected to the parallel port or other proprietary ports
        # can't be detected by this program.

    But scanimage can't find the device:

        $ sudo scanimage -L
        No scanners were identified. If you were expecting something different,
        check that the scanner is plugged in, turned on and detected by the
        sane-find-scanner tool (if appropriate). Please read the documentation
        which came with this software (README, FAQ, manpages).

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition to which you add rows belongs to a different table.

    Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies the SQL used to read the new rows. Otherwise, the original query specified for the partition will be used, and it could generate duplicated data if you don't have dynamic behavior on the SQL side.

    If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd when you are using a Batch command with more than one Process command (which is the whole reason to use a single batch: running multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even though the code compiles without a syntax error, you might get this error at execution time:

        The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes into the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.
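    As a rough illustration of that workaround (not the article's exact code), the sketch below uses LINQ to XML to move each Bindings node out of its Process command and up to the Batch level, which is what the error message asks for. It assumes the XMLA was captured from AMO as a single Batch element whose direct children include the Process commands; verify both the structure and the engine namespace against what your AMO version actually emits:

        using System.Linq;
        using System.Xml.Linq;

        static class XmlaBindingsFixer
        {
            // Sketch only: relocates Bindings from each Process command to the
            // enclosing Batch, per the error message quoted above.
            public static string MoveBindingsToBatch(string capturedXmla)
            {
                XNamespace ns = "http://schemas.microsoft.com/analysisservices/2003/engine";
                XElement batch = XElement.Parse(capturedXmla);

                // ToList() so we can safely modify the tree while iterating.
                foreach (XElement process in batch.Elements(ns + "Process").ToList())
                {
                    XElement bindings = process.Element(ns + "Bindings");
                    if (bindings != null)
                    {
                        bindings.Remove();    // detach from the Process command...
                        batch.Add(bindings);  // ...and re-attach at the Batch level
                    }
                }
                return batch.ToString();
            }
        }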

  • Upcoming events : OBUG Connect Conference 2012

    - by Maria Colgan
    The Oracle Benelux User Group (OBUG) has given me an amazing opportunity to present a one-day Optimizer workshop at their annual Connect Conference in Maastricht on April 24th. The workshop will run as one of the parallel tracks at the conference and consists of three 45-minute sessions. Each session can be attended stand-alone, but they build on each other, so someone new to the Oracle Optimizer or SQL tuning can come away from the conference with a better understanding of how the Optimizer works and which techniques to deploy to tune their SQL. Below is a brief description of each session.

    Session 7 - 11:30 - Oracle Optimizer: Understanding Optimizer Statistics
    The workshop opens with a discussion of Optimizer statistics and the features introduced in Oracle Database 11g to improve the quality and efficiency of statistics-gathering. The session will also provide strategies for managing statistics in various database environments.

    Session 27 - 14:30 - Oracle Optimizer: Explain the Explain Plan
    The workshop continues with a detailed examination of the different aspects of an execution plan, from selectivity to parallel execution, and explains what information you should be gleaning from the plan.

    Session 47 - 15:45 - Top Tips to get Optimal Execution Plans
    Finally, I will show you how to identify and resolve the most common SQL execution performance problems, such as poor cardinality estimates, bind peeking issues, and selecting the wrong access method.

    Hopefully I will see you there! +Maria Colgan

  • Should you create a class within a method?

    - by Amndeep7
    I have made a program in Java that is an implementation of this project: http://nifty.stanford.edu/2009/stone-random-art/sml/index.html. Essentially, you create a mathematical expression and, using the pixel coordinate as input, make a picture. After initially implementing this serially, I implemented it in parallel, because if the picture size is too large or the mathematical expression too complex (especially considering that I built the expression recursively), it takes a really long time.

    During this process, I realized that I needed two classes implementing the Runnable interface, because I had to pass parameters to the run method, which you aren't allowed to do directly. One of these classes ended up being a medium-sized static inner class (not large enough to justify an independent class file, though). The other just needed a few parameters to determine some indexes and the size of the for loop that I was running in parallel. Here it is:

        class DataConversionRunnable implements Runnable {
            int jj, kk, w;

            DataConversionRunnable(int column, int matrix, int wid) {
                jj = column;
                kk = matrix;
                w = wid;
            }

            public void run() {
                for (int i = 0; i < w; i++)
                    colorvals[kk][jj][i] = (int) ((raw[kk][jj][i] + 1.0) * 255 / 2.0);
                increaseCounter();
            }
        }

    My question is: should I make it a static inner class, or can I just create it in a method? What is the general programming convention followed in this case?

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:

    - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
    - Read from the shard in parallel or sequentially, and cache resultsets
    - Update or delete a record from the shard
    - Insert records quickly into the shard with a round-robin load
    - Reset the cache

    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/

    Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library, using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach.

    ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally. So, in essence, the tables need to have the same schema across the databases.
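    The post doesn't show the library's API, but to illustrate the underlying idea (reading from every database in the shard in parallel with the Task Parallel Library and tagging each row with a connection "breadcrumb"), here is a minimal, hypothetical sketch using plain System.Data.SqlClient; this is not the Enzo SQL Shard API:

        using System;
        using System.Collections.Concurrent;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        static class ShardReader
        {
            // Illustration only: runs the same query against every database in
            // the shard in parallel and tags each row with a per-connection GUID.
            // The GUID here is random; the library derives it from the
            // connection string instead.
            public static ConcurrentBag<Tuple<Guid, string>> ReadShard(
                string[] connectionStrings, string sql)
            {
                var results = new ConcurrentBag<Tuple<Guid, string>>();

                Parallel.ForEach(connectionStrings, connectionString =>
                {
                    Guid connectionId = Guid.NewGuid();

                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand(sql, connection))
                    {
                        connection.Open();
                        using (var reader = command.ExecuteReader())
                        {
                            while (reader.Read())
                                results.Add(Tuple.Create(connectionId, reader.GetString(0)));
                        }
                    }
                });

                return results;
            }
        }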

  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup, I took the time to create an API that simplifies access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out, it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let's review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code: the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let's review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the variables used: _tableName stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries, because AsTableServiceQuery does that for you. Also note that this method is strongly typed because it uses LINQ. Although this doesn't look like too much code at first glance, you are actually mapping the strongly typed object manually, so for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. Note that the mapping is optional; it is desirable when you want to retrieve specific properties of the entities (not all of them) to reduce network traffic. If you do not specify the properties you want, all of them will be returned; in this example we return the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. It also makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls with appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …) and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results With Sequential Fetch Methods

    Developing a faster API wasn't a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data, it seems that the Enzo Azure API delivers faster. For example, when fetching 3,000 entities (about 1KB each) with the code previously discussed, the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (a 39% improvement).

    With Fetch Strategies

    When using the fetch strategies, we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless, I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), with the execution time averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn't be complete without support for a few important methods beyond the fetch methods discussed previously. The Enzo Azure API also offers:

    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and of List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)

    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.

  • Is this so bad when using MySQL queries in PHP?

    - by alex
    I need to update a lot of rows per user request. It is a site with products. I could:

    1. Delete all the old rows for that product, then loop through a string, building one new INSERT query. This, however, will lose all data if the INSERT fails.
    2. Perform an UPDATE in each loop iteration. The loop currently iterates over 8 items, but in the future it may reach 15. That many UPDATEs doesn't sound like too good an idea.
    3. Change the DB schema and add an auto_increment id to the rows. Then first do a SELECT to get all the old rows' ids into a variable, perform one INSERT, and then a DELETE WHERE id IN (that set).

    What is the usual practice here? Thanks

  • flex 3 using nested repeaters

    - by ccdugga
    I'm trying to loop through a row of an ArrayCollection using nested repeaters:

        <mx:Repeater id="rp1" dataProvider="{arrayCollection}">
            <mx:Repeater id="rp2" dataProvider="{rp1.currentItem}">
                <mx:Button height="49" width="50" label="{rp2.currentItem.name}" />
            </mx:Repeater>
        </mx:Repeater>

    What I'm trying to do is make the repeater loop through all the attributes of the current row, e.g. name, age, address, etc. At the moment all I do is call rp2.currentItem.name, which explicitly names the attribute whose value is returned. Is it possible, instead of explicitly naming the attribute, to just loop through them all and display a button for each using the nested repeater? Thanks

  • Is recursion ever faster than looping?

    - by Carson Myers
    I know that recursion is sometimes a lot cleaner than looping, and I'm not asking when I should use recursion over iteration; there are lots of questions about that already. What I'm asking is: is recursion ever faster than a loop? It seems to me that you could always refine a loop to perform more quickly than a recursive function, because the loop avoids constantly setting up new stack frames. I'm specifically asking whether recursion is faster in applications where recursion is the right way to handle the data, such as in some sorting functions, in binary trees, and so on.
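    To make the stack-frame argument concrete, here is a small illustrative sketch in C#: the iterative version reuses a single frame, while the naive recursive version pushes one frame per element and can overflow for large inputs, since the C# compiler does not guarantee tail-call elimination:

        static class SumDemo
        {
            // The loop reuses a single stack frame for the whole computation.
            static long SumIterative(int n)
            {
                long total = 0;
                for (int i = 1; i <= n; i++)
                    total += i;
                return total;
            }

            // The naive recursion pushes one frame per call and can throw
            // StackOverflowException for large n.
            static long SumRecursive(int n)
            {
                if (n == 0) return 0;           // base case
                return n + SumRecursive(n - 1); // one new frame per element
            }
        }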

  • Python (Twisted) - reading from fifo and sending read data to multiple protocols

    - by SpankMe
    Hi, I'm trying to write a multi-protocol bot (Jabber/IRC) that reads messages from a fifo file (one-liners mostly) and then sends them to an IRC channel and to Jabber contacts. So far, I've managed to create two factories to connect to Jabber and IRC, and they seem to be working. However, I have a problem with reading the fifo file: I have no idea how to read it in a loop (open file, read line, close file, jump back to open file, and so on) outside of the reactor loop to get the data I need to send, and then get that data into the reactor loop for sending over both protocols. I've been looking for information on how best to do this, but I'm totally in the dark. Any suggestion/help would be highly appreciated. Thanks in advance!

  • haskell network io hgetline

    - by Alex
    I want to read all the data on a handle, and then block waiting for more data. listen1 stops when there is a '\n' character in the stream. listen2 works, and could be made completely general by imitating the code for hGetNonBlocking. What is the best way to do this?

        import qualified Data.ByteString as B
        import System.IO (Handle)
        import Control.Concurrent.STM (TChan, atomically, writeTChan)

        loop = sequence_ . repeat

        listen1 :: Handle -> TChan B.ByteString -> IO ()
        listen1 sock chan = do
            loop ( do s <- B.hGetLine sock
                      atomically (writeTChan chan s) )

        listen2 :: Handle -> TChan B.ByteString -> IO ()
        listen2 sock chan = do
            loop ( do s  <- B.hGet sock 1
                      s1 <- B.hGetNonBlocking sock 65000
                      atomically (writeTChan chan (B.append s s1)) )

  • F# on Mac: Syntax Error on Inner Function

    - by JBristow
    The following code:

        exception NoElements of string

        let nth(k, list) =
            let rec loop count list =
                match list with
                | head :: _ when count = k -> head
                | _ :: tail when count <> k -> loop (count+1) tail
                | [] -> raise (NoElements("No Elements"))
            loop 0 list
        ;;

        printfn "%A" (nth(2, [1, 1, 2, 3, 5, 8]))

    produces the following errors when compiled on a Mac, but not in Visual Studio 2010:

        nth.fs(10,0): error FS0191: syntax error.
        nth.fs(4,4): error FS0191: no matching 'in' found for this 'let'.

  • Removing an Entity from an EntitySet during Iteration...

    - by Gregorius
    I've got this code... it seems nice and elegant, but apparently the framework doesn't like it when I mess with a collection while iterating through it:

        foreach (KitGroup kg in ProductToTransfer.KitGroups)
        {
            // Remove kit groups that have been excluded by the user
            if (inKitGroupExclusions != null && inKitGroupExclusions.Contains(kg.KitGroupID))
                ProductToTransfer.KitGroups.Remove(kg);
            else
            {
                // Loop through the kit items and do other stuff
                // ...
            }
        }

    The error it throws when it reaches the second object in the collection is: "EntitySet was modified during enumeration".

    I know I could build a separate collection of the KitGroup objects (or even just their IDs) that I want to remove, and then loop through it afterwards, removing them from the main collection, but that just seems like unnecessary extra code... can anybody suggest a more elegant way of achieving the same thing? Cheers, Greg
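    One common approach is exactly the snapshot idea mentioned at the end, which takes only a couple of lines with LINQ. Here is a sketch reusing the question's names (it assumes System.Linq is imported and that EntitySet exposes the usual LINQ operators):

        // Snapshot the doomed items first, then remove them, so the EntitySet
        // is never modified while it is being enumerated.
        var toRemove = ProductToTransfer.KitGroups
            .Where(kg => inKitGroupExclusions != null
                      && inKitGroupExclusions.Contains(kg.KitGroupID))
            .ToList();   // materialize before mutating the set

        foreach (KitGroup kg in toRemove)
            ProductToTransfer.KitGroups.Remove(kg);

        // Now iterate the survivors safely.
        foreach (KitGroup kg in ProductToTransfer.KitGroups)
        {
            // Loop through the kit items and do other stuff
        }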

  • Calculate total by javascript in gridview with paging

    - by louis
    Without the paging function, I can loop through the gridview using:

        var sum = 0;
        var gridViewCtlId = '<%=timesheetView.ClientID%>';
        var grid = document.getElementById(gridViewCtlId);
        var gridLength = grid.rows.length;

    With gridLength I can loop through the gridview to sum all the rows. However, when I use the paging event of the gridview, I use the page size to loop through all the rows, and errors occur because the last page may not have enough rows. So could you please help me work out how to get the rows on each page of the gridview?

  • CSS selector for grouped iterations

    - by snaken
    Hi, I have a number of elements that I want to loop through as groups. Consider this HTML:

        <input class="matching match-1" />
        <input class="matching match-1" />
        <input class="matching match-2" />
        <input class="matching match-2" />
        <input class="matching match-2" />
        <input class="matching match-3" />
        <input class="matching match-3" />
        <!-- etc -->

    I want a CSS selector that would allow me to loop through these as groups, so there would be, using this example, 3 iterations of the loop (one for match-1, one for match-2 and one for match-3). The 1, 2, 3, etc. is a value used for grouping, but it is not fixed, so the solution cannot rely on hard-coding those values. Is this even possible? I'll be using jQuery or Prototype, not that that should really matter. Thanks

  • C#: Two forms, one calling the other

    - by Shaza
    Hey all, I have a problem like this: I have two WinForms, f1 and f2. f1 starts a loop on a button click; the loop checks a condition and decides whether or not to call the other form, f2. The problem is that the loop may call f2 many times, and each time f2 is called, the first form f1 should pause its execution.

    So I solved it like this: I used a BackgroundWorker plus an AutoResetEvent. I placed the BackgroundWorker on the first form, and inside the DoWork event handler I called f2.Show(), then called WaitOne on the AutoResetEvent (let it be A). In the other form, f2, the Exit button calls Set on the same A. But unfortunately, f2 freezes after clicking that button in f1. What should I change?
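    A likely culprit is that f2 is being created and shown on the BackgroundWorker's thread; WinForms controls must only be touched from the UI thread. Here is a sketch of marshalling the Show() call with Invoke while the worker blocks on the AutoResetEvent; ConditionHolds() and waitHandle (the event called "A" above) are placeholder names:

        // Inside f1, in the BackgroundWorker's DoWork handler.
        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            while (ConditionHolds())
            {
                this.Invoke((MethodInvoker)delegate
                {
                    var f2 = new Form2();
                    f2.Show();        // runs on the UI thread, so f2 stays responsive
                });

                waitHandle.WaitOne(); // pause the loop until f2 calls waitHandle.Set()
            }
        }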

  • JS setTimeout doesn’t work in IE8…

    - by Nagesh
        <html>
        <head>
        <script>
        var i;
        i = 0;
        function loop() {
            i = i + 1;
            alert(String(i));
            setTimeout("loop()", 1000);
        }
        setTimeout("loop()", 1000);
        </script>
        </head>
        <body>
        </body>
        </html>

    Try the above code in IE8: it will not show the alert every second if you hold down the right mouse button. In Firefox, the alert fires even if you don't release the right click. I want the Firefox behavior in IE8.

  • BackgroundWorker vs background Thread

    - by freddy smith
    I have a stylistic question about the choice of background-thread implementation I should use in a Windows Forms app. Currently I have a BackgroundWorker on a form with an infinite (while(true)) loop. In this loop I use WaitHandle.WaitAny to keep the thread snoozing until something of interest happens. One of the wait handles is a "stopthread" event, so that I can break out of the loop; it is signaled from my overridden Form.Dispose().

    I read somewhere that BackgroundWorker is really intended for operations that you don't want to tie up the UI with and that have a finite end, like downloading a file or processing a sequence of items. In this case the "end" is unknown and arrives only when the window is closed. Would it therefore be more appropriate for me to use a background Thread instead of a BackgroundWorker for this purpose?
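    For comparison, a plain background Thread version of the same pattern might look like the sketch below (illustrative only, with placeholder event names, inside the form class with System.Threading imported). Setting IsBackground means the thread won't keep the process alive on exit, though an explicit stop event still allows a clean shutdown:

        private readonly AutoResetEvent workAvailable = new AutoResetEvent(false);
        private readonly ManualResetEvent stopThread = new ManualResetEvent(false);
        private Thread worker;

        private void StartWorker()
        {
            worker = new Thread(() =>
            {
                WaitHandle[] handles = { stopThread, workAvailable };
                while (true)
                {
                    // Index 0 means the stop event was signalled, e.g. from Dispose().
                    if (WaitHandle.WaitAny(handles) == 0)
                        break;
                    // ...react to the interesting event here...
                }
            });
            worker.IsBackground = true; // don't keep the process alive on exit
            worker.Start();
        }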

  • Why is trigger fired only once when running on a jQuery object?

    - by Shlomi.A.
    Hi, I've got an array and I'm looping over it:

        for (i = 0; i < sugestionsArray.length; i++) {
            $('li.groupCode' + sugestionsArray[i] + ' span').addClass('checkedInput');
            $('option[value=' + sugestionsArray[i] + ']').attr('selected', 'selected');
        }

    This loop runs 3 times perfectly, adding the class name and setting the option. Instead of adding a class name, I'm now trying to trigger a click on the span:

        $('li.groupCode' + sugestionsArray[i] + ' span').trigger('click');

    which in turn has a click event bound to it, also with jQuery: Span.click(function() {}). For some reason the loop breaks after the first click: it leaves the loop and doesn't continue with the two iterations after it; only the first span gets clicked. Does anyone have an idea?

  • BDE, Delphi, ODBC, SQL Native Client & Deadlock

    - by EspenS
    Hi. We have some Delphi code that uses the BDE to access SQL Server 2008 through the SQL Server Native Client ODBC driver (2005 version). Our issue is that we're experiencing deadlocks in a loop doing inserts into multiple tables. The whole loop is done within a [TDatabase].StartTransaction. Looking at the SQL Server Profiler, we clearly see that at one point during the loop the SPID (session ID) changes, and then we naturally end up with a deadlock (both SPIDs doing inserts into the same table). It seems that at some point the BDE opens a second connection to the DB... (Although I would love to skip the BDE, that's currently not possible.) Anyone with experiences to share?

  • Loops in Ada and the implementation

    - by maddy
    Hi all, below is a piece of code, and my doubts are about the implementation of its loop:

        C := character'last;
        I := 1;
        K : loop
            Done := C = character'first;
            Count2 := I;
            exit K when Done;
            C := character'pred(C);
            I := I + 1;
        end loop K;

    Can anyone please tell me what 'K' stands for? I guess it's not a variable. How does 'K' control the execution of the loop? Thanks, Maddy

  • Oracle clients dead wait

    - by Macroideal
    Hi all friends, I met a problem yesterday... maybe it's because it was April 1st... but it did exist. Here is the situation: I have 3 PCs in a remote area, 2 clients and 1 Oracle server. My app runs separately on the two clients, connecting to Oracle hourly. The clients worked well before April 1st, but suddenly my app on the client machines went down, and I had not changed any configuration. I use libsqlora8 to connect to the server, and the app gets stuck in a dead loop inside the library. I tried sqlplus, but it hung in my shell terminal as well, as if it had hit an infinite loop... no return until I pressed Ctrl+C. So my guess is an infinite loop somewhere. By the way, when I connected to the server from my local PC, it worked fine; from this we can see the problem lies with the client machines. I also compared the configuration files on my local machine and the client machines, and they are identical. Have you met a problem like this? I hope it isn't an April 1st prank.
