Search Results

Search found 10773 results on 431 pages for 'concurrency testing'.


  • Techniques for modeling a dynamic dataflow with Java concurrency API

    - by Maian
    Is there an elegant way to model a dynamic dataflow in Java? By dataflow, I mean there are various types of tasks, and these tasks can be "connected" arbitrarily, such that when a task finishes, successor tasks are executed in parallel using the finished task's output as input, or when multiple tasks finish, their output is aggregated in a successor task (see flow-based programming). By dynamic, I mean that the type and number of successor tasks when a task finishes depends on the output of that finished task; for example, task A may spawn task B if it has a certain output, but may spawn task C if it has a different output. Another way of putting it is that each task (or set of tasks) is responsible for determining what the next tasks are. Sample dataflow for rendering a webpage, with these task types: file downloader, HTML/CSS renderer, HTML parser/DOM builder, image renderer, JavaScript parser, JavaScript interpreter.
    - File downloader task for the HTML file
    - HTML parser/DOM builder task
    - File downloader task for each embedded file/link: if image, image renderer; if external JavaScript, JavaScript parser, then JavaScript interpreter; otherwise, just store in some var/field in the HTML parser task
    - JavaScript parser for each embedded script, then JavaScript interpreter
    - Wait for the above tasks to finish, then HTML/CSS renderer (obviously not optimal or perfectly correct, but this is simple)
    I'm not saying the solution needs to be some comprehensive framework (in fact, the closer to the JDK API, the better), and I absolutely don't want something as heavyweight as, say, Spring Web Flow or some declarative markup or other DSL. To be more specific, I'm trying to think of a good way to model this in Java with Callables, Executors, ExecutorCompletionServices, and perhaps various synchronizer classes (like Semaphore or CountDownLatch). There are a couple of use cases and requirements:
    - Don't make any assumptions about what executor(s) the tasks will run on. In fact, to simplify, just assume there's only one executor. It can be a fixed thread pool executor, so a naive implementation can result in deadlocks (e.g. imagine a task that submits another task and then blocks until that subtask is finished, and now imagine several of these tasks using up all the threads).
    - To simplify, assume that the data is not streamed between tasks (task output -> succeeding task input) - the finishing task and succeeding task won't exist together, so the input data to the succeeding task will not be changed by the preceding task (since it's already done).
    - There are only a couple of operations that the dataflow "engine" should be able to handle: a mechanism where a task can queue more tasks; a mechanism whereby a successor task is not queued until all the required input tasks are finished; a mechanism whereby the main thread (or other threads not managed by the executor) blocks until the flow is finished; and a mechanism whereby the main thread (or other threads not managed by the executor) blocks until certain tasks have finished.
    - Since the dataflow is dynamic (it depends on the input/state of the task), the activation of these mechanisms should occur within the task code, e.g. the code in a Callable is itself responsible for queueing more Callables.
    - The dataflow "internals" should not be exposed to the tasks (Callables) themselves - only the operations listed above should be available to the task.
    - Note that the type of the data is not necessarily the same for all tasks, e.g. a file download task may accept a File as input but will output a String.
    If a task throws an uncaught exception (indicating some fatal error requiring all dataflow processing to stop), it must propagate up to the thread that initiated the dataflow as quickly as possible and cancel all tasks (or something fancier like a fatal error handler). Tasks should be launched as soon as possible. This, along with the previous requirement, should preclude simple Future polling + Thread.sleep(). As a bonus, I would like the dataflow engine itself to perform some action (like logging) every time a task finishes or when no task has finished within X time of the last one finishing. Something like: ExecutorCompletionService<T> ecs; while (hasTasks()) { Future<T> future = ecs.poll(1 minute); some_action_like_logging(); if (future != null) { future.get() ... } ... } Are there straightforward ways to do all this with the Java concurrency API? Or if it's going to be complex no matter what with what's available in the JDK, is there a lightweight library that satisfies the requirements? I already have a partial solution that fits my particular use case (it cheats in a way, since I'm using two executors, and just so you know, it's not related at all to the web browser example I gave above), but I'd like to see a more general-purpose and elegant solution.
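    One minimal way to wire the requested operations together with plain JDK primitives is sketched below. This is only an illustration, not the asker's partial solution: the DataflowEngine and Task names are invented, tasks submit their successors before returning instead of blocking on them (which avoids the fixed-pool deadlock mentioned above), and fan-in aggregation would still need a per-successor countdown on top of this.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.atomic.AtomicInteger;
    import java.util.concurrent.atomic.AtomicReference;

    final class DataflowEngine {
        private final ExecutorService pool;
        private final AtomicInteger inFlight = new AtomicInteger();          // tasks queued or running
        private final CountDownLatch done = new CountDownLatch(1);
        private final AtomicReference<Throwable> fatal = new AtomicReference<Throwable>();

        DataflowEngine(ExecutorService pool) { this.pool = pool; }

        // A task (or the initiating thread) queues more work through this method.
        void submit(final Task task) {
            inFlight.incrementAndGet();
            pool.execute(new Runnable() {
                public void run() {
                    try {
                        task.run(DataflowEngine.this);       // the task decides its own successors
                    } catch (Throwable t) {
                        fatal.compareAndSet(null, t);        // first fatal error wins
                        pool.shutdownNow();                  // best-effort cancellation of the rest
                        done.countDown();
                    } finally {
                        if (inFlight.decrementAndGet() == 0) {
                            done.countDown();                // nothing left: the flow is finished
                        }
                    }
                }
            });
        }

        // The initiating thread blocks here; a fatal error propagates as soon as it happens.
        void awaitCompletion() throws InterruptedException {
            done.await();
            Throwable t = fatal.get();
            if (t != null) {
                throw new RuntimeException("dataflow failed", t);
            }
        }

        interface Task {
            void run(DataflowEngine engine) throws Exception;   // only the engine's operations are exposed
        }
    }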

    Read the article

  • Problems with Optimistic Concurrency through an ObjectDataSource and a GridView

    - by Bloodsplatter
    Hi I'm having a problem in an ASP .NET 2.0 Application. I have a GridView displaying data from an ObjectDataSource (connected to a BLL class which connects to a TabledAdapter (Typed Dataset using optimistic concurrency). The select (displaying the data) works just fine, however, when I update a row the GridView does pass the old values to the ObjectDataSource. <DataObjectMethod(DataObjectMethodType.Update, True)> _ Public Function UpdateOC(ByVal original_id As Integer, ByVal original_fotonummer As Integer, ByVal original_inhoud As String, ByVal original_postdatum As Date?, ByVal fotonummer As Integer, ByVal inhoud As String, ByVal postdatum As Date?) As Boolean Dim tweets As TwitpicOC.TweetsDataTable = adapterOC.GetTweetById(original_id) If tweets.Rows.Count = 0 Then Return False Dim row As TwitpicOC.TweetsRow = tweets(0) SmijtHetErIn(row, original_fotonummer, original_inhoud, original_postdatum) row.AcceptChanges() SmijtHetErIn(row, fotonummer, inhoud, postdatum) Return adapterOC.Update(row) = 1 End Function Public Sub SmijtHetErIn(ByVal row As TwitpicOC.TweetsRow, ByVal original_fotonummer As Integer, ByVal original_inhoud As String, ByVal original_postdatum As Date?) With row .fotonummer = original_fotonummer If String.IsNullOrEmpty(original_inhoud) Then .SetinhoudNull() Else .inhoud = original_inhoud If Not original_postdatum.HasValue Then .SetpostdatumNull() Else .postdatum = original_postdatum.Value End With End Sub And this is the part of the page: <div id='Overzicht' class='post'> <div class='title'> <h2> <a href='javascript:;'>Tweetsoverzicht</a></h2> <p> Overzicht</p> </div> <div class='entry'> <p> <asp:ObjectDataSource ID="odsGebruiker" runat="server" OldValuesParameterFormatString="" SelectMethod="GetAll" TypeName="TakeHomeWeb.BLL.GebruikersBLL"></asp:ObjectDataSource> <asp:ObjectDataSource ID="odsFoto" runat="server" SelectMethod="GetFotosByGebruiker" TypeName="TakeHomeWeb.BLL.FotosBLL"> <SelectParameters> <asp:ControlParameter ControlID="ddlGebruiker" DefaultValue="0" Name="userid" PropertyName="SelectedValue" Type="Int32" /> </SelectParameters> </asp:ObjectDataSource> <form id="form1" runat="server"> <asp:Label runat="server" AssociatedControlID="ddlGebruiker">Gebruiker:&nbsp;</asp:Label> <asp:DropDownList ID="ddlGebruiker" runat="server" AutoPostBack="True" DataSourceID="odsGebruiker" DataTextField="naam" DataValueField="userid" AppendDataBoundItems="True"> <asp:ListItem Text="Kies een gebruiker" Value="-1" /> </asp:DropDownList> <br /> <asp:Label runat="server" AssociatedControlID="ddlFoto">Foto:&nbsp;</asp:Label> <asp:DropDownList ID="ddlFoto" runat="server" AutoPostBack="True" DataSourceID="odsFoto" DataTextField="url" DataValueField="id" AppendDataBoundItems="True"> <asp:ListItem Value="-1">Kies een foto...</asp:ListItem> </asp:DropDownList> <br /> <div style="float: left"> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="id" DataSourceID="odsTweets"> <Columns> <asp:CommandField ShowDeleteButton="True" ShowEditButton="True" /> <asp:BoundField DataField="id" HeaderText="id" InsertVisible="False" ReadOnly="True" SortExpression="id" /> <asp:BoundField DataField="fotonummer" HeaderText="fotonummer" SortExpression="fotonummer" /> <asp:BoundField DataField="inhoud" HeaderText="inhoud" SortExpression="inhoud" /> <asp:BoundField DataField="postdatum" HeaderText="postdatum" SortExpression="postdatum" /> </Columns> </asp:GridView> <asp:ObjectDataSource ID="odsTweets" runat="server" ConflictDetection="CompareAllValues" 
DeleteMethod="DeleteOC" OldValuesParameterFormatString="original_{0}" SelectMethod="GetTweetsByFotoId" TypeName="TakeHomeWeb.BLL.TweetsOCBLL" UpdateMethod="UpdateOC"> <DeleteParameters> <asp:Parameter Name="original_id" Type="Int32" /> <asp:Parameter Name="original_fotonummer" Type="Int32" /> <asp:Parameter Name="original_inhoud" Type="String" /> <asp:Parameter Name="original_postdatum" Type="DateTime" /> </DeleteParameters> <UpdateParameters> <asp:Parameter Name="original_id" Type="Int32" /> <asp:Parameter Name="original_fotonummer" Type="Int32" /> <asp:Parameter Name="original_inhoud" Type="String" /> <asp:Parameter Name="original_postdatum" Type="DateTime" /> <asp:Parameter Name="fotonummer" Type="Int32" /> <asp:Parameter Name="inhoud" Type="String" /> <asp:Parameter Name="postdatum" Type="DateTime" /> </UpdateParameters> <SelectParameters> <asp:ControlParameter ControlID="ddlFoto" Name="foto" PropertyName="SelectedValue" Type="Int32" /> </SelectParameters> </asp:ObjectDataSource> </div> </form> </p> </div> </div> I've got a feeling there's huge fail involved or something, but I've been staring at it for hours now and I just can't find it.

    Read the article

  • SQL University: What and why of database testing

    - by Mladen Prajdic
    This is a post for a great idea called SQL University started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Tools of the trade. With that out of the way let us sharpen our pencils and get going.
    Why test a database
    The sad state of the industry today is that there is very little emphasis on testing in general. Test driven development is still a small niche of the programming world, while refactoring is even smaller. The cause of this is the inability of developers to convince themselves and their managers that writing tests is beneficial. At the moment tests are mostly viewed as a waste of time. This is because the average person (let’s not fool ourselves, we’re all average) is unable to weigh lower future costs against a little more current work. It’s orders of magnitude easier to reason about current costs in relation to the current amount of work. That’s why programmers convince themselves testing is a waste of time. However, we have to ask ourselves what tests are really about. Maybe finding bugs? No, not really. If we introduce bugs, we’re likely to write tests around those bugs too. But yes, we can find some bugs with tests. The main point of tests is to have reproducible repeatability in our systems. By having a code base largely covered by tests we can know with better certainty what a small code change can break in other parts of the system. By having repeatability we can make code changes with confidence, since we know we’ll see what breaks in other tests. And here comes the inability to estimate future costs: by spending just a few more hours writing those tests we’d know instantly what broke where.
    Imagine we fix a reported bug. We check in the code, deploy it and the users are happy. Until we get a call 2 weeks later that a certain monthly process has stopped working. What we don’t know is that this process was developed by a long-gone coworker and for some reason it relied on that same bug we’ve happily fixed. There’s no way we could’ve known that. We say OK and go in and fix the monthly process. But what we have no clue about is that there’s this ETL job that relied on data from that monthly process. Now that we’ve fixed the process it’s giving unexpected (yet correct, since we fixed it) data to the ETL job. So we have to fix that too. But there’s this part of the app we coded that relies on data from that exact ETL job. And just like that we enter the “Loop of maintenance horror”. With the loop eventually comes blame. Here’s a nice tip for all developers and DBAs out there: if you make a mistake, man up and admit to it.
    All of the above is valid for any kind of software development. Keeping this in mind, the database is nothing other than just a part of the application. But a big part! One reason why testing a database is even more important than testing an application is that one database is usually accessed from multiple applications and processes. This makes it the central and vital part of the enterprise software infrastructure. Knowing all this, can we really afford not to have tests?
    What to test in a database
    Now that we’ve decided we’ll dive into this testing thing, we have to ask ourselves what needs to be tested. The short answer is: everything. The long answer is: read on! There are 2 main ways of doing tests: black box and white box testing. Black box testing means we have no idea how the system internals are built and we only have access to its inputs and outputs. With it we test that internal changes to the system haven’t caused the input/output behavior of the system to change. The most important things to test here are the edge conditions; that’s where most programs break. Having good edge condition tests, we can be more confident that changes to the system won’t break it. White box testing assumes full knowledge of the system internals. With it we test the internal system changes, different states of the application, etc… White and black box tests should be complementary to each other as they are very much interconnected.
    Testing database routines includes testing stored procedures, views, user-defined functions and anything else you use to access the data. Database routines are your input/output interface to the database system, so they count as black box testing. We test them for 2 things: data and schema. When testing schema we only care about the columns and the data types they’re returning. After all, the schema is the contract to the outside systems; if it changes we usually have to change the applications accessing it. One helpful T-SQL command when doing schema tests is SET FMTONLY ON. It tells SQL Server to return only empty result sets, which speeds up tests because no data is returned to the client. After we’ve validated the schema we have to test the returned data. There is no other way to do this than to have the expected data known before the test executes and to compare that data to the database routine’s output.
    Testing authentication and authorization helps us validate who has access to the SQL Server box (authentication) and who has access to certain database objects (authorization). For desktop applications and Windows authentication this works well. The biggest problem here is web apps: they usually connect to the database as a single user. Please ensure that that user is not SA or an account with admin privileges. That is just bad.
    Load testing ensures that our database can handle peak loads. One often overlooked tool for load testing is Microsoft’s OSTRESS tool. It’s part of the RML utilities (x86, x64) for SQL Server and can help determine whether our database server can handle loads like 100 simultaneous users each doing 10 requests per second. SQL Profiler can also help us here by showing why certain queries are slow and what to do to fix them.
    One particular problem to think about is how to begin testing existing databases. The first thing we have to do is get to know those databases. We can’t test something when we don’t know how it works. To do this we have to talk to the users of the applications accessing the database, run SQL Profiler to see what queries are being run, use existing documentation to decipher all the object relationships, etc… The way to approach this is to choose one part of the database (say a logical grouping of tables that go together) and filter our traces accordingly. Once we’ve done that we move on to the next grouping and so on until we’ve covered the whole database. Then we move on to the next one.
    Database testing is a topic that we could spend many hours discussing, but let this be a nice intro to the world of database testing. See you in the next post.

    Read the article

  • Regression testing for firewall changes

    - by James C
    We have a number of firewalls in place around our organisation and in some cases packets can pass through four levels of firewall limiting the flow of TCP traffic. A concept that I'm used to from software testing is regression testing, allowing you to run a test suite against a changed application to verify that the new changes haven't affected any old features. Does anyone have any experience with, or can anyone offer any solutions for, doing the same type of thing with firewall changes and network testing? The problem becomes a lot more complicated because ideally you'd want to be originating packets (and testing their receipt) across many machines.

    Read the article

  • Verification vs validation again, does testing belong to verification? If so, which?

    - by user970696
    I have asked this before and created a lot of controversy, so I tried to collect some data and ask a similar question again. E.g. V&V where all testing is only validation: http://www.buzzle.com/editorials/4-5-2005-68117.asp According to ISO 12207, testing is done in validation:
    • Prepare Test Requirements, Cases and Specifications
    • Conduct the Tests
    For verification, it mentions: "The code implements proper event sequence, consistent interfaces, correct data and control flow, completeness, appropriate allocation timing and sizing budgets, and error definition, isolation, and recovery." and "The software components and units of each software item have been completely and correctly integrated into the software item." I am not sure how to verify without testing, but testing is not listed there as a technique. From IEEE: Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]. Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] At the end of the development process? That would mean UAT. So the question is: what testing (unit, integration, system, UAT) is considered verification or validation? I do not understand why some say dynamic verification is testing, while others say testing is only validation. An example: I am testing an application. System requirements say there are two fields with a max. length of 64 characters and a Save button. The use case says: the user will fill in first and last name and save. When checking the fields and the Save button's presence, I would say it's verification. When I follow the use case, it's validation. So it's both together, done on the system as a whole.

    Read the article

  • How to verify the code that could take a substantial time to compile? [on hold]

    - by user18404
    As a follow-up to my previous question, "What is the best approach for coding in a slow compilation environment", to recap: I am stuck with a large software system for which the TDD ideology of "test often" does not work. And to make it even worse, features like pre-compiled headers, multi-threaded compilation, incremental linking, etc. are not available to me - hence I think that the best way out would be to add extensive logging to the system and to start "coding in large chunks", which I understand as: code for two-three hours first (as opposed to 15-20 mins in TDD), thoroughly eyeball the code for 15 minutes, and only after all that do the compilation and run the tests. As I have been doing TDD for quite a while, my code eyeballing / code verification skills have gotten rusty (you don't really need them that much if you can quickly verify what you've done in 5 seconds by running a test or two) - so I am after recommendations on how to learn these source code verification/error-spotting skills again. I know I was able to do that easily some 5-10 years ago, when I didn't have the support from the compiler/unit testing tools that I've had until recently, so there should be a way to get back to the basics.

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss fine details of the work itself, so apologies.
    PROBLEM CASE: Sometimes my software projects require merging/integration with third party (customer or other suppliers) software. This software often comes as linkable executables or object code (requiring that my source code be retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested; they are meant to be linked with another system, but what is the guarantee that post-linkage and integration behaviour will be okay? There is also insufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to arrive at a solution. I was hoping that people could help me go in the right direction by suggesting approaches. To start, I have found out that Avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test it. Surely, the source code will not be supplied due to IPR regulations.
    UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted into testing 3rd party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely, I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( Thanks.

    Read the article

  • Troubleshoot Database Concurrency in SQL Server with sp_locks

    sp_locks is a useful tool that can help a DBA detect and troubleshoot blocking and concurrency scenarios. This article demonstrates a worked example of using sp_locks to troubleshoot a database concurrency issue.

    Read the article

  • White-box testing in Javascript - how to deal with privacy?

    - by Max Shawabkeh
    I'm writing unit tests for a module in a small Javascript application. In order to keep the interface clean, some of the implementation details are closed over by an anonymous function (the usual JS pattern for privacy). However, while testing I need to access/mock/verify the private parts. Most of the tests I've written previously have been in Python, where there are no real private variables (members, identifiers, whatever you want to call them). One simply suggests privacy via a leading underscore for the users, and freely ignores it while testing the code. In statically typed OO languages I suppose one could make private members accessible to tests by converting them to protected and subclassing the object to be tested. In Javascript, the latter doesn't apply, while the former seems like bad practice. I could always fall back to black box testing and simply check the final results. It's the simplest and cleanest approach, but unfortunately not really detailed enough for my needs. So, is there a standard way of keeping variables private while still retaining some backdoors for testing in Javascript?

    Read the article

  • Self hosted WCF ServiceHost/WebServiceHost Concurrency/Peformance Design Options (.NET 3.5)

    - by Kyle
    So I'll be providing a few functions via a self-hosted (in a WindowsService) WebServiceHost (not sure how to process HTTP GET/POST with ServiceHost), one of which may be called a large amount of the time. This function will also rely on a connection in the appdomain (hosted by the WindowsService so it can stay alive over multiple requests). I have the following concerns and would be oh so thankful for any input/thoughts/comments:
    - Concurrent access: how does the WebServiceHost handle a bunch of concurrent requests? Are they queued and processed sequentially, or are new instances of the contracts automagically created?
    - WebServiceHost - WindowsService communication: I need some form of communication from the WebServiceHost to the hosting WindowsService for things like requesting a new session if one does not exist. Perhaps implementing a class which extends the WebServiceHost with events which the WindowsService subscribes to... (unless there is another way I can set off an event in the WindowsService when a request is made...)
    - Multiple WebServiceHosts or Contracts: would it give any real performance gain to be running multiple WebServiceHost instances in different threads (one per endpoint perhaps?) - a better understanding of the first point would probably help here.
    - WSDL: I'm not sure why (probably just need to do more reading), but I'm not sure how to get the WebServiceHost base endpoint to respond with a WSDL document describing the available contract. Not required, as all the operations will be done via GET requests which will not likely change, but it would be nice to have...
    That's about it for the moment ;) I've been reading a lot on WCF and wish I'd gotten into it long ago, but definitely still learning.

    Read the article

  • Http requests / concurrency?

    - by maxp
    Say a website on my localhost takes about 3 seconds to do each request. This is fine, and as expected (as it is doing some fancy networking behind the scenes). However, if I open the same URL in several tabs (in Firefox) and then reload them all at the same time, it appears to load each page sequentially rather than all at the same time. What is this all about? I have tried it on Windows Server 2008 IIS and Windows 7 IIS.

    Read the article

  • How do I get the java.concurrency.CyclicBarrier to work as expected

    - by Ritesh M Nayak
    I am writing code that will spawn two threads and then wait for them to sync up using the CyclicBarrier class. The problem is that the cyclic barrier isn't working as expected and the main thread doesn't wait for the individual threads to finish. Here's how my code looks: class mythread extends Thread{ CyclicBarrier barrier; public mythread(CyclicBarrier barrier) { this.barrier = barrier; } public void run(){ barrier.await(); } } class MainClass{ public void spawnAndWait(){ CyclicBarrier barrier = new CyclicBarrier(2); mythread thread1 = new mythread(barrier).start(); mythread thread2 = new mythread(barrier).start(); System.out.println("Should wait till both threads finish executing before printing this"); } } Any idea what I am doing wrong? Or is there a better way to write these barrier synchronization methods? Please help.
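    In the code as posted, the barrier is constructed with two parties and only the two worker threads ever call await(), so nothing makes the main thread wait (and mythread thread1 = new mythread(barrier).start() will not even compile, since Thread.start() returns void). A minimal sketch of one way to get the expected behaviour is to make the main thread the third party on the barrier (or simply join() the workers):

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    class WaitWithBarrier {
        public static void main(String[] args) throws InterruptedException, BrokenBarrierException {
            // Three parties: the two workers plus the main thread.
            final CyclicBarrier barrier = new CyclicBarrier(3);

            Runnable worker = new Runnable() {
                public void run() {
                    try {
                        // ... do the work ...
                        barrier.await();          // signal "done" and wait for everyone else
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    }
                }
            };

            new Thread(worker).start();
            new Thread(worker).start();

            barrier.await();                       // main thread is the third party
            System.out.println("Should wait till both threads finish executing before printing this");
        }
    }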

    Read the article

  • LINQ to SQL: Issue with concurrency

    - by Gib
    I’m working on a sandwich ordering app in ASP.NET MVC, C# and LINQ to SQL. The app revolves around the user creating multiple custom-made sandwiches from a selection of ingredients. When it comes to confirming the order I need to know that there are enough portions of each ingredient to fulfil all the sandwiches in the user’s order before I commit to the DB, as it is possible that an ingredient will go out of stock between adding it to their basket and confirming the order. A bit about the database: Ingredient – stores ingredient details including number of portions; Order – header table for an order, simply stores the order time; OrderDetail – stores a record of each sandwich in an order; OrderDetailItem – stores each ingredient in each sandwich in an order. So basically I’m wondering what the best approach is to ensuring, before I add records to Order, OrderDetail and OrderDetailItem, that the order can be met.

    Read the article

  • Java Concurrency : Volatile vs final in "cascaded" variables?

    - by Tom
    Hello Experts, is final Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); the same as volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); in case the inner Map is accessed by different threads? Or is even something like this required: volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); volatile Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); In case it is NOT a "cascaded" map, final and volatile have in the end the same effect of making sure that all threads always see the correct contents of the Map... But what happens if the Map itself contains a map, as in the example... How do I make sure that the inner Map is correctly "memory barriered"? Thanks! Tom
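    One common idiom for this situation is sketched below (an illustration, not a verdict on the question): keep the outer reference in a final field, whose safe-publication guarantee covers the reference itself, and get visibility of the contents from using concurrent maps at both levels, with putIfAbsent to create inner maps without races. The StatusHolder name is invented.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    class StatusHolder {
        // The outer reference never changes, so 'final' is enough for its safe publication;
        // thread safety of the *contents* comes from using concurrent maps at both levels.
        private final ConcurrentMap<Integer, ConcurrentMap<String, Integer>> status =
                new ConcurrentHashMap<Integer, ConcurrentMap<String, Integer>>();

        ConcurrentMap<String, Integer> innerFor(Integer key) {
            ConcurrentMap<String, Integer> inner = status.get(key);
            if (inner == null) {
                inner = new ConcurrentHashMap<String, Integer>();
                ConcurrentMap<String, Integer> raced = status.putIfAbsent(key, inner);
                if (raced != null) {
                    inner = raced;   // another thread won the race; use its map instead
                }
            }
            return inner;
        }
    }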

    Read the article

  • Concurrency and Calendar classes

    - by fbielejec
    I have a thread (class implementing runnable, called AnalyzeTree) organised around a hash map (ConcurrentMap slicesMap). The class goes through the data (called trees here) in the large text file and parses the geographical coordinates from it to the HashMap. The idea is to process one tree at a time and add or grow the values according to the key (which is just a Double value representing time). The relevant part of code looks like this: // grow map entry if key exists if (slicesMap.containsKey(sliceTime)) { double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); slicesMap.get(sliceTime).add( new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); // start new entry if no such key in the map } else { List<Coordinates> coords = new ArrayList<Coordinates>(); double[] imputedLocation = imputeValue( location, parentLocation, sliceHeight, nodeHeight, parentHeight, rate, useTrueNoise, currentTreeNormalization, precisionArray); coords.add(new Coordinates(imputedLocation[1], imputedLocation[0], 0.0)); slicesMap.putIfAbsent(sliceTime, coords); // slicesMap.put(sliceTime, coords); }// END: key check And the class is called like this (executor is ExecutorService executor = Executors.newFixedThreadPool(NTHREDS) ): mrsd = new SpreadDate(mrsdString); int readTrees = 1; while (treesImporter.hasTree()) { currentTree = (RootedTree) treesImporter.importNextTree(); executor.submit(new AnalyzeTree(currentTree, precisionString, coordinatesName, rateString, numberOfIntervals, treeRootHeight, timescaler, mrsd, slicesMap, useTrueNoise)); // new AnalyzeTree(currentTree, precisionString, // coordinatesName, rateString, numberOfIntervals, // treeRootHeight, timescaler, mrsd, slicesMap, // useTrueNoise).run(); readTrees++; }// END: while has trees Now this is running into troubles when executed in parallel (the commented part running sequentially is fine), I thought it might throw a ConcurrentModificationException, but apparently the problem is in mrsd (instance of SpreadDate object, which is simply a class for date related calculations). 
The SpreadDate class looks like this: public class SpreadDate { private Calendar cal; private SimpleDateFormat formatter; private Date stringdate; public SpreadDate(String date) throws ParseException { // if no era specified assume current era String line[] = date.split(" "); if (line.length == 1) { StringBuilder properDateStringBuilder = new StringBuilder(); date = properDateStringBuilder.append(date).append(" AD") .toString(); } formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US); stringdate = formatter.parse(date); cal = Calendar.getInstance(); } public long plus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, days); return cal.getTimeInMillis(); }// END: plus public long minus(int days) { cal.setTime(stringdate); cal.add(Calendar.DATE, -days); //line 39 return cal.getTimeInMillis(); }// END: minus public long getTime() { cal.setTime(stringdate); return cal.getTimeInMillis(); }// END: getDate } And the stack trace from when exception is thrown: java.lang.ArrayIndexOutOfBoundsException: 58 at sun.util.calendar.BaseCalendar.getCalendarDateFromFixedDate(BaseCalendar.java:454) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2098) at java.util.GregorianCalendar.computeFields(GregorianCalendar.java:2013) at java.util.Calendar.setTimeInMillis(Calendar.java:1126) at java.util.GregorianCalendar.add(GregorianCalendar.java:1020) at utils.SpreadDate.minus(SpreadDate.java:39) at templates.AnalyzeTree.run(AnalyzeTree.java:88) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603) at java.lang.Thread.run(Thread.java:636) If a move the part initializing mrsd to the AnalyzeTree class it runs without any problems - however it is not very memory efficient to initialize class each time this thread is running, hence my concerns. How can it be remedied?
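    The ArrayIndexOutOfBoundsException inside GregorianCalendar is the classic symptom of several threads sharing one java.util.Calendar (and SimpleDateFormat is not thread-safe either): every AnalyzeTree task ends up calling minus()/plus() on the same cal field concurrently. One possible remedy, sketched under that assumption, is to keep a single SpreadDate instance but have it hold only an immutable millisecond value and build a fresh Calendar per call:

    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Calendar;
    import java.util.Date;
    import java.util.Locale;

    // A thread-safe variant: parse once, keep only the immutable millisecond value,
    // and create a new Calendar per call instead of sharing one across threads.
    public class SpreadDate {
        private final long baseMillis;

        public SpreadDate(String date) throws ParseException {
            if (date.split(" ").length == 1) {
                date = date + " AD";                      // same "assume current era" rule as before
            }
            // SimpleDateFormat is not thread-safe, so keep it local to the constructor.
            SimpleDateFormat formatter = new SimpleDateFormat("yyyy-MM-dd G", Locale.US);
            baseMillis = formatter.parse(date).getTime();
        }

        private long shiftedBy(int days) {
            Calendar cal = Calendar.getInstance();        // created per call, never shared
            cal.setTime(new Date(baseMillis));
            cal.add(Calendar.DATE, days);
            return cal.getTimeInMillis();
        }

        public long plus(int days)  { return shiftedBy(days); }
        public long minus(int days) { return shiftedBy(-days); }
        public long getTime()       { return baseMillis; }
    }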

    Read the article

  • Parallel Tasking Concurrency with Dependencies on Python like GNU Make

    - by Brian Bruggeman
    I'm looking for a method, or possibly a philosophical approach, for how to do something like GNU Make within Python. Currently, we utilize makefiles to execute processing because the makefiles are extremely good at parallel runs with a single changing option: -j x. In addition, GNU Make already has the dependency stacks built into it, so adding a secondary processor, or the ability to process more threads, just means updating that single option. I want that same power and flexibility in Python, but I don't see it. As an example:

    all: dependency_a dependency_b dependency_c
    dependency_a: dependency_d
        stuff
    dependency_b: dependency_d
        stuff
    dependency_c: dependency_e
        stuff
    dependency_d: dependency_f
        stuff
    dependency_e:
        stuff
    dependency_f:
        stuff

    If we do a standard single thread operation (-j 1), the order of operation might be:
    dependency_f -> dependency_d -> dependency_a -> dependency_b -> dependency_e -> dependency_c
    For two threads (-j 2), we might see:
    1: dependency_f -> dependency_d -> dependency_a -> dependency_b
    2: dependency_e -> dependency_c
    Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a pythonic solution/approach. Please and thanks in advance!
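    The scheduling idea behind -j N is that a target is handed to a worker pool only once all of its dependencies have finished, with the pool size playing the role of the -j value. A rough sketch of that shape is below; it is written in Java to keep the added examples here in one language (the ParallelMake name and structure are invented, and there is no cycle detection or error handling), but the same design carries over to Python's concurrent.futures. It assumes every dependency also appears as a key in the deps map.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicInteger;

    final class ParallelMake {
        private final Map<String, List<String>> deps;        // target -> its dependencies
        private final Map<String, Runnable> actions;         // target -> its "stuff"
        private final Map<String, List<String>> dependents = new HashMap<String, List<String>>();
        private final Map<String, AtomicInteger> remaining = new HashMap<String, AtomicInteger>();
        private final ExecutorService pool;
        private final CountDownLatch allDone;

        ParallelMake(Map<String, List<String>> deps, Map<String, Runnable> actions, int jobs) {
            this.deps = deps;
            this.actions = actions;
            this.pool = Executors.newFixedThreadPool(jobs);   // "jobs" plays the role of -j N
            this.allDone = new CountDownLatch(deps.size());
            for (String target : deps.keySet()) {
                remaining.put(target, new AtomicInteger(deps.get(target).size()));
                dependents.put(target, new ArrayList<String>());
            }
            for (String target : deps.keySet()) {
                for (String dep : deps.get(target)) {
                    dependents.get(dep).add(target);          // reverse edges: dep -> who waits on it
                }
            }
        }

        void run() throws InterruptedException {
            for (String target : deps.keySet()) {
                if (remaining.get(target).get() == 0) {
                    submit(target);                           // leaves (no dependencies) start immediately
                }
            }
            allDone.await();                                  // block until every target has run
            pool.shutdown();
        }

        private void submit(final String target) {
            pool.execute(new Runnable() {
                public void run() {
                    actions.get(target).run();                // the target's recipe
                    for (String parent : dependents.get(target)) {
                        if (remaining.get(parent).decrementAndGet() == 0) {
                            submit(parent);                   // its last dependency just finished
                        }
                    }
                    allDone.countDown();
                }
            });
        }
    }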

    Read the article

  • Control process concurrency in PHP

    - by Ivan
    I have programmed a simple app that every X minutes checks whether an image has changed on several websites and downloads it. It's very simple: it downloads the image header, makes some CRC checks, downloads the file, stores some data about each image in a MySQL database and processes the next item... This process takes about 1 minute to complete. The problem is that I have noticed that while the server is executing this process I cannot access any page on the website, even those that don't require MySQL. I don't know why it is happening and I have no clue about how to fix it. Perhaps a more advanced PHP programmer can help me.

    Read the article

  • Has the `message-passing/shared-state' dilemma (concurrency & distribution) taken form of a `Holywar

    - by Bubba88
    I'm not too well-informed about the state of the discussion about which model is better, so I would like to ask a pretty straightforward question: does it look like two opposing views locked in a really heated dispute? E.g. like prototype-based vs. class-based OOP, or dynamic vs. static typing (though these are really not very fitting examples; I just do not know how to express my question more clearly).

    Read the article

  • Google App Engine - Dealing with concurrency issues of storing an object

    - by Spines
    My User object that I want to create and store in the datastore has an email, and a username. How do I make sure when creating my User object that another User object doesn't also have either the same email or the same username? If I just do a query to see if any other users have already used the username or the email, then there could be a race condition. UPDATE: The solution I'm currently considering is to use the MemCache to implement a locking mechanism. I would acquire 2 locks before trying to store the User object in the datastore. First a lock that locks based on email, then another that locks based on username. Since creating new User objects only happens at user registration time, and it's even rarer that two people try to use either the same username or the same email, I think it's okay to take the performance hit of locking. I'm thinking of using the MemCache locking code that is here: http://appengine-cookbook.appspot.com/recipe/mutex-using-memcache-api/ What do you guys think?

    Read the article

  • The simplest concurrency pattern

    - by Ilya Kogan
    Please, would you help me by reminding me of one of the simplest parallel programming techniques? How do I do the following in C#?

    Initial state: semaphore counter = 0

    Thread 1:
    // Block until semaphore is signalled
    semaphore.Wait(); // wait until semaphore counter is 1

    Thread 2:
    // Allow thread 1 to run:
    semaphore.Signal(); // increments from 0 to 1

    It's not a mutex, because there is no critical section - or rather, you could say there is an infinite critical section. So what is it?
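    This handshake is usually described as using a semaphore as a one-shot signal (an event or latch) rather than as a mutex. A sketch of the same thing with java.util.concurrent.Semaphore is below, written in Java to keep the added examples in one language; in C#, the WaitOne()/Release() pair on System.Threading.Semaphore plays the same role.

    import java.util.concurrent.Semaphore;

    public class Handshake {
        public static void main(String[] args) throws InterruptedException {
            final Semaphore signal = new Semaphore(0);   // initial state: counter = 0

            Thread thread1 = new Thread(new Runnable() {
                public void run() {
                    try {
                        signal.acquire();                // blocks until the counter is 1
                        System.out.println("thread 1 was allowed to run");
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
            thread1.start();

            // Thread 2's role (played by main here): allow thread 1 to run.
            signal.release();                            // increments from 0 to 1
            thread1.join();
        }
    }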

    Read the article

  • PostgreSQL, triggers, and concurrency to enforce a temporal key

    - by Hobbes
    I want to define a trigger in PostgreSQL to check that the inserted row, on a generic table, has the property: "no other row exists with the same key in the same valid time" (the keys are sequenced keys). In fact, I have already implemented it. But since the trigger has to scan the entire table, now I'm wondering: is there a need for a table-level lock? Or is this managed somehow by PostgreSQL itself?

    Read the article

  • Groovy & Grails Concurrency ( quartz, executor )

    - by Pietro
    What I'm trying to do is to run multiple threads at some starting time. Those threads must stay alive for 90minutes after start. During the 90minutes they execute something after a random sleep time (ex: 5minutes to 15minutes). Here is a pseudo code on how I would implement it. The problem is that doing it in this way the threads run in an unexpected way. How can I implement correctly something like this? Class MyJob { static triggers = { cron name: 'first', cronExpression: "0 30 21 * * FRI" cron name: 'second', cronExpression: "0 30 19 * * FRI" cron name: 'third', cronExpression: "0 30 17 * * FRI" def myService def execute() { switch( between trigger name ) case 'first': model = Model.findByAttribute(...) ... myService.run( model, start_time ) break; ... } } class MyService { def run( model, start_time ) { def end_time = end_time.plusMinutes(90) model.fields.each( field -> Thread.start { executeSomeTasks( field, start_time, end_time ) } ) } def executeSomeTasks( field, start_time, end_time ) { while( start_time < end_time ) { ...do something ... sleep( Random.nextInt( 1000 ) ); } } }
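    For the "repeat at a random 5-15 minute interval until 90 minutes are up" part, one alternative to hand-started sleeping threads is a ScheduledExecutorService that reschedules itself with a fresh random delay each time. A sketch with plain JDK classes (callable from a Grails service; the RandomRepeater name and pool size are invented):

    import java.util.Random;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    class RandomRepeater {
        private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);
        private final Random random = new Random();

        void start(final Runnable task, final long endTimeMillis) {
            scheduler.schedule(new Runnable() {
                public void run() {
                    if (System.currentTimeMillis() >= endTimeMillis) {
                        return;                             // the 90 minutes are up for this field
                    }
                    task.run();                             // the body of executeSomeTasks goes here
                    start(task, endTimeMillis);             // reschedule with a new random delay
                }
            }, 5 + random.nextInt(11), TimeUnit.MINUTES);   // random 5-15 minute delay
        }
    }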

    Read the article

  • Good concurrency example of Java vs. Clojure

    - by Michiel Borkent
    Clojure is said to be a language that makes multi-thread programming easier. From the Clojure.org website: Clojure simplifies multi-threaded programming in several ways. Now I'm looking for a non-trivial problem solved in Java and in Clojure so I can compare/contrast their simplicity. Anyone?

    Read the article
