Search Results

Search found 10773 results on 431 pages for 'concurrency testing'.


  • Preventing concurrent data modification in a Web-based app

    - by Manoj
    Hi, I am building a web app in Silverlight which allows users to view and edit a database. In order to prevent multiple users from editing the same data, I was thinking of implementing a lock-and-key mechanism, so that other users are made to wait while one particular user is editing the data. Is there any way we can have a variable (a flag specifying whether a user is editing the data) on the server that is shared across multiple clients? Is there a better way to manage this type of concurrent data access issue?
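
    A minimal sketch of the server-side flag idea (shown in Java purely for illustration, since the asker's stack is Silverlight/.NET; the class and method names are hypothetical, and a multi-server deployment would need the flag stored in the database instead):

        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical in-memory edit-lock registry shared by all clients of one server.
        public class EditLockRegistry {
            private final ConcurrentHashMap<String, String> locks = new ConcurrentHashMap<>();

            // Returns true if 'user' obtained the edit lock for 'recordId',
            // false if another user already holds it.
            public boolean tryLock(String recordId, String user) {
                return locks.putIfAbsent(recordId, user) == null;
            }

            // Releases the lock, but only if this user still holds it.
            public void unlock(String recordId, String user) {
                locks.remove(recordId, user);
            }
        }

    A client would call tryLock before opening the edit screen and show a "record is being edited elsewhere" message if it returns false; a timeout or heartbeat is usually needed so abandoned sessions don't hold locks forever.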

    Read the article

  • What Use Are Threads Outside of Parallel Problems on Multi-Core Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multi-core CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multi-core system if your task doesn't gain significant performance from a parallel implementation? Or, more succinctly: what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: "TCP, HTTP and the Multi-Threading Sweet Spot". Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU, but would be much more difficult to design and implement.
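
    To make the bandwidth case concrete, here is a rough, hypothetical sketch (the URLs are placeholders, not taken from the linked article) of issuing several blocking HTTP downloads from a small thread pool; each thread spends most of its time blocked on the network rather than using the CPU:

        import java.io.InputStream;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class ParallelDownloads {
            public static void main(String[] args) throws Exception {
                String[] urls = { "http://example.com/a", "http://example.com/b" }; // hypothetical peers
                ExecutorService pool = Executors.newFixedThreadPool(urls.length);
                List<Callable<Long>> tasks = new ArrayList<>();
                for (String u : urls) {
                    tasks.add(() -> {
                        long bytes = 0;
                        try (InputStream in = new URL(u).openStream()) {
                            byte[] buf = new byte[8192];
                            for (int n; (n = in.read(buf)) != -1; ) bytes += n; // thread blocks on the network here
                        }
                        return bytes;
                    });
                }
                for (Future<Long> f : pool.invokeAll(tasks)) {
                    System.out.println(f.get() + " bytes");
                }
                pool.shutdown();
            }
        }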

    Read the article

  • AtomicInteger lazySet and set

    - by Yan Cheng CHEOK
    May I know what the difference is between the lazySet and set methods of AtomicInteger? The Javadoc doesn't say much about lazySet: "Eventually sets to the given value." It seems that the AtomicInteger will not immediately be set to the desired value, but will be scheduled to be set some time later. But what is the practical use of this method? Any example?
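
    A common illustration (a hedged sketch, names invented) of where lazySet tends to be used: a value written by a single thread that other threads only need to see "eventually", such as a progress counter polled by a monitoring thread. lazySet avoids the full memory fence of an ordinary volatile write (set), so the write is cheaper but may become visible to other threads slightly later:

        import java.util.concurrent.atomic.AtomicInteger;

        public class Progress {
            private final AtomicInteger itemsProcessed = new AtomicInteger();

            // Hot loop on the single worker thread: lazySet is enough because no other
            // thread needs to observe the new value immediately or atomically.
            void onItemProcessed() {
                itemsProcessed.lazySet(itemsProcessed.get() + 1);
            }

            // Use set() (or getAndIncrement()) instead when the write must be visible and
            // ordered right away, e.g. before releasing another thread that depends on it.
            int snapshot() {
                return itemsProcessed.get();
            }
        }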

    Read the article

  • Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

    - by Jaap
    I'm using a table with a counter to ensure unique ids on a child element. I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets, and each of them needs its own counter; they have to start at 1, because my customer requires "human readable" keys). I'm creating records (let's call them items) that have a primary key (bucket_id, num = counter). I need to guarantee that the bucket_id / num combination is unique, so using a sequence as the primary key won't fix my problem. The creation of rows doesn't happen in PL/SQL, so I need to claim the number (by the way, it's not against the requirements to have gaps). My solution was:

        UPDATE bucket SET counter = counter + 1 WHERE id = param_id RETURNING counter INTO num_forprikey;

    PL/SQL returns var_num_forprikey so the item record can be created. Question: will I always get a unique num_forprikey even if users concurrently ask for new items in a bucket?
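
    If the item rows are created from Java, a hedged JDBC sketch of claiming the number outside PL/SQL (table and column names taken from the question; an Oracle JDBC driver is assumed, and the class and method names are invented): wrap the same statement in an anonymous block so RETURNING ... INTO can fill an OUT parameter.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.SQLException;
        import java.sql.Types;

        public class BucketCounter {
            // The UPDATE takes a row lock on the bucket row, so concurrent callers are
            // serialized and each one gets a distinct counter value back.
            static long claimNextNum(Connection connection, long paramId) throws SQLException {
                String sql = "BEGIN UPDATE bucket SET counter = counter + 1 "
                           + "WHERE id = ? RETURNING counter INTO ?; END;";
                try (CallableStatement cs = connection.prepareCall(sql)) {
                    cs.setLong(1, paramId);
                    cs.registerOutParameter(2, Types.NUMERIC);
                    cs.execute();
                    return cs.getLong(2);   // unique num_forprikey for this bucket
                }
            }
        }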

    Read the article

  • Java: design for using many executor services and only a few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' will have n tests to perform, each one doing k sub-tests. Each test result is stored for later use. So I have n*k processes that can run concurrently. I'm trying to figure out how to use the Java concurrency tools efficiently. Right now I have one executor service at the test level and n executor services at the sub-test level. I create my list of Callables for the test level. Each test callable then creates another list of callables for the sub-test level. When invoked, a test callable subsequently invokes all of its sub-test callables:

        test 1
            subtest a1
            subtest ...1
            subtest k1
        test n
            subtest a2
            subtest ...2
            subtest k2

    Call sequence:

        test manager creates the test 1 ... test n callables
        test 1 callable creates subtests a1 to k1
        test n callable creates subtests an to kn
        test manager invokes all test callables
        test 1 callable invokes all subtests a1 to k1
        test n callable invokes all subtests an to kn

    This is working fine, but a lot of new threads get created. I cannot share an executor service, since I need to call 'shutdown' on the executors. My idea for fixing this problem is to provide the same fixed-size thread pool to each executor service. Do you think that is a good design? Am I missing something more appropriate or simpler for doing this?
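
    One possible shape for this (a sketch, not the only design; class names and pool sizes are invented): create two fixed-size pools once in the test manager, pass them down instead of creating per-test executors, and shut them down only after everything has completed. Keeping a separate pool for sub-tests avoids the starvation that could occur if tests and their sub-tests competed for the same small pool while the tests block in invokeAll.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class TestEngine {
            public static void main(String[] args) throws Exception {
                // One pool for the n tests, one for the n*k sub-tests; sizes are just examples.
                ExecutorService testPool = Executors.newFixedThreadPool(4);
                ExecutorService subTestPool = Executors.newFixedThreadPool(8);

                List<Callable<String>> tests = new ArrayList<>();
                for (int t = 0; t < 3; t++) {
                    final int testId = t;
                    tests.add(() -> {
                        List<Callable<String>> subTests = new ArrayList<>();
                        for (int s = 0; s < 5; s++) {
                            final int subId = s;
                            subTests.add(() -> "result of test " + testId + " subtest " + subId);
                        }
                        StringBuilder sb = new StringBuilder();
                        for (Future<String> f : subTestPool.invokeAll(subTests)) {
                            sb.append(f.get()).append('\n');   // store the sub-test results
                        }
                        return sb.toString();
                    });
                }

                for (Future<String> f : testPool.invokeAll(tests)) {
                    System.out.print(f.get());
                }
                // Shut the shared pools down once, after all tests have finished.
                testPool.shutdown();
                subTestPool.shutdown();
            }
        }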

    Read the article

  • Physical Cores vs Virtual Cores in Parallelism

    - by Code Curiosity
    When it comes to virtualization, I have been deliberating on the relationship between physical cores and virtual cores, especially regarding how it affects applications employing parallelism. For example, in a VM scenario, if there are fewer physical cores than there are virtual cores (if that's possible), what's the effect on, or limit placed on, the application's parallel processing? I'm asking because in my environment it's not disclosed what the physical architecture is. Is there still much advantage to parallelizing if the application lives on a dual-core VM hosted on a single-core physical machine?
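
    One small, hedged check you can run inside the VM: ask the JVM how many processors the guest reports and size your parallelism from that, keeping in mind that the host may be multiplexing those virtual cores onto fewer physical ones.

        public class CoreCount {
            public static void main(String[] args) {
                // Reports the logical processors the guest OS exposes to the JVM,
                // which on a VM can exceed the physical cores actually backing it.
                int logical = Runtime.getRuntime().availableProcessors();
                System.out.println("Logical processors visible to this JVM: " + logical);
            }
        }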

    Read the article

  • ScheduledThreadPoolExecutor executing at the wrong time because of a CPU time discrepancy

    - by richs
    I'm scheduling a task using a ScheduledThreadPoolExecutor object. I use the following method: public ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit) and set the delay to 30 seconds (delay = 30,000 and unit = TimeUnit.MILLISECONDS). Sometimes my task runs immediately and other times it takes 70 seconds. I believe the ScheduledThreadPoolExecutor uses CPU-specific clocks. When I run tests comparing System.currentTimeMillis() and System.nanoTime() [which is CPU-specific], I see the following:

        schedule: 1272637682651 ms, 7858346157228410 ns
        execute:  1272637682667 ms, 7858386270968425 ns

    The difference is 16 ms but 40113740015 ns (or about 40,113 ms), so it looks like there is a discrepancy of about 40 seconds between the two CPU clocks. How do I resolve this issue in Java code? Unfortunately this is a client's machine and I can't modify their system.
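
    A hedged sketch of that kind of comparison (class name invented): record both clocks when the task is scheduled, and again when it actually runs, then compare the two deltas. ScheduledThreadPoolExecutor measures its delays with System.nanoTime(), which is why a skew between the two clocks shows up as an early or late task.

        import java.util.concurrent.ScheduledThreadPoolExecutor;
        import java.util.concurrent.TimeUnit;

        public class ClockSkewProbe {
            public static void main(String[] args) {
                final long scheduleMillis = System.currentTimeMillis();
                final long scheduleNanos = System.nanoTime();   // the clock the executor's delay is based on

                ScheduledThreadPoolExecutor stpe = new ScheduledThreadPoolExecutor(1);
                stpe.schedule(() -> {
                    long millisDelta = System.currentTimeMillis() - scheduleMillis;
                    long nanosDelta = System.nanoTime() - scheduleNanos;
                    System.out.printf("wall clock delta: %d ms, nanoTime delta: %d ms%n",
                            millisDelta, TimeUnit.NANOSECONDS.toMillis(nanosDelta));
                    stpe.shutdown();
                }, 30_000, TimeUnit.MILLISECONDS);
            }
        }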

    Read the article

  • Ensuring all waiting threads complete

    - by Daniel
    I'm building a system where the progress of calling threads depends on the state of two variables. One variable is updated sporadically by an external source (separate from the client threads), and multiple client threads block on a condition over both variables. The system is something like this:

        TypeB waitForB() {          // can be called by many threads
            synchronized (B) {
                while (A <= B) {
                    B.wait();
                }
                A = B;
                return B;
            }
        }

        void updateB(TypeB newB) {  // called by one thread
            synchronized (B) {
                B.update(newB);
                B.notifyAll();      // all blocked threads must receive the new B
            }
        }

    I need all the blocked threads to receive the new value of B once it has been updated. But the problem is that once a single thread finishes and updates A, the waiting condition becomes true again, so some of the other threads stay blocked and don't receive the new value of B. Is there a way of ensuring that only the last thread that was blocked on B updates A, or another way of getting this behaviour?

    Read the article

  • Explain "Leader/Follower" Pattern

    - by Alex B
    I can't seem to find a good explanation of the "Leader/Follower" pattern. All explanations either simply refer to it in the context of some problem, or are completely meaningless. Can anyone explain to me the mechanics of how this pattern works, and why and how it improves performance over more traditional asynchronous I/O models? Examples and links to diagrams are appreciated too.

    Read the article

  • Multi-Threaded Application - Help with some pseudo-code!

    - by HonorGod
    I am working on a multi-threaded application and need help with some pseudo-code. To make it simpler to implement, I will try to explain it in simple terms / as a test case. Here is the scenario: I have an array list of strings (say 100 strings). I have a Reader class that reads the strings and passes them to a Writer class that prints the strings to the console. Right now this runs in a single-thread model. I wanted to make this multi-threaded, but with the following features:
    - Ability to set MAX_READERS
    - Ability to set MAX_WRITERS
    - Ability to set BATCH_SIZE
    So basically the code should instantiate that many Readers and Writers and do the work in parallel. Any pseudo-code will really be helpful to keep me going!
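
    Since the question only describes the shape of the program, here is one hedged sketch (all class names and numbers invented) of how it could look in Java: a fixed pool of readers batching strings onto a queue, and a fixed pool of writers draining it.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class ReaderWriterDemo {
            static final int MAX_READERS = 2, MAX_WRITERS = 2, BATCH_SIZE = 10;

            public static void main(String[] args) throws Exception {
                List<String> strings = new ArrayList<>();
                for (int i = 0; i < 100; i++) strings.add("string-" + i);

                BlockingQueue<List<String>> queue = new LinkedBlockingQueue<>();
                ExecutorService readers = Executors.newFixedThreadPool(MAX_READERS);
                ExecutorService writers = Executors.newFixedThreadPool(MAX_WRITERS);

                // Readers: split the list into batches and hand each batch to the queue.
                for (int start = 0; start < strings.size(); start += BATCH_SIZE) {
                    final List<String> batch = new ArrayList<>(
                            strings.subList(start, Math.min(start + BATCH_SIZE, strings.size())));
                    readers.submit(() -> queue.add(batch));
                }

                // Writers: drain batches from the queue and print them.
                int totalBatches = (strings.size() + BATCH_SIZE - 1) / BATCH_SIZE;
                CountDownLatch done = new CountDownLatch(totalBatches);
                for (int w = 0; w < MAX_WRITERS; w++) {
                    writers.submit(() -> {
                        try {
                            while (done.getCount() > 0) {
                                List<String> batch = queue.poll(1, TimeUnit.SECONDS);
                                if (batch == null) continue;
                                batch.forEach(System.out::println);
                                done.countDown();
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }
                done.await();
                readers.shutdown();
                writers.shutdown();
            }
        }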

    Read the article

  • Asynchronous calls cause StaleObjectStateException

    - by Mulone
    Hi all, I'm struggling with a Grails service. The service gets AJAX calls from the clients and behaves like a simple local cache for remote objects:

        void someCallFromClient() {
            // extract params
            def results = remoteService.queryService(params)
            results.each { result ->
                // try to fetch the result object from the local DB
                def obj = SomeClass.findBySomeField(result.someField)
                if (!obj) {
                    obj = new Result(params)
                    obj.save()
                }
                // do stuff with obj
            }
        }

    The service works fine when only one client is connected, but as soon as 2 or more clients start bombing the server with requests, I start getting:

        2010-05-24 13:09:49,764 [30893094@qtp-26315919-2] ERROR errors.GrailsExceptionResolver -
        Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [some object #892901]
        org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction
        (or unsaved-value mapping was incorrect): [some object #892901]
        // very long stacktrace

    It probably happens when 2 calls try to create the same object concurrently. I suppose this is a rather typical situation to end up in. Could you recommend a pattern or good practice to fix this issue? For example, is there a way to tell one of the service instances to hang on and wait for the other to finish its work, then try again? Cheers!

    Read the article

  • Performance testing on .xap files...

    - by Radhi
    Hi all, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please point me to them, and if there are any other tools available to do this, please tell me. In my project, when we log into the Silverlight 4.0 application the screen takes 5 seconds to load, so I have to check which method is taking the time. In our project there are services which call other services too, and we have used CAL, so I need to identify the bottleneck. Please help.

    Read the article

  • Python unittest with expensive setup

    - by Staale
    My test file is basically:

        import unittest

        class Test(unittest.TestCase):
            def testOk(self):
                pass

        if __name__ == "__main__":
            expensiveSetup()
            try:
                unittest.main()
            finally:
                cleanUp()

    However, I do wish to run my tests through the NetBeans testing tools, and to do that I need unit tests that don't rely on an environment set up in main. Looking at http://stackoverflow.com/questions/402483/caching-result-of-setup-using-python-unittest - it recommends using Nose. However, I don't think NetBeans supports this; I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed. How can I do the setup and cleanup once for all the tests in my TestSuite? The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple XML-RPC server. I also have 2 test classes, one testing locally and one testing all methods over XML-RPC.

    Read the article

  • Inconsistency in java.util.concurrent.Future?

    - by loganj
    For the sake of argument, let's say I'm implementing Future for a task which is not cancelable. The Java 6 API doc says: "After [cancel()] returns, subsequent calls to isDone() will always return true." and "[cancel()] returns false if the task could not be cancelled, typically because it has already completed normally." It also says: "[isDone()] returns true if this task completed." But what if my cancellation fails not because the task has already completed, but because it simply cannot be cancelled? Is there a way out of this contradiction (other than making my uncancelable task cancelable and sidestepping the issue altogether)?
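
    A small, hypothetical illustration of the corner case (not production code): a Future for a task that has not completed and can never be cancelled, where the two quoted sentences pull in opposite directions.

        import java.util.concurrent.Future;
        import java.util.concurrent.TimeUnit;

        // Hypothetical Future for a long-running task that can never be cancelled.
        class UncancellableFuture<V> implements Future<V> {
            private volatile boolean done = false;     // flipped by the task when it really finishes

            void markDone() { done = true; }

            @Override public boolean cancel(boolean mayInterruptIfRunning) {
                // The task has NOT completed and cannot be cancelled, so per the second
                // quoted sentence we return false; but per the first quoted sentence
                // isDone() should now return true, even though the task is still running.
                return false;
            }
            @Override public boolean isCancelled() { return false; }
            @Override public boolean isDone()      { return done; }
            @Override public V get() { throw new UnsupportedOperationException("omitted in this sketch"); }
            @Override public V get(long timeout, TimeUnit unit) { throw new UnsupportedOperationException("omitted in this sketch"); }
        }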

    Read the article

  • Do isolation frameworks (Moq, RhinoMock, etc.) lead to test over-specification?

    - by Marius
    In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented. So what do you think is the best approach to counter this?
    1) Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable and also specifying more than necessary to make your test pass.
    2) Favor manual, fully implemented fakes.
    3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
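
    A hedged Java/Mockito analogue of the brittleness being described (Moq and Rhino Mocks are .NET, so the framework, interface and values here are invented): the stub only specifies the calls the current implementation happens to make, so an internal refactoring that asks the collaborator a different question can break or silently weaken the test even though the contract is unchanged.

        import static org.mockito.Mockito.*;

        interface PriceService { double priceOf(String sku); }

        class Basket {
            private final PriceService prices;
            Basket(PriceService prices) { this.prices = prices; }
            double total() { return prices.priceOf("apple") + prices.priceOf("pear"); }
        }

        class BasketTest {
            @org.junit.Test
            public void totalAddsUp() {
                PriceService stub = mock(PriceService.class);
                // Only the interactions the CURRENT implementation performs are stubbed.
                when(stub.priceOf("apple")).thenReturn(1.0);
                when(stub.priceOf("pear")).thenReturn(2.0);
                // If total() is later refactored to also call priceOf("basket-discount"),
                // the unstubbed call returns Mockito's default of 0.0: the test either fails
                // or silently stops exercising the new path, though Basket's contract is unchanged.
                org.junit.Assert.assertEquals(3.0, new Basket(stub).total(), 0.001);
            }
        }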

    Read the article

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems a little obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. That is, I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks: the output file has to be in the order o0 o1 o2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching for the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
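
    For reference, a minimal sketch (not the poster's code; names invented) of the buffer being described: workers hand in blocks tagged with their sequence number, and a single drain writes whatever prefix is now contiguous.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.HashMap;
        import java.util.Map;

        // Workers complete blocks in any order; output is emitted strictly in sequence order.
        class ReorderBuffer {
            private final Map<Long, byte[]> pending = new HashMap<>();
            private long nextToWrite = 0;
            private final OutputStream out;

            ReorderBuffer(OutputStream out) { this.out = out; }

            synchronized void complete(long seq, byte[] block) throws IOException {
                pending.put(seq, block);
                // Drain every block that is now contiguous with what has already been written.
                byte[] next;
                while ((next = pending.remove(nextToWrite)) != null) {
                    out.write(next);
                    nextToWrite++;
                }
            }
        }

    Note that under the stated memory constraints the pending map would also need a bound (e.g. blocking producers once too many out-of-order blocks are buffered), which this sketch omits.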

    Read the article

  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario:
    - User X logs in to the application from location lc1: call it Ulc1.
    - User X (he has been hacked, or some friend of his knows his login credentials, or he just logs in from a different browser on his machine, etc.; you get the point) logs in at the same time from location lc2: call it Ulc2.

    I am using a main servlet which: gets a connection from the database pool; sets autocommit to false; executes a command that goes through the app layers. If all is successful, it sets autocommit back to true in a "finally" statement and closes the connection; if an exception happens, it rolls back.

    In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The column "locked" has the value "false" by default and serves as a flag that marks whether a specific row is locked. Each row is specific to a user (as you can see from the username column).

    So back to the scenario:
    - Ulc1 sends the command to update his history from the db for date "D" and topic "T".
    - Ulc2 sends the same command to update history from the db for the same date "D" and the same topic "T" at the exact same time.

    I want to implement a MySQL/InnoDB locking scheme that will enable whichever thread arrives to do the following check: is the "locked" column for this row true or not? If true, return a message to the user that "he is already updating the same data from another location". If not true (i.e. not locked): flag it as locked, update, then reset locked to false once finished.

    Which of these two MySQL locking techniques will actually allow the second arriving thread to read the "updated" value of the locked column and decide what action to take: should I use "FOR UPDATE" or "LOCK IN SHARE MODE"? This scenario explains what I want to accomplish:
    - Ulc1's thread arrives first: the "locked" column is false, so set it to true and continue the updating process.
    - Ulc2's thread arrives while Ulc1's transaction is still in progress, and even though the row is locked through InnoDB's mechanisms, it shouldn't have to wait; in fact it should read the "new" value of the locked column, which is "true", and so not have to wait until Ulc1's transaction commits before reading the value of the "locked" column (by that time the value of this column would already have been reset to false anyway).

    I am not very experienced with the two types of locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row, while FOR UPDATE doesn't even allow reading. But does this read see the updated value? Or does the second arriving thread have to wait for the first thread to commit before it can read the value? Any recommendation about which locking mechanism to use for this scenario is appreciated. Also, if there's a better way to "check" whether the row has been locked (other than using a true/false column flag), please let me know about it.
    Thank you.

    SOLUTION (JDBC pseudo-code based on @Darhazer's answer). Table: [ id (primary key) | username | date | topic | locked ]

        connection.setAutoCommit(false);
        // transaction 1: lock the row and check the flag
        PreparedStatement ps1 = connection.prepareStatement(
                "SELECT locked FROM tableName WHERE id = ? AND locked = false FOR UPDATE");
        ps1.setObject(1, key);                 // parameter types depend on the actual schema
        ps1.executeQuery();

        // transaction 2: mark the row as locked
        PreparedStatement ps2 = connection.prepareStatement(
                "UPDATE tableName SET locked = true WHERE id = ?");
        ps2.setObject(1, key);
        ps2.executeUpdate();
        connection.setAutoCommit(true);        // commit, so other threads can see the new value

        connection.setAutoCommit(false);
        // transaction 3: do the actual update
        PreparedStatement ps3 = connection.prepareStatement(
                "UPDATE tableName SET aField = 'sthg' WHERE id = ? AND date = ? AND topic = ?");
        ps3.setObject(1, key);
        ps3.setObject(2, "D");
        ps3.setObject(3, "T");
        ps3.executeUpdate();

        // reset locked to false
        PreparedStatement ps4 = connection.prepareStatement(
                "UPDATE tableName SET locked = false WHERE id = ?");
        ps4.setObject(1, key);
        ps4.executeUpdate();
        connection.setAutoCommit(true);        // commit

    Read the article
