Search Results

Search found 10773 results on 431 pages for 'concurrency testing'.

  • Unittest and mock

    - by user1410756
    I'm testing with unittest in Python and it works. Now I have introduced mock and I need to resolve a question. This is my code:

        from mock import Mock
        import unittest

        class Matematica(object):
            def __init__(self, op1, op2):
                self.op1 = op1
                self.op2 = op2

            def adder(self):
                return self.op1 + self.op2

            def subs(self):
                return abs(self.op1 - self.op2)

            def molt(self):
                return self.op1 * self.op2

            def divid(self):
                return self.op1 / self.op2

        class TestMatematica(unittest.TestCase):
            """Test of the Matematica class"""
            def testing(self):
                """Addition"""
                mat = Matematica(10, 20)
                self.assertEqual(mat.adder(), 30)
                """Subtraction"""
                self.assertEqual(mat.subs(), 10)

        class test_mock(object):
            def __init__(self, matematica):
                self.matematica = matematica

            def execute(self):
                self.matematica.adder()
                self.matematica.adder()
                self.matematica.subs()

        if __name__ == "__main__":
            result = unittest.TextTestRunner(verbosity=2).run(TestMatematica('testing'))
            a = Matematica(10, 20)
            b = test_mock(a)
            b.execute()
            mock_foo = Mock(b.execute)  # return_value = 'rafa'
            mock_foo()
            print mock_foo.called
            print mock_foo.call_count
            print mock_foo.method_calls

    This code runs, and the result of the prints is: True, 1, []. Now I need to count how many times self.matematica.adder() and self.matematica.subs() are called. Thanks!

  • Using SMO to call Database.ExecuteNonQuery() concurrently?

    - by JimDaniel
    I have been banging my head against the wall trying to figure out how I can run update scripts concurrently against multiple databases in a single SQL Server instance using SMO. Our environments have an ever-increasing number of databases which need updating, and iterating through them one at a time is becoming a problem (too slow). From what I understand, SMO does not support concurrent operations, and my tests have borne that out: there seems to be shared state at the Server object level (things like DataReader context), which keeps throwing exceptions such as "reader is already open". I apologize for not having the exact exceptions I am getting; I will try to get them and update this post. I am no expert on SMO and, to be honest, am just feeling my way through. I am not really sure I am approaching it the right way, but it's something that has to be done, or our productivity will slow to a crawl. So how would you do something like this? Am I using the wrong technology with SMO? All I want to do is execute SQL scripts against the databases of a single SQL Server instance in parallel. Thanks for any help you can give, Daniel

  • Performance testing on .xap files...

    - by Radhi
    Hi all, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please share them, and if there are other tools available for this, please mention them too. In my project, when we log into the Silverlight 4.0 application the screen takes 5 seconds to load, so I have to find out which method is taking the time. There are services that call other services, and we have used CAL, so I need to identify the bottleneck. Please help.

  • Multi-Threaded Application - Help with some pseudo code!!

    - by HonorGod
    I am working on a multi-threaded application and need help with some pseudo-code. To make it simpler to implement, I will explain it in simple terms / as a test case. Here is the scenario: I have an array list of strings (say 100 strings). I have a Reader class that reads the strings and passes them to a Writer class that prints the strings to the console. Right now this runs in a single-thread model. I want to make this multi-threaded, but with the following features: the ability to set MAX_READERS, the ability to set MAX_WRITERS, and the ability to set BATCH_SIZE. So basically the code should instantiate that many readers and writers and do the work in parallel. Any pseudo-code will really be helpful to keep me going!
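
    One plausible shape for this, sketched in Java (an illustration, not from the original post; the constant values and the BlockingQueue handoff are assumptions):

        import java.util.List;
        import java.util.concurrent.*;

        public class ReaderWriterPipeline {
            static final int MAX_READERS = 2;   // illustrative values
            static final int MAX_WRITERS = 4;
            static final int BATCH_SIZE = 10;

            public static void run(List<String> strings) throws InterruptedException {
                BlockingQueue<List<String>> queue = new LinkedBlockingQueue<>();
                ExecutorService readers = Executors.newFixedThreadPool(MAX_READERS);
                ExecutorService writers = Executors.newFixedThreadPool(MAX_WRITERS);

                // Readers: slice the input into batches and enqueue them.
                int batches = (strings.size() + BATCH_SIZE - 1) / BATCH_SIZE;
                for (int i = 0; i < strings.size(); i += BATCH_SIZE) {
                    final List<String> batch =
                            strings.subList(i, Math.min(i + BATCH_SIZE, strings.size()));
                    readers.submit(() -> queue.add(batch));
                }

                // Writers: each task drains exactly one batch and prints it.
                CountDownLatch done = new CountDownLatch(batches);
                for (int i = 0; i < batches; i++) {
                    writers.submit(() -> {
                        try {
                            queue.take().forEach(System.out::println);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        } finally {
                            done.countDown();
                        }
                    });
                }
                done.await();   // wait until every batch has been written
                readers.shutdown();
                writers.shutdown();
            }
        }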

  • Data access strategy for a site like SO - sorted SQL queries and simultaneous updates that affect the sort

    - by Kaleb Brasee
    I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed. Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this? My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.

  • Do isolation frameworks (Moq, RhinoMocks, etc.) lead to test over-specification?

    - by Marius
    In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented. So what do you think is the best approach to counter this?

    1) Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable, and of specifying more than is necessary to make your test pass.
    2) Favor manual, fully implemented fakes.
    3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness this might introduce.

  • Asynchronous calls cause StaleObjectStateException

    - by Mulone
    Hi all, I'm struggling with a Grails service. The service gets AJAX calls from the clients and behaves like a simple local cache for remote objects:

        void someCallFromClient() {
            // extract params
            def results = remoteService.queryService(params)
            results.each { result ->
                // try to fetch the result object from the local DB
                def obj = SomeClass.findBySomeField(result.someField)
                if (!obj) {
                    obj = new Result(params)
                    obj.save()
                }
                // do stuff with obj
            }
        }

    The service works fine when only one client is connected, but as soon as 2 or more clients start bombing the server with requests, I start getting:

        2010-05-24 13:09:49,764 [30893094@qtp-26315919-2] ERROR errors.GrailsExceptionResolver -
        Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [some object #892901]
        org.hibernate.StaleObjectStateException: Row was updated or deleted by another transaction
        (or unsaved-value mapping was incorrect): [some object #892901]
        // very long stack trace

    It probably happens when 2 calls try to create the same object concurrently. I suppose this is a rather typical situation to end up in. Could you recommend any pattern or good practice to fix this issue? For example, is there a way to tell one of the service instances to hang on, wait for the other to finish its work, and try again? Cheers!
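
    A common way out (a hedged sketch in plain Java, not from the post; retryOnStaleState and maxAttempts are illustrative names) is to catch the optimistic-locking failure and rerun the lookup-or-create step, so the loser of the race re-reads the row the winner just created:

        import java.util.function.Supplier;

        class OptimisticRetry {
            // Retries `action` when a concurrent transaction wins the race.
            static <T> T retryOnStaleState(int maxAttempts, Supplier<T> action) {
                for (int attempt = 1; ; attempt++) {
                    try {
                        return action.get(); // one lookup-or-create transaction
                    } catch (org.hibernate.StaleObjectStateException e) {
                        if (attempt >= maxAttempts) throw e;
                        // On retry, findBySomeField() now sees the row the other
                        // request created, so we update instead of inserting.
                    }
                }
            }
        }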

  • ScheduledThreadPoolExecutor executing at the wrong time because of a CPU clock discrepancy

    - by richs
    I'm scheduling a task using a ScheduledThreadPoolExecutor object. I use the following method:

        public ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit)

    and set the delay to 30 seconds (delay = 30,000 and unit = TimeUnit.MILLISECONDS). Sometimes my task runs immediately, and other times it takes 70 seconds. I believe the ScheduledThreadPoolExecutor uses System.nanoTime(), which is CPU-specific. When I run tests comparing System.currentTimeMillis() with System.nanoTime(), I see the following:

        schedule: 1272637682651 ms, 7858346157228410 ns
        execute:  1272637682667 ms, 7858386270968425 ns

    The difference is 16 ms but 40,113,740,015 ns (about 40,113 ms), so it looks like there is a discrepancy of roughly 40 seconds between the two clocks. How do I resolve this issue in Java code? Unfortunately this is a client's machine and I can't modify their system.
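
    For reference, a minimal sketch of the kind of comparison described (my reconstruction, not the poster's code):

        public class ClockDriftCheck {
            public static void main(String[] args) throws InterruptedException {
                long ms0 = System.currentTimeMillis();
                long ns0 = System.nanoTime();
                Thread.sleep(30000L); // nominal 30-second wait
                long msElapsed = System.currentTimeMillis() - ms0;
                long nsElapsed = System.nanoTime() - ns0;
                // The two elapsed times should agree to within a few ms. A large
                // gap means the monotonic clock is drifting against the wall
                // clock, which skews anything scheduled via nanoTime -- and
                // ScheduledThreadPoolExecutor measures its delays with nanoTime.
                System.out.printf("wall clock: %d ms, monotonic: %d ms%n",
                        msElapsed, nsElapsed / 1000000L);
            }
        }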

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and they involve a few servers in total. I have in mind two approaches to implementing this using the Executor framework, and I want to know which one is better and why:

    1) Create a few tasks, say 5 to 10, each of which does a bunch of sends and receives.
    2) Create a single task (Callable or Runnable) for each send and receive, and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). My question, while looking for an answer to this specific case, has a more general aspect: in situations like these, when you want an executor to do all the scheduling and related work, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
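
    A hedged sketch of the second option (illustrative only; sendAndReceive is a placeholder for the real network round trip):

        import java.util.*;
        import java.util.concurrent.*;

        public class SmallTasksDemo {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(16);
                List<Callable<String>> tasks = new ArrayList<>();
                for (int i = 0; i < 500; i++) {
                    final int id = i;
                    tasks.add(() -> sendAndReceive(id)); // one round trip per task
                }
                // The pool size, not the task count, bounds concurrency, so
                // hundreds of small tasks are cheap to submit; the executor
                // amortizes all the scheduling work.
                for (Future<String> f : pool.invokeAll(tasks)) {
                    f.get(); // decide upon each answer here
                }
                pool.shutdown();
            }

            static String sendAndReceive(int id) {
                return "response-" + id; // stand-in for the light network request
            }
        }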

  • Inconsistency in java.util.concurrent.Future?

    - by loganj
    For the sake of argument, let's say I'm implementing Future for a task which is not cancelable. The Java 6 API doc says:

        After [cancel()] returns, subsequent calls to isDone() will always return true.

        [cancel()] returns false if the task could not be cancelled, typically because it has already completed normally.

    It also says:

        [isDone()] returns true if this task completed.

    But what if my cancellation fails not because the task is already completed, but because it simply cannot be cancelled? Is there a way out of this contradiction (other than making my uncancelable task cancelable and sidestepping it altogether)?
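
    To make the bind concrete, a hedged Java illustration (not from the post): for a running, uncancelable task, no return value of cancel() satisfies both quoted clauses at once.

        import java.util.concurrent.*;

        // Sketch of a Future over a task that can never be cancelled.
        abstract class UncancelableFuture<V> implements Future<V> {
            private volatile boolean completed = false; // set when the task finishes

            @Override public boolean cancel(boolean mayInterruptIfRunning) {
                // Clause 1 says isDone() must return true after this call.
                // Clause 2 says return false, "typically because it has already
                // completed normally" -- but this task has NOT completed, so
                // isDone() keeps returning false: the two clauses collide.
                return false;
            }
            @Override public boolean isCancelled() { return false; }
            @Override public boolean isDone() { return completed; }
        }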

  • How does one SELECT block another?

    - by Krip
    I'm looking at the output of SP_WhoIsActive on SQL Server 2005, and it's telling me one session is blocking another - fine. However, they are both running SELECTs. How does one SELECT block another? Shouldn't they both be acquiring shared locks (which are compatible with one another)? Some more details: neither session has an open transaction (the open transaction count is zero), so they are stand-alone. The queries join a view with a table; they are complex queries which join lots of tables and result in 10,000 or so reads. Any insight much appreciated.

  • What Use Are Threads Outside of Parallel Problems on Multicore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG-2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system where your task doesn't gain significant performance from a parallel implementation? Or, more succinctly: what kinds of non-performance-related problems justify threading? Edit: I just ran across one instance that's not CPU-limited where threads make a big difference: "TCP, HTTP and the Multi-Threading Sweet Spot". Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly less local CPU, but would be much more difficult to design and implement.
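
    A hedged sketch of that sweet spot (illustrative; fetchChunk stands in for a blocking ranged HTTP read):

        import java.util.concurrent.*;

        public class ParallelFetch {
            // Over a high-latency link each blocking request mostly waits, so
            // N threads approach N-fold throughput until the pipe saturates,
            // at almost no CPU cost.
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(8);
                for (int chunk = 0; chunk < 64; chunk++) {
                    final int c = chunk;
                    pool.submit(() -> fetchChunk(c)); // thread blocks on the network
                }
                pool.shutdown();
                pool.awaitTermination(5, TimeUnit.MINUTES);
            }

            static byte[] fetchChunk(int index) {
                return new byte[0]; // placeholder for the ranged read of chunk `index`
            }
        }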

  • Would this prevent the row from being read during the transaction?

    - by acidzombie24
    I remember an example where reading data in a transaction and then writing it back is not safe, because another transaction may read/write it in the time between. So I would like to check the date and prevent the row from being modified or read until my transaction is finished. Would this do the trick? And are there any SQL variants on which this will not work?

        update tbl set id=id where date>expire_date and id=@id

    Note: date>expire_date just happens to be my condition; it could be anything. Would this prevent other transactions from reading the row until I commit or roll back?

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems fairly obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to run a loop/query internally.

  • Using ManualResetEvent to wait for multiple Image.ImageOpened events

    - by umlgorithm
        Dictionary<Image, ManualResetEvent> waitHandleMap =
            new Dictionary<Image, ManualResetEvent>();
        List<Image> images = GetImagesWhichAreAlreadyInVisualTree();
        foreach (var image in images)
        {
            image.Source = new BitmapImage(new Uri("some_valid_image_url"));
            waitHandleMap.Add(image, new ManualResetEvent(false));
            image.ImageOpened += delegate { waitHandleMap[image].Set(); };
            image.ImageFailed += delegate { waitHandleMap[image].Set(); };
        }
        WaitHandle.WaitAll(waitHandleMap.Values.ToArray());

    WaitHandle.WaitAll blocks the current UI thread, so the ImageOpened/ImageFailed events would never get fired. Could you suggest an easy workaround for waiting on multiple UI events?

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example, a timer thread periodically updating an int ID that is exposed via a getter on some class:

        public class MyClass {
            private volatile int id;

            public MyClass() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        ++id;
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public int getId() {
                return id;
            }
        }

    My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e., 64-bit)? Caveat: please do not reply that using volatile over synchronized is a case of premature optimisation; I am well aware of how and when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.
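
    If 64-bit tearing is the worry, a hedged alternative sketch (my illustration, not from the post) is AtomicLong, which guarantees non-torn 64-bit reads and writes regardless of how the volatile question is answered:

        import java.util.concurrent.*;
        import java.util.concurrent.atomic.AtomicLong;

        public class MyClassAtomic {
            private final AtomicLong id = new AtomicLong();

            public MyClassAtomic() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                execService.scheduleAtFixedRate(new Runnable() {
                    public void run() {
                        id.incrementAndGet(); // atomic read-modify-write
                    }
                }, 0L, 30L, TimeUnit.SECONDS);
            }

            public long getId() {
                return id.get(); // never sees a half-written value
            }
        }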

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks - the output file has to be in the order o0 o1 o2... The solution is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
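
    A minimal sketch of such a buffer (my illustration, assuming each job is tagged with its input index):

        import java.io.*;
        import java.util.*;

        // Releases blocks in input order, like TCP reassembly: submit(k, block)
        // may arrive in any order, but the stream receives o0, o1, o2, ...
        class ReorderBuffer {
            private final SortedMap<Integer, byte[]> pending = new TreeMap<>();
            private int next = 0; // index of the next block due to be written

            synchronized void submit(int index, byte[] block, OutputStream out)
                    throws IOException {
                pending.put(index, block);
                // Flush the contiguous prefix starting at `next`; anything
                // beyond a gap stays buffered until the gap fills.
                while (pending.containsKey(next)) {
                    out.write(pending.remove(next));
                    next++;
                }
            }
        }

    Under memory pressure, the map can be bounded by blocking producers whenever pending grows past a fixed window, much like a TCP receive window.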

  • Java: serial thread confinement question

    - by denis
    Assume you have a collection (a ConcurrentLinkedQueue) of Runnables with mutable state. Thread A iterates over the collection and hands the Runnables to an ExecutorService. The run() method changes each Runnable's state. The Runnables have no internal synchronization. The above is a repetitive action, and the worker threads need to see the changes made by previous iterations. So a Runnable gets processed by one worker thread after another, but is never accessed by more than one thread at a time - a case of serial thread confinement (I hope ;)). The question: will it work just with the internal synchronization of the ConcurrentLinkedQueue/ExecutorService? To be more precise: if thread A hands Runnable R to worker thread B, and B changes the state of R, and then A hands R to worker thread C - does C see the modifications made by B?
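
    A hedged sketch of the handoff being asked about (illustrative; the visibility argument rests on the happens-before edges that ExecutorService.submit() and Future.get() document):

        import java.util.concurrent.*;

        class MutableTask implements Runnable {
            int state; // deliberately no synchronization, no volatile

            public void run() {
                state++; // mutate the confined state
            }
        }

        public class SerialConfinement {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(4);
                MutableTask task = new MutableTask();
                // At most one thread touches `task` at a time: submitting
                // happens-before the worker's run(), and run() completing
                // happens-before get() returning, so each worker sees the
                // previous worker's writes.
                for (int i = 0; i < 10; i++) {
                    pool.submit(task).get(); // wait before handing off again
                }
                System.out.println(task.state); // prints 10
                pool.shutdown();
            }
        }

    The sketch serializes handoffs through get(); in the scenario above, the ConcurrentLinkedQueue handoff provides analogous edges, per the memory-consistency note in its Javadoc.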
