Search Results

Search found 659 results on 27 pages for 'safety'.

Page 8/27

  • MMGR Questions, code use and thread-safety

    - by chadb
    1) Is MMGR thread-safe? 2) I was hoping someone could help me understand some code. I am looking at something where a macro is used, but I don't understand the macro. I know it contains a function call and an if check; however, the function is a void function. How does wrapping "(m_setOwner(__FILE__,__LINE__,__FUNCTION__),false)" ever change return types? #define someMacro (m_setOwner(__FILE__,__LINE__,__FUNCTION__),false) ? NULL : new ... void m_setOwner(const char *file, const unsigned int line, const char *func); 3) What is the point of the reservoir? 4) On line 770 ("void *operator new(size_t reportedSize)") there is the line "// ANSI says: allocation requests of 0 bytes will still return a valid value". Who/what is ANSI in this context? Do they mean the standard? 5) This is more of a C++ standards question, but where does "reportedSize" come from for "void *operator new(size_t reportedSize)"? 6) Is this the code that is actually doing the allocation needed? "au->actualAddress = malloc(au->actualSize);"

    Read the article

  • Execute a block of database queries

    - by Nightmare
    I have the following task to complete: In my program I have a block of database queries or "questions". I want to execute these questions and wait for the result of all of them, or catch an error if one question fails! My Question object looks like this (simplified): public class DbQuestion(String sql) { [...] } [...] //The answer is just a holder for custom data... public void SetAnswer(DbAnswer answer) { //Store the answer in the question and fire an event to the listeners this.OnAnswered(EventArgs.Empty); } [...] public void SetError() { //Signal an error in this query! this.OnError(EventArgs.Empty); } So every question fired at the database has a listener that waits for the parsed result. Now I want to fire some questions asynchronously at the database (max. 5 or so) and fire an event with the data from all questions, or an error if even one question throws one! What is the best, or at least a good, way to accomplish this task? Can I really execute more than one question in parallel and stop all my work when one question throws an error? I think I need some inspiration on this... Just a note: I'm working with .NET Framework 2.0.
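
    On .NET Framework 2.0 (no Task Parallel Library), one workable shape - sketched here with invented names (QueryBatch, RunAll), not the asker's DbQuestion API - is to queue each question on the thread pool, give each one its own ManualResetEvent, block on WaitHandle.WaitAll, and rethrow the first error seen:

        using System;
        using System.Threading;

        static class QueryBatch
        {
            // Runs each "question" on the thread pool and blocks until all have finished.
            // If any of them threw, the first exception observed is rethrown to the caller.
            public static void RunAll(params WaitCallback[] questions)
            {
                WaitHandle[] done = new WaitHandle[questions.Length];
                Exception firstError = null;
                object errorLock = new object();

                for (int i = 0; i < questions.Length; i++)
                {
                    ManualResetEvent finished = new ManualResetEvent(false);
                    done[i] = finished;
                    WaitCallback question = questions[i];

                    ThreadPool.QueueUserWorkItem(delegate(object state)
                    {
                        try { question(state); }
                        catch (Exception ex)
                        {
                            lock (errorLock) { if (firstError == null) firstError = ex; }
                        }
                        finally { finished.Set(); }
                    });
                }

                WaitHandle.WaitAll(done);            // block until every question is done
                if (firstError != null) throw firstError;
            }
        }

    WaitHandle.WaitAll is limited to 64 handles and cannot be called from an STA thread (e.g. a WinForms UI thread); in those cases, waiting on each handle in a loop is the usual workaround.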

    Read the article

  • Can the lock function be used to implement thread-safe enumeration?

    - by Daniel
    I'm working on a thread-safe collection that uses Dictionary as a backing store. In C# you can do the following: private IEnumerable<KeyValuePair<K, V>> Enumerate() { if (_synchronize) { lock (_locker) { foreach (var entry in _dict) yield return entry; } } else { foreach (var entry in _dict) yield return entry; } } The only way I've found to do this in F# is using Monitor, e.g.: let enumerate() = if synchronize then seq { System.Threading.Monitor.Enter(locker) try for entry in dict -> entry finally System.Threading.Monitor.Exit(locker) } else seq { for entry in dict -> entry } Can this be done using the lock function? Or, is there a better way to do this in general? I don't think returning a copy of the collection for iteration will work because I need absolute synchronization.

    Read the article

  • Efficient thread-safe singleton in C++

    - by user168715
    The usual pattern for a singleton class is something like static Foo &getInst() { static Foo *inst = NULL; if(inst == NULL) inst = new Foo(...); return *inst; } However, it's my understanding that this solution is not thread-safe, since 1) Foo's constructor might be called more than once (which may or may not matter) and 2) inst may not be fully constructed before it is returned to a different thread. One solution is to wrap a mutex around the whole method, but then I'm paying for synchronization overhead long after I actually need it. An alternative is something like static Foo &getInst() { static Foo *inst = NULL; if(inst == NULL) { pthread_mutex_lock(&mutex); if(inst == NULL) inst = new Foo(...); pthread_mutex_unlock(&mutex); } return *inst; } Is this the right way to do it, or are there any pitfalls I should be aware of? For instance, are there any static initialization order problems that might occur, i.e. is inst always guaranteed to be NULL the first time getInst is called?

    Read the article

  • Apply [ThreadStatic] attribute to a method in an external assembly

    - by Sen Jacob
    Can I use an external assembly's static method as if it were a [ThreadStatic] method? Here is my situation. The assembly class (whose source I do not have access to) has this structure: public class RegistrationManager { private RegistrationManager() {} public static void RegisterConfiguration(int ID) {} public static object DoWork() {} public static void UnregisterConfiguration(int ID) {} } Once registered, I cannot call DoWork() with a different ID without unregistering the previously registered one. What I actually want is to call the DoWork() method with different IDs simultaneously from multiple threads. If the RegisterConfiguration(int ID) method were [ThreadStatic], I could have called it from different threads without these problems, right? So, can I apply the [ThreadStatic] attribute to this method, or is there any other way I can call the two static methods at the same time without waiting for another thread to unregister? If I test it like the following, it should work: for(int i = 0; i < 10; i++) { new Thread(new ThreadStart(() => Checker(i))).Start(); } public void Checker(int i) { RegistrationManager.RegisterConfiguration(i); // Now I cannot register a second time RegistrationManager.DoWork(); Thread.Sleep(5000); // DoWork() may take a little while to complete before unregistering RegistrationManager.UnregisterConfiguration(i); }
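
    For what it's worth, [ThreadStatic] can only be applied to static fields, not methods, so it cannot be retrofitted onto a method in an assembly you don't control. If the manager really does hold a single global registration, one hedged workaround is simply to serialize the whole register/work/unregister sequence behind one lock, as in the sketch below; RegistrationManager and its signatures are taken from the question, everything else is hypothetical:

        using System.Threading;

        static class SerializedWork
        {
            private static readonly object gate = new object();

            // Each caller owns the single registration slot for the duration of its work.
            public static object DoRegisteredWork(int id)
            {
                lock (gate)
                {
                    RegistrationManager.RegisterConfiguration(id);
                    try
                    {
                        return RegistrationManager.DoWork();
                    }
                    finally
                    {
                        RegistrationManager.UnregisterConfiguration(id);
                    }
                }
            }
        }

    This gives correctness but no parallelism for DoWork() itself; running registrations truly in parallel would need support from the library (for example per-instance or per-AppDomain state), which a caller cannot bolt on from the outside.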

    Read the article

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within each thread). The problem persists if the threads render the upper and lower parts of the screen respectively. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? And again, from what I've understood this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image (spam prevention stops me from posting images directly). P.S. I use exactly the same implementation in both cases; the ONLY difference is a single thread vs. two threads created for the rendering.
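
    For reference, the scheme described - threads that read shared, unchanging scene data and write only to disjoint indices of a shared pixel buffer - is safe in itself, provided the threads are joined before the buffer is read. A minimal C# sketch of that row-interleaved partitioning is below (all names hypothetical; TracePixel is a placeholder). If noise still appears, the usual suspect is hidden shared mutable state inside the called functions, such as a shared random-number generator or scratch buffers.

        using System.Threading;

        class InterleavedRenderer
        {
            // Renders even rows on one thread and odd rows on another,
            // writing to disjoint indices of a shared pixel buffer.
            public static int[] Render(int width, int height)
            {
                int[] pixels = new int[width * height];

                Thread even = new Thread(() => RenderRows(pixels, width, height, 0));
                Thread odd  = new Thread(() => RenderRows(pixels, width, height, 1));
                even.Start();
                odd.Start();
                even.Join();   // join both threads before anyone reads the buffer
                odd.Join();

                return pixels;
            }

            private static void RenderRows(int[] pixels, int width, int height, int firstRow)
            {
                for (int y = firstRow; y < height; y += 2)
                    for (int x = 0; x < width; x++)
                        pixels[y * width + x] = TracePixel(x, y);   // depends only on (x, y) and read-only scene data
            }

            private static int TracePixel(int x, int y)
            {
                return (x ^ y) & 0xFF;   // placeholder for the real ray-tracing work
            }
        }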

    Read the article

  • Thread-safe use of a singleton's members

    - by Anthony Mastrean
    I have a C# singleton class that multiple classes use. Is access through Instance to the Toggle() method thread-safe? If yes, by what assumptions, rules, etc.? If no, why not, and how can I fix it? public class MyClass { private static readonly MyClass instance = new MyClass(); public static MyClass Instance { get { return instance; } } private int value = 0; public int Toggle() { if(value == 0) { value = 1; } else if(value == 1) { value = 0; } return value; } }
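
    For context: the instance itself is created safely by the static initializer, but Toggle() is an unsynchronized read-modify-write on a plain int field, so two threads can both read the same value and both write, losing a toggle. A hedged sketch of two common fixes (class and member names invented here):

        using System.Threading;

        public class ToggleSafely
        {
            private readonly object gate = new object();
            private int value = 0;

            // Option 1: serialize the read-modify-write with a lock.
            public int ToggleWithLock()
            {
                lock (gate)
                {
                    value = (value == 0) ? 1 : 0;
                    return value;
                }
            }

            // Option 2: lock-free flip between 0 and 1 with a compare-and-swap loop.
            public int ToggleLockFree()
            {
                while (true)
                {
                    int seen = Interlocked.CompareExchange(ref value, 0, 0);   // atomic read
                    int next = (seen == 0) ? 1 : 0;
                    if (Interlocked.CompareExchange(ref value, next, seen) == seen)
                        return next;
                }
            }
        }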

    Read the article

  • Is it safe to use a boolean flag to stop a thread from running in C#?

    - by Lirik
    My main concern is with the boolean flag... is it safe to use it without any synchronization? I've read in several places that it's atomic. class MyTask { private ManualResetEvent startSignal; private CountDownLatch latch; private bool running; MyTask(CountDownLatch latch) { running = false; this.latch = latch; startSignal = new ManualResetEvent(false); } // A method which runs in a thread public void Run() { startSignal.WaitOne(); while(running) { startSignal.WaitOne(); //... some code } latch.Signal(); } public void Stop() { running = false; startSignal.Set(); } public void Start() { running = true; startSignal.Set(); } public void Pause() { startSignal.Reset(); } public void Resume() { startSignal.Set(); } } Is this a safe way to design a task? Any suggestions, improvements, comments? Note: I wrote my custom CountDownLatch class in case you're wondering where I'm getting it from.
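
    For context: reads and writes of a bool are atomic in C# (no torn values), but without volatile or another memory barrier the runtime is free to keep the flag cached, so the loop may never observe Stop(). A minimal sketch of the conventional fix, with the rest of the asker's design omitted and only the flag handling shown (names invented):

        using System.Threading;

        class StoppableWorker
        {
            private volatile bool running;      // volatile: writes by Stop() become visible to Run()

            public void Run()
            {
                while (running)
                {
                    // ... one unit of work ...
                    Thread.Sleep(10);           // placeholder for real work
                }
            }

            public void Start() { running = true; }
            public void Stop()  { running = false; }
        }

    On later frameworks, CancellationToken expresses the same stop request more directly.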

    Read the article

  • Is this a valid, lazy, thread-safe Singleton implementation for C#?

    - by Matthew
    I implemented a Singleton pattern like this: public sealed class MyClass { ... public static MyClass Instance { get { return SingletonHolder.instance; } } ... static class SingletonHolder { public static MyClass instance = new MyClass (); } } From Googling around for C# Singleton implementations, it doesn't seem like this is a common way to do things in C#. I found one similar implementation, but the SingletonHolder class wasn't static, and included an explicit (empty) static constructor. Is this a valid, lazy, thread-safe way to implement the Singleton pattern? Or is there something I'm missing?
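
    For reference, the nested-holder approach is thread-safe: the CLR runs a type's static field initializers exactly once, under its own synchronization. The wrinkle is laziness - a type without an explicit static constructor is marked beforefieldinit, which permits the runtime to run the initializer earlier than first use. The variant the asker found adds an empty static constructor to pin initialization to first access; a hedged sketch of that commonly cited form (assuming a private constructor in the elided parts):

        public sealed class MyClass
        {
            private MyClass() { }

            public static MyClass Instance
            {
                get { return SingletonHolder.instance; }
            }

            private static class SingletonHolder
            {
                internal static readonly MyClass instance = new MyClass();

                // Empty static constructor: removes the beforefieldinit flag so the
                // runtime initializes the holder only on first access to Instance.
                static SingletonHolder() { }
            }
        }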

    Read the article

  • Do I really need a count lock with multiple threads on one CPU core?

    - by MrROY
    If I have some code that looks like this (please ignore the syntax; I want to understand it without a specific language): count = 0 def countDown(): count += 1 if __name__ == '__main__': thread1(countDown) thread2(countDown) thread3(countDown) Here I have a CPU with only one core. Do I really need a lock on the variable count, in case it could be overwritten by other threads? I don't know, but if the answer depends heavily on the language, please explain it for Java, C and Python. Many thanks.
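
    Short answer: yes, even on a single core, because count += 1 is a read, an add and a write, and the scheduler can preempt a thread between those steps; on top of that, some runtimes give no visibility guarantees for plain fields. The hazard is the same in Java, C and Python (even CPython's GIL does not generally make += atomic). A hedged illustration in C#, with the usual fixes (names invented):

        using System.Threading;

        class Counter
        {
            private int count;
            private readonly object gate = new object();

            // Lost-update hazard: the read, add and write can be interleaved by
            // preemption, even on a single core.
            public void IncrementUnsafe()
            {
                count++;
            }

            // Fix 1: atomic read-modify-write.
            public void IncrementWithInterlocked()
            {
                Interlocked.Increment(ref count);
            }

            // Fix 2: mutual exclusion (heavier, but generalizes to larger critical sections).
            // A real class would use one of these two fixes consistently, not both.
            public void IncrementWithLock()
            {
                lock (gate) { count++; }
            }
        }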

    Read the article

  • How do I make a defaultdict safe for unsuspecting clients?

    - by ~miki4242
    Several times (even several in a row) I've been bitten by the defaultdict bug. d = defaultdict(list) ... try: v = d["key"] except KeyError: print "Sorry, no dice!" For those who have been bitten too, the problem is evident: when d has no key 'key', the v = d["key"] magically creates an empty list and assigns it to both d["key"] and v instead of raising an exception. Which can be quite a pain to track down if d comes from some module whose details one doesn't remember very well. I'm looking for a way to take the sting out of this bug. For me, the best solution would be to somehow disable a defaultdict's magic before returning it to the client.

    Read the article

  • Is this use of PreparedStatements in a Thread in Java correct?

    - by Gormcito
    I'm still an undergrad just working part time, so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread would spawn "task" threads (one for each db "task" record) which would perform some operations and then update the record to say it has finished. Therefore I needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects. This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions... Thread A stmt.setInt(); Thread B stmt.setInt(); Thread A stmt.execute(); Thread B stmt.execute(); A's version never gets executed. Is this thread-safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste? public class ThreadedTask implements Runnable { private final PreparedStatement taskCompleteStmt; public ThreadedTask() { //... taskCompleteStmt = Main.db.prepareStatement(...); } public void run() { //... taskCompleteStmt.executeUpdate(); } } public class Main { public static final Connection db = DriverManager.getConnection(...); }

    Read the article

  • Cannot make a static reference to the non-static type MyRunnable

    - by kaiwii ho
    Here is the whole code: import java.util.ArrayList; public class Test{ ThreadLocal<ArrayList<E>>arraylist=new ThreadLocal<ArrayList<E>>(){ @Override protected ArrayList<E> initialValue() { // TODO Auto-generated method stub //return super.initialValue(); ArrayList<E>arraylist=new ArrayList<E>(); for(int i=0;i<=20;i++) arraylist.add((E) new Integer(i)); return arraylist; } }; class MyRunnable implements Runnable{ private Test mytest; public MyRunnable(Test test){ mytest=test; // TODO Auto-generated constructor stub } @Override public void run() { System.out.println("before"+mytest.arraylist.toString()); ArrayList<E>myarraylist=(ArrayList<E>) mytest.arraylist.get(); myarraylist.add((E) new Double(Math.random())); mytest.arraylist.set(myarraylist); System.out.println("after"+mytest.arraylist.toString()); } // TODO Auto-generated method stub } public static void main(String[] args){ Test test=new Test<Double>(); System.out.println(test.arraylist.toString()); new Thread(new MyRunnable(test)).start(); new Thread(new MyRunnable(test)).start(); System.out.println(arraylist.toString()); } } My questions are: 1) Why does new Thread(new MyRunnable(test)).start(); cause the error "Cannot make a static reference to the non-static type MyRunnable"? 2) What does the static reference refer to here? Thanks in advance.

    Read the article

  • VB.NET SyncLock object

    - by Budius
    I've always seen SyncLock examples using Private Lock1 As New Object ' declaration SyncLock Lock1 ' usage but why? In my specific case I'm locking a Queue to avoid problems with multi-threaded Enqueueing and Dequeueing of my data. Can I lock the Queue object itself, like this? Private cmdQueue As New Queue(Of QueueItem) ' declaration SyncLock cmdQueue ' usage Any help appreciated. Thanks.
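
    Locking the Queue itself does work - SyncLock (like C#'s lock) only needs a reference-type object that every party agrees on - but the dedicated private lock object is the convention because any other code that can reach cmdQueue could also lock on it, creating contention or deadlocks outside your control. A hedged C# sketch of the usual shape (the VB.NET SyncLock version is the same idea; names invented):

        using System.Collections.Generic;

        class CommandQueue<T>
        {
            private readonly Queue<T> items = new Queue<T>();
            private readonly object gate = new object();    // private: no outside code can lock on it

            public void Enqueue(T item)
            {
                lock (gate) { items.Enqueue(item); }
            }

            public bool TryDequeue(out T item)
            {
                lock (gate)
                {
                    if (items.Count > 0) { item = items.Dequeue(); return true; }
                    item = default(T);
                    return false;
                }
            }
        }

    On .NET 4 and later, ConcurrentQueue(Of T) removes the need for hand-rolled locking here.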

    Read the article

  • Is it safe (documented behaviour?) to delete from the domain of an iterator during execution

    - by PoorLuzer
    I wanted to know whether it is safe (documented behaviour?) to delete from the domain space of an iterator while it is executing in Python. Consider the code: import os import sys sampleSpace = [ x*x for x in range( 7 ) ] print sampleSpace for dx in sampleSpace: print str( dx ) if dx == 1: del sampleSpace[ 1 ] del sampleSpace[ 3 ] elif dx == 25: del sampleSpace[ -1 ] print sampleSpace 'sampleSpace' is what I call 'the domain space of an iterator' (if there is a more appropriate word/phrase, let me know). What I am doing is deleting values from it while the iterator 'dx' is running through it. Here is what I expect from the code: Iteration versus element being pointed to (*): 0: [*0, 1, 4, 9, 16, 25, 36] 1: [0, *1, 4, 9, 16, 25, 36] ( delete 2nd and 5th element after this iteration ) 2: [0, 4, *9, 25, 36] 3: [0, 4, 9, *25, 36] ( delete -1th element after this iteration ) 4: [0, 4, 9, 25*] ( as the iterator points to nothing/end of list, the loop terminates ) .. and here is what I get: [0, 1, 4, 9, 16, 25, 36] 0 1 9 25 [0, 4, 9, 25] As you can see, what I expect is what I get, which is contrary to the behaviour I have had from other languages in such a scenario. Hence I wanted to ask: is there some rule like "the iterator becomes invalid if you mutate its space during iteration" in Python? Is it safe (documented behaviour?) in Python to do stuff like this?

    Read the article

  • ConcurrentModificationException in Java HashMap

    - by Bear
    Suppose I have two methods in my classes, writeToMap() and processKey(), and both methods are called by multiple threads. writeToMap() writes something into a HashMap, and processKey() does something based on the key set of the HashMap. Inside processKey, I first copy the originalMap before getting the key set: new HashMap<String, Map<String,String>>(originalMap).get("xx").keySet(); But I am still getting ConcurrentModificationException even though I always copy the HashMap. What's the problem?

    Read the article

  • Thread help with Android game

    - by Ciph3rzer0
    I need some help dealing with three threads in Android. One thread is the main thread, another is the GLThread, and the third is a WorkerThread I created to update the game state. The problem I have is that they all need to access the same LinkedList of game objects. Both the GLThread and my WorkerThread only read from the LinkedList, so no problem there, but occasionally the main thread adds another game object to the list. How can I manage this? I tried putting synchronized in front of the functions involved, but it really slows down the application. For some reason, just catching the errors and not rendering or updating the game state that frame causes it to start lagging permanently. Anyone have any great ideas?

    Read the article

  • Suggestions for lightweight, thread-safe scheduler

    - by nirvanai
    I am trying to write a round-robin scheduler for lightweight threads (fibers). It must scale to handle as many concurrently-scheduled fibers as possible. I also need to be able to schedule fibers from threads other than the one the run loop is on, and preferably unschedule them from arbitrary threads as well (though I could live with only being able to unschedule them from the run loop). My current idea is to have a circular doubly-linked list, where each fiber is a node and the scheduler holds a reference to the current node. This is what I have so far: using Interlocked = System.Threading.Interlocked; public class Thread { internal Future current_fiber; public void RunLoop () { while (true) { var fiber = current_fiber; if (fiber == null) { // block the thread until a fiber is scheduled continue; } if (fiber.Fulfilled) fiber.Unschedule (); else fiber.Resume (); //if (current_fiber == fiber) current_fiber = fiber.next; Interlocked.CompareExchange<Future> (ref current_fiber, fiber.next, fiber); } } } public abstract class Future { public bool Fulfilled { get; protected set; } internal Future previous, next; // this must be thread-safe // it inserts this node before thread.current_fiber // (getting the exact position doesn't matter, as long as the // chosen nodes haven't been unscheduled) public void Schedule (Thread thread) { next = this; // maintain circularity, even if this is the only node previous = this; try_again: var current = Interlocked.CompareExchange<Future> (ref thread.current_fiber, this, null); if (current == null) return; var target = current.previous; while (target == null) { // current was unscheduled; negotiate for new current_fiber var potential = current.next; var actual = Interlocked.CompareExchange<Future> (ref thread.current_fiber, potential, current); current = (actual == current? potential : actual); if (current == null) goto try_again; target = current.previous; } // I would lock "current" and "target" at this point. // How can I do this w/o risk of deadlock? next = current; previous = target; target.next = this; current.previous = this; } // this would ideally be thread-safe public void Unschedule () { var prev = previous; if (prev == null) { // already unscheduled return; } previous = null; if (next == this) { next = null; return; } // Again, I would lock "prev" and "next" here // How can I do this w/o risk of deadlock? prev.next = next; next.previous = prev; } public abstract void Resume (); } As you can see, my sticking point is that I cannot ensure the order of locking, so I can't lock more than one node without risking deadlock. Or can I? I don't want to have a global lock on the Thread object, since the amount of lock contention would be extreme. Plus, I don't especially care about insertion position, so if I lock each node separately then Schedule() could use something like Monitor.TryEnter and just keep walking the list until it finds an unlocked node. Overall, I'm not invested in any particular implementation, as long as it meets the requirements I've mentioned. Any ideas would be greatly appreciated. Thanks! P.S- For the curious, this is for an open source project I'm starting at http://github.com/nirvanai/Cirrus
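
    On the specific sticking point - taking two node locks without a global lock - one standard remedy is to impose a total order on the nodes (for example a per-node sequence number assigned at construction) and always acquire locks in that order; deadlock requires a cycle, and a consistent acquisition order makes cycles impossible. A stripped-down, hypothetical sketch of just that idea (Node, Gate and WithBoth are invented names, not the Future/Thread types above):

        using System.Threading;

        class Node
        {
            private static int nextId;
            public readonly int Id = Interlocked.Increment(ref nextId);   // total order for lock acquisition
            public readonly object Gate = new object();
        }

        static class OrderedLocking
        {
            // Runs action while holding both node locks, always acquiring in Id order.
            public static void WithBoth(Node a, Node b, System.Action action)
            {
                Node first  = a.Id <= b.Id ? a : b;
                Node second = a.Id <= b.Id ? b : a;

                lock (first.Gate)
                {
                    if (!ReferenceEquals(first, second))
                    {
                        lock (second.Gate) { action(); }
                    }
                    else
                    {
                        action();   // same node passed twice: one lock is enough
                    }
                }
            }
        }

    An Unschedule-style operation that touches a node and both of its neighbours would sort all three locks by the same key before acquiring them.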

    Read the article

  • Synchronizing reads to a Java collection

    - by jeff
    So I want to have an ArrayList that stores a series of stock quotes, where I keep track of the bid price, ask price and last price for each. Of course, at any time the bid, ask or last of a given stock can change. I have one thread that updates the prices and one that reads them. I want to make sure that while reading, no other thread is updating a price. So I looked at the synchronized collection, but that seems to only prevent reading while another thread is adding or deleting an entry in the ArrayList. So now I'm onto the wrapper approach: public class Qte_List { private final ArrayList<Qte> the_list; public void UpdateBid(String p_sym, double p_bid){ synchronized (the_list){ Qte q = Qte.FindBySym(the_list, p_sym); q.bid=p_bid;} } public double ReadBid(String p_sym){ synchronized (the_list){ Qte q = Qte.FindBySym(the_list, p_sym); return q.bid;} } } What I want to accomplish with this is that only one thread can be doing anything - reading or updating the_list's contents - at one time. Am I approaching this right? Thanks.

    Read the article

  • Turn based synchronization between threads

    - by Amarus
    I'm trying to find a way to synchronize multiple threads under the following conditions: * There are two types of threads: 1. A single "cyclic" thread executing an infinite loop to do cyclic calculations 2. Multiple short-lived threads not started by the main thread * The cyclic thread has a sleep duration between each cycle/loop iteration * The other threads are allowed to execute during the inter-cycle sleep of the cyclic thread: - Any other thread that attempts to execute during an active cycle should be blocked - The cyclic thread will wait until all other threads that are already executing have finished Here's a basic example of what I was thinking of doing: // Somewhere in the code: ManualResetEvent manualResetEvent = new ManualResetEvent(true); // Allow Externally call CountdownEvent countdownEvent = new CountdownEvent(1); // Can't AddCount a CountdownEvent with CurrentCount = 0 void ExternallyCalled() { manualResetEvent.WaitOne(); // Wait until CyclicCalculations is having its beauty sleep countdownEvent.AddCount(); // Notify CyclicCalculations that it should wait for this method call to finish before starting the next cycle Thread.Sleep(1000); // TODO: Replace with actual method logic countdownEvent.Signal(); // Notify CyclicCalculations that this call is finished } void CyclicCalculations() { while (!stopCyclicCalculations) { manualResetEvent.Reset(); // Block all incoming calls to ExternallyCalled from this point forward countdownEvent.Signal(); // Dirty workaround for the issue with AddCount and CurrentCount = 0 countdownEvent.Wait(); // Wait until all of the already executing calls to ExternallyCalled are finished countdownEvent.Reset(); // Reset the CountdownEvent for next cycle. Thread.Sleep(2000); // TODO: Replace with actual method logic manualResetEvent.Set(); // Unblock all threads executing ExternallyCalled Thread.Sleep(1000); // Inter-cycles delay } } Obviously, this doesn't work. There's no guarantee that there won't be any threads executing ExternallyCalled that are in between manualResetEvent.WaitOne(); and countdownEvent.AddCount(); at the time the main thread gets released by the CountdownEvent. I can't figure out a simple way of doing what I'm after, and almost everything that I've found after a lengthy search is related to producer/consumer synchronization, which I can't apply here.
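
    One way to get exactly these semantics with a single primitive, assuming the external work must not overlap the cycle itself: treat the cycle as a writer and each externally triggered call as a reader. ReaderWriterLockSlim then provides both guarantees - external callers block while the cycle holds the write lock, and the cycle's write-lock acquisition waits until every in-flight external call has released its read lock. A hedged sketch (class and method names invented; the Sleep calls stand in for real work):

        using System.Threading;

        class TurnBasedScheduler
        {
            private readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();
            private volatile bool stop;

            // Short-lived threads call this; it blocks while a cycle is active.
            public void ExternallyCalled()
            {
                gate.EnterReadLock();
                try
                {
                    Thread.Sleep(1000);          // placeholder for the real per-call logic
                }
                finally
                {
                    gate.ExitReadLock();
                }
            }

            public void CyclicCalculations()
            {
                while (!stop)
                {
                    gate.EnterWriteLock();       // waits for in-flight external calls, then blocks new ones
                    try
                    {
                        Thread.Sleep(2000);      // placeholder for the cycle's work
                    }
                    finally
                    {
                        gate.ExitWriteLock();    // external calls may run again
                    }
                    Thread.Sleep(1000);          // inter-cycle delay, during which readers proceed
                }
            }

            public void Stop() { stop = true; }
        }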

    Read the article

  • Python threading question (Working with a method that blocks forever)

    - by Nix
    I am trying to wrap a thread around some receiving logic in Python. Basically we have an app that will have a thread in the background polling for messages; the problem I ran into is that the piece that actually pulls the messages waits forever for a message, making it impossible to terminate... I ended up wrapping the pull in another thread, but I wanted to make sure there wasn't a better way to do it. Original code: class Manager: def __init__(self): receiver = MessageReceiver() receiver.start() #do other stuff... class MessageReceiver(Thread): receiver = Receiver() def __init__(self): Thread.__init__(self) def run(self): #stop is a flag that i use to stop the thread... while(not stopped ): #can never stop because pull below blocks message = receiver.pull() print "Message" + message What I refactored to: class Manager: def __init__(self): receiver = MessageReceiver() receiver.start() class MessageReceiver(Thread): receiver = Receiver() def __init__(self): Thread.__init__(self) def run(self): pullThread = PullThread(self.receiver) pullThread.start() #stop is a flag that i use to stop the thread... while(not stopped and pullThread.last_message ==None): pass message = pullThread.last_message print "Message" + message class PullThread(Thread): last_message = None def __init__(self, receiver): Thread.__init__(self, target=self.get_message, args=(receiver,)) def get_message(self, receiver): self.last_message = None self.last_message = receiver.pull() return self.last_message I know the obvious locking issues exist, but is this the appropriate way to control a receive thread that waits forever for a message? One thing I did notice was that this thing eats 100% CPU while waiting for a message... If you need to see the stopping logic, please let me know and I will post it.

    Read the article

  • Singleton pattern in Java - lazy initialization

    - by flash
    public static MySingleton getInstance() { if (_instance==null) { synchronized (MySingleton.class) { _instance = new MySingleton(); } } return _instance; } 1. Is there a flaw in the above implementation of the getInstance method? 2. What is the difference between the two implementations? public static synchronized MySingleton getInstance() { if (_instance==null) { _instance = new MySingleton(); } return _instance; } I have seen a lot of answers on the singleton pattern on Stack Overflow, but the question I have posted is mainly to understand the difference between 'synchronized' at method level and at block level in this particular case.
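
    For comparison, here is the pattern the two snippets are circling, rendered in C# to match the other .NET examples on this page (the same reasoning applies to the Java original): the first snippet's flaw is that the null check is never repeated inside the synchronized block, so two threads can both observe null and both construct an instance, and without a volatile field a partially published instance can, in principle, be observed; the second snippet is correct but takes the lock on every call, even after the instance exists. Correct double-checked locking re-checks under the lock and marks the field volatile:

        public sealed class MySingleton
        {
            private static volatile MySingleton instance;        // volatile: safe publication of the constructed object
            private static readonly object gate = new object();

            private MySingleton() { }

            public static MySingleton Instance
            {
                get
                {
                    if (instance == null)                         // fast path, no lock once initialized
                    {
                        lock (gate)
                        {
                            if (instance == null)                 // re-check: another thread may have won the race
                                instance = new MySingleton();
                        }
                    }
                    return instance;
                }
            }
        }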

    Read the article
