Search Results

Search found 9137 results on 366 pages for 'worker thread'.


  • How can I pass a parameter to a thread in ANSI C (Windows libraries can also be used)?

    - by moon
        int NM_Generator = 1;
        //Array to store thread handles
        HANDLE Array_Of_Thread_Handles[1];
        //Variable to hold handle of North pulse
        HANDLE Handle_Of_NM_Generator = 0;
        //Create NM_Generator thread
        Handle_Of_NM_Generator = CreateThread(NULL, 0, NMGenerator, &dDifference, 0, NULL);
        if (Handle_Of_NM_Generator == NULL)
            ExitProcess(NM_Generator);
    I want to pass a double value as the thread parameter. How can I do so?

    Read the article

  • Why the “Toilet” Analogy for SQL might be bad

    - by Jonathan Kehayias
    Robert Davis (blog/twitter) recently blogged The Toilet Analogy … or Why I Never Recommend Increasing Worker Threads, in which he uses an analogy for why increasing the value of the ‘max worker threads’ sp_configure option can be bad inside of SQL Server. While I can’t make an argument against Robert’s assertion that increasing worker threads may not improve performance, I can make an argument against his suggestion that simply increasing the number of logical processors, for example from...(read more)

    Read the article

  • UnsatisfiedLinkError on xawt when running HEC-HMS.sh

    - by G.Oxsen
    I am a recent adopter of Linux and this problem has got me stumped. I use HEC-HMS and HEC-DSSVue for work on a regular basis. I have been using the Windows versions in Wine, but they are really buggy, so I decided to try out the Linux versions. The links below will take you to the download pages for these two programs; they are free programs for hydrology and data management. Once I install them and attempt to run the shell script (HEC-HMS.sh, for example) I get a ton of Java errors that I do not understand. If I had to guess, I would say that the Java files in question cannot be found. When I check to see if Java is installed, it is. Here is the terminal output from trying to run HEC-HMS.sh:
        Exception in thread "Thread-1" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
            at sun.awt.DebugHelper.<clinit>(Unknown Source)
            at java.awt.Component.<clinit>(Unknown Source)
            at javax.swing.ImageIcon.<clinit>(Unknown Source)
            at hms.i.c(Unknown Source)
            at hms.i.b(Unknown Source)
            at hms.K.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
        Exception in thread "Thread-4" java.lang.UnsatisfiedLinkError: /home/smythe/HEC/hec-hms35/java/lib/i386/xawt/libmawt.so: libXtst.so.6: cannot open shared object file: No such file or directory
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.load0(Unknown Source)
            at java.lang.System.load(Unknown Source)
            at java.lang.ClassLoader$NativeLibrary.load(Native Method)
            at java.lang.ClassLoader.loadLibrary0(Unknown Source)
            at java.lang.ClassLoader.loadLibrary(Unknown Source)
            at java.lang.Runtime.loadLibrary0(Unknown Source)
            at java.lang.System.loadLibrary(Unknown Source)
            at sun.security.action.LoadLibraryAction.run(Unknown Source)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.awt.Toolkit.loadLibraries(Unknown Source)
            at java.awt.Toolkit.<clinit>(Unknown Source)
            at sun.print.CUPSPrinter.<clinit>(Unknown Source)
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at sun.print.UnixPrintServiceLookup.refreshServices(Unknown Source)
            at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(Unknown Source)
        Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class java.awt.Toolkit
            at java.awt.Color.<clinit>(Unknown Source)
            at hms.model.l.<init>(Unknown Source)
            at hms.model.ProjectManager.<init>(Unknown Source)
            at hms.Hms.<init>(Unknown Source)
            at hms.Hms.main(Unknown Source)
        Exception in thread "Thread-2" java.lang.NoClassDefFoundError: Could not initialize class sun.print.CUPSPrinter
            at sun.print.UnixPrintServiceLookup.getDefaultPrintService(Unknown Source)
            at javax.print.PrintServiceLookup.lookupDefaultPrintService(Unknown Source)
            at hms.util.f.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)
    I get similar output when I try to run HEC-DSSVue.sh. If anyone could shed some light on a solution, I would really appreciate it. The problem turned out to be that the program needed 32-bit versions of the particular dependencies.

    Read the article

  • C#: Handling Notifications: inheritance, events, or delegates?

    - by James Michael Hare
    Often times as developers we have to design a class where we get notification when certain things happen. In older object-oriented code this would often be implemented by overriding methods -- with events, delegates, and interfaces, however, we have far more elegant options. So, when should you use each of these methods and what are their strengths and weaknesses? Now, for the purposes of this article when I say notification, I'm just talking about ways for a class to let a user know that something has occurred. This can be through any programmatic means such as inheritance, events, delegates, etc. So let's build some context. I'm sitting here thinking about a provider neutral messaging layer for the place I work, and I got to the point where I needed to design the message subscriber which will receive messages from the message bus. Basically, what we want is to be able to create a message listener and have it be called whenever a new message arrives. Now, back before the flood we would have done this via inheritance and an abstract class: 1:  2: // using inheritance - omitting argument null checks and halt logic 3: public abstract class MessageListener 4: { 5: private ISubscriber _subscriber; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber) 11: { 12: _subscriber = subscriber; 13: _messageThread = new Thread(MessageLoop); 14: _messageThread.Start(); 15: } 16:  17: // user will override this to process their messages 18: protected abstract void OnMessageReceived(Message msg); 19:  20: // handle the looping in the thread 21: private void MessageLoop() 22: { 23: while(!_isHalted) 24: { 25: // as long as processing, wait 1 second for message 26: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 27: if(msg != null) 28: { 29: OnMessageReceived(msg); 30: } 31: } 32: } 33: ... 34: } It seems so odd to write this kind of code now. Does it feel odd to you? Maybe it's just because I've gotten so used to delegation that I really don't like the feel of this. To me it is akin to saying that if I want to drive my car I need to derive a new instance of it just to put myself in the driver's seat. And yet, unquestionably, five years ago I would have probably written the code as you see above. To me, inheritance is a flawed approach for notifications due to several reasons: Inheritance is one of the HIGHEST forms of coupling. You can't seal the listener class because it depends on sub-classing to work. Because C# does not allow multiple-inheritance, I've spent my one inheritance implementing this class. Every time you need to listen to a bus, you have to derive a class which leads to lots of trivial sub-classes. The act of consuming a message should be a separate responsibility than the act of listening for a message (SRP). Inheritance is such a strong statement (this IS-A that) that it should only be used in building type hierarchies and not for overriding use-specific behaviors and notifications. Chances are, if a class needs to be inherited to be used, it most likely is not designed as well as it could be in today's modern programming languages. So lets look at the other tools available to us for getting notified instead. Here's a few other choices to consider. Have the listener expose a MessageReceived event. Have the listener accept a new IMessageHandler interface instance. Have the listener accept an Action<Message> delegate. 
Really, all of these are different forms of delegation. Now, .NET events are a bit heavier than the other types of delegates in terms of run-time execution, but they are a great way to allow others using your class to subscribe to your events: 1: // using event - ommiting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private bool _isHalted = false; 6: private Thread _messageThread; 7:  8: // assign the subscriber and start the messaging loop 9: public MessageListener(ISubscriber subscriber) 10: { 11: _subscriber = subscriber; 12: _messageThread = new Thread(MessageLoop); 13: _messageThread.Start(); 14: } 15:  16: // user will override this to process their messages 17: public event Action<Message> MessageReceived; 18:  19: // handle the looping in the thread 20: private void MessageLoop() 21: { 22: while(!_isHalted) 23: { 24: // as long as processing, wait 1 second for message 25: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 26: if(msg != null && MessageReceived != null) 27: { 28: MessageReceived(msg); 29: } 30: } 31: } 32: } Note, now we can seal the class to avoid changes and the user just needs to provide a message handling method: 1: theListener.MessageReceived += CustomReceiveMethod; However, personally I don't think events hold up as well in this case because events are largely optional. To me, what is the point of a listener if you create one with no event listeners? So in my mind, use events when handling the notification is optional. So how about the delegation via interface? I personally like this method quite a bit. Basically what it does is similar to inheritance method mentioned first, but better because it makes it easy to split the part of the class that doesn't change (the base listener behavior) from the part that does change (the user-specified action after receiving a message). So assuming we had an interface like: 1: public interface IMessageHandler 2: { 3: void OnMessageReceived(Message receivedMessage); 4: } Our listener would look like this: 1: // using delegation via interface - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private IMessageHandler _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler.OnMessageReceived(msg); 28: } 29: } 30: } 31: } And they would call it by creating a class that implements IMessageHandler and pass that instance into the constructor of the listener. I like that this alleviates the issues of inheritance and essentially forces you to provide a handler (as opposed to events) on construction. Well, this is good, but personally I think we could go one step further. While I like this better than events or inheritance, it still forces you to implement a specific method name. What if that name collides? 
Furthermore if you have lots of these you end up either with large classes inheriting multiple interfaces to implement one method, or lots of small classes. Also, if you had one class that wanted to manage messages from two different subscribers differently, it wouldn't be able to because the interface can't be overloaded. This brings me to using delegates directly. In general, every time I think about creating an interface for something, and if that interface contains only one method, I start thinking a delegate is a better approach. Now, that said delegates don't accomplish everything an interface can. Obviously having the interface allows you to refer to the classes that implement the interface which can be very handy. In this case, though, really all you want is a method to handle the messages. So let's look at a method delegate: 1: // using delegation via delegate - omitting argument null checks and halt logic 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // handle the looping in the thread 19: private void MessageLoop() 20: { 21: while(!_isHalted) 22: { 23: // as long as processing, wait 1 second for message 24: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 25: if(msg != null) 26: { 27: _handler(msg); 28: } 29: } 30: } 31: } Here the MessageListener now takes an Action<Message>.  For those of you unfamiliar with the pre-defined delegate types in .NET, that is a method with the signature: void SomeMethodName(Message). The great thing about delegates is it gives you a lot of power. You could create an anonymous delegate, a lambda, or specify any other method as long as it satisfies the Action<Message> signature. This way, you don't need to define an arbitrary helper class or name the method a specific thing. Incidentally, we could combine both the interface and delegate approach to allow maximum flexibility. Doing this, the user could either pass in a delegate, or specify a delegate interface: 1: // using delegation - give users choice of interface or delegate 2: public sealed class MessageListener 3: { 4: private ISubscriber _subscriber; 5: private Action<Message> _handler; 6: private bool _isHalted = false; 7: private Thread _messageThread; 8:  9: // assign the subscriber and start the messaging loop 10: public MessageListener(ISubscriber subscriber, Action<Message> handler) 11: { 12: _subscriber = subscriber; 13: _handler = handler; 14: _messageThread = new Thread(MessageLoop); 15: _messageThread.Start(); 16: } 17:  18: // passes the interface method as a delegate using method group 19: public MessageListener(ISubscriber subscriber, IMessageHandler handler) 20: : this(subscriber, handler.OnMessageReceived) 21: { 22: } 23:  24: // handle the looping in the thread 25: private void MessageLoop() 26: { 27: while(!_isHalted) 28: { 29: // as long as processing, wait 1 second for message 30: Message msg = _subscriber.Receive(TimeSpan.FromSeconds(1)); 31: if(msg != null) 32: { 33: _handler(msg); 34: } 35: } 36: } 37: } } This is the method I tend to prefer because it allows the user of the class to choose which method works best for them. 
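    A quick illustration of that combined approach (this usage snippet is mine, not from the article; the subscriber variable and the MyMessageHandler class are assumed placeholders): the same sealed listener can be driven either by a lambda or by an interface implementation.

        // Hypothetical usage; 'subscriber' and 'MyMessageHandler' are assumed to exist elsewhere.
        var lambdaListener = new MessageListener(subscriber, msg => Console.WriteLine("Received: " + msg));
        var handlerListener = new MessageListener(subscriber, new MyMessageHandler());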
    You may be curious about the actual performance of these different methods:
        Enter iterations:
        1000000
        Inheritance took 4 ms.
        Events took 7 ms.
        Interface delegation took 4 ms.
        Lambda delegate took 5 ms.
    Before you get too caught up in the numbers, however, keep in mind that this is performance over 1,000,000 iterations. Since they are all under 10 ms, that boils down to fractions of a microsecond per iteration, so any of them is a fine choice performance-wise. As such, I think the choice really boils down to what you're trying to do. Here are my guidelines:
    • Inheritance should be used only when defining a collection of related types with implementation-specific behaviors; it should not be used as a hook for users to add their own functionality.
    • Events should be used when subscription is optional or multicast is desired.
    • Interface delegation should be used when you wish to refer to implementing classes by the interface type or if the type requires several methods to be implemented.
    • Delegate method delegation should be used when you only need to provide one method and do not need to refer to implementers by the interface name.

    Read the article

  • Which of these design patterns is superior?

    - by durron597
    I find I tend to design class structures where several subclasses have nearly identical functionality, but one piece of it is different. So I write nearly all the code in the abstract class, and then create several subclasses to do the one different thing. Does this pattern have a name? Is this the best way to handle this sort of scenario?
    Option 1:
        public interface TaxCalc {
            double calcTaxes(UserFinancials data);
        }

        public abstract class AbstractTaxCalc implements TaxCalc {
            // most constructors and fields are here

            public double calcTaxes(UserFinancials data) {
                // code
                double diffNumber = getNumber(data);
                // more code
            }

            abstract protected double getNumber(UserFinancials data);

            protected double initialTaxes(double grossIncome) {
                // code
                return initialNumber;
            }
        }

        public class SimpleTaxCalc extends AbstractTaxCalc {
            protected double getNumber(UserFinancials data) {
                double temp = initialTaxes(data.getGrossIncome());
                // do other stuff
                return temp;
            }
        }

        public class FancyTaxCalc extends AbstractTaxCalc {
            protected double getNumber(UserFinancials data) {
                double temp = initialTaxes(data.getGrossIncome());
                // Do fancier math
                return temp;
            }
        }
    Option 2: This version is more like the Strategy pattern, and should be able to do essentially the same sorts of tasks.
        public class TaxCalcImpl implements TaxCalc {
            private final TaxMath worker;

            public TaxCalcImpl(TaxMath worker) {
                this.worker = worker;
            }

            public double calcTaxes(UserFinancials data) {
                // code
                double analyzedDouble = initialNumber;
                double diffNumber = worker.getNumber(data, initialNumber);
                // more code
            }

            protected double initialTaxes(double grossIncome) {
                // code
                return initialNumber;
            }
        }

        public interface TaxMath {
            double getNumber(UserFinancials data, double initial);
        }
    Then I could do:
        TaxCalc dum = new TaxCalcImpl(new TaxMath() {
            @Override
            public double getNumber(UserFinancials data, double initial) {
                double temp = data.getGrossIncome();
                // do math
                return temp;
            }
        });
    And I could make specific implementations of TaxMath for things I use a lot, or I could make a stateless singleton for certain kinds of workers I use often. So the question I'm asking is: which of these patterns is superior, when, and why? Or, alternately, is there an even better third option?

    Read the article

  • Disqus thread migration. Gotchas?

    - by sramsay
    I've been migrating a site to a new domain. The site itself is pretty straightforward (it uses Jekyll), and everything has gone fine -- except migration of Disqus threads. I've had partial success -- some of the threads have migrated successfully, but not all. I've tried the domain migration wizard (which caught a few), the URL mapper (which caught a few), and the 301 redirect crawler (which caught a few). But the remaining threads just won't move, no matter which method I use. So I suppose I'm asking if there are any "gotchas" I should know about with this. When you execute any of these migration tools, it says it will "take a while." Does that mean hours? Days? I can't tell if it's working, and there's no logging or error reporting that I can see.

    Read the article

  • App Pool crashes before loading mscorsvr. How to troubleshoot?

    - by codepoke
    I have an app pool that recycles every 29 hours, per default. It recycles smoothly 9 times out of 10, and I'm pretty sure the recycle itself is good for the app. Once every couple weeks the recycle does not work. The old worker process dies cleanly and the new worker process starts, but will not serve up content. Recycling the app pool again manually works like a charm. The failed worker process stops and dies cleanly and a second new worker process fires up and serves content perfectly. I took a crash dump against the failed worker process prior to recycling it, and DebugDiag found nothing to complain about. I tried to dig a little deeper using WinDBG, but mscorsvr/mscorwks is not loaded yet 15 minutes after the new process started. There are 14 threads running (4 async) and 20 pending client connections, but .NET is not even loaded into the process yet. Any suggestions where to poke and prod to find a root cause on this?

    Read the article

  • How to create a Request Specific Thread Safe Static int Counter?

    - by user960567
    In one of my server applications I have a class that looks like this:
        class A
        {
            static int _value = 0;

            void DoSomething()
            {
                // a request starts here
                _value = 0;
                _value++;
                // a request ends here
            }

            // This method can be called many times during a request
            void SomeAsyncMethods()
            {
                _value++;
            }
        }
    The problem is that SomeAsyncMethods is async and can be called many times. What I need is to set _value = 0 when a request starts, then increment it asynchronously, and get the total at the end of the request. But the problem is that another request can access the class at the same time.
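    One possible direction (a minimal sketch of my own, not from the question; the class and member names are made up): drop the shared static field, give each request its own counter instance, and use Interlocked so the asynchronous increments stay accurate even when callbacks land on different threads.

        using System.Threading;

        // Hypothetical per-request counter: create one instance when the request starts,
        // pass it to the async work, and read Total when the request ends.
        class RequestCounter
        {
            private int _value;

            public void Increment()
            {
                Interlocked.Increment(ref _value);
            }

            public int Total
            {
                get { return Volatile.Read(ref _value); }
            }
        }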

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and http networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long running taks onto the thread pool from one thread and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) includes something like the following: Common mistake #1 - Fixing concurrency issue by just put a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpNetworkRequestComplete is occuring on. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common Mistake #2 - fixing deadlock issues by blindly exiting the lock, wait for the other thread to finish, then re-enter the lock - but without handling the case that the object just got updated by the other thread! Common Mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" it's pointer. But forgets to wait for the thread that is still running to release it's instance. As such, components are shutdown cleanly, then spurious or late callbacks are invoked on an object in an state not expecting any more calls. There are other edge cases, but the bottom line is this: Multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer on developing a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything. And the majority of the code isn't all that bad. But I'm looking to refactor code such that threading issues can be avoided altogether. One approach I am considering is this. 
    For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled, similar to COM apartment threading in Windows with the use of a message loop. Long blocking operations could still get dispatched to a work pool thread, but the completion callback would be invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following:
    • Literature or papers on design patterns around threads. Something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly.
    • Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for (that is, a UML equivalent for discussing threads across objects and classes).
    • Educating your development team on the issues with multithreaded code.
    What would you do?
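    To make the "one thread per component" idea concrete, here is a minimal sketch (my own illustration, not from the post, and written in C# rather than the team's C++) of a component thread that consumes a queue of marshalled callbacks:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Sketch of a per-component event loop: callbacks arriving on pool threads are
        // posted to a queue and executed one at a time on a single dedicated thread.
        public sealed class ComponentThread : IDisposable
        {
            private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
            private readonly Thread _thread;

            public ComponentThread()
            {
                _thread = new Thread(() =>
                {
                    foreach (var work in _queue.GetConsumingEnumerable())
                        work(); // every callback runs on this one thread
                });
                _thread.IsBackground = true;
                _thread.Start();
            }

            // Called from any thread, e.g. a completion callback on a worker pool thread.
            public void Post(Action work)
            {
                _queue.Add(work);
            }

            public void Dispose()
            {
                _queue.CompleteAdding();
                _thread.Join();
            }
        }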

    Read the article

  • Problem with a blocking network task

    - by user326967
    Hello everyone. I'm new in Java so please forgive any obscene errors that I may make :) I'm developing a program in Java that among other things it should also handle clients that will connect to a server. The server has 3 threads running, and I have created them in the following way : DaemonForUI du; DaemonForPort da; DaemonForCheck dc; da = new DaemonForPort(3); dc = new DaemonForCheck(5); du = new DaemonForUI(7); Thread t_port = new Thread(da); Thread t_check = new Thread(dc); Thread t_ui = new Thread(du); t_port.setName("v1.9--PORTd"); t_check.setName("v1.9-CHECKd"); t_ui.setName("v1.9----UId"); t_port.start(); t_check.start(); t_ui.start(); Each thread handles a different aspect of the complete program. The thread t_ui is responsible to accept asynchronous incoming connections from clients, process the sent data and send other data back to the client. When I remove all the commands from the previous piece of code that has to with the t_ui thread, everything runs ok which in my case means that the other threads are printing their debug messages. If I set the t_ui thread to run too, then the whole program blocks at the "accept" of the t_ui thread. After reading at online manuals I saw that the accepted connections should be non-blocking, therefore use something like that : public ServerSocketChannel ssc = null; ssc = ServerSocketChannel.open(); ssc.socket().bind(new InetSocketAddress(port)); ssc.configureBlocking(false); SocketChannel sc = ssc.accept(); if (sc == null) { ; } else { System.out.println("The server and client are connected!"); System.out.println("Incoming connection from: " + sc.socket().getRemoteSocketAddress()); in = new DataInputStream(new BufferedInputStream(sc.socket().getInputStream())); out = new DataOutputStream(new BufferedOutputStream(sc.socket().getOutputStream())); //other magic things take place after that point... The thread for t_ui is created as follows : class DaemonForUI implements Runnable{ private int cnt; private int rr; public ListenerForUI serverListener; public DaemonForUI(int rr){ cnt = 0; this.rr = rr; serverListener = new ListenerForUI(); } public static String getCurrentTime() { final String DATE_FORMAT_NOW = "yyyy-MM-dd HH:mm:ss"; Calendar cal = Calendar.getInstance(); SimpleDateFormat sdf = new SimpleDateFormat(DATE_FORMAT_NOW); return (sdf.format(cal.getTime())); } public void run() { while(true) { System.out.println(Thread.currentThread().getName() + "\t (" + cnt + ")\t (every " + rr + " sec) @ " + getCurrentTime()); try{ Thread.sleep(rr * 1000); cnt++; } catch (InterruptedException e){ e.printStackTrace(); } } } } Obviously, I'm doing something wrong at the creation of the socket or at the use of the thread. Do you know what is causing the problem? Every help would be greatly appreciated.

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard debian package with sudo apt-get install motion Assume my DNS IP cam is webcam, user : admin, password : password /etc/motion/motion.conf Bellow are my configuration file settings : netcam_url http://webcam/videostream.cgi netcam_userpass admin:password target_dir /media/videos/log/motion # The mini-http server listens to this port for requests (default: 0 = disabled) webcam_port 1234 ffmpeg_cap_new on ffmpeg_video_codec mpeg4 output_motion off snapshot_interval 0 # Quality of the jpeg (in percent) images produced (default: 50) webcam_quality 50 # Output frames at 1 fps when no motion is detected and increase to the # rate given by webcam_maxrate when motion is detected (default: off) webcam_motion on # Maximum framerate for webcam streams (default: 1) webcam_maxrate 15 # Restrict webcam connections to localhost only (default: on) webcam_localhost on # Limits the number of images per connection (default: 0 = unlimited) # Number can be defined by multiplying actual webcam rate by desired number of seconds # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate webcam_limit 0 control_port 8080 control_authentication admin:password Issue #1 when I try display http:/localhost:1234 the browser looks frozen, no HTTP 404 received but it stills waiting for data it seems .. Issue #2 in the output directory motion writes a lot of jpeg snapshots althought I just want to have several video sequenced files. Log I run motion in interactive mode in a terminal, here is the ouput root@mercure:/etc/motion# motion -c motion-1.0.conf [0] Processing thread 0 - config file motion-1.0.conf [0] Motion 3.2.12 Started [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785 [0] Thread 1 is from motion-1.0.conf [0] motion-httpd/3.2.12 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8080 [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Operation now in progress [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Resource temporarily unavailable [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Connection reset by peer [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [0] httpd - Finishing [0] httpd Closing [0] httpd thread exit [1] 
netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 It doesn't find the codec ... avcodec_open - could not open codec: Operation now in progress I've changed fmpeg_video_codec from mpeg4 to swf the result is the same. When using flv format motion writes a lot of .jpg image but I can't see anything at http://localhost:1234 [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg Any idea just to get the video stream recoded on my local disk ?

    Read the article

  • Innodb Queries Slow

    - by user105196
    I have redHat 5.3 (Tikanga) with Mysql 5.0.86 configued with RIAD 10 HW, I run an application inquiries from Mysql/InnoDB and MyIsam tables, the queries are super fast,but some quires on Innodb tables sometime slow down and took more than 1-3 seconds to run and these queries are simple and optimized, this problem occurred just on innodb tables in different time with random queries. Why is this happening only to Innodb tables? the below is the Innodb status and some Mysql variables: show innodb status\G ************* 1. row ************* Status: 120325 10:54:08 INNODB MONITOR OUTPUT Per second averages calculated from the last 19 seconds SEMAPHORES OS WAIT ARRAY INFO: reservation count 22943, signal count 22947 Mutex spin waits 0, rounds 561745, OS waits 7664 RW-shared spins 24427, OS waits 12201; RW-excl spins 1461, OS waits 1277 TRANSACTIONS Trx id counter 0 119069326 Purge done for trx's n:o < 0 119069326 undo n:o < 0 0 History list length 41 Total number of lock structs in row lock hash table 0 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 0 0, not started, process no 29093, OS thread id 1166043456 MySQL thread id 703985, query id 5807220 localhost root show innodb status FILE I/O I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (write thread) Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 132777 OS file reads, 689086 OS file writes, 252010 OS fsyncs 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s INSERT BUFFER AND ADAPTIVE HASH INDEX Ibuf: size 1, free list len 366, seg size 368, 62237 inserts, 62237 merged recs, 52881 merges Hash table size 8850487, used cells 3698960, node heap has 7061 buffer(s) 0.00 hash searches/s, 0.00 non-hash searches/s LOG Log sequence number 15 3415398745 Log flushed up to 15 3415398745 Last checkpoint at 15 3415398745 0 pending log writes, 0 pending chkp writes 218214 log i/o's done, 0.00 log i/o's/second BUFFER POOL AND MEMORY Total memory allocated 4798817080; in additional pool allocated 12342784 Buffer pool size 262144 Free buffers 101603 Database pages 153480 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages read 151954, created 1526, written 494505 0.00 reads/s, 0.00 creates/s, 0.00 writes/s No buffer pool page gets since the last printout ROW OPERATIONS 0 queries inside InnoDB, 0 queries in queue 1 read views open inside InnoDB Main thread process no. 29093, id 1162049856, state: waiting for server activity Number of rows inserted 77675, updated 85439, deleted 0, read 14377072495 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s END OF INNODB MONITOR OUTPUT 1 row in set, 1 warning (0.02 sec) read_buffer_size = 128M sort_buffer_size = 256M tmp_table_size = 1024M innodb_additional_mem_pool_size = 20M innodb_log_file_size=10M innodb_lock_wait_timeout=100 innodb_buffer_pool_size=4G join_buffer_size = 128M key_buffer_size = 1G can any one help me ?

    Read the article

  • Weird nfs performance: 1 thread better than 8, 8 better than 2!

    - by Joe
    I'm trying to determine the cause of poor nfs performance between two Xen Virtual Machines (client & server) running on the same host. Specifically, the speed at which I can sequentially read a 1GB file on the client is much lower than what would be expected based on the measured network connection speed between the two VMs and the measured speed of reading the file directly on the server. The VMs are running Ubuntu 9.04 and the server is using the nfs-kernel-server package. According to various NFS tuning resources, changing the number of nfsd threads (in my case kernel threads) can affect performance. Usually this advice is framed in terms of increasing the number from the default of 8 on heavily-used servers. What I find in my current configuration: RPCNFSDCOUNT=8: (default): 13.5-30 seconds to cat a 1GB file on the client so 35-80MB/sec RPCNFSDCOUNT=16: 18s to cat the file 60MB/s RPCNFSDCOUNT=1: 8-9 seconds to cat the file (!!?!) 125MB/s RPCNFSDCOUNT=2: 87s to cat the file 12MB/s I should mention that the file I'm exporting is on a RevoDrive SSD mounted on the server using Xen's PCI-passthrough; on the server I can cat the file in under seconds ( 250MB/s). I am dropping caches on the client before each test. I don't really want to leave the server configured with just one thread as I'm guessing that won't work so well when there are multiple clients, but I might be misunderstanding how that works. I have repeated the tests a few times (changing the server config in between) and the results are fairly consistent. So my question is: why is the best performance with 1 thread? A few other things I have tried changing, to little or no effect: increasing the values of /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh to 512K, 1M from the default 192K,256K increasing the value of /proc/sys/net/core/rmem_default and /proc/sys/net/core/rmem_max to 1M from the default of 128K mounting with client options rsize=32768, wsize=32768 From the output of sar -d I understand that the actual read sizes going to the underlying device are rather small (<100 bytes) but this doesn't cause a problem when reading the file locally on the client. The RevoDrive actually exposes two "SATA" devices /dev/sda and /dev/sdb, then dmraid picks up a fakeRAID-0 striped across them which I have mounted to /mnt/ssd and then bind-mounted to /export/ssd. I've done local tests on my file using both locations and see the good performance mentioned above. If answers/comments ask for more details I will add them.

    Read the article

  • How do you create a non-Thread-based Guice custom Scope?

    - by Russ
    It seems that all Guice's out-of-the-box Scope implementations are inherently Thread-based (or ignore Threads entirely): Scopes.SINGLETON and Scopes.NO_SCOPE ignore Threads and are the edge cases: global scope and no scope. ServletScopes.REQUEST and ServletScopes.SESSION ultimately depend on retrieving scoped objects from a ThreadLocal<Context>. The retrieved Context holds a reference to the HttpServletRequest that holds a reference to the scoped objects stored as named attributes (where name is derived from com.google.inject.Key). Class SimpleScope from the custom scope Guice wiki also provides a per-Thread implementation using a ThreadLocal<Map<Key<?>, Object>> member variable. With that preamble, my question is this: how does one go about creating a non-Thread-based Scope? It seems that something that I can use to look up a Map<Key<?>, Object> is missing, as the only things passed in to Scope.scope() are a Key<T> and a Provider<T>. Thanks in advance for your time.

    Read the article

  • How to make a thread that runs at x:00, x:15, x:30, and x:45 do something different at 2:00.

    - by rmarimon
    I have a timer thread that needs to run at a particular moments of the day to do an incremental replication with a database. Right now it runs at the hour, 15 minutes past the hour, 30 minutes past the hour and 45 minutes past the hour. This is the code I have which is working ok: public class TimerRunner implements Runnable { private static final Semaphore lock = new Semaphore(1); private static final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor(); public static void initialize() { long delay = getDelay(); executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS); } public static void destroy() { executor.shutdownNow(); } private static long getDelay() { Calendar now = Calendar.getInstance(); long p = 15 * 60; // run at 00, 15, 30 and 45 minutes past the hour long second = now.get(Calendar.MINUTE) * 60 + now.get(Calendar.SECOND); return p - (second % p); } public static void replicate() { if (lock.tryAcquire()) { try { Thread t = new Thread(new Runnable() { public void run() { try { // here is where the magic happens } finally { lock.release(); } } }); t.start(); } catch (Exception e) { lock.release(); } } else { throw new IllegalStateException("already running a replicator"); } } public void run() { try { TimerRunner.replicate(); } finally { long delay = getDelay(); executor.schedule(new TimerRunner(), delay, TimeUnit.SECONDS); } } } This process is started by calling TimerRunner.initialize() when a server starts and calling TimerRunner.destroy(). I have created a full replication process (as opposed to incremental) that I would like to run at a certain moment of the day, say 2:00am. How would change the above code to do this? I think that it should be very simple something like if it is now around 2:00am and it's been a long time since I did the full replication then do it now, but I can't get the if right. Beware that sometimes the replicate process takes way longer to complete. Sometimes beyond the 15 minutes, posing a problem in running at around 2:00am.

    Read the article

  • Why is a background thread not waiting for the task to complete?

    - by Haris Hasan
    I am playing with the async/await feature of C#. Things work as expected when I use it on the UI thread, but when I use it on a non-UI thread it doesn't work as expected. Consider the code below:
        private void Click_Button(object sender, RoutedEventArgs e)
        {
            var bg = new BackgroundWorker();
            bg.DoWork += BgDoWork;
            bg.RunWorkerCompleted += BgOnRunWorkerCompleted;
            bg.RunWorkerAsync();
        }

        private void BgOnRunWorkerCompleted(object sender, RunWorkerCompletedEventArgs runWorkerCompletedEventArgs)
        {
        }

        private async void BgDoWork(object sender, DoWorkEventArgs doWorkEventArgs)
        {
            await Method();
        }

        private static async Task Method()
        {
            for (int i = int.MinValue; i < int.MaxValue; i++)
            {
                var http = new HttpClient();
                var tsk = await http.GetAsync("http://www.ebay.com");
            }
        }
    When I execute this code, the background thread doesn't wait for the long-running task in Method to complete. Instead it executes BgOnRunWorkerCompleted immediately after calling Method. Why is that? What am I missing here? P.S.: I am not interested in alternate or correct ways of doing this. I want to know what is actually happening behind the scenes in this case. Why is it not waiting?
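    What appears to be happening (my explanation and sketch, not code from the post): BgDoWork is an async void method, so as soon as it hits the first await, control returns to the BackgroundWorker, which has no idea a Task is still pending and therefore raises RunWorkerCompleted immediately. If the worker thread really must wait, one option is to block it on the task explicitly:

        // Assumed fix for illustration: keep BackgroundWorker but make DoWork synchronous again,
        // blocking the worker thread until the async Method() has finished.
        private void BgDoWork(object sender, DoWorkEventArgs doWorkEventArgs)
        {
            Method().GetAwaiter().GetResult();
        }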

    Read the article

  • Concurrent Programming: Should I write a sequential program first, then add thread safety?

    - by evthim
    I'm working on a project where we have to create a number of threads(actual number will be inputted in by testers (TA's)). I'm having trouble not only with the programming but also with the design, I can't wrap my head around all of the threads that will be invoked and where I might cause errors. The project is due soon so I don't want to waste time on this if it'll actually set me back, but I was wondering if I should write the program like only one thread will be running and everything should be sequential and then later go back and try to add the thread safety parts of the code? Would that take twice the original amount of time? Project Description: Note:I'm going to be as vague as possible so I don't violate any honor codes, sorry :( your program should accept n number of objectA threads, m number of objectB threads, and r number of objectC objectB threads interact with code in objectA. objectA threads interact with code in objectB and objectC objectB and objectC don't directly interact, but do so indirectly through objectA -ex: objectB needs something from objectA. objectA gets the result for that something by calling objectC my confusion stems mostly from the fact that all of this interactions will be done by m+n threads and there are various restrictions throughout the descriptions, like objectB can request something from objectA, and objectA has to wait for objectC to finish that something before returning it to objectB. Also each objectA thread can only work on one instruction from objectB at a time, etc. etc. I just want to know if I write the code so that there is only 1 objectA, 1 objectB and 1 object C, can I go back and easily modify it so that those 1's can be changed to m, n and r? Sorry again, if my description is a little bit confusing.

    Read the article

  • Difference between HttpContext.Current.User and Thread.CurrentPrincipal, and when to use them?

    - by yamspog
    I have just recently run into an issue running an ASP.NET web app under Visual Studio 2008. I get the error 'type is not resolved for member...customUserPrincipal'. Tracking down various discussion groups, it seems that there is an issue with Visual Studio's web server when you assign a custom principal to Thread.CurrentPrincipal. In my code, I now use:
        HttpContext.Current.User = myCustomPrincipal;
        //Thread.CurrentPrincipal = myCustomPrincipal;
    I'm glad that I got the error out of the way, but it begs the question: "What is the difference between these two methods of setting a principal?" There are other Stack Overflow questions related to the differences, but they don't get into the details of the two approaches. I did find one tantalizing post that had the following grandiose comment but no explanation to back up its assertions: Use HttpContext.Current.User for all web (ASPX/ASMX) applications. Use Thread.CurrentPrincipal for all other applications like WinForms, console, and Windows service applications. Can any of you security/.NET gurus shed some light on this subject?
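    For context, a common pattern in ASP.NET (this sketch is mine, not from the question; the handler name follows the usual Global.asax convention) is to assign the same principal to both places, so code that reads HttpContext.Current.User and code that reads Thread.CurrentPrincipal see the same identity:

        using System;
        using System.Security.Principal;
        using System.Threading;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_PostAuthenticateRequest(object sender, EventArgs e)
            {
                // Hypothetical principal; in real code this would come from your auth mechanism.
                IPrincipal principal = new GenericPrincipal(
                    new GenericIdentity("someUser"), new[] { "SomeRole" });

                HttpContext.Current.User = principal;  // what the ASP.NET pipeline and pages read
                Thread.CurrentPrincipal = principal;   // what framework-agnostic code reads
            }
        }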

    Read the article

  • Creating a Qt GUI using a thread in C++?

    - by rashid
    I am trying to create this Qt GUI using a thread, but no luck. Below is my code. The problem is that the GUI never shows up. But if I put
        QApplication app(m.s_argc, m.s_argv);
        //object instantiation
        guiClass *gui = new guiClass();
        //show gui
        gui->show();
        app.exec();
    in main() then it works.
        /* INCLUDES HERE...
           ...
        */
        using namespace std;

        struct mainStruct {
            int s_argc;
            char ** s_argv;
        };
        typedef struct mainStruct mas;

        void *guifunc(void * arg);

        int main(int argc, char * argv[])
        {
            mas m;
            m.s_argc = argc;
            m.s_argv = argv;

            pthread_t threadGUI;
            //start a new thread for gui
            int result = pthread_create(&threadGUI, NULL, guifunc, (void *) &m);
            if (result) {
                printf("Error creating gui thread");
                exit(0);
            }
            return 0;
        }

        void *guifunc(void * arg)
        {
            mas m = *(mas *)arg;
            QApplication app(m.s_argc, m.s_argv);
            //object instantiation
            guiClass *gui = new guiClass();
            //show gui
            gui->show();
            app.exec();
        }

    Read the article

  • How do I stop Ant from hanging after executing a java program that attempted to interrupt a thread (and failed) and continued?

    - by Zugwalt
    I have Ant build and execute a java program. This program tries to do something that sometimes hangs, so we execute it in a thread. actionThread.start(); try { actionThread.join(10000); } catch (InterruptedException e) { System.out.println("InterruptedException: "+e.getMessage()); } if (actionThread.isAlive()) { actionThread.interrupt(); System.out.println("Thread timed out and never died"); } The ant call looks like this: <java fork="true" failonerror="yes" classname="myPackage.myPathName" classpath="build"> <arg line=""/> <classpath> <pathelement location="bin" /> <fileset dir="lib"> <include name="**/*.jar"/> </fileset> </classpath> </java> And when this runs I see the "Thread timed out and never died" statement, and I also see the main program finish execution, but then Ant just hangs. Presumably it is waiting for the child threads to finish, but they never will. How can I have Ant be done once it is done executing main() and just kill or ignore dead threads?

    Read the article

  • Why does the BackgroundWorker in WPF need Thread.Sleep to update UI controls?

    - by user364060
    namespace WpfApplication1 { /// <summary> /// Interaction logic for Window1.xaml /// </summary> public partial class Window1 : Window { BackgroundWorker bgWorker; Action<int> myProgressReporter; public Window1() { InitializeComponent(); bgWorker = new BackgroundWorker(); bgWorker.DoWork += bgWorker_Task; bgWorker.RunWorkerCompleted += myWorker_RunWorkerCompleted; // hook event to method bgWorker.ProgressChanged += bgWorker_ReportProgress; // hook the delegate to the method myProgressReporter = updateProgress; bgWorker.WorkerReportsProgress = true; } private void myWorker_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e) { object result; result = e.Result; MessageBox.Show(result.ToString()); progressBar1.Value = 0; button1.IsEnabled = true; } private void bgWorker_ReportProgress(object sender, ProgressChangedEventArgs e) { System.Windows.Threading.Dispatcher disp = button1.Dispatcher; disp.BeginInvoke(myProgressReporter,e.ProgressPercentage); //Dispatcher.BeginInvoke(myProgressReporter, DispatcherPriority.Normal, e.ProgressPercentage); } private void updateProgress(int progressPercentage) { progressBar1.Value = progressPercentage; } private void bgWorker_Task(Object sender, DoWorkEventArgs e) { int total = 1000; for (int i = 1; i <= total; i++) { if (bgWorker.WorkerReportsProgress) { int p = (int)(((float)i / (float)total) * 100); bgWorker.ReportProgress(p); } Thread.Sleep(1); // Without Thread.Sleep(x) the main thread freezes or gives stackoverflow exception, } e.Result = "Completed"; } private void button1_Click(object sender, RoutedEventArgs e) { if(!bgWorker.IsBusy) bgWorker.RunWorkerAsync("This is a background process"); button1.IsEnabled = false; } } }
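    One commonly suggested mitigation (my sketch, not an answer from the thread): the loop reports progress 1000 times as fast as it can, so the dispatcher queue gets flooded with BeginInvoke calls faster than the UI thread can drain them, and Thread.Sleep(1) merely slows the producer down. Reporting only when the integer percentage actually changes removes the need for the sleep:

        // Modified DoWork handler (assumes the same fields as the original code above).
        private void bgWorker_Task(object sender, DoWorkEventArgs e)
        {
            int total = 1000;
            int lastReported = -1;

            for (int i = 1; i <= total; i++)
            {
                int p = (int)(((float)i / (float)total) * 100);
                if (bgWorker.WorkerReportsProgress && p != lastReported)
                {
                    bgWorker.ReportProgress(p);   // at most ~100 dispatcher messages in total
                    lastReported = p;
                }
                // ... the real per-iteration work goes here, with no Thread.Sleep needed
            }

            e.Result = "Completed";
        }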

    Read the article

  • How to figure out how much RAM each prefork thread requires for maximum WordPress performance on an EC2 small instance

    - by two7s_clash
    Just read Making WordPress Stable on EC2-Micro In the "Tuning Apache" section, I can't quite figure out how he comes up with his numbers for his prefork config. He explains how to get the numbers for an average process, which I get. But then: Or roughly 53MB per process...In this case, ten threads should be safe. This means that if we receive more than ten simultaneous requests, the other requests will be queued until a worker thread is available. In order to maximize performance, we will also configure the system to have this number of threads available all of the time. From 53MB per process, with 613MB of RAM, he somehow gets this config, which I don't get: <IfModule prefork.c> StartServers 10 MinSpareServers 10 MaxSpareServers 10 MaxClients 10 MaxRequestsPerChild 4000 </IfModule> How exactly does he get this from 53MB per process, with 613MB limit? Bonus question From the below, on a small instance (1.7 GB memory), what would good settings be? bitnami@ip-10-203-39-166:~$ ps xav |grep httpd 1411 ? Ss 0:00 2 0 114928 15436 0.8 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1415 ? S 0:06 10 0 125860 55900 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1426 ? S 0:08 19 0 127000 62996 3.5 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1446 ? S 0:05 48 0 131932 72792 4.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1513 ? S 0:05 7 0 125672 54840 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1516 ? S 0:02 2 0 125228 48680 2.7 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1517 ? S 0:06 2 0 127004 55796 3.1 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1518 ? S 0:03 1 0 127196 54208 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf 1531 ? R 0:04 0 0 127500 54236 3.0 /opt/bitnami/apache2/bin/httpd -f /opt/bitnami/apache2/conf/httpd.conf
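    A hedged reading of the arithmetic (my assumption, not spelled out in the linked article): on the 613 MB micro instance some memory has to stay free for the kernel, MySQL, and other processes, so only roughly 530 MB is realistically available to Apache; 530 MB divided by 53 MB per prefork process is about 10, hence MaxClients 10, with StartServers, MinSpareServers, and MaxSpareServers pinned to the same value so that all ten workers are always resident and never forked on demand. Applying the same logic to the 1.7 GB small instance: the ps output shows resident sizes of roughly 48-73 MB per httpd process, so reserving a few hundred MB for MySQL and the OS leaves on the order of 1.2 GB for Apache, and 1200 MB divided by 73 MB is about 16, which suggests MaxClients somewhere in the 15-20 range as a starting point.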

    Read the article
