Search Results

Search found 956 results on 39 pages for 'synchronization'.

Page 31/39 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • Flex textarea control not updating properly.

    - by ielashi
    I am writing a Flex application that involves modifying a textarea very frequently. I have encountered issues with the textarea sometimes not displaying my modifications. The following ActionScript code illustrates my problem:

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" minWidth="955" minHeight="600">
            <mx:TextArea x="82" y="36" width="354" height="291" id="textArea" creationComplete="initApp()"/>
            <mx:Script>
                <![CDATA[
                    import flash.events.TimerEvent;
                    import flash.utils.Timer;

                    private var testSentence:String = "The big brown fox jumps over the lazy dog.";
                    private var testCounter:int = 0;

                    private function initApp():void {
                        var timer:Timer = new Timer(10);
                        timer.addEventListener(TimerEvent.TIMER, playSentence);
                        timer.start();
                    }

                    private function playSentence(event:TimerEvent):void {
                        textArea.editable = false;
                        if (testCounter == testSentence.length) {
                            testCounter = 0;
                            textArea.text += "\n";
                        } else {
                            textArea.text += testSentence.charAt(testCounter++);
                        }
                        textArea.editable = true;
                    }
                ]]>
            </mx:Script>
        </mx:Application>

    When you run the above code in a Flex project, it should repeatedly print, character by character, the sentence "The big brown fox jumps over the lazy dog.". But if you are typing into the textarea at the same time, you will notice the text the timer prints is distorted. I am really curious as to why this happens. The single-threaded nature of Flex and disabling user input for the textarea when I make modifications should prevent this from happening, but for some reason it doesn't seem to be working. I must note, too, that when running the timer at larger intervals (around 100 ms) it seems to work perfectly, so I am tempted to think it's some kind of synchronization issue in the internals of the Flex framework. Any ideas on what could be causing the problem?

    Read the article

  • Implementation review for a MVC.NET app with custom membership

    - by mrjoltcola
    I'd like to hear if anyone sees any problems with how I implemented the security in this Oracle-based MVC.NET app, whether security issues, concurrency issues or scalability issues.

    First, I implemented a CustomOracleMembershipProvider to handle the database interface to the membership store. I implemented a custom Principal named User which implements IPrincipal, and it has a hashtable of Roles. I also created a separate class named AuthCache which holds a simple cache for User objects. Its purpose is simply to avoid return trips to the database, while decoupling the caching from either the web layer or the data layer (so I can share the cache between MVC.NET, WCF, etc.). The stock MVC.NET MembershipService uses the CustomOracleMembershipProvider (configured in web.config), and both MembershipService and FormsService share access to the singleton AuthCache.

    My AccountController.LogOn() method:

    1) Validates the user via the MembershipService.Validate() method, also loads the roles into the User.Roles container and then caches the User in AuthCache.
    2) Signs the user into the Web context via FormsService.SignIn(), which accesses the AuthCache (not the database) to get the User, and sets HttpContext.Current.User to the cached User Principal.

    In global.asax.cs, Application_AuthenticateRequest() is implemented. It decrypts the FormsAuthenticationTicket, looks the User up in the AuthCache by the ticket.Name (Username) and sets the Principal by setting Context.User to the user from the AuthCache.

    So in short, all these classes share the AuthCache, and for thread synchronization I have a lock() in the cache store method; no lock in the read method. The custom membership provider doesn't know about the cache, the MembershipService doesn't know about any HttpContext (so it could be used outside of a web app), and the FormsService doesn't use any custom methods besides accessing the AuthCache to set the Context.User for the initial login, so it isn't dependent on a specific membership provider.

    The main thing I see now is that the AuthCache will be sharing a User object if a user logs in from multiple sessions. So I may have to change the key from just UserId to something else (maybe using something in the FormsAuthenticationTicket for the key?).
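
    For reference, the read-mostly cache shape described above can also be sketched without a hand-rolled lock; here is a minimal Java version (all names are hypothetical, Principal stands in for the custom User type, and ConcurrentHashMap replaces the lock-on-store/no-lock-on-read arrangement). Keying by ticket name rather than UserId is one way to sidestep the shared-User-object concern at the end of the post:

        import java.security.Principal;
        import java.util.concurrent.ConcurrentHashMap;

        // Minimal sketch of the AuthCache idea: reads are lock-free,
        // writes are atomic, and the map handles the thread safety.
        final class AuthCacheSketch {
            private static final AuthCacheSketch INSTANCE = new AuthCacheSketch();
            private final ConcurrentHashMap<String, Principal> users = new ConcurrentHashMap<>();

            static AuthCacheSketch instance() { return INSTANCE; }

            // Keyed by ticket name, so each session gets its own entry.
            void store(String ticketName, Principal user) { users.put(ticketName, user); }

            Principal lookup(String ticketName) { return users.get(ticketName); }

            void evict(String ticketName) { users.remove(ticketName); }
        }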

    Read the article

  • C# serialPort speed

    - by MarekK
    Hi, I am developing a monitoring tool for a protocol based on serial communication (serial baud rate = 187.5 kb). I use the System.IO.Ports.SerialPort class. This protocol has 4 kinds of frames: 1 byte, 3 bytes, 6 bytes, and 10-255 bytes. I can work with them, but I receive them too late to respond. At the beginning I receive the first packet after e.g. 96 ms (too late), and it contains about 1000 bytes. This means 20-50 frames (too much, too late). Later it works more stably, 3-10 bytes at a time, but that is still too late because a read contains 1-2 frames. Of course 1 frame is OK, but 2 is too late. Can you point out how I can deal with this more reliably? I know it is possible.

    Revision 1: I tried the straight way:

        private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            if (!serialPort1.IsOpen)
                return;
            this.BeginInvoke(new EventHandler(this.DataReceived));
        }

    And a BackgroundWorker. And ... new Thread(Read) and ... always the same. Too late, too slow. Do I have to go back to WinAPI and import some kernel32.dll functions?

    Revision 2: this is the part of the code used in the threading way:

        int c = serialPort1.BytesToRead;
        byte[] b = new byte[c];
        serialPort1.Read(b, 0, c);

    I guess it is some problem with the stream used inside the SerialPort class, or some synchronization problem.

    Revision 3: I do not use both at once!! I just tried different ways.

    Regards, MarekK
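
    Whatever the transport latency turns out to be, the "1-2 frames per read" symptom usually calls for reassembly in user code: accumulate whatever bytes arrive and peel off complete frames as soon as they are whole. A rough sketch in Java; the frame-length rule here is a placeholder, since the real protocol above would supply it:

        import java.io.ByteArrayOutputStream;

        // Accumulates raw serial bytes and emits complete frames one at a
        // time, regardless of how the driver chops up the stream.
        final class FrameAssembler {
            private final ByteArrayOutputStream pending = new ByteArrayOutputStream();

            void onBytesReceived(byte[] chunk, int len) {
                pending.write(chunk, 0, len);
                byte[] data = pending.toByteArray();
                int pos = 0;
                while (pos < data.length) {
                    int n = frameLength(data, pos, data.length - pos);
                    if (n <= 0 || pos + n > data.length) break; // incomplete: wait for more bytes
                    handleFrame(data, pos, n);
                    pos += n;
                }
                pending.reset();
                pending.write(data, pos, data.length - pos);    // keep the unconsumed tail
            }

            // Placeholder framing rule: first byte encodes the frame length.
            private int frameLength(byte[] data, int off, int avail) {
                return avail >= 1 ? (data[off] & 0xff) : -1;
            }

            private void handleFrame(byte[] data, int off, int len) {
                // respond to the complete frame here
            }
        }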

    Read the article

  • Ruby - Immutable Objects

    - by Chris Bunch
    I've got a highly multithreaded app written in Ruby that shares a few instance variables. Writes to these variables are rare (1%) while reads are very common (99%). What is the best way (either in your opinion or in the idiomatic Ruby fashion) to ensure that these threads always see the most up-to-date values involved? Here are some ideas I have so far (although I'd like your input before I overhaul this):

    1. Have a lock that must be used before reading or writing any of these variables (from Java Concurrency in Practice). The downside of this is that it puts a lot of synchronize blocks in my code and I don't see an easy way to avoid it.
    2. Use Ruby's freeze method (see here), although it looks equally cumbersome and doesn't give me any of the synchronization benefits that the first option gives.

    These options both seem pretty similar, but hopefully someone out there will have a better idea (or can argue well for one of these ideas). I'd also be fine with making the objects immutable so they aren't corrupted or altered in the middle of an operation, but I don't know Ruby well enough to make the call on my own, and this question seems to argue that objects are highly mutable.
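
    Since the post already cites Java Concurrency in Practice, here is what option 1 looks like in Java terms when specialized for the 99%-read case: a read/write lock, so readers never block each other (a sketch only; a Ruby translation would use a comparable read-write lock or a plain Mutex):

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        // Option 1, JCiP-style: any number of readers proceed in parallel;
        // a writer takes the lock exclusively for the rare updates.
        final class SharedState {
            private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
            private int value; // stands in for the shared instance variables

            int read() {
                rw.readLock().lock();
                try { return value; } finally { rw.readLock().unlock(); }
            }

            void write(int newValue) {
                rw.writeLock().lock();
                try { value = newValue; } finally { rw.writeLock().unlock(); }
            }
        }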

    Read the article

  • best practice for directory polling

    - by Hieu Lam
    Hi all, I have to do batch processing to automate a business process. I have to poll a directory at a regular interval to detect new files and process them. While old files are being processed, new files can come in. For now, I use the Quartz scheduler and thread synchronization to ensure that only one thread can process files. Parts of the code:

    application-context.xml

        <bean id="methodInvokingJob"
              class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
            <property name="targetObject" ref="documentProcessor" />
            <property name="targetMethod" value="processDocuments" />
        </bean>

    DocumentProcessor

        .....
        public void processDocuments() {
            LOG.info(Thread.currentThread().getName() + " attempt to run.");
            if (!processing) {
                synchronized (this) {
                    try {
                        processing = true;
                        LOG.info(Thread.currentThread().getName() + " is processing");
                        List<String> xmlDocuments = documentManager.getFileNamesFromFolder(incomingFolderPath);
                        // loop over the files and process unlocked files
                        for (String xmlDocument : xmlDocuments) {
                            processDocument(xmlDocument);
                        }
                    } finally {
                        processing = false;
                    }
                }
            }
        }

    In the current code, I prevent other threads from processing files while one thread is processing. Is that a good idea, or should I support multi-threaded processing? In that case, how can I know which files are being processed and which files have just arrived? Any idea is really appreciated.
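
    One note on the guard above: the check-then-act on the processing flag is itself racy, since two scheduler fires can both see processing == false and the second will queue up behind the lock and re-process instead of skipping. A hedged Java sketch of an atomic guard, plus a set of in-flight file names if processing ever goes multi-threaded (class and helper names are hypothetical):

        import java.util.List;
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicBoolean;

        public class PollingProcessor {
            private final AtomicBoolean running = new AtomicBoolean(false);
            private final Set<String> inFlight = ConcurrentHashMap.newKeySet();

            public void processDocuments() {
                // Atomic check-then-set: overlapping fires return immediately.
                if (!running.compareAndSet(false, true)) return;
                try {
                    for (String doc : listIncomingFiles()) {
                        if (inFlight.add(doc)) {          // false => already being processed
                            try { processDocument(doc); }
                            finally { inFlight.remove(doc); }
                        }
                    }
                } finally {
                    running.set(false);
                }
            }

            private List<String> listIncomingFiles() { return List.of(); } // placeholder
            private void processDocument(String name) { /* placeholder */ }
        }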

    Read the article

  • According to MSDN, the ReadFile() Win32 function may incorrectly report read operation completion. When?

    - by Martin Dobšík
    The MSDN states in its description of the ReadFile() function (http://msdn.microsoft.com/en-us/library/aa365467%28VS.85%29.aspx): “If hFile is opened with FILE_FLAG_OVERLAPPED, the lpOverlapped parameter must point to a valid and unique OVERLAPPED structure, otherwise the function can incorrectly report that the read operation is complete.”

    I have some applications that are violating the above recommendation and I would like to know the severity of the problem. I mean the program uses a named pipe that has been created with FILE_FLAG_OVERLAPPED, but it reads from it using the following call:

        ReadFile(handle, &buf, n, &n_read, NULL);

    That means it passes NULL as the lpOverlapped parameter. That call should not work correctly in some circumstances, according to the documentation. I have spent a lot of time trying to reproduce the problem, but I was unable to! I always got all data in the right place at the right time. I was only testing named pipes, though.

    Would anybody know when I can expect ReadFile() to incorrectly return and report successful completion even though the data is not yet in the buffer? What would have to happen in order to reproduce the problem? Does it happen with files, pipes, sockets, consoles, or other devices? Do I have to use a particular version of the OS? Or a particular style of reading (like registering the handle with an I/O completion port)? Or a particular synchronization of the reading and writing processes/threads? Or when would it fail? It works for me :/ Please help!

    With regards, Martin

    Read the article

  • Integrating a Custom Compiler with the Visual Studio IDE

    - by M.A. Hanin
    Background: I want to create a custom VB compiler, extending the "original" compiler, to handle my custom compile-time attributes. Question: after I've created my custom compiler and I've got an executable file capable of compiling VB code via the standard command-line interface, how do I integrate this compiler with the Visual Studio IDE? (Such that pressing "compile" or "build" will make use of my compiler instead of the default compiler.)

    EDIT: (Correct me if I'm wrong.) From the reactions here, I see this question is a bit shocking, so I shall further explain my needs and background. .NET provides us with a great mechanism called attributes. As far as I understand, for attributes to apply their intended behavior to the attributed element (assembly, module, class, method, etc.), they must be reflected upon. So the real trick here is reflecting and applying behavior at the right spot. Let's take Serialization for example: we decorate a class with the Serializable attribute, then pass an instance of the class to the formatter's Serialize method. The formatter reflects upon the instance, checking if it has the Serializable attribute, and acts accordingly.

    Now, if we examine the Synchronization, Flags, Obsolete and CLSCompliant attributes, the real question is: who reflects upon them? At least in some cases, it has to be the compiler (and/or IDE). Therefore, it seems that if I wish to create custom attributes that change an element's behavior regardless of any specific consumer, I must extend the compiler to reflect upon them at compilation. Of course, these are not my personal insights: the book "Applied .NET Attributes" provides a complete example of creating a custom attribute and a custom C# compiler to reflect upon that attribute at compilation (the example is used to implement "Java-style checked exceptions").

    Read the article

  • WinForms Load Event / Static Initialization Strangeness

    - by Eric J.
    Background: I'm troubleshooting a WinForms 2.0 program that's already been burned to CD for distribution to an internet-challenged target audience. Some users are experiencing a fatal error that I can reproduce locally.

    Reproducing the Error: I get the fatal error when I log into my Vista box using a standard user that I just created, even if I run the program as administrator. I do not get the fatal error when I log in as local administrator. I'm not sure that being administrator is necessarily the trigger (since runas did not help). I have reproduced this half a dozen times under each account with consistent results.

    The faulty code:

    Base.cs (base class for several user controls, only one of which is shown on the first screen)

        private void BaseWindow_Load(object sender, EventArgs e)
        {
            // This message shown once in both cases
            MessageBox.Show("BaseWindow_Load for " + this.GetType().FullName);
            SkinManager.ApplySkin(this);
        }

    SkinManager.cs

        private static Skin skin = null;

        public static void ApplySkin(UserControl applyTo)
        {
            if (skin == null)
            {
                skin = new Skin(SkinsDirectory, "Default");
            }
        }

    Skin.cs

        internal Skin(string skinPath, string skinName)
        {
            config = SkinConfig.Load(path);
        }

    SkinConfig.cs

        public static SkinConfig Load(string path)
        {
            // This message shown only once running as Admin but twice running as standard user
            System.Windows.Forms.MessageBox.Show("@1");
            // !!! LOCK path HERE !!!
        }

    A user control loads on the first form, which triggers a call to SkinManager.ApplySkin, which checks whether skin is null and, if so, assigns it (without thread synchronization or recursion protection), which ultimately causes a file to be opened. When logged in as local admin, that sequence completes just fine. When logged in as my test standard user, ApplySkin is always called a second time while skin is still null, causing a second attempt to load, which causes the file to be locked on the second attempt. The error handling is draconian at this point, and the program terminates.

    The Question: while this code can be easily fixed, I would like to understand why the error is happening only in some cases.
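
    Whatever the root cause turns out to be, the check-then-assign on the static skin field is unsafe against a second caller arriving before the first finishes initializing. For comparison, in Java the lazy-holder idiom closes that window without any explicit locking (a sketch, with a stand-in Skin type and hypothetical constructor arguments):

        // Lazy-holder idiom: the JVM initializes Holder (and thus SKIN)
        // exactly once, even if several threads hit applySkin() at once.
        final class SkinManagerSketch {
            private SkinManagerSketch() {}

            static final class Skin {                  // stand-in for the real type
                Skin(String dir, String name) { /* load config here */ }
            }

            private static final class Holder {
                static final Skin SKIN = new Skin("skins", "Default");
            }

            static void applySkin(Object control) {
                Skin s = Holder.SKIN;                  // triggers one-time init
                // ... apply s to the control ...
            }
        }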

    Read the article

  • Synchronization Exception

    - by Kurru
    Hi, I have two threads: one thread processes a queue and the other thread adds stuff into the queue. I want to put the queue-processing thread to sleep when it has finished processing the queue, and I want the second thread to tell it to wake up when it has added an item to the queue. However, these functions throw System.Threading.SynchronizationLockException ("Object synchronization method was called from an unsynchronized block of code") on the Monitor.PulseAll(waiting) call, because I haven't synchronized the function on the waiting object (which I don't want to do; I want to be able to process while adding items to the queue). How can I achieve this?

        Queue<object> items = new Queue<object>();
        object waiting = new object();

    1st thread:

        public void ProcessQueue()
        {
            while (true)
            {
                if (items.Count == 0)
                    Monitor.Wait(waiting);
                object real = null;
                lock (items)
                {
                    object item = items.Dequeue();
                    real = item;
                }
                if (real == null)
                    continue;
                // .. bla bla bla
            }
        }

    2nd thread involves:

        public void AddItem(object o)
        {
            // ... bla bla bla
            lock (items)
            {
                items.Enqueue(o);
            }
            Monitor.PulseAll(waiting);
        }
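
    Java enforces the same rule (wait/notify outside the owning monitor throws IllegalMonitorStateException), and the usual resolution rests on the detail the post is worried about: wait() releases the monitor while sleeping, so guarding both sides with the same lock does not stop producers from adding while the consumer waits. A sketch of the pattern in Java; the while-loop re-check also closes the lost-wakeup window between the Count check and the Wait in the posted code:

        import java.util.ArrayDeque;
        import java.util.Queue;

        // Producer/consumer with the monitor rules satisfied: wait() and
        // notifyAll() are only ever called while holding the lock on the
        // object being waited on (here, the queue itself).
        final class WorkQueue {
            private final Queue<Object> items = new ArrayDeque<>();

            void add(Object o) {
                synchronized (items) {
                    items.add(o);
                    items.notifyAll();       // legal: we hold the monitor
                }
            }

            Object take() throws InterruptedException {
                synchronized (items) {
                    while (items.isEmpty()) {
                        items.wait();        // releases the monitor while sleeping
                    }
                    return items.remove();
                }
            }
        }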

    Read the article

  • Is This a Valid Way to Use Blocks in Objective-C?

    - by Carter
    I've been building an HTTP client that uses web services to synchronize information between the client and server. I've been using blocks and NSURLConnection to achieve this on the client side, but I'm getting frequent EXC_BAD_ACCESS crashes in objc_msgSend(). From what I understand, this usually means that a stored block that has fallen off the stack has been called. I think I've coded things correctly to avoid this, but I'm still stuck.

    Here is conceptually what my code is doing. It starts by calling synchronizeWithWebServer. That method invokes listRootObjectsOnServerWithBlock:, which takes in a block to be called when the method returns. listRootObjectsOnServerWithBlock: initiates an NSURLConnection to the web server asynchronously. It too expects a block to be called when it returns. Inside that block I want to be able to execute the original block (so aptly named 'block'). This is only a simplified version of my code; the real synchronization process is more complex, but it's mostly more of the same as what you see below. Sometimes the code works perfectly, but about 80% of the time it crashes very early in the routine. It seems to be more vulnerable to crashing when my data set gets larger.

        - (void)synchronizeWithWebServer {
            [self listRootObjectsOnServerWithBlock:^(NSArray *results, NSError *error) {
                // Iterate over result objects and perform some other similar routines.
            }];
        }

        - (void)listRootObjectsOnServerWithBlock:(void (^)(NSArray *results, NSError *error))block {
            // Create NSURLRequest here.
            // Create connection asynchronously.
            block = [block copy];
            [NSURLConnection sendAsynchronousRequest:urlRequest
                                               queue:[NSOperationQueue currentQueue]
                                   completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                // Parse response from web server (stored in NSData *data)
                NSArray *results = .....
                // Call 'block'
                block(results, error);
                [block release];
            }];
        }

    Read the article

  • Which Database to choose?

    - by Sundar
    I have the following criteria:

    1. The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, as one can with MS Access.
    2. There will be no central database server. Each machine will run its own database server locally and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support.
    3. Strong consistency is not needed. Users will synchronize each other's databases when they need to. In case of conflicts, it should be possible to find the conflict and present it (from the application) to the user for fixing.
    4. Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards them, but I don't know what to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • VSTS Team Build Mail notification should include the "associateChangeSets"

    - by Kris
    Team Build gurus, I am looking for the "Associated ChangeSets" list to be included in the build mail notifications. Say, by default we get a build notification like this:

        Team Project: Content Server
        Build Number: MerchantPortal_1.0.0707.69
        Build Agent: \Content Server\MerchantPortalBuildBox
        Build Definition: \Content Server\MerchantPortal QA
        Build started by: ENETDOM\jrichter
        Build Start Time: 7/7/2009 8:25:30 AM
        Build Finish Time: 7/7/2009 8:30:49 AM

        Notes:
        - All dates and times are shown in GMT -05:00:00 Central Daylight Time
        - You are receiving this notification because of a subscription created by ENETDOM\enbuild

        Provided by Microsoft Visual Studio® Team System 2008

    What I would really like is an email containing the changes, so the user does NOT have to click a URL to retrieve the list of changes. So I would like the mail to look something like this instead:

        Team Project: Content Server
        Build Number: MerchantPortal_1.0.0707.69
        Build Agent: \Content Server\MerchantPortalBuildBox
        Build Definition: \Content Server\MerchantPortal QA
        Build started by: ENETDOM\enbuild
        Build Start Time: 7/7/2009 8:25:30 AM
        Build Finish Time: 7/7/2009 8:30:49 AM

        Associated changesets:
        482  DOMAIN\johny   Not needed...
        486  DOMAIN\adam    A final synchronization with SourceSafe files after the 15 december release.
        487  DOMAIN\bob     Corrected the naught millenium bug....
        488  DOMAIN\sarah   Reverted back to csproj file with SC changes....

        Associated work items:
        ....

        Notes:
        - All dates and times are shown in GMT -05:00:00 Central Daylight Time
        - You are receiving this notification because of a subscription created by ENETDOM\enbuild

        Provided by Microsoft Visual Studio® Team System 2008

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...".

    There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log entry is being written I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks.

    EDIT: I created this test to check the garbage-collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be de-referenced/GC'd.

    Read the article

  • How to implement Master-Detail with Multi-Selection in WPF?

    - by gehho
    Hi, I plan to create a typical Master-Detail scenario, i.e. a collection of items displayed in a ListView via data binding to an ICollectionView, and details about the selected item in a separate group of controls (TextBoxes, NumUpDowns, ...). No problem so far; actually, I have already implemented a pretty similar scenario in an older project.

    However, it should be possible to select multiple items in the ListView and get the appropriate shared values displayed in the detail view. This means that if all selected items have the same value for a property, this value should be displayed in the detail view. If they do not share the same value, the corresponding control should provide some visual clue for the user indicating this, and no value should be displayed (or an "undefined" state in a CheckBox, for example). Now, if the user edits the value, this change should be applied to all selected items.

    Further requirements are:

    - MVVM compatibility (i.e. not too much code-behind)
    - Extendability (new properties/types can be added later on)

    Does anyone have experience with such a scenario? Actually, I think this should be a very common scenario, but I could not find any details on the topic anywhere. Thanks! gehho.

    PS: In the older project mentioned above, I had a solution using a subclass of the ViewModel which handled the special case of multi-selection. It checked all selected items for equality and returned the appropriate values. However, this approach had some drawbacks and somehow seemed like a hack because (besides other smelly things) it was necessary to break the synchronization between the ListView and the detail view and handle it manually.
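
    The value-merging rule at the heart of this is small enough to test on its own; a hedged sketch in Java of just that rule (the WPF binding and visual-clue plumbing is omitted, and all names are hypothetical):

        import java.util.List;
        import java.util.Objects;
        import java.util.Optional;
        import java.util.function.Function;

        // Returns the property value shared by every selected item, or empty
        // when the selection disagrees (the "undefined" state in the detail
        // view). A selection whose shared value is null also reads as empty.
        final class MultiSelection {
            static <T, V> Optional<V> sharedValue(List<T> selected, Function<T, V> property) {
                if (selected.isEmpty()) return Optional.empty();
                V first = property.apply(selected.get(0));
                for (T item : selected) {
                    if (!Objects.equals(property.apply(item), first)) return Optional.empty();
                }
                return Optional.ofNullable(first);
            }
        }

    An edit in the detail view then fans out by looping over the selection and setting the property on each item.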

    Read the article

  • Lightweight spinlocks built from GCC atomic operations?

    - by Thomas
    I'd like to minimize synchronization and write lock-free code when possible in a project of mine. When absolutely necessary, I'd love to substitute lightweight spinlocks built from atomic operations for pthread and Win32 mutex locks. My understanding is that these are system calls underneath and could cause a context switch (which may be unnecessary for very quick critical sections where simply spinning a few times would be preferable). The atomic operations I'm referring to are well documented here: http://gcc.gnu.org/onlinedocs/gcc-4.4.1/gcc/Atomic-Builtins.html

    Here is an example to illustrate what I'm talking about. Imagine an RB-tree with multiple readers and writers possible. RBTree::exists() is read-only and thread-safe; RBTree::insert() would require exclusive access by a single writer (and no readers) to be safe. Some code:

        class IntSetTest {
        private:
            unsigned short lock;
            RBTree<int>* myset;

        public:
            // ...

            void add_number(int n) {
                // Acquire once locked==false (atomic)
                while (__sync_bool_compare_and_swap(&lock, 0, 0xffff) == false)
                    ;
                // Perform a thread-unsafe operation on the set
                myset->insert(n);
                // Unlock (atomic)
                __sync_bool_compare_and_swap(&lock, 0xffff, 0);
            }

            bool check_number(int n) {
                // Increment once the lock is below 0xffff
                u16 savedlock = lock;
                while (savedlock == 0xffff ||
                       __sync_bool_compare_and_swap(&lock, savedlock, savedlock + 1) == false)
                    savedlock = lock;
                // Perform read-only operation
                bool exists = tree->exists(n);
                // Decrement
                savedlock = lock;
                while (__sync_bool_compare_and_swap(&lock, savedlock, savedlock - 1) == false)
                    savedlock = lock;
                return exists;
            }
        };

    (Let's assume it need not be exception-safe.) Is this code indeed thread-safe? Are there any pros/cons to this idea? Any advice? Is the use of spinlocks like this a bad idea if the threads are not truly concurrent? Thanks in advance. ;)
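
    For comparison, the same reader-writer scheme can be sketched on a single atomic word in Java (state == -1 means a writer holds the lock, state >= 0 counts active readers). Like the C++ version it spins only and makes no fairness guarantee, so a steady stream of readers can starve the writer:

        import java.util.concurrent.atomic.AtomicInteger;

        // Unfair reader-writer spinlock on one atomic word. Sketch only:
        // no blocking fallback, no reentrancy, no fairness.
        final class RWSpinLock {
            private final AtomicInteger state = new AtomicInteger(0);

            void lockWrite() {
                while (!state.compareAndSet(0, -1)) { Thread.onSpinWait(); }
            }

            void unlockWrite() { state.set(0); }

            void lockRead() {
                for (;;) {
                    int s = state.get();
                    if (s >= 0 && state.compareAndSet(s, s + 1)) return;
                    Thread.onSpinWait();
                }
            }

            void unlockRead() { state.decrementAndGet(); }
        }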

    Read the article

  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up, creates an in-memory data model, and then creates a (command-line-specified) number of threads to run several string-checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and then each thread iterates the same in-memory data model instance (which is never updated again, so there are no synchronization issues).

    I'm running this on a Windows 2003 64-bit server with 2 quad-core processors, and from looking at Windows Task Manager the cores aren't being maxed out (nor do they look like they are being particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads? Although I assume this could be detrimental, as the JVM will do more context switching between the threads. Alternatively, should I run it with fewer threads?

    Alternatively, what would be the best tool I could use to measure this? Would a profiling tool help me out here; indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest?

    Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching; they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James
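
    One low-ceremony way to see whether the worker threads are computing or waiting, short of a full profiler, is JMX thread CPU accounting: if a thread's CPU time grows much more slowly than wall-clock time while the job runs, it is blocked on something. A sketch:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        // Prints per-thread CPU time; run (or expose) this while the job is
        // underway and compare the numbers against elapsed wall-clock time.
        public class ThreadCpuReport {
            public static void main(String[] args) {
                ThreadMXBean mx = ManagementFactory.getThreadMXBean();
                if (mx.isThreadCpuTimeSupported() && !mx.isThreadCpuTimeEnabled()) {
                    mx.setThreadCpuTimeEnabled(true);
                }
                for (long id : mx.getAllThreadIds()) {
                    ThreadInfo info = mx.getThreadInfo(id);
                    long nanos = mx.getThreadCpuTime(id);
                    if (info != null && nanos >= 0) {
                        System.out.printf("%-40s %10.1f ms cpu%n",
                                info.getThreadName(), nanos / 1e6);
                    }
                }
            }
        }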

    Read the article

  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all tests are run, I want to have access to the latest test results from those test work areas.

    Currently I'm cloning the master repository a dozen times and running a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test on the latest clean state. That's good for me. I'm also preparing new changes using the mq extension, which I would test on all clones as above before committing them.

    For testing some ready candidate mq patches, I want somehow to deploy/synchronize them so they are available in the test clones, and apply those that are ready for testing using some guard before running the test. Did anybody do this synchronization before? What's the most simple way to do it? Do I need to have versioned mq patches for that?

    Read the article

  • Total Order between !different! volatile variables?

    - by andreas
    Hi all, consider the following Java code:

        volatile boolean v1 = false;
        volatile boolean v2 = false;

        // Thread A
        v1 = true;
        if (v2) System.out.println("v2 was true");

        // Thread B
        v2 = true;
        if (v1) System.out.println("v1 was true");

    If there was a globally visible total order for volatile accesses, then at least one println would always be reached. Is that actually guaranteed by the Java Standard? Or is an execution like this possible:

        A: v1 = true;
        B: v2 = true;
        A: read v2 = false;
        B: read v1 = false;
        A: v2 = true becomes visible (after the if)
        B: v1 = true becomes visible (after the if)

    I could only find statements about accesses to the same volatile variable in the Standard (but I might be missing something): "A write to a volatile variable (§8.3.1.4) v synchronizes-with all subsequent reads of v by any thread (where subsequent is defined according to the synchronization order)." http://java.sun.com/docs/books/jls/third_edition/html/memory.html#17.4.4

    Thanks!

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well.

    A configurable number of entries is held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction.

    What appears to be happening is that I have an entry at the front of the queue that arrived, say, 100K entries ago. The queue appears to hold the configured number of entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely drops the reference to the object being stored but leaves the linked list in place for iteration.

    Finally my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues plus a synchronization concern.
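
    One alternative shape that keeps arrival order but whose removals actually detach nodes is a concurrent ordered map keyed by an arrival sequence number. A hedged sketch (names are hypothetical, and it assumes a sequence number is an acceptable stand-in for insertion order):

        import java.util.Map;
        import java.util.concurrent.ConcurrentSkipListMap;
        import java.util.concurrent.atomic.AtomicLong;
        import java.util.function.Predicate;

        // Entries are keyed by arrival order; remove() detaches the node,
        // and iteration over entrySet() runs oldest-first, so a pinned old
        // entry does not retain the entries removed behind it.
        final class OrderedTable<V> {
            private final AtomicLong seq = new AtomicLong();
            private final ConcurrentSkipListMap<Long, V> entries = new ConcurrentSkipListMap<>();

            long add(V value) {
                long id = seq.getAndIncrement();
                entries.put(id, value);
                return id;
            }

            void remove(long id) { entries.remove(id); }

            // Oldest-first scan for an entry the system may evict.
            Long findEvictable(Predicate<V> removable) {
                for (Map.Entry<Long, V> e : entries.entrySet()) {
                    if (removable.test(e.getValue())) return e.getKey();
                }
                return null;
            }
        }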

    Read the article

  • UML Modelling in C++Builder 2010 Professional

    - by Gordon Brandly
    I'd like to do some basic class-diagram UML models in the Pro version of C++Builder 2010. Embarcadero has a C++Builder Features Matrix document, one line of which says "UML Code Visualization – at any time, get a UML model view of your source code" and has a check in the "Professional" column of that table; I assume this means it should be available to me. Yet, when I open an existing project and do a View | Model View, there's nothing in the Model View window.

    The only diagram I can find is on the Graph tab of the C++ Class Explorer. I wouldn't call that a UML diagram myself; is that what Embarcadero is referring to?

    Embarcadero's table shows that many UML diagrams are not available in Pro, but it looks to me like class diagrams should be available. Other lines in that same table indicate that both "Full two-way class diagrams with synchronization between code and diagrams" and "Diagram hyper-linking and annotations" are also supposed to be available in Pro. The Class Explorer graph is one-way only as far as I can tell, so I hope they're referring to something else I haven't been able to find so far. Thanks for any insight into this.

    Read the article

  • Java Random Slowdowns on Mac OS cont'd

    - by javajustice
    I asked this question a few weeks ago, but I'm still having the problem and I have some new hints. The original question is here: http://stackoverflow.com/questions/1651887/java-random-slowdowns-on-mac-os

    Basically, I have a Java application that splits a job into independent pieces and runs them in separate threads. The threads have no synchronization or shared memory items. The only resources they do share are data files on the hard disk, with each thread having an open file channel.

    Most of the time it runs very fast, but occasionally it will run very slowly for no apparent reason. If I attach a CPU profiler to it, then it will start running quickly again. If I take a CPU snapshot, it says it's spending most of its time in "self time" in a function that doesn't do anything except check a few (unshared, unsynchronized) booleans. I don't know how this could be accurate because (1) it makes no sense, and (2) attaching the profiler seems to knock the threads out of whatever mode they're in and fix the problem. Also, regardless of whether it runs fast or slow, it always finishes and gives the same output, and it never dips in total CPU usage (in this case ~1500%), implying that the threads aren't getting blocked.

    I have tried different garbage collectors, different sizings of the parts of the memory space, writing data output to non-RAID drives, and putting all data output in threads separate from the main worker threads. Does anyone have any idea what kind of problem this could be? Could it be the operating system (OS X 10.6.2)? I have not been able to duplicate it on a Windows machine, but I don't have one with a similar hardware configuration.

    Read the article

  • Does SetFileBandwidthReservation affect memory-mapped file performance?

    - by Ghostrider
    Does this function affect memory-mapped file performance? Here's the problem I need to solve: I have two applications competing for disk access, a "reader" and an "updater". The whole system runs on Windows Server 2008 R2 x64.

    The "updater" constantly accesses the disk in a linear manner, updating data. The system is set up in such a way that the updater always has infinite data to update; consider that it is constantly approximating a solution to a huge set of equations that takes up an entire 2 TB disk drive. The updater uses ReadFile and WriteFile to process data in a linear fashion.

    The "reader" is occasionally invoked by the user to get some pieces of data. Usually the user reads several 4 kB blocks from the drive and stops. Occasionally the user needs to read up to 100 MB sequentially, and in exceptional cases up to several gigabytes. The reader maps files to memory to get the data it needs.

    What I would like to achieve is for the "reader" to have absolute priority, so that the "updater" would completely stop if needed and the "reader" could get the data the user needs ASAP. Is this problem solvable by using SetPriorityClass and SetFileBandwidthReservation calls? I would really hate to put synchronization logic in the "reader" and "updater" and would rather have the OS take care of priorities.

    Read the article

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (I run Ubuntu). To complete this task, one should organize screencasting and webcam capturing in such a way that the frames from both streams exactly match each other and each stream contains the same total number of frames.

    The screencasting fps fully depends on the camera's fps, so each time we get an image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg, for example, return an .avi file as the result and require the fps to be known already when they are started. On the other side, tools like Java+Robot or ImageMagick seem to require around 20 ms to return a .jpg screenshot, which is pretty slow for the task. But they may be requested right after each webcam frame is grabbed, and so provide the needed synchronization.

    So the sub-questions are:

    1. Does the USB camera's frame rate vary during a single session?
    2. Are there any tools which provide fast screencasting frame by frame?
    3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program initiates the request?

    For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestion and tip!

    Read the article

  • .net real time stream processing - needed huge and fast RAM buffer

    - by mack369
    The application I'm developing communicates with a digital audio device which is capable of sending 24 different voice streams at the same time. The device is connected via USB, using an FTDI device (serial port emulator) and the D2XX drivers (the basic COM driver is too slow to handle a transfer of 4.5 Mbit). Basically the application consists of 3 threads:

    1. Main thread - GUI, control, etc.
    2. Bus reader - in this thread data is continuously read from the device and saved to a file buffer (there is no logic in this thread).
    3. Data interpreter - this thread reads the data from the file buffer, converts it to samples, does simple sample processing and saves the samples to separate wav files.

    The reason I used a file buffer is that I wanted to be sure that I won't lose any samples. The application doesn't record all the time, so I chose this solution because it was safe. The application works fine, except that the buffered wave file generator is pretty slow: for 24 parallel records of 1 minute, it takes about 4 minutes to complete the recording. I'm pretty sure that eliminating the use of the hard drive in this process would increase the speed a lot.

    The second problem is that the file buffer is really heavy for long records, and I can't clean it up until the end of data processing (that would slow down the process even more). For a RAM buffer I need at least 1 GB to make it work properly.

    What is the best way to allocate such a big amount of memory in .NET? I'm going to use this memory from 2 threads, so a fast synchronization mechanism is needed. I'm thinking about a cyclic buffer: one big array, where the bus reader saves the data and the data interpreter reads it. What do you think about it?

    [edit] For buffering I'm now using the BinaryReader and BinaryWriter classes backed by a file.
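
    The cyclic-buffer idea fits this two-thread split well. A minimal single-producer/single-consumer ring buffer, sketched here in Java (the same shape carries over to .NET): the whole gigabyte is allocated once up front, only the bus reader advances the write index and only the interpreter advances the read index, so no lock is needed:

        import java.util.concurrent.atomic.AtomicLong;

        // SPSC ring buffer over one preallocated array. The volatile
        // index publication (AtomicLong set/get) makes the copied bytes
        // visible to the other thread before the index moves.
        final class RingBuffer {
            private final byte[] buf;
            private final AtomicLong read = new AtomicLong();
            private final AtomicLong write = new AtomicLong();

            RingBuffer(int capacity) { buf = new byte[capacity]; }

            // Called only by the producer (bus reader) thread.
            boolean offer(byte[] src, int len) {
                long w = write.get(), r = read.get();
                if (w - r + len > buf.length) return false;   // full: caller decides
                for (int i = 0; i < len; i++) buf[(int) ((w + i) % buf.length)] = src[i];
                write.set(w + len);                           // publish after the copy
                return true;
            }

            // Called only by the consumer (data interpreter) thread.
            int poll(byte[] dst) {
                long r = read.get(), w = write.get();
                int n = (int) Math.min(w - r, dst.length);
                for (int i = 0; i < n; i++) dst[i] = buf[(int) ((r + i) % buf.length)];
                read.set(r + n);                              // free the space after copying
                return n;
            }
        }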

    Read the article

  • Canceling a WSK I/O operation when driver is unloading

    - by eaducac
    I've been learning how to write drivers with the Windows DDK recently. After creating a few test drivers experimenting with system threads and synchronization, I decided to step it up a notch and write a driver that actually does something, albeit something useless. Currently, my driver connects to my other computer using Winsock Kernel and just loops and echoes back whatever I send to it until it gets the command "exit", which causes it to break out of the loop.

    In my loop, after I call WskReceive() to get some data from the other computer, I use KeWaitForMultipleObjects() to wait for either of two SynchronizationEvents. BlockEvent gets set by my IRP's CompletionRoutine() to let my thread know that it has received some data from the socket. EndEvent gets set by my DriverUnload() routine to tell the thread that it's being unloaded and needs to terminate.

    When I send the "exit" command, the thread terminates with no problems, and I can safely unload the driver afterward. If I try to stop the driver while it's still waiting on data from the other computer, however, it blue-screens with the error DRIVER_UNLOADED_WITHOUT_CANCELLING_PENDING_OPERATIONS. After I get the EndEvent but before I exit the loop, I've tried cancelling the IRP with IoCancelIrp() and completing it with IoCompleteRequest(), but both of those give me DRIVER_IRQL_NOT_LESS_OR_EQUAL errors. I then tried calling WskDisconnect(), hoping that would cause the receive operation to complete, but that took me back to the CANCELLING_PENDING_OPERATIONS error.

    How do I cancel the pending I/O operation from my WSK socket, at the IRQL I'm running at, when the driver is unloaded?

    Read the article
