Search Results

Search found 3458 results on 139 pages for 'concurrent queue'.

Page 121/139 | < Previous Page | 117 118 119 120 121 122 123 124 125 126 127 128  | Next Page >

  • [C#] Two System.Drawing methods manifest slow drawing or flickery: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via System.Drawing and I'm having a few problems. I'm holding data in a Queue and I'm drawing (graphing) that data onto three picture boxes; this method fills the picture box and then scrolls the graph across. So as not to draw on top of the previous drawings (and gradually look messier), I found two ways to draw the graph: (1) call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented], although this causes a flicker to appear from the time it takes to do the actual drawing loop; (2) call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each DrawLine [commented], although this causes the drawing to start fine and then slow down very quickly to a crawl, as if a wait command had been placed before the draw command. Here is the code for reference: /*plotx.Clear(BACKGOUNDCOLOR) ploty.Clear(BACKGOUNDCOLOR) plotz.Clear(BACKGOUNDCOLOR)*/ for (int j = 1; j < 599; j++) { if (j > RealTimeBuffer.Count - 1) break; QueueEntity past = RealTimeBuffer.ElementAt(j - 1); QueueEntity current = RealTimeBuffer.ElementAt(j); if (j == 1) { //plotx.DrawLine(channelPen[5], 0, 140, 0, 0); //ploty.DrawLine(channelPen[5], 0, 140, 0, 0); //plotz.DrawLine(channelPen[5], 0, 140, 0, 0); } //plotx.DrawLine(channelPen[5], j, 140, j, 0); plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64)); //ploty.DrawLine(channelPen[5], j, 140, j, 0); ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64)); //plotz.DrawLine(markerPen, j, 140, j, 0); plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94)); } Are there any tricks to avoid these overheads? If not, are there any other/better solutions?
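
    One common way around both problems (not from the original post, just a sketch) is to render each frame off-screen into a Bitmap and only then hand the finished image to the PictureBox, so the visible surface is never cleared mid-draw. A minimal Windows Forms sketch, assuming a PictureBox and a per-channel array of sample y-values similar to the buffer above (all names are illustrative):

        using System.Drawing;
        using System.Windows.Forms;

        public static class OffscreenPlotter
        {
            // Build the whole frame in an off-screen bitmap, then swap it in.
            // The clear happens on the bitmap, so a half-drawn frame is never visible.
            public static void Render(PictureBox box, Pen pen, int[] samples)
            {
                var frame = new Bitmap(box.ClientSize.Width, box.ClientSize.Height);
                using (var g = Graphics.FromImage(frame))
                {
                    g.Clear(Color.Black);                      // off-screen clear: no flicker
                    for (int j = 1; j < samples.Length; j++)
                        g.DrawLine(pen, j - 1, samples[j - 1], j, samples[j]);
                }
                var old = box.Image;
                box.Image = frame;                             // one swap per frame
                if (old != null) old.Dispose();                // release the previous frame
            }
        }

    Setting DoubleBuffered = true on the form (or using the BufferedGraphics class) is the other usual route; either way the idea is that Clear and the draw loop never touch the visible surface directly.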

    Read the article

  • What is wrong with locking non-static fields? What is the correct way to lock a particular instance?

    - by smartcaveman
    Why is it considered bad practice to lock non-static fields? And, if I am not locking non-static fields, then how do I lock an instance method without locking the method on all other instances of the same or derived class? I wrote an example to make my question clearer. public abstract class BaseClass { private readonly object NonStaticLockObject = new object(); private static readonly object StaticLockObject = new object(); protected void DoThreadSafeAction<T>(Action<T> action) where T: BaseClass { var derived = this as T; if(derived == null) { throw new Exception(); } lock(NonStaticLockObject) { action(derived); } } } public class DerivedClass :BaseClass { private readonly Queue<object> _queue = new Queue<object>(); public void Enqueue(object obj) { DoThreadSafeAction<DerivedClass>(x=>x._queue.Enqueue(obj)); } } If I take the lock on StaticLockObject, then the DoThreadSafeAction method will be locked for all instances of all classes that derive from BaseClass, and that is not what I want. I want to make sure that no other threads can call a method on a particular instance of an object while it is locked.
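
    For reference, the usual guidance is that the risky cases are locking on this, on a Type, or on any object other code can reach, because an unrelated caller could take the same lock; a private readonly per-instance field like NonStaticLockObject above is the standard idiom for locking one instance without affecting the others. A minimal sketch of that pattern (illustrative names, not the poster's code):

        using System.Collections.Generic;

        public class InstanceQueue
        {
            // One lock object per instance: serializes callers on this instance only.
            private readonly object _sync = new object();
            private readonly Queue<object> _queue = new Queue<object>();

            public void Enqueue(object item)
            {
                lock (_sync)   // other instances use their own _sync, so they are unaffected
                {
                    _queue.Enqueue(item);
                }
            }

            public bool TryDequeue(out object item)
            {
                lock (_sync)
                {
                    if (_queue.Count > 0) { item = _queue.Dequeue(); return true; }
                    item = null;
                    return false;
                }
            }
        }

    Each InstanceQueue carries its own _sync object, so two threads working on two different instances never contend, while two threads hitting the same instance are serialized.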

    Read the article

  • Compilation problem in the standard x86_64 libraries

    - by user350282
    Hi everyone, I am having trouble compiling a program I have written. I have two different files with the same includes but only one generates the following error when compiled with g++ /usr/lib/gcc/x86_64-linux-gnu/4.4.1/../../../../lib/crt1.o: In function `_start': /build/buildd/eglibc-2.10.1/csu/../sysdeps/x86_64/elf/start.S:109: undefined reference to `main' collect2: ld returned 1 exit status The files I am including in my header are as follows: #include <google/sparse_hash_map> using google::sparse_hash_map; #include <ext/hash_map> #include <math.h> #include <iostream> #include <queue> #include <vector> #include <stack> using std::priority_queue; using std::stack; using std::vector; using __gnu_cxx::hash_map; using __gnu_cxx::hash; using namespace std; Searching the internet for those two lines hasn't resulted in anything to help me. I would be very grateful for any advice. Thank you

    Read the article

  • Having trouble hiding keyboard using invisible button which sits on top of uiscrollview

    - by phil
    I have 3 items in play... 1) A UIView sits at the base of the hierarchy and contains the UIScrollView. 2) A UIScrollView that is presenting a lengthy user form. 3) An invisible button on the UIScrollView that I'm using to provide "hide the keyboard" features. Notice in the code below that I'm registering to be notified when the keyboard is going to appear and again when it's going to disappear. These are working great. My problem is seemingly one of "layers". See below where I insert the button into the view atIndex:0. This causes the button to be activated and "stuffed" behind the scroll view, so that when you tap it, the scroll view grabs the touch and the button is unaware. There is no way to "reach" the button and suppress the keyboard. However, if I insert atIndex:1, the button gets superimposed on top of the text entry fields, and so any touch at all is acted upon by the button, which immediately suppresses the keyboard and then disappears. How do I insert the button on top of the UIScrollView but behind the UITextFields that sit there? Other logistics: I have a -(void) hidekeyboard function that I have set up with the UIButton as an IBAction(). And I have the UIButton connected to "file's owner" via a ctrl-drag/drop. (Do I need both of those conventions?) This code is in ViewDidLoad()... [[NSNotificationCenter defaultCenter] addObserverForName:UIKeyboardWillShowNotification object:nil queue:nil usingBlock:^(NSNotification *notification){ [self.view insertSubview:self.keyboardDismissalButton atIndex:0]; }];

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like: struct nmbintree_s { unsigned int size; int (*cmp)(const void *e1, const void *e2); void (*destructor)(void *data); nmbintree_node *root; }; struct nmbintree_node_s { void *data; struct nmbintree_node_s *right; struct nmbintree_node_s *left; }; Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches: 1) Using a recursive function, something like: unsigned int nmbintree_size(struct nmbintree_node* node) { if (node==NULL) { return(0); } return( nmbintree_size(node->left) + nmbintree_size(node->right) + 1 ); } 2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes. Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips? NOTE: I am probably going to use this implementation in the future for small projects of mine, so I don't want it to unexpectedly fail :).
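
    For the second approach, an iterative count is just a traversal with an explicit stack. A small sketch of the idea, written in C# rather than the C structs above purely for illustration (a node type with Left/Right references is assumed):

        using System.Collections.Generic;

        public class TreeNode
        {
            public TreeNode Left;
            public TreeNode Right;
        }

        public static class TreeMetrics
        {
            // Iterative node count: depth is bounded by an explicit Stack<T> on the
            // heap rather than the call stack, so a degenerate tree cannot overflow it.
            public static int Count(TreeNode root)
            {
                if (root == null) return 0;
                int count = 0;
                var stack = new Stack<TreeNode>();
                stack.Push(root);
                while (stack.Count > 0)
                {
                    var node = stack.Pop();
                    count++;
                    if (node.Left != null) stack.Push(node.Left);
                    if (node.Right != null) stack.Push(node.Right);
                }
                return count;
            }
        }

    Whether this beats the recursive version depends mostly on tree depth: the recursive count risks exhausting the call stack on a degenerate (list-shaped) tree, while this version only allocates heap memory. A simpler alternative is to maintain the size field incrementally on insert/extract so that no traversal is needed at all.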

    Read the article

  • Adding defaults and indexes to a script/generate command in a Rails Template?

    - by charliepark
    I'm trying to set up a Rails Template that would allow for comprehensive set-up of a specific Rails app. Using Pratik Naik's overview (http://m.onkey.org/2008/12/4/rails-templates), I was able to set up a couple of scaffolds and models, with a line that looks something like this ... generate("scaffold", "post", "title:string", "body:string") I'm now trying to add in Delayed Jobs, which normally has a migration file that looks like this: create_table :delayed_jobs, :force => true do |table| table.integer :priority, :default => 0 # Allows some jobs to jump to the front of the queue table.integer :attempts, :default => 0 # Provides for retries, but still fail eventually. table.text :handler # YAML-encoded string of the object that will do work table.text :last_error # reason for last failure (See Note below) table.datetime :run_at # When to run. Could be Time.now for immediately, or sometime in the future. table.datetime :locked_at # Set when a client is working on this object table.datetime :failed_at # Set when all retries have failed (actually, by default, the record is deleted instead) table.string :locked_by # Who is working on this object (if locked) table.timestamps end So, what I'm trying to do with the Rails template is to add that :default => 0 into the master template file. I know that the rest of the template's command should look like this: generate("migration", "createDelayedJobs", "priority:integer", "attempts:integer", "handler:text", "last_error:text", "run_at:datetime", "locked_at:datetime", "failed_at:datetime", "locked_by:string") Where would I put (or, rather, what is the syntax to add) the :default values in that? And if I wanted to add an index, what's the best way to do that?

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access. Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling or using Socket.IO in the background and updating the LocalStorage could do the trick). There are other models that are more transactional, where we would need to go directly to the server and get/save them. It would be nice to use something like the RESTAdapter for that. Lastly, there are some operations that should work offline and whose changes should be synced later. To make it more concrete: * We pre-load vendor and "favorite products" into Local Storage. We work offline with those. * We need to sync server changes to vendor and product information. * If they search the full catalog, that requires them to be online. * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection. So a few questions are derived from this: * Is there a way to use the RESTAdapter in combination with LocalStorage? * Is there some Socket.IO support? (Happy to do this part manually.) * Is there queueing support? Ideally at the Ember Data level. I know we will have to do a lot of this manually and pull together the different Lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit with the way I want to measure these over a specified time period. For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic. A more relevant example is when the login times for users exceed 10s over a period of 5 mins. Slightly different would be: the last 20 logins took more than 10s on average. Another case would be when a process crashes 4+ times in an hour. Or the request queue exceeds 15 for 5 mins. Are the JMX Monitor classes useful for this kind of thing?

    Read the article

  • Need Help finding an appropriate task assignment algorithm for a college project involving coordinat

    - by Trif Mircea
    I am a long-time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through teams in the field. The ideas I have so far are: a client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each team in the field having one. An order from a client is a task. Each task is made up of a series of subtasks. You have a database with estimates of how long a task should take to complete. You also know what tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask he has the stuff needed). Now knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, how many tasks they have in the queue; it could also be location, how far away they are from the place, but I don't think I can implement that... It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated. Also, I'm not 100% sure about the idea, so if you have another way you would go about creating such an application please share, even if it's just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined that would be OK; I'd write those up and implement what I can. Edit: C++ is the only language I know, unfortunately.
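
    One simple starting point (an assumption on my part, not something from the question) is a greedy dispatcher: for each incoming task, consider only teams whose skills cover it and pick the one with the smallest backlog of already-assigned work. Sketched in C# for brevity; the same structure ports directly to C++, and all names are illustrative:

        using System.Collections.Generic;
        using System.Linq;

        public class Team
        {
            public string Name;
            public HashSet<string> Skills = new HashSet<string>();
            public double QueuedMinutes;              // sum of estimates already assigned
        }

        public class Job
        {
            public string RequiredSkill;
            public double EstimatedMinutes;
        }

        public static class Dispatcher
        {
            // Greedy rule: among capable teams, choose the least-loaded one.
            public static Team Assign(Job job, IEnumerable<Team> teams)
            {
                var best = teams
                    .Where(t => t.Skills.Contains(job.RequiredSkill))
                    .OrderBy(t => t.QueuedMinutes)
                    .FirstOrDefault();
                if (best != null)
                    best.QueuedMinutes += job.EstimatedMinutes;
                return best;                          // null means no capable team exists
            }
        }

    Distance could be folded in by ordering on a weighted score (backlog plus estimated travel time) instead of backlog alone, and a manual assignment by the human dispatcher is just a direct call that adds the estimate to the chosen team's backlog.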

    Read the article

  • MapView EXC_BAD_ACCESS (SIGSEGV) and KERN_INVALID_ADDRESS

    - by user768113
    I'm having some 'issues' with my application... well, it crashes in an UIViewController that is presented modally, there the user enters information through UITextFields and his location is tracked by a MapView. Lets call this view controller "MapViewController" When the user submits the form, I call a different ViewController - modally again - that processes this info and a third one answers accordingly, then go back to a MenuVC using unwinding segues, which then calls MapViewController and so on. This sequence is repeated many times, but it always crashes in MapViewController. Looking at the crash log, I think that the MapView can be the problem of this or some element in the UI (because of the UIKit framework). I tried to use NSZombie in order to track a memory issue but it doesn't give me a clue about whats happening. Here is the crash log Hardware Model: iPad3,4 Process: MyApp [2253] OS Version: iOS 6.1.3 (10B329) Report Version: 104 Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x00000044 Crashed Thread: 0 Thread 0 name: Dispatch queue: com.apple.main-thread Thread 0 Crashed: 0 IMGSGX554GLDriver 0x328b9be0 0x328ac000 + 56288 1 IMGSGX554GLDriver 0x328b9b8e 0x328ac000 + 56206deallocated instance 2 IMGSGX554GLDriver 0x328bc2f2 0x328ac000 + 66290 3 IMGSGX554GLDriver 0x328baf44 0x328ac000 + 61252 4 libGPUSupportMercury.dylib 0x370f86be 0x370f6000 + 9918 5 GLEngine 0x34ce8bd2 0x34c4f000 + 629714 6 GLEngine 0x34cea30e 0x34c4f000 + 635662 7 GLEngine 0x34c8498e 0x34c4f000 + 219534 8 GLEngine 0x34c81394 0x34c4f000 + 205716 9 VectorKit 0x3957f4de 0x394c7000 + 754910 10 VectorKit 0x3955552e 0x394c7000 + 582958 11 VectorKit 0x394d056e 0x394c7000 + 38254 12 VectorKit 0x394d0416 0x394c7000 + 37910 13 VectorKit 0x394cb7ca 0x394c7000 + 18378 14 VectorKit 0x394c9804 0x394c7000 + 10244 15 VectorKit 0x394c86a2 0x394c7000 + 5794 16 QuartzCore 0x354a07a4 0x35466000 + 239524 17 QuartzCore 0x354a06fc 0x35466000 + 239356 18 IOMobileFramebuffer 0x376f8fd4 0x376f4000 + 20436 19 IOKit 0x344935aa 0x34490000 + 13738 20 CoreFoundation 0x33875888 0x337e9000 + 575624 21 CoreFoundation 0x338803e4 0x337e9000 + 619492 22 CoreFoundation 0x33880386 0x337e9000 + 619398 23 CoreFoundation 0x3387f20a 0x337e9000 + 614922 24 CoreFoundation 0x337f2238 0x337e9000 + 37432 25 CoreFoundation 0x337f20c4 0x337e9000 + 37060 26 GraphicsServices 0x373ad336 0x373a8000 + 21302 27 UIKit 0x3570e2b4 0x356b7000 + 357044 28 MyApp 0x000ea12e 0xe9000 + 4398 29 MyApp 0x000ea0e4 0xe9000 + 4324 I think thats all, additionally, I would like to ask you: if you are using unwind segues then you are releasing view controllers from the memory heap, right? Meanwhile, performing segues let you instantiate those controllers. Technically, MenuVC should be the only VC alive in the heap during the app life cycle if you understand me.

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algo... I really do need that file). Now that it has been computed once, that 800MB of data is read only. I cannot hold it in memory. As of now it is one big huge 800MB file, but splitting it into smaller files isn't a problem if it can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800MB of disk-based data. BTW, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
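
    For what it's worth, memory-mapping does not pull the whole 800 MB into RAM: the file is mapped into the address space and the OS pages in only the blocks actually touched, which usually makes it the fastest option for uniformly distributed small reads (in Java this is FileChannel.map returning a MappedByteBuffer, and 800 MB fits comfortably under the 2 GB per-mapping limit). A sketch of the same access pattern, written here in C# only for illustration:

        using System;
        using System.IO.MemoryMappedFiles;

        public static class RandomReader
        {
            // Maps the whole file and reads 4 bytes at arbitrary offsets.
            // Only the pages actually touched are brought into memory by the OS.
            public static void Sample(string path, long[] offsets)
            {
                using (var mmf = MemoryMappedFile.CreateFromFile(path))
                using (var view = mmf.CreateViewAccessor())
                {
                    foreach (long offset in offsets)
                    {
                        int value = view.ReadInt32(offset);   // one 32-bit read, no per-lookup seek
                        Console.WriteLine(offset + ": " + value);
                    }
                }
            }
        }

    The equivalent Java loop would call getInt on the mapped buffer; either way there is no explicit seek or read syscall per lookup once the mapping exists.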

    Read the article

  • ActiveMq integration with Spring 2.5

    - by Tony
    I am using ActiveMQ 5.32 with Spring 2.5.5. I use a pretty generic configuration; as soon as I include the jmsTransactionManager in the DefaultMessageListenerContainer, Spring throws an error on startup: "Error creating bean with name 'org.springframework.jms.listener.DefaultMessageListenerContainer#0'" Without the transactionManager attribute, this works fine, but when I add 10 messages to the message queue, a transaction exception occurs. Part of my configuration: <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer"> <property name="connectionFactory" ref="connectionFactory" /> <property name="destination" ref="emailDestination" /> <property name="messageListener" ref="emailServiceMDP" /> <property name="transactionManager" ref="jmsTransactionManager" /> </bean> <bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager"> <property name="connectionFactory" ref="connectionFactory" /> </bean> Do these versions of Spring and ActiveMQ have known integration issues? Or do I need additional libs to get the jmsTransactionManager to work?

    Read the article

  • How to process an event chain.

    - by theblackcascade
    I need to process this chain using one LoadXML method and one urlLoader object: ResourceLoader.Instance.LoadXML("Config.xml"); ResourceLoader.Instance.LoadXML("GraphicsSet.xml"); The loader starts loading after the first frameFunc iteration (why?); I want it to start immediately (optional). And it starts loading only "GraphicsSet.xml". The Loader class LoadXML method: public function LoadXML(URL:String):XML { urlLoader.addEventListener(Event.COMPLETE,XmlLoadCompleteListener); urlLoader.load(new URLRequest(URL)); return xml; } private function XmlLoadCompleteListener(e:Event):void { var xml:XML = new XML(e.target.data); trace(xml); trace(xml.name()); if(xml.name() == "Config") XMLParser.Instance.GameSetup(xml); else if(xml.name() == "GraphicsSet") XMLParser.Instance.GraphicsPoolSetup(xml); } Here is main: public function Main() { Mouse.hide(); this.addChild(Game.Instance); this.addEventListener(Event.ENTER_FRAME,Game.Instance.Loop); } And on adding Game.Instance to the rendering queue, in the Game constructor I start the Initialize method: public function Game():void { trace("Constructor"); if(_instance) throw new Error("Use Instance Field"); Initialize(); } Its code is: private function Initialize():void { trace("initialization"); ResourceLoader.Instance.LoadXML("Config.xml"); ResourceLoader.Instance.LoadXML("GraphicsSet.xml"); } Thanks.

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches: The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial? Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
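
    As a concrete illustration of the third approach (the parser pulls tokens on demand), a lexer exposed as a lazy token stream really is close to a one-liner per token in C#. This is only a minimal sketch with made-up token names, not a claim about any particular compiler:

        using System;
        using System.Collections.Generic;

        public enum TokenKind { Number, Plus, End }

        public struct Token
        {
            public TokenKind Kind;
            public string Text;
            public Token(TokenKind kind, string text) { Kind = kind; Text = text; }
        }

        public static class Lexer
        {
            // Lazily yields one token at a time; nothing beyond the current
            // position is buffered, so the parser pulls tokens as it needs them.
            public static IEnumerable<Token> Tokenize(string input)
            {
                int i = 0;
                while (i < input.Length)
                {
                    char c = input[i];
                    if (char.IsWhiteSpace(c)) { i++; }
                    else if (char.IsDigit(c))
                    {
                        int start = i;
                        while (i < input.Length && char.IsDigit(input[i])) i++;
                        yield return new Token(TokenKind.Number, input.Substring(start, i - start));
                    }
                    else if (c == '+') { i++; yield return new Token(TokenKind.Plus, "+"); }
                    else throw new InvalidOperationException("unexpected character: " + c);
                }
                yield return new Token(TokenKind.End, "");
            }
        }

    A recursive-descent parser then holds the IEnumerator<Token> and calls MoveNext() whenever it wants the next token; in C++ the same pull interface is usually a lexer object with a next() method rather than a coroutine.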

    Read the article

  • Scheduling a Delayed Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a Worker Dyno. I want to continue to run this script once-a-day, just offload it to the worker. My current rake task is: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.process puts "... done." end Once the gem is installed, migration run, and worker dyno started, will the script just need to change to: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.delay.process puts "... done." end With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the Worker's queue? Thanks in advance for any help.

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please, no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following: set transaction isolation level serializable begin tran declare @RecordId int declare @CurrentTS datetime2 set @CurrentTS=CURRENT_TIMESTAMP select top 1 @RecordId=Id from QueuedImportJobs with (updlock) where Status=@Status and (LeaseTimeout is null or @CurrentTS>LeaseTimeout) order by Id asc if @@ROWCOUNT> 0 begin update QueuedImportJobs set LeaseTimeout = DATEADD(mi,5,@CurrentTS), LeaseTicket=newid() where Id=@RecordId select * from QueuedImportJobs where Id = @RecordId end commit tran RecordId is the PK and there is also an index on Status,LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to have expired, while simultaneously extending the lease time by 5 minutes and setting a new lease ticket. So the problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock btw, not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id=@RecordId is actually in the locked range? Deadlock graph: Table schema (simplified):

    Read the article

  • XMLHttpRequest POST Data Size

    - by usurper
    Hi, is there a size limit on an XHR POST request? I am using the POST method to save text data into MySQL via a PHP script, and the data is cut off. Firebug sends me the following message: ... Firebug request size limit has been reached by Firebug. ... This is my code for sending the data: function makeXHR(recordData) { xmlhttp = createXHR(); var body = "q=" + encodeURIComponent(recordData); xmlhttp.open("POST", "insertRowData.php", true); xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded"); xmlhttp.setRequestHeader("Content-length", body.length); xmlhttp.setRequestHeader("Connection", "close"); xmlhttp.onreadystatechange = function() { if(xmlhttp.readyState == 4 && xmlhttp.status == 200) { //alert(xmlhttp.responseText); alert("Records were saved successfully!"); } } xmlhttp.send(body); } The only solution I can think of is splitting the data and making a queue of XHR requests, but I don't like it. Is there another way?

    Read the article

  • UIDocumentInteractionController & ARC: [UIPopoverController dealloc] reached while popover is still visible

    - by muffel
    This issue or similar issues have been discussed here before, but I didn't find any working solution for me. I am using the following code to display a UIDocumentInteractionController on an ARC-enabled iOS 7 project: - (void) exportDoc{ // [...] docController = [UIDocumentInteractionController interactionControllerWithURL:[NSURL fileURLWithPath:path]]; docController.delegate = self; [docController presentOpenInMenuFromBarButtonItem:mainMenuButton animated:YES]; } First I didn't want to create a property that holds the controller reference, but as many people said that there are not alternatives to it. It is defined as @property (strong) UIDocumentInteractionController* docController; exportDoc is run in the main thread using NSOperationQueue. Whenever it is executed, I get the following error message: Terminating app due to uncaught exception 'NSGenericException', reason: '-[UIPopoverController dealloc] reached while popover is still visible.' This is what the backtrace says: (lldb) bt * thread #1: tid = 0x1c97d9, 0x000000019a23c1c0 libobjc.A.dylibobjc_exception_throw, queue = 'com.apple.main-thread', stop reason = breakpoint 2.1 frame #0: 0x000000019a23c1c0 libobjc.A.dylibobjc_exception_throw frame #1: 0x000000018d982e90 CoreFoundation+[NSException raise:format:] + 128 frame #2: 0x0000000190bc348c UIKit-[UIPopoverController dealloc] + 96 frame #3: 0x0000000190e18fc8 UIKit-[UIDocumentInteractionController dealloc] + 168 frame #4: 0x000000019a255474 libobjc.A.dylib(anonymous namespace)::AutoreleasePoolPage::pop(void*) + 524 frame #5: 0x000000018d881988 CoreFoundation_CFAutoreleasePoolPop + 28 frame #6: 0x000000018e42cb18 Foundation-[NSOperationInternal _start:] + 892 frame #7: 0x000000018e4eea38 Foundation__NSOQSchedule_f + 76 frame #8: 0x000000019a813fd4 libdispatch.dylib_dispatch_client_callout + 16 frame #9: 0x000000019a8171dc libdispatch.dylib_dispatch_main_queue_callback_4CF + 336 frame #10: 0x000000018d942c2c CoreFoundation__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE + 12 frame #11: 0x000000018d940f6c CoreFoundation__CFRunLoopRun + 1452 frame #12: 0x000000018d881c20 CoreFoundationCFRunLoopRunSpecific + 452 frame #13: 0x0000000193511c0c GraphicsServicesGSEventRunModal + 168 frame #14: 0x00000001909b2fdc UIKitUIApplicationMain + 1156 * frame #15: 0x000000010000947c MyApplicationmain(argc=1, argv=0x000000016fdfbc80) + 108 at main.m:16 frame #16: 0x000000019a82faa0 libdyld.dylibstart + 4 As far as I understand the autoreleasepool just releases the controller. Shouldn't this be prevented by using a strong property just as I did? Do you have any idea what the problem can be and how I can solve it?

    Read the article

  • Javascript Recursion

    - by rpophessagr
    I have an ajax call and would like to recall it once I finish parsing and animating the result into the page. And that's where I'm getting stuck. I was able to recall the function, but it seems to not take into account the delays in the animation. i.e. The console keeps outputting the values at a wild pace. I thought setInterval might help with the interval being the sum of the length of my delays, but I can't get that to work... function loadEm(){ var result=new Array(); $.getJSON("jsonCall.php",function(results){ $.each(results, function(i, res){ rand = (Math.floor(Math.random()*11)*1000)+2000; fullRand += rand; console.log(fullRand); $("tr:first").delay(rand).queue(function(next) { doStuff(res); next(); }); }); var int=self.setInterval("loadEm()",fullRand); }); } });

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one

    - by Olfan
    Long speech short: How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions? The whole story: I once built a number crunching engine that handles vast amounts of large data files by forking off one child after another giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or few database connectors in a way so that bidirectional communication is possible (for collecting the query result). An asynchronous approach could help here in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails me in generating an image clear enough so that I can paint into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to do to a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me only that it doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?

    Read the article

  • Clarification on Threads and Run Loops In Cocoa

    - by dubbeat
    I'm trying to learn about threading and I'm thoroughly confused. I'm sure all the answers are there in the Apple docs, but I just found it really hard to break down and digest. Maybe somebody could clear a thing or two up for me. 1) performSelectorOnMainThread: Does the above simply register an event in the main run loop, or is it somehow a new thread even though the method says "mainThread"? If the purpose of threads is to relieve processing on the main thread, how does this help? 2) Run loops: Is it true that if I want to create a completely separate thread I use "detachNewThreadSelector"? Does calling start on this initiate a default run loop for the thread that has been created? If so, where do run loops come into it? 3) And finally, I've seen examples using NSOperationQueue. Is it true to say that if you use performSelectorOnMainThread the threads are in a queue anyway, so NSOperation is not needed? 4) Should I forget about all of this and just use Grand Central Dispatch instead?

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application. We are intermittently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server no longer takes on any new requests. When looking at all involved system components (overall traffic, load db, process list db, load of all clustered application server nodes), nothing points towards a capacity issue that would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle. So far, our only remedy to this issue is to restart the appservers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user, although the system is not under high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk with mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting I would be more than willing to do so. /Ben

    Read the article

  • kernel get stack when signalled

    - by yoavstr
    Hi there. I am writing readers and writers where the kernel has to synchronize between them and block a writer who has already read a message. While I am waiting in the queue I get a signal, so I do the following: while (i_Allready_Read(myf) == ALLREADY_READ || isExistWriter == false ) //while (!(i_Allready_Read(aliveProc,current->pid))) { int i, is_sig = 0; printk(KERN_INFO "\n\n*****entered set in read ******\n\n" ); if (i_Allready_Read(myf) == ALLREADY_READ ) wait_event_interruptible (readWaitQ1, !i_Allready_Read(myf)); else wait_event_interruptible (readWaitQ1, isExistWriter); //printk(KERN_INFO "Read Wakeup %d\n",current->pid); for (i = 0; i < _NSIG_WORDS && !is_sig; i++) { is_sig = current->pending.signal.sig[i] & ~current->blocked.sig[i]; } if (is_sig) { handledClose(myf); module_put (THIS_MODULE); return -EINTR; } } return 0;//success } inline void handledClose(struct file *myf)//v { /* *if we close the writer other writer *would be able to enter to permissiones */ if (myf == writerpid ) { isExistWriter = DOESNT_EXIST; //printk(KERN_INFO "procfs_close : this is pid that just closed %d \n", writerpid); } /* *else its a reader so our numofreaders *need to decremented */ else { removeFromArr(myf); numOfReaders--; } } And my close does nothing ... What did I do wrong?

    Read the article

  • tcp/ip accept not returning, but client does

    - by paquetp
    Server: VxWorks 6.3 calls the usual socket, bind, listen, then: for (;;) { client = accept(sfd,NULL,NULL); // pass client to worker thread } Client: .NET 2.0 TcpClient constructor to connect to the server, the overload that takes the string hostname and int port, like: TcpClient client = new TcpClient(server_ip, port); This works fine when the server is compiled and executed on Windows (native C++). Intermittently, the TcpClient constructor will return the instance without throwing any exception, but the accept call in VxWorks does not return with the client fd. tcpstatShow indicates no accept occurred. What could possibly make the TcpClient constructor (which calls 'Connect') return the instance while the accept call on the server does not return? It seems to be related to what the system is doing in the background - the symptom seems more likely to occur when the server is busy persisting data to flash or an NFS share when the client attempts to connect, but it can happen when it isn't, too. I've tried adjusting the priority of the thread running accept. I've looked at the size of the queue in 'listen'; there's enough. The total number of file descriptors available should be enough (haven't validated this yet though; first thing in the morning).

    Read the article

  • How to take a collection of bytes and pull typed values out of it?

    - by Pat
    Say I have a collection of bytes var bytes = new byte[] {0, 1, 2, 3, 4, 5, 6, 7}; and I want to pull out a defined value from the bytes as a managed type, e.g. a ushort. What is a simple way to define what types reside at what location in the collection and pull out those values? One (ugly) way is to use System.BitConverter and a Queue or byte[] with an index and simply iterate through, e.g.: int index = 0; ushort first = System.BitConverter.ToUint16(bytes, index); index += 2; // size of a ushort int second = System.BitConverter.ToInt32(bytes, index); index += 4; ... This method gets very, very tedious when you deal with a lot of these structures! I know that there is the System.Runtime.InteropServices.StructLayoutAttribute which allows me to define the locations of types inside a struct or class, but there doesn't seem to be a way to import the collection of bytes into that struct. If I could somehow overlay the struct on the collection of bytes and pull out the values, that would be ideal. E.g. Foo foo = (Foo)bytes; // doesn't work because I'd need to implement the implicit operator ushort first = foo.first; int second = foo.second; ... [StructLayout(LayoutKind.Explicit, Size=FOO_SIZE)] public struct Foo { [FieldOffset(0)] public ushort first; [FieldOffset(2)] public int second; } Any thoughts on how to achieve this?
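
    One way to get the overlay effect (a sketch of one option, not necessarily the best fit for every layout) is to pin the byte array and let the marshaller copy the raw bytes into the explicitly laid-out struct, so the offsets come from the StructLayout/FieldOffset attributes instead of hand-maintained index arithmetic; the Size = 8 and field names here are just illustrative:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Explicit, Size = 8)]
        public struct Foo
        {
            [FieldOffset(0)] public ushort First;
            [FieldOffset(2)] public int Second;
        }

        public static class ByteOverlay
        {
            // Copies raw bytes into a T whose layout is fixed by its StructLayout attributes.
            public static T Read<T>(byte[] bytes, int offset) where T : struct
            {
                GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
                try
                {
                    IntPtr basePtr = IntPtr.Add(handle.AddrOfPinnedObject(), offset);
                    return (T)Marshal.PtrToStructure(basePtr, typeof(T));
                }
                finally
                {
                    handle.Free();
                }
            }
        }

    Usage would then be along the lines of ushort first = ByteOverlay.Read<Foo>(bytes, 0).First;. Endianness and packing now live entirely in the attributes; a BinaryReader over a MemoryStream is the other common route when the data is big-endian or variable-length.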

    Read the article
