Search Results

Search found 2411 results on 97 pages for 'queue'.

Page 83/97

  • Spring Integration 1.0 RC2: Streaming file content?

    - by gdm
    I've been trying to find information on this, but due to the immaturity of the Spring Integration framework I haven't had much luck. Here is my desired work flow:

    1. New files are placed in an 'Incoming' directory.
    2. Files are picked up using a file:inbound-channel-adapter.
    3. The file content is streamed, N lines at a time, to a 'Stage 1' channel, which parses each line into an intermediary (shared) representation.
    4. Each parsed line is routed to multiple 'Stage 2' channels.
    5. Each 'Stage 2' channel does its own processing on the N available lines to convert them to a final representation. This channel must have a queue which ensures no Stage 2 channel is overwhelmed in the event that one channel processes significantly slower than the others.
    6. The final representation of the N lines is written to a file. There will be as many output files as there were routing destinations in step 4.

    'N' above stands for any reasonable number of lines to read at a time, anywhere from 1 up to whatever I can fit into memory reasonably, but it is guaranteed to always be less than the number of lines in the full file. How can I accomplish the streaming (steps 3, 4, 5) in Spring Integration? It's fairly easy to do without streaming the files, but my files are large enough that I cannot read an entire file into memory. As a side note, I have a working implementation of this work flow without Spring Integration, but since we're using Spring Integration elsewhere in our project, I'd like to try it here to see how it performs and how the resulting code compares for length and clarity.
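
    A rough, framework-free sketch of the chunked-reading part (plain Java; no Spring Integration API is shown here, and the class, method and handler names are purely illustrative): read N lines at a time and hand each chunk to a callback, which in an integration flow would be the send into the 'Stage 1' channel.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.function.Consumer;

        // Reads a large file in chunks of N lines and hands each chunk to a callback,
        // so the whole file never has to be held in memory at once.
        public class ChunkedFileReader {

            public static void readInChunks(Path file, int n, Consumer<List<String>> chunkHandler)
                    throws IOException {
                try (BufferedReader reader = Files.newBufferedReader(file)) {
                    List<String> chunk = new ArrayList<>(n);
                    String line;
                    while ((line = reader.readLine()) != null) {
                        chunk.add(line);
                        if (chunk.size() == n) {
                            chunkHandler.accept(chunk);   // e.g. send to a queue-backed channel
                            chunk = new ArrayList<>(n);
                        }
                    }
                    if (!chunk.isEmpty()) {
                        chunkHandler.accept(chunk);       // trailing partial chunk
                    }
                }
            }
        }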

    Read the article

  • How to get SimpleRpcClient.Call() to be a blocking call to achieve synchronous communication with RabbitMQ?

    - by Nick Josevski
    In the .NET version (2.4.1) of RabbitMQ, the RabbitMQ.Client.MessagePatterns.SimpleRpcClient has a Call() method with these signatures:

        public virtual object[] Call(params object[] args);
        public virtual byte[] Call(byte[] body);
        public virtual byte[] Call(IBasicProperties requestProperties, byte[] body, out IBasicProperties replyProperties);

    The problem: with various attempts, the method still does not block where I expect it to, so it is never able to handle the response.

    The question: am I missing something obvious in the setup of the SimpleRpcClient, or earlier with the IModel, IConnection, or even PublicationAddress?

    More info: I've also tried various parameter configurations of the QueueDeclare() method, with no luck:

        string QueueDeclare(string queue, bool durable, bool exclusive, bool autoDelete, IDictionary arguments);

    Some more reference code of my setup:

        IConnection conn = new ConnectionFactory { Address = "127.0.0.1" }.CreateConnection();
        using (IModel ch = conn.CreateModel())
        {
            var queueName = ch.QueueDeclare("t.qid", true, true, true, null);
            ch.QueueBind(queueName, "exch", "", null);
            var client = new SimpleRpcClient(ch, queueName);
            // HERE: does not block?
            var replyMessageBytes = client.Call(prop, msgToSend, out replyProp);
        }

    Looking elsewhere: or is it likely there's an issue in my "server side" code? With and without the use of BasicAck(), it appears the client has already continued execution.

    Read the article

  • What could cause these Apache crash errors?

    - by jacobanderssen
    Hello guys. I had a server crash several days ago. I use Cacti to keep stats: at the time when the server crashed, a huge spike from load 1 to load 200 occurred, with over 800 processes in the run queue (from a 300 average). Upon checking /var/log/httpd I notice this:

        *** glibc detected *** /usr/sbin/httpd: double free or corruption (out): 0x00002b8f3142c2f0 ***

    Followed by a lot of these:

        [Sat Mar 13 19:20:20 2010] [warn] child process 3090 still did not exit, sending a SIGTERM
        [Sat Mar 13 19:20:20 2010] [warn] child process 3091 still did not exit, sending a SIGTERM

    Followed by this:

        ======= Backtrace: =========
        /lib64/libc.so.6[0x2b8f1463c2ef]
        /lib64/libc.so.6(cfree+0x4b)[0x2b8f1463c73b]
        /usr/lib64/libapr-1.so.0(apr_pool_destroy+0x131)[0x2b8f13f98821]
        /usr/sbin/httpd[0x2b8f126df47e]
        /usr/sbin/httpd[0x2b8f126df4ab]
        /lib64/libpthread.so.0[0x2b8f141b87c0]
        /etc/httpd/modules/mod_file_cache.so[0x2b8f1cdf00fb]
        ======= Memory map: ========

    And finally a lot of these:

        [Sat Mar 13 19:20:27 2010] [error] could not make child process 733 exit, attempting to continue anyway
        [Sat Mar 13 19:20:27 2010] [error] could not make child process 24560 exit, attempting to continue anyway
        [Sat Mar 13 19:20:27 2010] [error] could not make child process 31384 exit, attempting to continue anyway

    I am also noticing one or two lines like this:

        [Mon Mar 15 01:17:26 2010] [notice] child pid 20765 exit signal Segmentation fault (11)

    Please help me shed some light on this. Thanks!

    Read the article

  • My UITableView has duplicated rows

    - by Mark
    I'm not sure why, but my UITableView, which isn't anything fancy, is showing repeating rows when it shouldn't be. It seems that the rows that get added when the user scrolls (i.e. the rows that are off the screen to start with) are getting the data for the wrong row index. It's almost like when a new cell is dequeued, it's using a cell that 'was' used but wasn't cleaned up correctly. Do you need to 'clean up' cells that are dequeued so that new cells don't use cells that are already created? My code is as below:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CustomCellIdentifier = @"CustomCellIdentifier";
            MyDayCell *cell = (MyDayCell *)[tableView dequeueReusableCellWithIdentifier:CustomCellIdentifier];
            if (cell == nil) {
                NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"MyDayCell" owner:self options:nil];
                for (id oneObject in nib)
                    if ([oneObject isKindOfClass:[MyDayCell class]])
                        cell = (MyDayCell *)oneObject;
            }
            NSUInteger section = [indexPath section];
            NSUInteger row = [indexPath row];
            NSArray *thisSectionItems = (NSArray *)[self.listData objectForKey:[[NSNumber alloc] initWithInt:section]];
            MyDayDetails *rowData = [thisSectionItems objectAtIndex:row];
            // setup my cell's data here...
            return cell;
        }

    Is there anything wrong with this code? Has anyone seen anything like this before?

    Read the article

  • Horizontal UIView's controls unresponsive... or how to foul up a view hierarchy

    - by Oldmicah
    Hello all, I'm working on an app that has two sections, a config section and a results section. My config section needs to be two separate views (horizontal and vertical, and yes, I can hear the intake of breath from here), with one rotatable view for the results. Because of layout constraints and a lot of pain around rotation, I'm not using a navigation controller. I've been experiencing the joys of rotation experimentation and have settled upon keeping my views contained as subviews of my view controller, i.e. controller.view.subviews = configH, configV, and results. I then use controller.view bringSubviewToFront to bring either the configH, configV, or results view to the front. Rotation works - queue (humor intended) the angelic choirs... almost. What's happening is that my configV buttons are responsive, but when the device (or simulator) is rotated, my configH controls are not. (configV is the second subview added, but the first one to be brought to the front, because the app comes up in portrait mode.) The controls on the results view also work. Plan B was to assign controller.view to configH, configV, or results. All of my controls now work, but rotation is now fouled up.

    Question 1: Is there a better way to do this? (a horizontal and vertical config view and a rotatable results view)

    Question 2: Does the above suggest a design issue, or is it more likely that my addled brain is just missing something in my own code? (nothing from the peanut gallery please)

    Many thanks!

    Read the article

  • How to cancel a jquery.load()?

    - by Dirk Jönsson
    I'd like to cancel a .load() operation when the load() does not return within 5 seconds. If so, I show an error message like 'sorry, no picture loaded'. What I have is the timeout handling:

        jQuery.fn.idle = function(time, postFunction) {
            var i = $(this);
            i.queue(function() {
                setTimeout(function() {
                    i.dequeue();
                    postFunction();
                }, time);
            });
            return $(this);
        };

    ... the initialization of the error message timeout:

        var hasImage = false;
        $('#errorMessage').idle(5000, function() {
            if (!hasImage) {
                // 1. cancel .load()
                // 2. show error message
            }
        });

    ... and the image loading:

        $('#myImage')
            .attr('src', '/url/anypath/image.png')
            .load(function() {
                hasImage = true;
                // do something...
            });

    The only thing I could not figure out is how to cancel the running load() (if it's possible). Please help. Thanks!

    Edit: Another way: how do I prevent the .load() method from calling its callback function when it returns?

    Read the article

  • Adding defaults and indexes to a script/generate command in a Rails Template?

    - by charliepark
    I'm trying to set up a Rails Template that would allow for comprehensive set-up of a specific Rails app. Using Pratik Naik's overview (http://m.onkey.org/2008/12/4/rails-templates), I was able to set up a couple of scaffolds and models, with a line that looks something like this:

        generate("scaffold", "post", "title:string", "body:string")

    I'm now trying to add in Delayed Job, which normally has a migration file that looks like this:

        create_table :delayed_jobs, :force => true do |table|
          table.integer  :priority, :default => 0   # Allows some jobs to jump to the front of the queue
          table.integer  :attempts, :default => 0   # Provides for retries, but still fail eventually.
          table.text     :handler                   # YAML-encoded string of the object that will do work
          table.text     :last_error                # reason for last failure (See Note below)
          table.datetime :run_at                    # When to run. Could be Time.now for immediately, or sometime in the future.
          table.datetime :locked_at                 # Set when a client is working on this object
          table.datetime :failed_at                 # Set when all retries have failed (actually, by default, the record is deleted instead)
          table.string   :locked_by                 # Who is working on this object (if locked)
          table.timestamps
        end

    So, what I'm trying to do with the Rails template is to add that :default => 0 into the master template file. I know that the rest of the template's command should look like this:

        generate("migration", "createDelayedJobs", "priority:integer", "attempts:integer", "handler:text", "last_error:text", "run_at:datetime", "locked_at:datetime", "failed_at:datetime", "locked_by:string")

    Where would I put (or, rather, what is the syntax to add) the :default values in that? And if I wanted to add an index, what's the best way to do that?

    Read the article

  • [C#] Two System.Drawing methods manifest slow drawing or flicker: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via System.Drawing and I'm having a few problems. I'm holding data in a Queue and I'm drawing (graphing) that data onto three picture boxes. This method fills the picture box, then scrolls the graph across. So as not to draw on top of the previous drawings (and gradually look messier), I found two solutions to draw the graph:

    1. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented below], although this causes a flicker to appear from the time it takes to do the actual drawing loop.
    2. Call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each DrawLine [commented below], although this causes the drawing to start OK, then slow down very quickly to a crawl, as if a wait command had been placed before the draw command.

    Here is the code for reference:

        /*
        plotx.Clear(BACKGOUNDCOLOR)
        ploty.Clear(BACKGOUNDCOLOR)
        plotz.Clear(BACKGOUNDCOLOR)
        */
        for (int j = 1; j < 599; j++)
        {
            if (j > RealTimeBuffer.Count - 1) break;
            QueueEntity past = RealTimeBuffer.ElementAt(j - 1);
            QueueEntity current = RealTimeBuffer.ElementAt(j);
            if (j == 1)
            {
                //plotx.DrawLine(channelPen[5], 0, 140, 0, 0);
                //ploty.DrawLine(channelPen[5], 0, 140, 0, 0);
                //plotz.DrawLine(channelPen[5], 0, 140, 0, 0);
            }
            //plotx.DrawLine(channelPen[5], j, 140, j, 0);
            plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64));
            //ploty.DrawLine(channelPen[5], j, 140, j, 0);
            ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64));
            //plotz.DrawLine(markerPen, j, 140, j, 0);
            plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94));
        }

    Are there any tricks to avoid these overheads? If not, would there be any other/better solutions?

    Read the article

  • Having trouble hiding keyboard using invisible button which sits on top of uiscrollview

    - by phil
    I have 3 items in play:

    1) A UIView sits at the base of the hierarchy and contains the UIScrollView.
    2) A UIScrollView that is presenting a lengthy user form.
    3) An invisible button on the UIScrollView that I'm using to provide "hide the keyboard" features.

    Notice in the code below that I'm registering to be notified when the keyboard is going to appear and again when it's going to disappear. These are working great. My problem is seemingly one of "layers". See below where I insert the button into the view atIndex:0. This causes the button to be activated and "stuffed" behind the scrollview, so that when you click on it, the scrollview grabs the touch and the button is unaware. There is no way to "reach" the button and suppress the keyboard. However, if I insert atIndex:1, the button gets superimposed on top of the text entry fields, and so any touch at all is acted upon by the button, which immediately suppresses the keyboard and then disappears. How do I insert the button on top of the UIScrollView but behind the UITextFields that sit there?

    Other logistics: I have a -(void) hidekeyboard function that I have set up with the UIButton as an IBAction(). And I have the UIButton connected to "file's owner" via a ctrl-drag/drop. (Do I need both of those conventions?) This code is in viewDidLoad():

        [[NSNotificationCenter defaultCenter] addObserverForName:UIKeyboardWillShowNotification
                                                          object:nil
                                                           queue:nil
                                                      usingBlock:^(NSNotification *notification) {
            [self.view insertSubview:self.keyboardDismissalButton atIndex:0];
        }];

    Read the article

  • What is wrong with locking non-static fields? What is the correct way to lock a particular instance?

    - by smartcaveman
    Why is it considered bad practice to lock non-static fields? And, if I am not locking non-static fields, then how do I lock an instance method without locking the method on all other instances of the same or derived class? I wrote an example to make my question more clear.

        public abstract class BaseClass
        {
            private readonly object NonStaticLockObject = new object();
            private static readonly object StaticLockObject = new object();

            protected void DoThreadSafeAction<T>(Action<T> action) where T : BaseClass
            {
                var derived = this as T;
                if (derived == null)
                {
                    throw new Exception();
                }
                lock (NonStaticLockObject)
                {
                    action(derived);
                }
            }
        }

        public class DerivedClass : BaseClass
        {
            private readonly Queue<object> _queue;

            public void Enqueue(object obj)
            {
                DoThreadSafeAction<DerivedClass>(x => x._queue.Enqueue(obj));
            }
        }

    If I make the lock on the StaticLockObject, then the DoThreadSafeAction method will be locked for all instances of all classes that derive from BaseClass, and that is not what I want. I want to make sure that no other threads can call a method on a particular instance of an object while it is locked.
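
    For comparison only (not from the question): the same granularity trade-off expressed in Java, with class and field names made up for illustration. A private per-instance lock object scopes mutual exclusion to a single instance, while a static lock would serialize every instance of every subclass; locking a private object rather than this also keeps outside code from taking the same lock.

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class WorkBuffer {
            private final Object instanceLock = new Object();            // one lock per instance
            private static final Object CLASS_WIDE_LOCK = new Object();  // shared by all instances

            private final Queue<Object> queue = new ArrayDeque<>();

            public void enqueue(Object item) {
                synchronized (instanceLock) {   // other WorkBuffer instances are unaffected
                    queue.add(item);
                }
            }
        }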

    Read the article

  • How to get the size of a binary tree?

    - by Andrei Ciobanu
    I have a very simple binary tree structure, something like:

        struct nmbintree_s {
            unsigned int size;
            int (*cmp)(const void *e1, const void *e2);
            void (*destructor)(void *data);
            nmbintree_node *root;
        };

        struct nmbintree_node_s {
            void *data;
            struct nmbintree_node_s *right;
            struct nmbintree_node_s *left;
        };

    Sometimes I need to extract a 'tree' from another, and I need to get the size of the 'extracted tree' in order to update the size of the initial 'tree'. I was thinking of two approaches:

    1) Using a recursive function, something like:

        unsigned int nmbintree_size(struct nmbintree_node *node) {
            if (node == NULL) {
                return 0;
            }
            return nmbintree_size(node->left) + nmbintree_size(node->right) + 1;
        }

    2) A preorder / inorder / postorder traversal done in an iterative way (using a stack / queue) + counting the nodes.

    Which approach do you think is more 'memory failure proof' / performant? Any other suggestions / tips?

    NOTE: I am probably going to use this implementation in the future for small projects of mine. So I don't want to unexpectedly fail :).
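
    A language-agnostic illustration of option 2 (written in Java here purely for brevity; Node and TreeSize are illustrative names, and the same shape works in C with a hand-rolled stack): an explicit stack replaces the call stack, so a very deep or degenerate tree cannot overflow the native stack the way an unlucky recursion might.

        import java.util.ArrayDeque;
        import java.util.Deque;

        // Iterative node count: push the root, then pop nodes and push their children
        // until the stack is empty, counting each popped node exactly once.
        final class TreeSize {
            static final class Node {
                Node left, right;
            }

            static int size(Node root) {
                if (root == null) return 0;
                int count = 0;
                Deque<Node> stack = new ArrayDeque<>();
                stack.push(root);
                while (!stack.isEmpty()) {
                    Node n = stack.pop();
                    count++;
                    if (n.left != null) stack.push(n.left);
                    if (n.right != null) stack.push(n.right);
                }
                return count;
            }
        }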

    Read the article

  • Compilation problem in the standard x86_64 libraries

    - by user350282
    Hi everyone, I am having trouble compiling a program I have written. I have two different files with the same includes, but only one generates the following error when compiled with g++:

        /usr/lib/gcc/x86_64-linux-gnu/4.4.1/../../../../lib/crt1.o: In function `_start':
        /build/buildd/eglibc-2.10.1/csu/../sysdeps/x86_64/elf/start.S:109: undefined reference to `main'
        collect2: ld returned 1 exit status

    The files I am including in my header are as follows:

        #include <google/sparse_hash_map>
        using google::sparse_hash_map;
        #include <ext/hash_map>
        #include <math.h>
        #include <iostream>
        #include <queue>
        #include <vector>
        #include <stack>
        using std::priority_queue;
        using std::stack;
        using std::vector;
        using __gnu_cxx::hash_map;
        using __gnu_cxx::hash;
        using namespace std;

    Searching the internet for those two lines hasn't resulted in anything to help me. I would be very grateful for any advice. Thank you.

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit with the way I want to measure these over a specified time period. For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic. A more relevant example is when the login times for users exceed 10s over a period of 5 minutes. Slightly different would be the last 20 logins taking more than 10s on average. Another case would be when a process crashes 4+ times in an hour. Or the request queue exceeds 15 for 5 minutes. Are the JMX Monitor classes useful for this kind of thing?
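
    A rough sketch of the "write your own logic" route for the first example, using only standard java.lang.management and java.util.concurrent APIs; the class name, sampling interval, threshold and alert hook are placeholder choices, and a real MBean would expose them as attributes and emit a JMX notification instead of printing.

        import java.lang.management.ManagementFactory;
        import java.lang.management.OperatingSystemMXBean;
        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Samples the system load average every 15 seconds and raises an alert only when
        // every sample in the last 5 minutes exceeds the threshold, i.e. a sustained
        // condition rather than a single spike.
        public class SustainedLoadMonitor {
            private static final double THRESHOLD = 0.95 * Runtime.getRuntime().availableProcessors();
            private static final int WINDOW_SAMPLES = 5 * 60 / 15; // 5 minutes of 15s samples

            private final Deque<Double> window = new ArrayDeque<>();
            private final OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();

            public void start() {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                scheduler.scheduleAtFixedRate(this::sample, 0, 15, TimeUnit.SECONDS);
            }

            private synchronized void sample() {
                window.addLast(os.getSystemLoadAverage());
                if (window.size() > WINDOW_SAMPLES) {
                    window.removeFirst();
                }
                boolean sustained = window.size() == WINDOW_SAMPLES
                        && window.stream().allMatch(v -> v > THRESHOLD);
                if (sustained) {
                    notifyAdmin();   // placeholder: send mail, emit a JMX notification, etc.
                }
            }

            private void notifyAdmin() {
                System.err.println("ALERT: load above threshold for 5 minutes");
            }
        }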

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    * Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling or using Socket.IO in the background and updating the LocalStorage could do the trick).
    * There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    * Lastly, there are some operations that should work offline, and changes should be synced later.

    To make it more concrete:

    * We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
    * We need to sync server changes to vendor and product information.
    * If they search the full catalog, that requires them to be online.
    * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:

    * Is there a way to use the RESTAdapter in combination with LocalStorage?
    * Is there some Socket.IO support? (Happy to do this part manually.)
    * Is there queueing support? Ideally at the Ember-Data level.

    I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • Need Help finding an appropriate task assignment algorithm for a college project involving coordinat

    - by Trif Mircea
    I am a long time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through in-the-field teams. The ideas I have so far are:

    * A client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each team in the field having one.
    * An order from a client is a task. Each task is made up of a series of subtasks.
    * You have a database with estimations of how long a task should take to complete.
    * You also know what tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask, he has the stuff needed).

    Now, knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, and how many tasks they have in the queue; it could also be location, how far away they are from the place, but I don't think I can implement that. It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated. Also, I'm not 100% sure of the idea, so if you have another way you would go about creating such an application, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those and implement what I can.

    Edit: C++ is the only language I know, unfortunately.
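
    As a starting point only (shown in Java for illustration even though the project itself is C++; all names are hypothetical): a greedy rule matching the criteria above, which picks, among the teams able to perform the task, the one with the least estimated work already queued. The human dispatcher's manual assignments can simply bypass this rule.

        import java.util.Comparator;
        import java.util.List;
        import java.util.Optional;

        // Baseline greedy assignment: filter by capability, then take the least-loaded team.
        class Dispatcher {
            record Team(String name, List<String> skills, long queuedMinutes) {
                boolean canDo(String taskType) { return skills.contains(taskType); }
            }

            static Optional<Team> pickTeam(List<Team> teams, String taskType) {
                return teams.stream()
                        .filter(t -> t.canDo(taskType))
                        .min(Comparator.comparingLong(Team::queuedMinutes));
            }
        }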

    Read the article

  • MapView EXC_BAD_ACCESS (SIGSEGV) and KERN_INVALID_ADDRESS

    - by user768113
    I'm having some 'issues' with my application... well, it crashes in a UIViewController that is presented modally, where the user enters information through UITextFields and his location is tracked by a MapView. Let's call this view controller "MapViewController". When the user submits the form, I call a different ViewController - modally again - that processes this info, and a third one answers accordingly; then I go back to a MenuVC using unwinding segues, which then calls MapViewController, and so on. This sequence is repeated many times, but it always crashes in MapViewController. Looking at the crash log, I think that the MapView may be the cause of this, or some element in the UI (because of the UIKit framework). I tried to use NSZombie in order to track a memory issue, but it doesn't give me a clue about what's happening. Here is the crash log:

        Hardware Model: iPad3,4
        Process: MyApp [2253]
        OS Version: iOS 6.1.3 (10B329)
        Report Version: 104

        Exception Type: EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000044
        Crashed Thread: 0

        Thread 0 name: Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   IMGSGX554GLDriver           0x328b9be0 0x328ac000 + 56288
        1   IMGSGX554GLDriver           0x328b9b8e 0x328ac000 + 56206
        2   IMGSGX554GLDriver           0x328bc2f2 0x328ac000 + 66290
        3   IMGSGX554GLDriver           0x328baf44 0x328ac000 + 61252
        4   libGPUSupportMercury.dylib  0x370f86be 0x370f6000 + 9918
        5   GLEngine                    0x34ce8bd2 0x34c4f000 + 629714
        6   GLEngine                    0x34cea30e 0x34c4f000 + 635662
        7   GLEngine                    0x34c8498e 0x34c4f000 + 219534
        8   GLEngine                    0x34c81394 0x34c4f000 + 205716
        9   VectorKit                   0x3957f4de 0x394c7000 + 754910
        10  VectorKit                   0x3955552e 0x394c7000 + 582958
        11  VectorKit                   0x394d056e 0x394c7000 + 38254
        12  VectorKit                   0x394d0416 0x394c7000 + 37910
        13  VectorKit                   0x394cb7ca 0x394c7000 + 18378
        14  VectorKit                   0x394c9804 0x394c7000 + 10244
        15  VectorKit                   0x394c86a2 0x394c7000 + 5794
        16  QuartzCore                  0x354a07a4 0x35466000 + 239524
        17  QuartzCore                  0x354a06fc 0x35466000 + 239356
        18  IOMobileFramebuffer         0x376f8fd4 0x376f4000 + 20436
        19  IOKit                       0x344935aa 0x34490000 + 13738
        20  CoreFoundation              0x33875888 0x337e9000 + 575624
        21  CoreFoundation              0x338803e4 0x337e9000 + 619492
        22  CoreFoundation              0x33880386 0x337e9000 + 619398
        23  CoreFoundation              0x3387f20a 0x337e9000 + 614922
        24  CoreFoundation              0x337f2238 0x337e9000 + 37432
        25  CoreFoundation              0x337f20c4 0x337e9000 + 37060
        26  GraphicsServices            0x373ad336 0x373a8000 + 21302
        27  UIKit                       0x3570e2b4 0x356b7000 + 357044
        28  MyApp                       0x000ea12e 0xe9000 + 4398
        29  MyApp                       0x000ea0e4 0xe9000 + 4324

    I think that's all. Additionally, I would like to ask you: if you are using unwind segues, then you are releasing view controllers from the memory heap, right? Meanwhile, performing segues lets you instantiate those controllers. Technically, MenuVC should be the only VC alive in the heap during the app life cycle, if you understand me.

    Read the article

  • ActiveMq integration with Spring 2.5

    - by Tony
    I am using ActiveMQ 5.32 with Spring 2.5.5. I use a pretty generic configuration, but as soon as I include the jmsTransactionManager in the DefaultMessageListenerContainer, Spring throws an error on startup:

        Error creating bean with name 'org.springframework.jms.listener.DefaultMessageListenerContainer#0'

    Without the transactionManager attribute this works fine, but when I add 10 messages to the message queue, a transaction exception occurs. Part of my configuration:

        <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="connectionFactory" />
            <property name="destination" ref="emailDestination" />
            <property name="messageListener" ref="emailServiceMDP" />
            <property name="transactionManager" ref="jmsTransactionManager" />
        </bean>

        <bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
            <property name="connectionFactory" ref="connectionFactory" />
        </bean>

    Do these versions of Spring and ActiveMQ have some known integration issues? Or do I need additional libs to get the jmsTransactionManager to work?
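
    A sketch of the same wiring done programmatically (not from the question; the broker URL, destination name and listener type are assumptions), which can help isolate whether the startup failure comes from the container, the transaction manager, or the connection factory:

        import javax.jms.ConnectionFactory;
        import javax.jms.MessageListener;
        import org.apache.activemq.ActiveMQConnectionFactory;
        import org.springframework.jms.connection.JmsTransactionManager;
        import org.springframework.jms.listener.DefaultMessageListenerContainer;

        // Wires the same connection factory into both the listener container and the
        // JmsTransactionManager, mirroring the XML above.
        public class ListenerContainerConfig {

            public static DefaultMessageListenerContainer buildContainer(MessageListener emailServiceMDP) {
                ConnectionFactory connectionFactory =
                        new ActiveMQConnectionFactory("tcp://localhost:61616"); // broker URL is an assumption

                JmsTransactionManager txManager = new JmsTransactionManager(connectionFactory);

                DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
                container.setConnectionFactory(connectionFactory);
                container.setDestinationName("email.queue");   // placeholder for emailDestination
                container.setMessageListener(emailServiceMDP);
                container.setTransactionManager(txManager);
                container.afterPropertiesSet();
                container.start();
                return container;
            }
        }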

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algo... I do really need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped files': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. btw in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
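
    One baseline worth measuring (a sketch, not a claim about what is fastest; the class name and the 4-byte-record assumption are mine): a single read-only FileChannel shared across threads, using positional reads, which do not move a shared file pointer and are safe to issue concurrently. Whether this beats a memory-mapped file depends on the access pattern and how much of the file the OS page cache ends up holding.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.StandardOpenOption;

        // Shared, read-only channel; readIntAt() can be called from many threads at once.
        public class RandomTableReader implements AutoCloseable {
            private final FileChannel channel;

            public RandomTableReader(Path file) throws IOException {
                this.channel = FileChannel.open(file, StandardOpenOption.READ);
            }

            // Reads the 4-byte big-endian int stored at the given byte offset.
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    int n = channel.read(buf, offset + buf.position());
                    if (n < 0) {
                        throw new IOException("offset past end of file: " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }

            @Override
            public void close() throws IOException {
                channel.close();
            }
        }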

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.
    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine, like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.
    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it.
    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
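
    To make approach 4 concrete, here is a minimal sketch (in Java rather than the C#/C++ of the question; the Token type, the queue capacity and the EOF sentinel are all illustrative choices): the lexer thread pushes into a bounded blocking queue and the parser pulls from it, with the bound providing back-pressure if the parser falls behind.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Lexer thread produces tokens; the consuming loop stands in for the parser.
        public class TokenPipeline {
            record Token(String kind, String text) {}
            static final Token EOF = new Token("EOF", "");

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Token> queue = new ArrayBlockingQueue<>(1024);

                Thread lexer = new Thread(() -> {
                    try {
                        // A real lexer would scan characters here; this just emits a few tokens.
                        queue.put(new Token("IDENT", "x"));
                        queue.put(new Token("ASSIGN", "="));
                        queue.put(new Token("NUMBER", "42"));
                        queue.put(EOF);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                lexer.start();

                // The "parser": consumes tokens as they become available.
                for (Token t = queue.take(); t != EOF; t = queue.take()) {
                    System.out.println(t.kind() + " '" + t.text() + "'");
                }
                lexer.join();
            }
        }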

    Read the article

  • How to process an event chain

    - by theblackcascade
    I need to process this chain using one LoadXML method and one urlLoader object:

        ResourceLoader.Instance.LoadXML("Config.xml");
        ResourceLoader.Instance.LoadXML("GraphicsSet.xml");

    The loader starts loading after the first frameFunc iteration (why? I want it to start immediately - optional), and it only loads "GraphicsSet.xml". The Loader class LoadXML method:

        public function LoadXML(URL:String):XML {
            urlLoader.addEventListener(Event.COMPLETE, XmlLoadCompleteListener);
            urlLoader.load(new URLRequest(URL));
            return xml;
        }

        private function XmlLoadCompleteListener(e:Event):void {
            var xml:XML = new XML(e.target.data);
            trace(xml);
            trace(xml.name());
            if (xml.name() == "Config")
                XMLParser.Instance.GameSetup(xml);
            else if (xml.name() == "GraphicsSet")
                XMLParser.Instance.GraphicsPoolSetup(xml);
        }

    Here is main:

        public function Main() {
            Mouse.hide();
            this.addChild(Game.Instance);
            this.addEventListener(Event.ENTER_FRAME, Game.Instance.Loop);
        }

    And on adding Game.Instance to the rendering queue, the Game constructor starts the Initialize method:

        public function Game():void {
            trace("Constructor");
            if (_instance)
                throw new Error("Use Instance Field");
            Initialize();
        }

    Its code is:

        private function Initialize():void {
            trace("initialization");
            ResourceLoader.Instance.LoadXML("Config.xml");
            ResourceLoader.Instance.LoadXML("GraphicsSet.xml");
        }

    Thanks.

    Read the article

  • Scheduling a Delay Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a Worker Dyno. I want to continue to run this script once a day, just offload it to the worker. My current rake task is:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.process
          puts "... done."
        end

    Once the gem is installed, the migration run, and the worker dyno started, will the script just need to change to:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.delay.process
          puts "... done."
        end

    With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the worker's queue? Thanks in advance for any help.

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. That is, I have an input queue of 2048 jobs j0..jn, each of which produces a chunk of data oi. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks: the output file has to be in the order o0 o1 o2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this, though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
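
    In the messaging/enterprise-integration literature this usually goes by "resequencer" (sometimes "sequencer"), which may be a more productive search term than "reorder buffer". A minimal sketch of the buffer itself, in Java with illustrative names and no bound on the pending map, might look like this:

        import java.io.IOException;
        import java.util.Map;
        import java.util.TreeMap;

        // Workers call submit() with their job index in any order; chunks are written
        // out strictly in index order. (In practice the buffer should also be bounded
        // so a slow job cannot let it grow without limit.)
        public class ReorderBuffer {
            private final Map<Integer, byte[]> pending = new TreeMap<>();
            private int nextToWrite = 0;

            public synchronized void submit(int jobIndex, byte[] output) throws IOException {
                pending.put(jobIndex, output);
                // Drain the head of the buffer while it is contiguous with what was written.
                while (pending.containsKey(nextToWrite)) {
                    write(pending.remove(nextToWrite));
                    nextToWrite++;
                }
            }

            private void write(byte[] chunk) throws IOException {
                // placeholder: append chunk to the output file/stream
            }
        }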

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK and there is also an index on Status, LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously extending the lease time by 5 minutes and setting a new lease ticket. The problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock btw, not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    (Deadlock graph and simplified table schema were attached to the original question.)

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting the query result). An asynchronous approach could help here, in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to make to a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me, only that it doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
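
    Purely to give the "call back" shape something concrete (this is Java with threads, so it sidesteps the cross-process part; a forked setup would need socketpairs or a real message broker in place of the in-memory queue, and all names here are invented): each requestor hands a query plus a completion handle to one shared queue, and a single connector thread that owns the long-lived session completes the handle with the result.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CompletableFuture;

        // Many requestors, one connector thread, one long-lived database session.
        public class DbFunnel {
            record Request(String sql, CompletableFuture<Integer> reply) {}

            private final BlockingQueue<Request> queue = new ArrayBlockingQueue<>(1000);

            public CompletableFuture<Integer> submit(String sql) throws InterruptedException {
                CompletableFuture<Integer> reply = new CompletableFuture<>();
                queue.put(new Request(sql, reply));
                return reply;   // requestor blocks on reply.get() only if/when it needs the result
            }

            // Runs on the single connector thread that holds the one long-lived session.
            public void runConnector(String jdbcUrl) {
                try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
                    while (!Thread.currentThread().isInterrupted()) {
                        Request req = queue.take();
                        try (PreparedStatement ps = conn.prepareStatement(req.sql());
                             ResultSet rs = ps.executeQuery()) {
                            req.reply().complete(rs.next() ? rs.getInt(1) : 0);
                        } catch (Exception e) {
                            req.reply().completeExceptionally(e);
                        }
                    }
                } catch (Exception e) {
                    throw new RuntimeException("connector thread died", e);
                }
            }
        }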

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application. We are inconsistently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, DB load, DB process list, load of all clustered application server nodes), nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle. So far, our only remedy for this issue is to restart the appservers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user, although the system is not under a high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk with mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so. /Ben

    Read the article
