Search Results

Search found 9279 results on 372 pages for 'job queue'.

  • High-concurrency counters without sharding

    - by dound
    This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations): http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments) http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding My questions: With respect to #1: Running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst case is that the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this? What are the significant tradeoffs between the two implementations? Here are the tradeoffs I see: #2 does not require datastore transactions. To get the counter's value, #2 requires a datastore fetch, while #1 typically only needs a memcache.get() and memcache.add(). When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore). Conclusions (without actually running any performance tests): #1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore), though #1 has to perform an extra memcache.add() too. However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task). On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcache APIs.
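
    For a concrete point of reference, here is a minimal sketch of the memcache-buffered counter pattern that both designs above build on, assuming the (Python) App Engine memcache API; persist_to_datastore() is a hypothetical helper, and the key names and interval are illustrative rather than the cookbook's actual code.

    ```python
    # Sketch of a memcache-buffered counter: increments go to memcache, and a
    # short-lived "flush" key (memcache.add succeeds only if the key is absent)
    # decides when to persist.  If the process dies between get/persist/decr,
    # some counts can be lost -- the deliberate under-counting tradeoff above.
    from google.appengine.api import memcache

    UPDATE_INTERVAL = 10  # seconds between datastore writes (illustrative)

    def increment(counter_name, delta=1):
        count_key = 'counter:%s' % counter_name
        flush_key = 'counter_flush:%s' % counter_name

        # Fast path: bump the buffered count; initial_value covers a cold cache.
        memcache.incr(count_key, delta=delta, initial_value=0)

        # Fires at most once per UPDATE_INTERVAL across all requests.
        if memcache.add(flush_key, True, time=UPDATE_INTERVAL):
            buffered = memcache.get(count_key) or 0
            if buffered:
                persist_to_datastore(counter_name, buffered)  # hypothetical helper
                memcache.decr(count_key, delta=buffered)      # outside any transaction
    ```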

    Read the article

  • C question: error: expected ')' before '*' token

    - by lhw
    ===EDIT I apologize for not putting the pcb struct into the code snippet. There is a struct called pcb defined above the two structs I originally posted. Namely, typedef struct{ UINT32 proc; struct pcb *link; }pcb; Hi, I asked a question regarding structs in C a few minutes ago and got an answer blazing fast. But now I'm facing another problem, namely the error in the title of this question. I'm trying to implement a simple priority queue in C using arrays of queues. However, when I try to declare a function on the pcb_pQ structure, I get the above error. I have the structs clearly defined in the header file. In the header file: typedef struct{ pcb *head; pcb *tail; SINT32 size; } pcb_Q; typedef struct pcb_pQ { pcb_Q queues[5]; SINT32 size; } pcb_pQ; Function prototype in the header file: /*priority queue operations*/ VOID pcb_pq_enqueue(pcb_pQ*, pcb*); Function implementation in the .c file: VOID pcb_pq_enqueue(pcb_pQ* pcb_pQ, pcb* pcb) { pcb_Q* pcb_Q_p; int priority; priority = pcb->proc_priority; pcb_Q_p = &pcb_pQ->queues[priority]; pcb_enqueue(pcb_Q_p, pcb); } When I try to compile the above code, I get an "error: expected ')' before '*' token". This error points to the function signature in the .c file, namely VOID pcb_pq_enqueue(pcb_pQ* pcb_pQ, pcb* pcb) { but I am not sure why I am getting this error. Could someone give me a hand? Thanks a lot.

    Read the article

  • How to avoid game rendering component circular references?

    - by CodexArcanum
    I'm working on a simple game design, and I wanted to break up my game objects into more reusable components. But I'm getting stuck on how exactly to implement the design I have in mind. Here's an example: I have a Logger object, whose job is simply to store a list of messages and render them to screen. You know, logging. Originally the Logger just held the list, and the game loop rendered its contents. Then I moved the rendering logic into the Logger.Draw() method, and now I want to move it further into a LoggerRenderer object. In effect, I want to have the game loop call RenderAll, which will then call Logger.Render, which will in turn call LoggerRenderer.Render and finally output the text. So the Logger needs to contain a Renderer object, but the Renderer needs access to the Logger's state (the message queue) in order to render. How do I resolve that? Should I be passing in the message queue and other state information explicitly to the Render method? Or should the game loop call the Renderer directly, with the Renderer linking back to the Logger but the RenderAll method never actually seeing the Logger object itself? This feels kind of like the Command pattern, but I'm botching it up terribly.
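
    One way to read the "pass state explicitly" option: the renderer never stores a reference to the Logger, and RenderAll hands each renderer just the data it needs, so there is no circular reference. A minimal sketch in Python (the names are illustrative, not from the original project):

    ```python
    class Logger:
        """Owns the message queue; knows nothing about drawing."""
        def __init__(self):
            self.messages = []

        def log(self, text):
            self.messages.append(text)


    class LoggerRenderer:
        """Receives the messages it needs each frame instead of holding a Logger."""
        def render(self, messages):
            for line in messages[-5:]:   # draw only the most recent entries
                print("LOG:", line)


    def render_all(renderers_with_state):
        # The game loop sees (renderer, state-provider) pairs; neither side
        # holds a back-reference to the other.
        for renderer, get_state in renderers_with_state:
            renderer.render(get_state())


    logger = Logger()
    logger.log("game started")
    render_all([(LoggerRenderer(), lambda: logger.messages)])
    ```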

    Read the article

  • Configure WebLogic MDB to listen to Foreign AMQ Server

    - by eliel.lobo
    I'm trying to create an MDB (EJB 3.0) on WebLogic 10.3.5 to listen to a queue on an external AMQ server, but after much work and combining tutorials I get the following error when deploying on WebLogic. [EJB:015027]The Message-Driven EJB is transactional but JMS connection factory referenced by the JNDI name: ActiveMQXAConnectionFactory is not a JMS XA connection factory. Here is a brief summary of the work I have done: I added the corresponding libraries to my WLS classpath (following this tutorial http://amadei.com.br/blog/index.php/connecting-weblogic-and-activemq) and I created the corresponding JMS modules as indicated in the tutorial. As the connection factory I used ActiveMQConnectionFactory initially and ActiveMQXAConnectionFactory later; I also ignored the jms. notation and just used plain names such as testQueue. Then I created a simple MDB with the following structure. I explicitly defined the "connectionFactoryJndiName" property because otherwise it assumes a WebLogic connection factory, which is not found and then raises an error. @MessageDriven( activationConfig = { @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"), @ActivationConfigProperty(propertyName = "destination", propertyValue = "testQueue"), @ActivationConfigProperty(propertyName = "connectionFactoryJndiName", propertyValue = "ActiveMQXAConnectionFactory") }, mappedName = "testQueue") public class ROMELReceiver implements MessageListener { /** * Default constructor. */ public ROMELReceiver() { // TODO Auto-generated constructor stub } /** * @see MessageListener#onMessage(Message) */ public void onMessage(Message message) { System.out.println("Message received"); } } At this point I'm stuck with the error mentioned above. Even though I use ActiveMQXAConnectionFactory instead of simply ActiveMQConnectionFactory, the JNDI resources tree in the WebLogic server shows org.apache.activemq.ActiveMQConnectionFactory as the class for my configured connection factory. Am I missing something? Or is this just a completely wrong way to connect WebLogic with AMQ? Thanks in advance.

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue and pop / push the first elements from every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists and takes O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored at the first steps). This generally means we have to make the following "tree" of merge operations: N1, N2, N3... stand for the LIST1, LIST2, LIST3 sizes O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ... O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ... O(N1 + N2 + N3 + N4 + .... + NK) It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ... , LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is - did I f%ck up somewhere or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
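
    For what it's worth, the pairwise scheme described above is easy to sketch and behaves as claimed: each level halves the number of lists and touches every element once, giving roughly log2(K) passes over N items, i.e. O(N log K). A small Python sketch over plain lists (rather than linked lists), purely to illustrate:

    ```python
    def merge_two(a, b):
        """Standard two-way merge of sorted lists: O(len(a) + len(b))."""
        merged, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                merged.append(a[i]); i += 1
            else:
                merged.append(b[j]); j += 1
        merged.extend(a[i:])
        merged.extend(b[j:])
        return merged

    def merge_k(lists):
        """Merge pair-by-pair, level by level: ~log2(K) passes over N elements."""
        if not lists:
            return []
        while len(lists) > 1:
            nxt = [merge_two(lists[k], lists[k + 1])
                   for k in range(0, len(lists) - 1, 2)]
            if len(lists) % 2:          # odd list out is carried to the next round
                nxt.append(lists[-1])
            lists = nxt
        return lists[0]

    print(merge_k([[1, 4, 9], [2, 3], [5, 6, 7], [0, 8]]))
    # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    ```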

    Read the article

  • NetBeans 6.8: Can't deploy a webapp with a Message Driven Bean

    - by Harry Pham
    I created an Enterprise Application CustomerApp that also generated two projects, CustomerApp-ejb and CustomerApp-war. In CustomerApp-ejb, I created a session bean called CustomerSessionBean.java as below. package com.customerapp.ejb; import javax.ejb.Stateless; import javax.ejb.LocalBean; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; @Stateless @LocalBean public class CustomerSessionBean { @PersistenceContext(unitName = "CustomerApp-ejbPU") private EntityManager em; public void persist(Object object) { em.persist(object); } } Now I can deploy CustomerApp-war just fine. But as soon as I create a Message Driven Bean, I can't deploy CustomerApp-war anymore. When I create NotificationBean.java (the message-driven bean), in the project destination option I click Add, enter NotificationQueue for the Destination Name, and set the Destination Type to Queue. Below is the code: package com.customerapp.mdb; import javax.ejb.ActivationConfigProperty; import javax.ejb.MessageDriven; import javax.jms.Message; import javax.jms.MessageListener; @MessageDriven(mappedName = "jms/NotificationQueue", activationConfig = { @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") }) public class NotificationBean implements MessageListener { public NotificationBean() { } public void onMessage(Message message) { } } If I remove the @MessageDriven annotation, then I can deploy the project. Any idea why, and how to fix it?

    Read the article

  • What am I doing wrong

    - by Erik Sapir
    I have the following code. I need class B to have a min priority queue of AToTime objects. AToTime has an operator>, and yet I receive an error telling me that there is no operator matching the operands... #include <queue> #include <functional> using namespace std; class B{ //public functions public: B(); virtual ~B(); //private members private: log4cxx::LoggerPtr m_logger; class AToTime { //public functions public: AToTime(const ACE_Time_Value& time, const APtr a) : m_time(time), m_a(a){} bool operator >(const AToTime& other) { return m_time > other.m_time; } //public members - no point using any private members here public: ACE_Time_Value m_time; APtr m_a; }; priority_queue<AToTime, vector<AToTime>, greater<AToTime> > m_myMinHeap; };

    Read the article

  • How to make a jQuery hover event fire repeatedly.

    - by clinthorner
    I have an infinite carousel that I want to move when I hover over the next and previous buttons. Right now hover only fires this once. I want the carousel to continue moving while the mouse is within the next or previous buttons. Any suggestions? jQuery.fn.carousel = function(previous, next, options){ var sliderList = jQuery(this).children()[0]; if (sliderList) { var increment = jQuery(sliderList).children().outerWidth("true"), elmnts = jQuery(sliderList).children(), numElmts = elmnts.length, sizeFirstElmnt = increment, shownInViewport = Math.round(jQuery(this).width() / sizeFirstElmnt), firstElementOnViewPort = 1, isAnimating = false; for (i = 0; i < shownInViewport; i++) { jQuery(sliderList).css('width',(numElmts+shownInViewport)*increment + increment + "px"); jQuery(sliderList).append(jQuery(elmnts[i]).clone()); } jQuery(previous).hover(function(event){ if (!isAnimating) { if (firstElementOnViewPort == 1) { jQuery(sliderList).css('left', "-" + numElmts * sizeFirstElmnt + "px"); firstElementOnViewPort = numElmts; } else { firstElementOnViewPort--; } jQuery(sliderList).animate({ left: "+=" + increment, y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); jQuery(next).hover(function(event){ if (!isAnimating) { if (firstElementOnViewPort > numElmts) { firstElementOnViewPort = 2; jQuery(sliderList).css('left', "0px"); } else { firstElementOnViewPort++; } jQuery(sliderList).animate({ left: "-=" + increment, y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); } };

    Read the article

  • jquery-infinite-carousel - jump 3 items instead of one

    - by Rob B
    I have successfully used jquery-infinite-carousel in the past, but for my current project I need it to loop forever AND also jump 3 items at a time. Example of how it works jumping 1 item at a time here. jQuery.fn.carousel = function(previous, next, options){ var sliderList = jQuery(this).children()[0]; if (sliderList) { var increment = jQuery(sliderList).children().outerWidth("true"), elmnts = jQuery(sliderList).children(), numElmts = elmnts.length, sizeFirstElmnt = increment, shownInViewport = Math.round(jQuery(this).width() / sizeFirstElmnt), firstElementOnViewPort = 1, isAnimating = false; //console.log("increment = " + increment); //console.log("numElmts = " + numElmts); //console.log("shownInViewport = " + shownInViewport); for (i = 0; i < shownInViewport; i++) { jQuery(sliderList).css('width',(numElmts+shownInViewport)*increment + increment + "px"); jQuery(sliderList).append(jQuery(elmnts[i]).clone()); } jQuery(previous).click(function(event){ if (!isAnimating) { if (firstElementOnViewPort == 1) { jQuery(sliderList).css('left', "-" + numElmts * sizeFirstElmnt + "px"); firstElementOnViewPort = numElmts; console.log("firstElementOnViewPort = " + firstElementOnViewPort); } else { firstElementOnViewPort--; } jQuery(sliderList).animate({ left: "+=" + increment, y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); jQuery(next).click(function(event){ if (!isAnimating) { if (firstElementOnViewPort > numElmts) { firstElementOnViewPort = 2; jQuery(sliderList).css('left', "0px"); } else { firstElementOnViewPort++; } jQuery(sliderList).animate({ left: "-=" + (increment), y: 0, queue: true }, "swing", function(){isAnimating = false;}); isAnimating = true; } }); } };

    Read the article

  • User Friendly Video Review with Locking

    - by James Cori
    We have a jQuery/PHP/MySQL system that allows a user to log in and review videos built by a system for online viewing. When a user begins reviewing a video, the video is marked as such. But now we've cornered ourselves into the classic browser-based application problem of the user navigating away or closing the browser without completing the review. That video would then enter a state of limbo, constantly being reviewed but never completed, and never re-entering the queue. Options we have are: Build a service (we already have others) to find review sessions that are outside a duration boundary and reset them back into the queue. Reset review sessions outside a duration boundary when that user logs in. Essentially, if a user locks out a video for review, it'll be unlocked the next time they log in. A suggestion made to me was to use the PHP/Apache session length and, on expiration, reset any pending review jobs. I don't even know where to look to implement this, as this is one project on a shared server, so it shouldn't be an Apache config, but the reset mechanism would need to know the database credentials to be able to reset it... The worst solution, which everyone hates, is preventing the user from navigating away with JavaScript, asking "Are you sure?!" This system is used by a few hired reviewers, so I'm not exactly dealing with the public here, but I can't prevent users from sharing logins for speedier review, which would knock out the 2nd option above because it would unlock a video being reviewed by someone else using the same login.
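
    For the first option (a service that resets stale review sessions), the core of the job can be a single scheduled query. A rough sketch using PyMySQL; the table and column names (videos, review_status, review_started_at, reviewer_id), the credentials, and the timeout are all hypothetical and would need to match the real schema:

    ```python
    # Cron-driven sketch: return videos to the queue when a review lock is
    # older than the allowed duration.  Credentials and schema are placeholders.
    import pymysql

    REVIEW_TIMEOUT_MINUTES = 60  # illustrative boundary

    def release_stale_reviews():
        conn = pymysql.connect(host="localhost", user="app", password="secret",
                               database="review_system")
        try:
            with conn.cursor() as cur:
                cur.execute(
                    """
                    UPDATE videos
                       SET review_status = 'queued',
                           reviewer_id = NULL
                     WHERE review_status = 'in_review'
                       AND review_started_at < NOW() - INTERVAL %s MINUTE
                    """,
                    (REVIEW_TIMEOUT_MINUTES,),
                )
            conn.commit()
        finally:
            conn.close()
    ```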

    Read the article

  • After interruption, delayed audio route change notifications when recording

    - by Frank Shearar
    My iPhone application requires that I know when a user has/has not plugged in her headphones. That's easy: AudioSessionAddPropertyListener with a callback listening to kAudioSessionProperty_AudioRouteChange. I write logs with NSLog as things happen. User plugs the headphones in? Get a notification, and a line in the gdb console. User unplugs the headphones? Ditto. At the same time I'm sensing the noise level of the environment by starting a recording audio queue. This, too, works great: I can get the mic noise level and listen for audio route changes just fine. What I find is that after an interruption, once I've reactivated the audio session and restored the audio category to kAudioSessionCategory_RecordAudio, the audio route notifications go a bit haywire. When I plug in the headphones, I see no notification. When I unplug the headphones I see BOTH the "plugged in" notification AND the "unplugged" notification, in rapid succession. It's like the "plugged in" notification is delayed and, when the "unplugged" notification arrives, the queue of pending notifications is flushed. What am I doing wrong? How do I correctly restore the audio session to get timely notifications? EDIT: iPhone OS 3.1.2, running on an iPhone 3G. I'm running a program compiled with the 3.0 SDK (from within Xcode 3.1.2).

    Read the article

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak. this._rawBuffer = new Byte[this.BUFFER_SIZE]; SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs(); receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length); receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete); this._client.ReceiveAsync(receiveArgs); if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive) { // Read the current bytes from the stream buffer int bytesRecieved = this._client.ReceiveBufferSize; // If there are bytes to process else the connection is lost if (bytesRecieved > 0) { try { //Find out what we just received string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0)); //Take out any trailing empty characters from the message messagePart = messagePart.Replace('\0'.ToString(), ""); //Concatenate our current message with any leftovers from previous receipts string fullMessage = _theRest + messagePart; int seperator; //While the index of the seperator (LINE_END defined & initiated as private member) while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0) { //Pull out the first message available (up to the seperator index string message = fullMessage.Substring(0, seperator); //Queue up our new message _messageQueue.Enqueue(message); //Take out our line end character fullMessage = fullMessage.Remove(0, seperator + 1); } //Save whatever was NOT a full message to the private variable used to store the rest _theRest = fullMessage; //Empty the queue of messages if there are any while (this._messageQueue.Count > 0) { ... } } catch (Exception e) { throw e; } // Wait for a new message if (this._isClosing != true) Receive(); } } Thanks in advance.

    Read the article

  • Generate and merge data with Python multiprocessing

    - by Bobby
    I have a list of starting data. I want to apply a function to the starting data that creates a few pieces of new data for each element in the starting data. Some pieces of the new data are the same and I want to remove them. The sequential version is essentially: def create_new_data_for(datum): """make a list of new data from some old datum""" return [datum.modified_copy(k) for k in datum.k_list] data = [some list of data] #some data to start with #generate a list of new data from the old data, we'll reduce it next newdata = [] for d in data: newdata.extend(create_new_data_for(d)) #now reduce the data under ".matches(other)" reduced = [] for d in newdata: for seen in reduced: if d.matches(seen): break #so we haven't seen anything like d yet seen.append(d) #now reduced is finished and is what we want! I want to speed this up with multiprocessing. I was thinking that I could use a multiprocessing.Queue for the generation. Each process would just put the stuff it creates on it, and when the processes are reducing the data, they can just get the data from the Queue. But I'm not sure how to have the different processes loop over reduced and modify it without any race conditions or other issues. What is the best way to do this safely? Or is there a different way to accomplish this goal better?
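
    One arrangement that avoids shared mutable state entirely is to fan out create_new_data_for with a Pool and do the matches-based reduction only in the parent process, so no worker ever touches reduced. A minimal sketch, assuming the data items are picklable and create_new_data_for is defined at module level:

    ```python
    # Generate in parallel, reduce in the parent.  Only the parent mutates
    # `reduced`, so there are no race conditions; the reduction itself stays
    # sequential, which is fine when generation dominates the runtime.
    from multiprocessing import Pool

    def parallel_generate_and_reduce(data, processes=4):
        reduced = []
        with Pool(processes) as pool:
            # Each worker returns its own list of new data; imap_unordered
            # streams chunks back as they finish.
            for chunk in pool.imap_unordered(create_new_data_for, data):
                for d in chunk:
                    if not any(d.matches(seen) for seen in reduced):
                        reduced.append(d)
        return reduced
    ```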

    Read the article

  • How do I animate text links in jQuery?

    - by Peter
    I am somewhat new to jQuery and I have a problem with something I am trying to implement using jQuery. I have a vertical navigation menu where each link animates on hover by changing color, increasing letter spacing, and adding a border on the left. Everything is working the way I want it to except for when I click on the link. After I click on the link, the text changes to a different color and remains that same color even when I hover over the link. I want the color change on hover to remain intact even after I click the link. I'm sure I am missing something simple, but I have tried everything I know to do with no luck. Any suggestions would be helpful! Here is what I have for the animation... <script type="text/javascript"> $(document).ready(function(){ $("ul.navlist li a").hover(function(){ $(this).stop() .animate({paddingLeft: '10px',letterSpacing: '2px',borderWidth:'20px'},{queue:false,easing:'easeInQuad'},50) }, function(){ $(this).stop() .animate({paddingLeft: '0px', letterSpacing: '0px',borderWidth:'0px'},{queue:false,easing:'easeOutQuad'},50) }); }); </script> My CSS for the navigation list is here... .navlist { list-style: none; } .navlist a { border-left-color: #555555; border-left-style: solid; border-left-width: 0px; color: #c4c4c4; } .navlist a:hover { border-left-color: #555555; border-left-style: solid; color: #555555; }

    Read the article

  • MUD (game) design concept question about timed events.

    - by mudder
    I'm trying my hand at building a MUD (multiplayer interactive-fiction game). I'm in the design/conceptualizing phase and I've run into a problem that I can't come up with a solution for. I'm hoping some more experienced programmers will have some advice. Here's the problem as best I can explain it: when the player decides to perform an action, he sends a command to the server. The server then processes the command, determines whether or not the action can be performed, and either does it or responds with a reason as to why it could not be done. One reason that an action might fail is that the player is busy doing something else. For instance, if a player is mid-fight and has just swung a massive broadsword, it might take 3 seconds before he can repeat this action. If the player attempts to swing again too soon, the game will respond indicating that he must wait x seconds before doing that. Now, this I can probably design without much trouble. The problem I'm having is how I can replicate this behavior for AI creatures. All of the events that are being performed by the server ON ITS OWN, i.e. not as an immediate reaction to something a player has done, will have to be time-sensitive. Some evil monster has cast a spell on you but must wait 30 seconds before doing it again... I think I'll probably be adding all these events to some kind of event queue, but how can I make that event queue time-sensitive?
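
    One common shape for this, offered as a point of comparison: a single priority queue keyed on the time each event becomes due, drained by the main loop every tick, plus a per-actor "busy until" timestamp for cooldowns (the same check works for players and AI). A minimal Python sketch with illustrative names:

    ```python
    import heapq
    import itertools
    import time

    class EventQueue:
        """Min-heap of (due_time, seq, action); the loop pops only events
        whose due time has passed, so everything is time-sensitive."""
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()  # tie-breaker so actions are never compared

        def schedule(self, delay_seconds, action):
            due = time.monotonic() + delay_seconds
            heapq.heappush(self._heap, (due, next(self._seq), action))

        def run_due_events(self):
            now = time.monotonic()
            while self._heap and self._heap[0][0] <= now:
                _, _, action = heapq.heappop(self._heap)
                action()

    class Actor:
        """Cooldowns: record when the actor may act again and compare to the clock."""
        def __init__(self, name):
            self.name = name
            self.busy_until = 0.0

        def try_swing(self, cooldown=3.0):
            now = time.monotonic()
            if now < self.busy_until:
                return "You must wait %.1f seconds." % (self.busy_until - now)
            self.busy_until = now + cooldown
            return "%s swings the massive broadsword!" % self.name

    # The monster's recast works the same way: when it casts, either set its
    # busy_until to now + 30 or schedule the next allowed cast 30 seconds out.
    ```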

    Read the article

  • iPhone: Using dispatch_after to mimic NSTimer

    - by Joseph Tura
    Don't know a whole lot about blocks. How would you go about mimicking a repeating NSTimer with dispatch_after? My problem is that I want to "pause" a timer when the app moves to the background, but subclassing NSTimer does not seem to work. I tried something which seems to work. I cannot judge its performance implications or whether it could be greatly optimized. Any input is welcome. #import "TimerWithPause.h" @implementation TimerWithPause @synthesize timeInterval; @synthesize userInfo; @synthesize invalid; @synthesize invocation; + (TimerWithPause *)scheduledTimerWithTimeInterval:(NSTimeInterval)aTimeInterval target:(id)aTarget selector:(SEL)aSelector userInfo:(id)aUserInfo repeats:(BOOL)aTimerRepeats { TimerWithPause *timer = [[[TimerWithPause alloc] init] autorelease]; timer.timeInterval = aTimeInterval; NSMethodSignature *signature = [[aTarget class] instanceMethodSignatureForSelector:aSelector]; NSInvocation *aInvocation = [NSInvocation invocationWithMethodSignature:signature]; [aInvocation setSelector:aSelector]; [aInvocation setTarget:aTarget]; [aInvocation setArgument:&timer atIndex:2]; timer.invocation = aInvocation; timer.userInfo = aUserInfo; if (!aTimerRepeats) { timer.invalid = YES; } [timer fireAfterDelay]; return timer; } - (void)fireAfterDelay { dispatch_time_t delay = dispatch_time(DISPATCH_TIME_NOW, self.timeInterval * NSEC_PER_SEC); dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); dispatch_after(delay, queue, ^{ [invocation performSelectorOnMainThread:@selector(invoke) withObject:nil waitUntilDone:NO]; if (!invalid) { [self fireAfterDelay]; } }); } - (void)invalidate { invalid = YES; [invocation release]; invocation = nil; [userInfo release]; userInfo = nil; } - (void)dealloc { [self invalidate]; [super dealloc]; } @end

    Read the article

  • Common Lisp condition system for transfer of control

    - by Ken
    I'll admit right up front that the following is a pretty terrible description of what I want to do. Apologies in advance. Please ask questions to help me explain. :-) I've written ETLs in other languages that consist of individual operations that look something like: // in class CountOperation IEnumerable<Row> Execute(IEnumerable<Row> rows) { var count = 0; foreach (var row in rows) { row["record number"] = count++; yield return row; } } Then you string a number of these operations together, and call The Dispatcher, which is responsible for calling Operations and pushing data between them. I'm trying to do something similar in Common Lisp, and I want to use the same basic structure, i.e., each operation is defined like a normal function that inputs a list and outputs a list, but lazily. I can define-condition a condition (have-value) to use for yield-like behavior, and I can run it in a single loop, and it works great. I'm defining the operations the same way, looping through the inputs: (defun count-records (rows) (loop for count from 0 for row in rows do (signal 'have-value :value `(:count ,count @,row)))) The trouble is if I want to string together several operations, and run them. My first attempt at writing a dispatcher for these looks something like: (let ((next-op ...)) ;; pick an op from the set of all ops (loop (handler-bind ((have-value (...))) ;; records output from operation (setq next-op ...) ;; pick a new next-op (call next-op))) But restarts have only dynamic extent: each operation will have the same restart names. The restart isn't a Lisp object I can store, to store the state of a function: it's something you call by name (symbol) inside the handler block, not a continuation you can store for later use. Is it possible to do something like I want here? Or am I better off just making each operation function explicitly look at its input queue, and explicitly place values on the output queue?
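
    For comparison only (not a Lisp answer): the lazy, yield-style chaining the question describes maps directly onto generators in languages that have them. A minimal Python sketch of the same count operation plus a dispatcher that simply composes the stages; the row fields here are made up:

    ```python
    # Each operation takes an iterable of rows and lazily yields rows; the
    # dispatcher threads the stream through the operations in order.
    def count_records(rows):
        for count, row in enumerate(rows):
            yield dict(row, record_number=count)

    def uppercase_names(rows):
        for row in rows:
            yield dict(row, name=row["name"].upper())

    def dispatch(source, *operations):
        stream = source
        for op in operations:      # each stage pulls from the previous one on demand
            stream = op(stream)
        return stream

    rows = [{"name": "ada"}, {"name": "grace"}]
    for row in dispatch(rows, count_records, uppercase_names):
        print(row)
    # {'name': 'ADA', 'record_number': 0}
    # {'name': 'GRACE', 'record_number': 1}
    ```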

    Read the article

  • How do I rewrite a for loop with a shared dependency using actors

    - by Thomas Rynne
    We have some code which needs to run faster. It's already been profiled, so we would like to make use of multiple threads. Usually I would set up an in-memory queue and have a number of threads taking jobs off the queue and calculating the results. For the shared data I would use a ConcurrentHashMap or similar. I don't really want to go down that route again. From what I have read, using actors will result in cleaner code, and if I use Akka, migrating to more than one JVM should be easier. Is that true? However, I don't know how to think in actors so I am not sure where to start. To give a better idea of the problem here is some sample code: case class Trade(price:Double, volume:Int, stock:String) { def value(priceCalculator:PriceCalculator) = (priceCalculator.priceFor(stock)-> price)*volume } class PriceCalculator { def priceFor(stock:String) = { Thread.sleep(20)//a slow operation which can be cached 50.0 } } object ValueTrades { def valueAll(trades:List[Trade], priceCalculator:PriceCalculator):List[(Trade,Double)] = { trades.map { trade => (trade,trade.value(priceCalculator)) } } def main(args:Array[String]) { val trades = List( Trade(30.5, 10, "Foo"), Trade(30.5, 20, "Foo") //usually much longer ) val priceCalculator = new PriceCalculator val values = valueAll(trades, priceCalculator) } } I'd appreciate it if someone with experience using actors could suggest how this would map onto actors.

    Read the article

  • Send C++ Structure to MSMQ Message

    - by Gobalakrishnan
    Hi, I am trying to send the structure below through an MSMQ message. typedef struct { char cfiller[7]; short MsgCode; char cfiller1[11]; short MsgLength; char cfiller2[2]; } MESSAGECODE; typedef struct { MESSAGECODE Header; char DealerId[16]; char GroupId[16]; long Token; short Periodicity; double Deposit; double GrossExposureLimit; double NetExposureLimit; double NetSaleExposureLimit; double NetPositionLimit; double TurnoverLimit; double PendingOrdersLimit; double MTMLossLimit; double MaxSingleTransValue; long MaxSingleTransQty; double IMLimit; long NetQuantityLimit; } LIMITUPDATE; void main() { // // create queue // open queue // send message // OleInitialize(NULL); // have to init OLE // // declare some variables // IMSMQQueueInfoPtr qinfo("MSMQ.MSMQQueueInfo"); IMSMQQueuePtr qSend; IMSMQMessagePtr m("MSMQ.MSMQMessage"); LIMITUPDATE l1; l1.Header.MsgCode=26001; l1.Header.MsgLength=150; qinfo->PathName = ".\\private$\\q99"; m->Body = l1; qSend = qinfo->Open(MQ_SEND_ACCESS, MQ_DENY_NONE); m->Send(qSend); qSend->Close(); } While compiling I am getting the following error. Error 2 error C2664: 'IMSMQMessage::PutBody' : cannot convert parameter 1 from 'LIMITUPDATE' to 'const _variant_t &' c:\temp\msmq\msmq.cpp 58 msmq Thank you.

    Read the article

  • Disadvantages of MySQL Row Locking

    - by Nyxynyx
    I am using row locking (transactions) in MySQL for creating a job queue. The engine used is InnoDB. SQL query: START TRANSACTION; SELECT * FROM mytable WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1 FOR UPDATE; UPDATE mytable SET status = 1; COMMIT; According to this webpage, The problem with SELECT FOR UPDATE is that it usually creates a single synchronization point for all of the worker processes, and you see a lot of processes waiting for the locks to be released with COMMIT. Question: Does this mean that if a second, similar query arrives while the first query's transaction is still uncommitted, it will have to wait for the first transaction to finish before it can execute? If this is true, I do not understand why locking a single row (which is what I assume happens) would affect the next transactional query, which would not require reading that locked row. Additionally, can this problem be solved (while still achieving the effect row locking gives a job queue) by doing an UPDATE instead of the transaction? UPDATE mytable SET status = 1 WHERE status IS NULL ORDER BY timestamp DESC LIMIT 1
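
    On the last point, a widely used variant makes the claim itself a single UPDATE that tags one row with a worker-specific token and then reads the claimed row back, so the lock is held only for that one statement. A rough sketch with PyMySQL; the worker_id column is hypothetical and would have to be added to the table:

    ```python
    # Claim at most one queued row without SELECT ... FOR UPDATE.
    import pymysql
    import uuid

    def claim_next_job(conn):
        token = uuid.uuid4().hex
        with conn.cursor() as cur:
            claimed = cur.execute(
                """
                UPDATE mytable
                   SET status = 1, worker_id = %s
                 WHERE status IS NULL
                 ORDER BY timestamp DESC
                 LIMIT 1
                """,
                (token,),
            )
            conn.commit()                    # release the row lock right away
            if not claimed:
                return None                  # nothing left in the queue
            cur.execute("SELECT * FROM mytable WHERE worker_id = %s", (token,))
            return cur.fetchone()
    ```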

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax, and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL but in RAM, as an array or object, for best speed. As far as I know there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; incoming message - write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users. My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but afaik there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.

    Read the article

  • Providing less than operator for one element of a pair

    - by Koszalek Opalek
    What would be the most elegant way to fix the following code: #include <vector> #include <map> #include <set> using namespace std; typedef map< int, int > row_t; typedef vector< row_t > board_t; typedef row_t::iterator area_t; bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; int main( int argc, char* argv[] ) { int row_num; area_t it; set< pair< int, area_t > > queue; queue.insert( make_pair( row_num, it ) ); // does not compile }; One way to fix it is moving the definition of operator< into namespace std (I know, you are not supposed to do it.) namespace std { bool operator< ( area_t const& a, area_t const& b ) { return( a->first < b->first ); }; }; Another obvious solution is defining operator< for pair< int, area_t >, but I'd like to avoid that and be able to define the operator only for the one element of the pair where it is not defined.

    Read the article

  • How does OpenStack Swift handle concurrent RESTful API requests?

    - by Chen Xie
    I installed a Swift service and was trying to find out its capability for handling concurrent requests. So I created a massive number of threads in Java and sent requests via the RESTful API. Not surprisingly, when the number of requests climbed, the program started to throw exceptions. Caused by: java.net.ConnectException: Connection timed out: connect at java.net.DualStackPlainSocketImpl.connect0(Native Method) at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:69) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182) at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:157) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391) at java.net.Socket.connect(Socket.java:579) at java.net.Socket.connect(Socket.java:528) at sun.net.NetworkClient.doConnect(NetworkClient.java:180) at sun.net.www.http.HttpClient.openServer(HttpClient.java:378) at sun.net.www.http.HttpClient.openServer(HttpClient.java:473) at sun.net.www.http.HttpClient.<init>(HttpClient.java:203) But can anyone tell me how that timeout happened? I am curious how Swift handles those requests. Is it that the requests are queued, and when there are too many requests in the queue, one that waits too long just gets kicked out of the queue? If that holds, does it mean that Swift uses an asynchronous mechanism to handle requests? Thanks.

    Read the article

  • NSDrawer delegate pointing to deallocated object?

    - by Isaac
    A user has sent in a crash report with the stack trace listed below (I have not been able to reproduce the crash myself, but every other crash this user has reported has been a valid bug, even when I couldn't reproduce the effect). The application is a reference-counted Objective-C/Cocoa app. If I am interpreting it correctly, the crash is caused by attempting to send a drawerDidOpen: message to a deallocated object. The only object that should be receiving drawerDidOpen: is the drawer's delegate object (nowhere does any object register to receive drawer notifications), and the drawer's delegate object is instantiated via the XIB/NIB file, wired to the delegate outlet of the drawer, and not referenced anywhere else. Given that, how can I protect against the delegate getting dealloc'd before the drawer notification? Or, alternately, what have I misinterpreted that might be causing the crash? Crash log/stack trace: Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000010 Crashed Thread: 0 Dispatch queue: com.apple.main-thread Application Specific Information: objc_msgSend() selector name: drawerDidOpen: Thread 0 Crashed: Dispatch queue: com.apple.main-thread 0 libobjc.A.dylib 0x00007fff8272011c objc_msgSend + 40 1 com.apple.Foundation 0x00007fff87d0786e _nsnote_callback + 167 2 com.apple.CoreFoundation 0x00007fff831bcaea __CFXNotificationPost + 954 3 com.apple.CoreFoundation 0x00007fff831a9098 _CFXNotificationPostNotification + 200 4 com.apple.Foundation 0x00007fff87cfe7d8 -[NSNotificationCenter postNotificationName:object:userInfo:] + 101 5 com.apple.AppKit 0x00007fff8512e944 _NSDrawerObserverCallBack + 840 6 com.apple.CoreFoundation 0x00007fff831d40d7 __CFRunLoopDoObservers + 519 7 com.apple.CoreFoundation 0x00007fff831af8c4 CFRunLoopRunSpecific + 548 8 com.apple.HIToolbox 0x00007fff839b8ada RunCurrentEventLoopInMode + 333 9 com.apple.HIToolbox 0x00007fff839b883d ReceiveNextEventCommon + 148 10 com.apple.HIToolbox 0x00007fff839b8798 BlockUntilNextEventMatchingListInMode + 59 11 com.apple.AppKit 0x00007fff84de8a2a _DPSNextEvent + 708 12 com.apple.AppKit 0x00007fff84de8379 -[NSApplication nextEventMatchingMask:untilDate:inMode:dequeue:] + 155 13 com.apple.AppKit 0x00007fff84dae05b -[NSApplication run] + 395 14 com.apple.AppKit 0x00007fff84da6d7c NSApplicationMain + 364 15 (my app's identifier) 0x0000000100001188 start + 52

    Read the article
