Search Results

Search found 3222 results on 129 pages for 'mercurial queue'.

Page 43/129 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Can I architect a web app so it can be deployed to either the cloud or a dedicated server / VPS?

    - by CAD bloke
    Is there an architecture versatile enough that it can be deployed to either a cloud server or a dedicated (or VPS) server with minimal change? Obviously there would be config changes, but I'd rather leave the rest of the app consistent, keeping one maintainable codebase. The app would be ASP.NET and/or ASP.NET MVC. My dev environment is VS 2010. The cloud may or may not be Azure. Dedicated or VPS would be Win Server 2008. Probably. It is not a public-facing web site. The web app I have in mind would be a separate deployment for each client. Some clients would be small-scale; some will prefer the app to run on a local intranet rather than on the web. Other clients may prefer the cloud approach for a black-box solution. The app may run for a few hours or it may run indefinitely; it depends on the client and the project. Other than deployment scenarios, the apps would be more or less identical. As you may see from the tags, I'm assuming a message-based architecture is probably the most versatile, but I'm also used to being wrong about this stuff. All suggestions and pointers welcome regarding general architectures and also specific solutions.

    Read the article

  • How can I configure different worker pools using celery?

    - by Chris R
    I need to deploy a queued execution service with (generally) the following three classes of worker:

    1. A periodic, low-priority job class that takes a long time and can be processed serially; these jobs should use at most 0-2 workers in the system.
    2. A periodic, deadline-sensitive job class that takes a short to medium amount of time (say, topping out at 5 minutes).
    3. An ad-hoc job class that is higher priority than #1 but can interleave with #2. Any workers from class #2 that are inactive when this type of job comes in should handle it, without ever starving the pool of workers for #2.

    All three job classes are the same task; the only difference between them is how they're requested. They take the same input and generate the same output, but each one has different performance guarantees. How can I implement this using Celery?

    Read the article

  • EnumJobs not returning Copies & Total Pages

    - by Hein du Plessis
    I'm using the Windows API's EnumJobs to find the PageCount and Copies of a print job, but I found that these fields are almost always zero when called on a print server. It could be that my timing is off, because the number of pages increments as the job prints, and once it's done the print job can no longer be accessed. So there is about half a nanosecond during which the values from EnumJobs are correct before they disappear. I've been scouring the web but can't find any help on this, just other people with similar problems. Does anybody have experience with EnumJobs, or can anyone suggest another means of determining the total number of pages printed?

    Read the article

  • Sending messages between two Python servers

    - by Will
    I have two servers - one Django, the other likely to be written in Python - and one is putting 'tasks' into a database and another is processing these tasks. They share a database, but I want the processor to react quickly to new tasks rather than polling periodically. Are there any straightforward ways for two Python servers to talk to one another, or does the task processor have to have web-hooks or something? It feels like there ought to be a blessed way to do this...

    Read the article

  • How can I animate multiple elements sequentially using jQuery?

    - by lorenzium
    I thought it would be simple but I still can't get it to work. By clicking one button, I want several animations to happen - one after the other - but now all the animations are happening at once. Here's my code - can someone please tell me where I'm going wrong?

        $(".button").click(function(){
            $("#header").animate({top: "-50"}, "slow")
            $("#something").animate({height: "hide"}, "slow")
            $("ul#menu").animate({top: "20", left: "0"}, "slow")
            $(".trigger").animate({height: "show", top: "110", left: "0"}, "slow");
        });

    Read the article

  • c# - best way to initialize an array of reference type object

    - by lidong
    I wonder if there is a better way to initialize an array of reference-type objects, like this:

        Queue<int>[] queues = new Queue<int>[10];
        for (int i = 0; i < queues.Length; i++)
        {
            queues[i] = new Queue<int>();
        }

    I tried Enumerable.Repeat, but all elements in the array refer to the same instance:

        Queue<int>[] queues = Enumerable.Repeat(new Queue<int>(), 10).ToArray();

    I also tried Array.ForEach, but it doesn't work without the ref keyword:

        Queue<int>[] queues = Array.ForEach(queues, queue => queue = new Queue<int>());

    Any other ideas?

    Read the article

  • how to use q.js promises to work with multiple asynchronous operations

    - by kimsia
    Note: This question is also cross-posted to the Q.js mailing list over here. I had a situation with multiple asynchronous operations, and the answer I accepted pointed out that using promises via a library such as Q.js would be more beneficial. I am convinced to refactor my code to use promises, but because the code is pretty long I have trimmed the irrelevant portions and exported the crucial parts into a separate repo. The repo is here and the most important file is this. The requirement is that I want pageSizes to be non-empty after traversing all the dragged-and-dropped files. The problem is that the FileAPI operations inside the getSizeSettingsFromPage function cause getSizeSettingsFromPage to be async, so I cannot place checkWhenReady(); like this:

        function traverseFiles() {
          for (var i=0, l=pages.length; i<l; i++) {
            getSizeSettingsFromPage(pages[i], calculateRatio);
          }
          checkWhenReady(); // this always returns 0.
        }

    Putting the call inside calculateRatio (below) works, but it is not ideal. I prefer to call checkWhenReady just ONCE after all the pages have undergone calculateRatio successfully:

        function calculateRatio(width, height, filename) {
          // .... code
          pageSizes.add(filename, object);
          checkWhenReady(); // this works but it is not ideal. I prefer to call this method AFTER all the `pages` have undergone calculateRatio
          // ..... more code...
        }

    How do I refactor the code to make use of promises in Q.js?

    Read the article

  • Windows Game Loop 50% CPU on Dual Core

    - by Dave18
    The game loop alone is using 50% of the CPU; I haven't done any rendering work yet. What am I doing wrong here?

        while(true)
        {
            if(PeekMessage(&msg,NULL,0,0,PM_REMOVE))
            {
                if(msg.message == WM_QUIT || msg.message == WM_CLOSE || msg.message == WM_DESTROY)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else
            {
                //Run game code, break out of loop when the game is over
            }
        }

    Read the article

  • python can't start a new thread

    - by Giorgos Komnino
    I am building a multithreaded application. I have set up a thread pool (a Queue of size N and N Workers that get data from the queue). When all tasks are done I use tasks.join(), where tasks is the queue. The application seems to run smoothly until suddenly, at some point (after 20 minutes, for example), it terminates with the error:

        thread.error: can't start new thread

    Any ideas? Edit: The threads are daemon threads and the code is like:

        while True:
            t0 = time.time()
            keyword_statuses = DBSession.query(KeywordStatus).filter(KeywordStatus.status==0).options(joinedload(KeywordStatus.keyword)).with_lockmode("update").limit(100)
            if keyword_statuses.count() == 0:
                DBSession.commit()
                break
            for kw_status in keyword_statuses:
                kw_status.status = 1
            DBSession.commit()
            t0 = time.time()
            w = SWorker(threads_no=32, network_server='http://192.168.1.242:8180/', keywords=keyword_statuses, cities=cities, saver=MySqlRawSave(DBSession), loglevel='debug')
            w.work()
        print 'finished'

    When are the daemon threads killed? When the application finishes, or when work() finishes? Look at the thread pool and the worker (it's from a recipe):

        from Queue import Queue
        from threading import Thread, Event, current_thread
        import time

        event = Event()

        class Worker(Thread):
            """Thread executing tasks from a given tasks queue"""
            def __init__(self, tasks):
                Thread.__init__(self)
                self.tasks = tasks
                self.daemon = True
                self.start()

            def run(self):
                '''Start processing tasks from the queue'''
                while True:
                    event.wait()
                    #time.sleep(0.1)
                    try:
                        func, args, callback = self.tasks.get()
                    except Exception, e:
                        print str(e)
                        return
                    else:
                        if callback is None:
                            func(args)
                        else:
                            callback(func(args))
                        self.tasks.task_done()

        class ThreadPool:
            """Pool of threads consuming tasks from a queue"""
            def __init__(self, num_threads):
                self.tasks = Queue(num_threads)
                for _ in range(num_threads):
                    Worker(self.tasks)

            def add_task(self, func, args=None, callback=None):
                '''Add a task to the queue'''
                self.tasks.put((func, args, callback))

            def wait_completion(self):
                '''Wait for completion of all the tasks in the queue'''
                self.tasks.join()

            def broadcast_block_event(self):
                '''blocks running threads'''
                event.clear()

            def broadcast_unblock_event(self):
                '''unblocks running threads'''
                event.set()

            def get_event(self):
                '''returns the event object'''
                return event

    Read the article

  • Producer consumer with qualifications

    - by tgguy
    I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows:

    1. There are 5 producers and 2 consumers.
    2. A producer waits for a random time and then pushes a number onto a shared queue.
    3. A consumer should pull a number off the queue as soon as the queue is non-empty and then sleep for a short time to simulate doing work.
    4. The consumers should block when the queue is empty.
    5. Producers should block when the queue has more than 4 items in it, to prevent it from growing huge.

    Here is my plan for each step above:

    1. The producers and consumers will be agents that don't really care about their state (just nil values or something); I just use the agents to send-off a "consumer" or "producer" function to do at some time. Then the shared queue will be (def queue (ref [])). Perhaps this should be an atom though?
    2. In the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue.
    3. I am thinking to make the consumer agents watch the queue for changes with add-watcher. Not sure about this though... it will wake up the consumers on any change, even if the change came from a consumer pulling something off (possibly making it empty). Perhaps checking for this in the watcher function is sufficient. Another problem I see is that if all consumers are busy, then what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent, or does it disappear?
    4. See above.
    5. I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough documentation on how to use it, and my initial testing didn't seem to work (sorry, I don't have the code on me anymore).

    Read the article

  • How do I subscribe to a MSMQ queue but only "peek" the message in .Net?

    - by lemkepf
    We have an MSMQ queue set up that receives messages and is processed by an application. We'd like to have another process subscribe to the queue and just read the messages and log their contents. I have this in place already; the problem is it's constantly peeking the queue. CPU on the server when this is running is around 40%: mqsvc.exe runs at 30% and this app runs at 10%. I'd rather have something that just waits for a message to come in, gets notified of it, and then logs it without constantly polling the server.

        Dim lastid As String
        Dim objQueue As MessageQueue
        Dim strQueueName As String

        Public Sub Main()
            objQueue = New MessageQueue(strQueueName, QueueAccessMode.SendAndReceive)
            Dim propertyFilter As New MessagePropertyFilter
            propertyFilter.ArrivedTime = True
            propertyFilter.Body = True
            propertyFilter.Id = True
            propertyFilter.LookupId = True
            objQueue.MessageReadPropertyFilter = propertyFilter
            objQueue.Formatter = New ActiveXMessageFormatter
            AddHandler objQueue.PeekCompleted, AddressOf MessageFound
            objQueue.BeginPeek()
        End Sub

        Public Sub MessageFound(ByVal s As Object, ByVal args As PeekCompletedEventArgs)
            Dim oQueue As MessageQueue
            Dim oMessage As Message
            ' Retrieve the queue from which the message originated
            oQueue = CType(s, MessageQueue)
            oMessage = oQueue.EndPeek(args.AsyncResult)
            If oMessage.LookupId <> lastid Then
                ' Process the message here
                lastid = oMessage.LookupId
                ' let's write it out
                log.write(oMessage)
            End If
            objQueue.BeginPeek()
        End Sub

    Read the article

  • What is the fastest collection in c# to implement a prioritizing queue?

    - by Nathan Smith
    I need to implement a queue for messages on a game server, so it needs to be as fast as possible. The queue will have a maximum size. I need to prioritize messages once the queue is full by working backwards and removing a lower-priority message (if one exists) before adding the new message. The application is asynchronous, so access to the queue needs to be locked. I'm currently implementing it using a LinkedList as the underlying storage, but have concerns that searching and removing nodes will keep it locked for too long. Here's the basic code I have at the moment:

        public class ActionQueue
        {
            private LinkedList<ClientAction> _actions = new LinkedList<ClientAction>();
            private int _maxSize;

            /// <summary>
            /// Initializes a new instance of the ActionQueue class.
            /// </summary>
            public ActionQueue(int maxSize)
            {
                _maxSize = maxSize;
            }

            public int Count
            {
                get { return _actions.Count; }
            }

            public void Enqueue(ClientAction action)
            {
                lock (_actions)
                {
                    if (Count < _maxSize)
                        _actions.AddLast(action);
                    else
                    {
                        LinkedListNode<ClientAction> node = _actions.Last;
                        while (node != null)
                        {
                            if (node.Value.Priority < action.Priority)
                            {
                                _actions.Remove(node);
                                _actions.AddLast(action);
                                break;
                            }
                            node = node.Previous; // walk backwards toward the front of the queue
                        }
                    }
                }
            }

            public ClientAction Dequeue()
            {
                ClientAction action = null;
                lock (_actions)
                {
                    action = _actions.First.Value;
                    _actions.RemoveFirst();
                }
                return action;
            }
        }

    Read the article

  • NServiceBus & MSMQ: How To Change the Default Permissions on the Queue?

    - by Amy T
    My team is on our first attempt at using NServiceBus (v2.0), using MSMQ as the backing storage. We're getting stuck on queue permissions. We're using it in a Web Forms application, where the user account the website runs under is not an administrator on the machine. When NServiceBus creates the MSMQ queue, it gives the local administrators group full control, and the local Everyone and Anonymous groups permission to send messages. But then later, as part of initializing the queue, NServiceBus tries to read all of its messages. That's where we run into the permissions error: since the website isn't running as an administrator, it's not allowed to read messages. How are other people dealing with this? Do your applications run as administrators? Or do you create the MSMQ queue in your code first, giving it the permissions you need, so that NServiceBus doesn't have to create it? Or is there a bit of configuration we're missing? Or are we likely using NServiceBus incorrectly in our code, and that's why we're running into this?

    Read the article

  • A Method for Reducing Contention and Overhead in Worker Queues for Multithreaded Java Applications

    - by Janice J. Heiss
    A java.net article, rich in practical resources, by IBM India Labs' Sathiskumar Palaniappan, Kavitha Varadarajan, and Jayashree Viswanathan, explores the challenge of writing code in a way that effectively makes use of the resources of modern multicore processors and multiprocessor servers. As the article states: “Many server applications, such as Web servers, application servers, database servers, file servers, and mail servers, maintain worker queues and thread pools to handle large numbers of short tasks that arrive from remote sources. In general, a ‘worker queue’ holds all the short tasks that need to be executed, and the threads in the thread pool retrieve the tasks from the worker queue and complete the tasks. Since multiple threads act on the worker queue, adding tasks to and deleting tasks from the worker queue needs to be synchronized, which introduces contention in the worker queue.” The article goes on to explain ways that developers can reduce contention by maintaining one queue per thread. It also demonstrates a work-stealing technique that helps in effectively utilizing the CPU in multicore systems. Read the rest of the article here.
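
    For orientation, the per-thread-queue plus work-stealing pattern the article describes is also what the JDK's ForkJoinPool provides out of the box. The following is a rough illustrative sketch, not code from the article: each worker thread in a ForkJoinPool owns its own task deque and steals from other workers when its own deque runs dry, rather than contending on a single shared worker queue.

        import java.util.concurrent.ForkJoinPool;
        import java.util.concurrent.TimeUnit;

        public class WorkStealingDemo {
            public static void main(String[] args) throws InterruptedException {
                // One worker per core; each worker keeps its own deque of tasks
                // and steals from the others when it runs out of local work.
                ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());

                for (int i = 0; i < 1000; i++) {
                    final int taskId = i;
                    pool.submit(() -> {
                        // Stand-in for a short task arriving from a remote source.
                        System.out.println("task " + taskId + " on " + Thread.currentThread().getName());
                    });
                }

                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }
        }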

    Read the article

  • How to measure the time HTTP requests spend sitting in the accept-queue?

    - by David Jones
    I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests. During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued. Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT. I am trying to figure out if that is considered a reasonable backlog. Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain. Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue? The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that. thanks in advance, David Jones

    Read the article

  • JMS - How do message selectors work with multiple queue and topic consumers?

    - by Stephen Harmon
    Say you have a JMS queue, and multiple consumers are watching the queue for messages. You want one of the consumers to get all of a particular type of message, so you decide to employ message selectors. For example, you define a property to go in your JMS message header named "targetConsumer". Your message selector, which you apply to the consumer known as "A", is something like "WHERE targetConsumer = 'CONSUMER_A'". It's clear that consumer A will now just grab messages with the property set like it is in the example. Will the other consumers have awareness of that, though? In other words, will another consumer, unconstrained by a message selector, grab the "CONSUMER_A" messages if it looks at the queue before consumer A? Do I need to apply message selectors like "WHERE targetConsumer <> 'CONSUMER_A'" to the others? I am RTFMing and gathering empirical data now, but was hoping someone might know off the top of their head.
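
    For reference, a minimal sketch of the setup described above: the property name "targetConsumer" and the value "CONSUMER_A" follow the question, while the surrounding connection code is assumed boilerplate rather than anything specific to the asker's system.

        import javax.jms.*;

        public class SelectorExample {
            public static void route(ConnectionFactory factory, Queue queue) throws JMSException {
                Connection connection = factory.createConnection();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Producer side: tag the message for a specific consumer.
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("work item");
                message.setStringProperty("targetConsumer", "CONSUMER_A");
                producer.send(message);

                // Consumer A: only messages matching the selector are delivered to it.
                MessageConsumer consumerA =
                        session.createConsumer(queue, "targetConsumer = 'CONSUMER_A'");

                // An unrestricted consumer on the same queue; whether it also competes
                // for the CONSUMER_A messages is exactly what the question asks.
                MessageConsumer consumerB = session.createConsumer(queue);

                connection.start();
                // consumerA.receive() / consumerB.receive() would be called here.
                connection.close();
            }
        }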

    Read the article

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, AND if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or does anyone know about the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice versa.
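
    As one simple baseline with exactly the immediate-return semantics described above (a sketch for comparison, not a benchmark result), a bounded ArrayBlockingQueue can be used non-blockingly: offer returns false when the queue is full and poll returns null when it is empty.

        import java.util.concurrent.ArrayBlockingQueue;

        public class SpscHandoff {
            // Bounded queue shared by exactly one producer thread and one consumer thread.
            private final ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

            // Producer side: returns false immediately if the queue is full,
            // so the caller can discard the entry as described in the question.
            public boolean tryPublish(String message) {
                return queue.offer(message);
            }

            // Consumer side: returns null immediately if the queue is empty.
            public String tryConsume() {
                return queue.poll();
            }
        }

    Whether this or ConcurrentLinkedQueue gives lower latency for a single producer and single consumer is something that has to be measured on the target hardware.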

    Read the article

  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this Upstart script, which starts a Node.js service. But all of a sudden the service has stopped, and Upstart has failed to restart it. Now that I am trying to start it manually, it fails to recognize my service:

        start: Unknown job: queue

    The script is properly placed in /etc/init, and should have the correct rights:

        -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf

    When I check the config file with init-checkconf, however, it says that it is not able to run as root:

        root@production1:~# init-checkconf /etc/init/queue.conf
        ERROR: cannot run as root

    What causes this error and how do I solve it? Debug info:

        Ubuntu 12.04.3 LTS
        root@production1:~# service --version
        service ver. 0.91-ubuntu1

    Edit: Here's queue.conf:

        description "Echo.it command queue"
        author "Ronni Egeriis Persson <[email protected]>"

        stop on shutdown

        respawn
        respawn 20 5

        exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1

    The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.

    Read the article

  • Connect to IBM MQ with JMS. Specify the channel and queue manager

    - by bhargav
    How do I specify which queue manager to connect to in my system properties? Here is the code:

        Properties properties = new Properties();
        properties.setProperty("java.naming.factory.initial", "com.ibm.mq.jms.context.WMQInitialContextFactory");
        properties.setProperty("java.naming.provider.url", "localhost:1414/SYSTEM.DEF.SVRCONN");
        Context context = new InitialContext(properties);
        factory = (QueueConnectionFactory) context.lookup("TESTOUT");

    context always gets the TEST queue only; I am not able to connect to the TESTOUT queue.
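
    One commonly used alternative, shown here only as a sketch, is to skip JNDI and configure the IBM MQ connection factory directly, which lets you name the queue manager explicitly. This assumes the WebSphere MQ JMS client classes are on the classpath; the host, port, channel, and the queue manager name "QM1" are placeholders, not values from the question.

        import javax.jms.QueueConnection;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import com.ibm.mq.jms.MQQueueConnectionFactory;
        import com.ibm.msg.client.wmq.WMQConstants;

        public class DirectMqConnection {
            public static QueueSession connect() throws Exception {
                MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
                factory.setHostName("localhost");
                factory.setPort(1414);
                factory.setChannel("SYSTEM.DEF.SVRCONN");
                factory.setQueueManager("QM1");                       // the queue manager to target
                factory.setTransportType(WMQConstants.WMQ_CM_CLIENT); // connect over TCP rather than bindings

                QueueConnection connection = factory.createQueueConnection();
                connection.start();
                return connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            }
        }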

    Read the article
