Search Results

Search found 3222 results on 129 pages for 'mercurial queue'.

Page 47/129 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Should I have a Heroku worker dyno to poll an AWS SQS queue?

    - by Luccas
    I'm confused about where I should have a script polling an AWS SQS queue inside a Rails application. If I use a thread inside the web app, it will probably use CPU cycles to listen to this queue forever, affecting performance. And if I reserve a single Heroku worker dyno, it costs $34.50 per month. Does it make sense to pay this price for a single queue poll? Or is a worker not the right fit for it? The script code: queue = AWS::SQS::Queue.new(SQSADDR['my_queue']) queue.poll(:idle_timeout => 20) do |msg| # code here end I need help!! Thanks

    Read the article

  • How to discriminate between two nodes with identical frequencies in a Huffman tree?

    - by Omega
    Still on my quest to compress/decompress files with a Java implementation of Huffman's coding (http://en.wikipedia.org/wiki/Huffman_coding) for a school assignment. From the Wikipedia page, I quote: Create a leaf node for each symbol and add it to the priority queue. While there is more than one node in the queue: Remove the two nodes of highest priority (lowest probability) from the queue Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. Add the new node to the queue. The remaining node is the root node and the tree is complete. Now, emphasis: Remove the two nodes of highest priority (lowest probability) from the queue Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities. So I have to take two nodes with the lowest frequency. What if there are multiple nodes with the same low frequency? How do I discriminate which one to use? The reason I ask this is because Wikipedia has this image: And I wanted to see if my Huffman's tree was the same. I created a file with the following content: aaaaeeee nnttmmiihhssfffouxprl And this was the result: Doesn't look so bad. But there clearly are some differences when multiple nodes have the same frequency. My questions are the following: What is Wikipedia's image doing to discriminate the nodes with the same frequency? Is my tree wrong? (Is Wikipedia's image method the one and only answer?) I guess there is one specific and strict way to do this, because for our school assignment, files that have been compressed by my program should be able to be decompressed by other classmate's programs - so there must be a "standard" or "unique" way to do it. But I'm a bit lost with that. My code is rather straightforward. It literally just follows Wikipedia's listed steps. The way my code extracts the two nodes with the lowest frequency from the queue is to iterate all nodes and if the current node has a lower frequency than any of the two "smallest" known nodes so far, then it replaces the highest one. Just like that.
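
    A note on the tie-break question above: the Huffman procedure does not prescribe one, so two correct implementations can emit different (equally optimal) trees; a compressor and decompressor only have to agree on a rule. Below is a minimal Java sketch, not the rule used by Wikipedia's figure, in which every node carries an insertion counter and frequency ties are broken on it, making tree construction deterministic (the Node class and field names are illustrative assumptions):

        // Sketch: deterministic Huffman construction by breaking frequency ties
        // on a per-node creation index. Any agreed-upon rule works; this is one option.
        import java.util.Comparator;
        import java.util.PriorityQueue;

        class Node {
            final int frequency;     // symbol count, or summed count of the children
            final long order;        // creation index, used only to break ties
            final Node left, right;  // null for leaves

            Node(int frequency, long order, Node left, Node right) {
                this.frequency = frequency;
                this.order = order;
                this.left = left;
                this.right = right;
            }
        }

        class HuffmanBuilder {
            private long counter = 0;

            PriorityQueue<Node> newQueue() {
                return new PriorityQueue<>(
                        Comparator.<Node>comparingInt(n -> n.frequency)
                                  .thenComparingLong(n -> n.order));
            }

            Node leaf(int frequency) {
                return new Node(frequency, counter++, null, null);
            }

            Node build(PriorityQueue<Node> queue) {
                while (queue.size() > 1) {
                    Node a = queue.poll();   // lowest frequency, ties resolved by order
                    Node b = queue.poll();
                    queue.add(new Node(a.frequency + b.frequency, counter++, a, b));
                }
                return queue.poll();         // root of the finished tree
            }
        }

    As long as every program in the class agrees on the same tie-break rule (and the same bit-assignment convention for left and right children), the compressed files stay interchangeable.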

    Read the article

  • Radix sort in java help

    - by endif
    Hi, I need some help to improve my code. I am trying to use radix sort to sort an array of 10 numbers (for example) in increasing order. When I run the program with an array of size 10 and put in 10 random int numbers like 70 309 450 279 799 192 586 609 54 657, I get this output: 450 309 192 279 54 192 586 657 54 609 I don't see where my error is in the code. class IntQueue { static class Hlekkur { int tala; Hlekkur naest; } Hlekkur fyrsti; Hlekkur sidasti; int n; public IntQueue() { fyrsti = sidasti = null; } // First number in queue. public int first() { return fyrsti.tala; } public int get() { int res = fyrsti.tala; n--; if( fyrsti == sidasti ) fyrsti = sidasti = null; else fyrsti = fyrsti.naest; return res; } public void put( int i ) { Hlekkur nyr = new Hlekkur(); n++; nyr.tala = i; if( sidasti==null ) fyrsti = sidasti = nyr; else { sidasti.naest = nyr; sidasti = nyr; } } public int count() { return n; } public static void radixSort(int [] q, int n, int d){ IntQueue [] queue = new IntQueue[n]; for (int k = 0; k < n; k++){ queue[k] = new IntQueue(); } for (int i = d-1; i >=0; i--){ for (int j = 0; j < n; j++){ while(queue[j].count() != 0) { queue[j].get(); } } for (int index = 0; index < n; index++){ // trying to look at one of three digits to sort by. int v=1; int digit = (q[index]/v)%10; v*=10; queue[digit].put(q[index]); } for (int p = 0; p < n; p++){ while(queue[p].count() != 0) { q[p] = (queue[p].get()); } } } } } I am also wondering: can I let the function take one queue as an argument so that on return that queue is in increasing order? If so, how? Please help. Sorry if my English is not so good. Please let me know if you need more details. import java.util.Random; public class RadTest extends IntQueue { public static void main(String[] args) { int [] q = new int[10]; Random r = new Random(); int t = 0; int size = 10; while(t != size) { q[t] = (r.nextInt(1000)); t++; } for(int i = 0; i!= size; i++) { System.out.println(q[i]); } System.out.println("Radad: \n"); radixSort(q,size,3); for(int i = 0; i!= size; i++) { System.out.println(q[i]); } } } Hope this is what you were talking about...
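
    For comparison, here is a minimal least-significant-digit radix sort sketch in plain Java (using java.util.ArrayDeque buckets instead of the custom IntQueue; all names are illustrative). The key detail is that the divisor selecting the current digit changes once per pass, outside the per-element loop:

        import java.util.ArrayDeque;

        class RadixSortSketch {
            // LSD radix sort for non-negative ints; digits is the number of decimal digits to process.
            static void radixSort(int[] a, int digits) {
                @SuppressWarnings("unchecked")
                ArrayDeque<Integer>[] buckets = new ArrayDeque[10];    // one bucket per digit 0..9
                for (int d = 0; d < 10; d++) buckets[d] = new ArrayDeque<>();

                int divisor = 1;
                for (int pass = 0; pass < digits; pass++) {
                    for (int value : a) {
                        buckets[(value / divisor) % 10].add(value);    // distribute by the current digit
                    }
                    int i = 0;
                    for (ArrayDeque<Integer> bucket : buckets) {       // collect in digit order, preserving stability
                        while (!bucket.isEmpty()) a[i++] = bucket.poll();
                    }
                    divisor *= 10;                                     // advance to the next digit for the next pass
                }
            }
        }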

    Read the article

  • Java SAX ContentHandler to create new objects for every root node

    - by behrk2
    Hello everyone, I am using SAX to parse some XML. Let's say I have the following XML document: <queue> <element A> 1 </element A> <element B> 2 </element B> </queue> <queue> <element A> 1 </element A> <element B> 2 </element B> </queue> <queue> <element A> 1 </element A> <element B> 2 </element B> </queue> And I also have an Elements class: public static class Elements { String element; public Elements() { } public void setElement(String element){ this.element = element; } public String getElement(){ return element; } } I am looking to write a ContentHandler that follows this algorithm: Vector v; for every <queue> root node { Element element = new Element(); for every <element> child node{ element.setElement(value of current element); } v.addElement(element); } So, I want to create a bunch of Element objects and add each to a vector, with each Element object containing its own String values (from the child nodes found within the root nodes). I know how to parse out the elements and all of those details, but can someone show me a sample of how to structure my ContentHandler to allow for the above algorithm? Thanks!
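
    One possible shape for such a handler, sketched with org.xml.sax.helpers.DefaultHandler (element names follow the question; the startsWith check and other details are illustrative assumptions, and this presumes the <queue> blocks are wrapped in a single real root element so the document is well-formed XML):

        import java.util.Vector;
        import org.xml.sax.Attributes;
        import org.xml.sax.helpers.DefaultHandler;

        // Sketch: build one Elements object per <queue>, filling it from each <element ...> child.
        class QueueHandler extends DefaultHandler {
            private final Vector<Elements> result = new Vector<>();
            private Elements current;          // Elements object for the currently open <queue>
            private StringBuilder text;        // accumulates character data inside an <element>

            @Override
            public void startElement(String uri, String localName, String qName, Attributes attrs) {
                if ("queue".equals(qName)) {
                    current = new Elements();
                } else if (qName.startsWith("element")) {
                    text = new StringBuilder();
                }
            }

            @Override
            public void characters(char[] ch, int start, int length) {
                if (text != null) text.append(ch, start, length);
            }

            @Override
            public void endElement(String uri, String localName, String qName) {
                if (qName.startsWith("element") && current != null && text != null) {
                    current.setElement(text.toString().trim());
                    text = null;
                } else if ("queue".equals(qName) && current != null) {
                    result.addElement(current);
                    current = null;
                }
            }

            public Vector<Elements> getResult() {
                return result;
            }
        }

    The handler is passed to a SAXParser's parse call, and getResult() returns the vector once parsing finishes.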

    Read the article

  • CMBufferQueueCreate fail, required parameters?

    - by Agustin
    Reading the documentation for the iOS SDK's CMBufferQueueCreate, it says that getDuration and version are required and all the other callbacks can be NULL. But running the following code: CFAllocatorRef allocator; CMBufferCallbacks *callbacks; callbacks = malloc(sizeof(CMBufferCallbacks)); callbacks->version = 0; callbacks->getDuration = timeCallback; callbacks->refcon = NULL; callbacks->getDecodeTimeStamp = NULL; callbacks->getPresentationTimeStamp = NULL; callbacks->isDataReady = NULL; callbacks->compare = NULL; callbacks->dataBecameReadyNotification = NULL; CMItemCount capacity = 4; OSStatus s = CMBufferQueueCreate(allocator, capacity, callbacks, queue); NSLog(@"QUEUE: %x", queue); NSLog(@"STATUS: %i", s); with timeCallback: CMTime timeCallback(CMBufferRef buf, void *refcon){ return CMTimeMake(1, 1); } and queue is: CMBufferQueueRef* queue; queue creation fails (queue = 0) and returns a status of: kCMBufferQueueError_RequiredParameterMissing = -12761. The callbacks variable is correctly initialized, at least the debugger says so. Has anybody played around with CMBufferQueue? Google doesn't turn up anything about it! Thanks

    Read the article

  • Emptying the datastore in GAE

    - by colwilson
    I know what you're thinking, 'O not that again!', but here we are since Google have not yet provided a simpler method. I have been using a queue based solution which worked fine: import datetime from models import * DELETABLE_MODELS = [Alpha, Beta, AlphaBeta] def initiate_purge(): for e in config.DELETABLE_MODELS: deferred.defer(delete_entities, e, 'purging', _queue = 'purging') class NotEmptyException(Exception): pass def delete_entities(e, queue): try: q = e.all(keys_only=True) db.delete(q.fetch(200)) ct = q.count(1) if ct > 0: raise NotEmptyException('there are still entities to be deleted') else: logging.info('processing %s completed' % queue) except Exception, err: deferred.defer(delete_entities, e, then, queue, _queue = queue) logging.info('processing %s deferred: %s' % (queue, err)) All this does is queue a request to delete some data (once for each class) and then if the queued process either fails or knows there is still some stuff to delete, it re-queues itself. This beats the heck out of hitting the refresh on a browser for 10 minutes. However, I'm having trouble deleting AlphaBeta entities, there are always a few left at the end. I think because it contains Reference Properties: class AlphaBeta(db.Model): alpha = db.ReferenceProperty(Alpha, required=True, collection_name='betas') beta = db.ReferenceProperty(Beta, required=True, collection_name='alphas') I have tried deleting the indexes relating to these entity types, but that did not make any difference. Any advice would be appreciated please.

    Read the article

  • iTunes App Store: Does a major version upgrade = longer approval queue time?

    - by erlingormar
    I'm wondering if anyone has insight into this... when releasing an update of an iPhone application, should I expect the approval process to take longer if I submit something that's declared as a major version update (as compared to a minor version)? Last time around (about the time the big Facebook-update was released) our wait time for a minor version review was 21 days (16 working days).

    Read the article

  • What does an empty run queue entry point to?

    - by EpsilonVector
    I'm trying to figure out the technicalities of scheduling in Linux. What I can't figure out is what happens with those entries in the run_queue where there are no running processes. In the run_queue we have a bitmap, a counter, and the array of lists themselves. For a list that is empty because there are no running tasks with its priority, what do the next and prev pointers point to?

    Read the article

  • Perl daemon script for message queue hanging for 20 seconds after each process. Why?

    - by Mike Diena
    I have a daemon script written in Perl that checks a database table for rows, pulls them in one by one, sends the contents via HTTP POST to another service, then logs the result and repeats (only a single child). When there are rows present, the first one is posted and logged immediately, but every subsequent one is delayed for around 20 seconds. There are no sleep() calls running, and I can't find any other obvious delays. Any ideas?

    Read the article

  • How can I delete a specific element in a priority queue?

    - by Yuan
    import java.util.*; public class test4 { public static void main(String[] args){ PriorityQueue[] P = new PriorityQueue[10]; P[1] = new PriorityQueue<ClassEntry>(); P[1].add(new ClassEntry(1.2,1)); P[1].add(new ClassEntry(1.5,2)); P[1].add(new ClassEntry(1.2,3)); P[1].add(new ClassEntry(10,4)); P[1].remove(new ClassEntry(10,4));//I can't delete this object??? System.out.println(P[1].size()); ClassEntry ce = (ClassEntry) P[1].peek(); System.out.println(P[1].size()); System.out.println(ce.sim+"\t"+ce.index); } } Why can't I delete (10,4)? Can somebody teach me how to implement this? Thanks!
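
    A likely explanation, for reference: PriorityQueue.remove(Object) locates the element with equals(), so a ClassEntry that does not override equals() only matches the very same instance, and a freshly constructed new ClassEntry(10,4) never matches anything in the queue. A minimal sketch of a value-based ClassEntry (field names taken from the code above; the sim/index types and the compareTo ordering are assumptions):

        import java.util.Objects;

        // Sketch only: remove(Object) walks the queue and compares with equals(),
        // so ClassEntry needs value-based equality for remove(new ClassEntry(10, 4)) to match.
        class ClassEntry implements Comparable<ClassEntry> {
            double sim;
            int index;

            ClassEntry(double sim, int index) {
                this.sim = sim;
                this.index = index;
            }

            @Override
            public boolean equals(Object o) {
                if (this == o) return true;
                if (!(o instanceof ClassEntry)) return false;
                ClassEntry other = (ClassEntry) o;
                return Double.compare(sim, other.sim) == 0 && index == other.index;
            }

            @Override
            public int hashCode() {
                return Objects.hash(sim, index);
            }

            @Override
            public int compareTo(ClassEntry other) {
                return Double.compare(sim, other.sim);   // order the queue by similarity
            }
        }

    Alternatively, keep a reference to the instance that was added and pass that same reference to remove().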

    Read the article

  • Best Practical RT, sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to automatically go directly into whichever queue/ticket they are related to or create a new one if none exist and the right queue e-mail setup in the web interface is used. I will have too many queues to have two line items within mailgate per queue. A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on what appeared to be an answer to that person's query. I'm able to send and receive e-mail (via PostFix) to the default rt user and this user successfully accepts all e-mail for the relative domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new. Here's and example of my ./procmail.log: procmail: [23048] Mon Aug 23 14:26:01 2010 procmail: Assigning "MAILDOMAIN=rt.mydomain.com " procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate " procmail: Assigning "RT_URL=http://rt.mydomain.com/ " procmail: Assigning "LOGABSTRACT=all " procmail: Skipped " " procmail: Skipped " " procmail: Assigning "LASTFOLDER={ " procmail: Opening "{ " procmail: Acquiring kernel-lock procmail: Notified comsat: "rt@18337:./{ " From [email protected] Mon Aug 23 14:26:01 2010 Subject: RE: [RT.mydomain.com #1] Test Ticket Folder: { 1616 Does the notified comsat portion mean that it notified RT? The contents of my ./procmailrc: #Preliminaries SHELL=/bin/sh #Use the Bourne shell (check your path!) #MAILDIR=${HOME} #First check what your mail directory is! MAILDIR="/var/mail/rt/" LOGFILE="home/rt//procmail.log" LOG="--- Logging ${LOGFILE} for ${LOGNAME}, " VERBOSE=yes MAILDOMAIN="rt.mydomain.com" RT_MAILGATE="/opt/rt3/bin/rt-mailgate" #RT_MAILGATE="/usr/local/bin/rt-mailgate" RT_URL="http://rt.mydomain.com/" LOGABSTRACT=all :0 { # the following line extracts the recipient from Received-headers. # Simply using the To: does not work, as tickets are often created # by sending a CC/BCC to RT TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'` QUEUE=`echo $TO| $HOME/get_queue.pl` ACTION=`echo $TO| $HOME/get_action.pl` :0 h b w |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL } I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested. Any help and/or guidance you can give would be greatly appreciated. Nicôle

    Read the article

  • cd command not working anymore

    - by dystroy
    I just did two things: installed scm_breeze and Mercurial: git clone git://github.com/ndbroadbent/scm_breeze.git ~/.scm_breeze ~/.scm_breeze/install.sh sudo apt-get install mercurial And now my cd command seems to be gone: dys@dys-tour:~/prog> cd ~ bash: /home/dys : is a folder dys@dys-tour:~/prog> cd .. .. : command not found The terminals I opened before installing scm_breeze and Mercurial are fine. The terminals I open now have the problem. I uninstalled scm_breeze, with no result. What can I do to diagnose the problem and fix it?

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
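
    The reader/workers/writer topology described above is not specific to Python. Purely to illustrate the wiring (not the multiprocessing API choices asked about), here is a minimal sketch of the same shape in Java using BlockingQueue and a fixed thread pool, with poison-pill messages as the termination signal (all names and the termination scheme are illustrative assumptions):

        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;

        // Sketch: reader -> input queue -> N summing workers -> output queue -> writer.
        class PipelineSketch {
            static final List<Integer> POISON = List.of();   // sentinel telling a worker to stop

            public static void main(String[] args) throws Exception {
                BlockingQueue<List<Integer>> input = new LinkedBlockingQueue<>();
                BlockingQueue<Integer> output = new LinkedBlockingQueue<>();
                int workers = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(workers);

                // Stage 2: each worker sums rows independently until it sees the sentinel.
                for (int w = 0; w < workers; w++) {
                    pool.submit(() -> {
                        try {
                            for (List<Integer> row = input.take(); row != POISON; row = input.take()) {
                                output.put(row.stream().mapToInt(Integer::intValue).sum());
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    });
                }

                // Stage 1: the "reader" fills the input queue, then adds one sentinel per worker.
                List<List<Integer>> rows = List.of(List.of(1, 2, 3), List.of(4, 5, 6), List.of(7, 8, 9));
                for (List<Integer> row : rows) input.put(row);
                for (int w = 0; w < workers; w++) input.put(POISON);

                // Stage 3: the "writer" drains exactly one result per submitted row.
                for (int i = 0; i < rows.size(); i++) {
                    System.out.println(output.take());
                }
                pool.shutdown();
            }
        }

    In the multiprocessing version, multiprocessing.Queue plays the role of BlockingQueue and worker processes replace the threads.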

    Read the article

  • TcpListener is queuing connections faster than I can clear them

    - by Matthew Brindley
    As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it will dequeue one item from the queue. If we load test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can clear it, leading (eventually) to timeouts from the client because it didn't get a response because its connection was still in the queue. However, the server doesn't appear to be under much pressure, our app isn't consuming much CPU time and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now. We're calling BeginAcceptTcpListener and then immediately handing over to a ThreadPool thread to actually do the work, then calling BeginAcceptTcpClient again. The work involved doesn't seem to put any pressure on the machine, it's basically just a 3 second sleep followed by a dictionary lookup and then a 100 byte write to the TcpClient's stream. Here's the TcpListener code we're using: // Thread signal. private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false); public void DoBeginAcceptTcpClient(TcpListener listener) { // Set the event to nonsignaled state. tcpClientConnected.Reset(); listener.BeginAcceptTcpClient( new AsyncCallback(DoAcceptTcpClientCallback), listener); // Wait for signal tcpClientConnected.WaitOne(); } public void DoAcceptTcpClientCallback(IAsyncResult ar) { // Get the listener that handles the client request, and the TcpClient TcpListener listener = (TcpListener)ar.AsyncState; TcpClient client = listener.EndAcceptTcpClient(ar); if (inProduction) ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate)); // With SSL else ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client)); // Without SSL // Signal the calling thread to continue. tcpClientConnected.Set(); } public void Start() { currentHandledRequests = 0; tcpListener = new TcpListener(IPAddress.Any, 10000); try { tcpListener.Start(); while (true) DoBeginAcceptTcpClient(tcpListener); } catch (SocketException) { // The TcpListener is shutting down, exit gracefully CheckBuffer(); return; } } I'm assuming the answer will be related to using Sockets instead of TcpListener, or at least using TcpListener.AcceptSocket, but I wondered how we'd go about doing that? One idea we had was to call AcceptTcpClient and immediately Enqueue the TcpClient into one of multiple Queue<TcpClient> objects. That way, we could poll those queues on separate threads (one queue per thread), without running into monitors that might block the thread while waiting for other Dequeue operations. Each queue thread could then use ThreadPool.QueueUserWorkItem to have the work done in a ThreadPool thread and then move onto dequeuing the next TcpClient in its queue. Would you recommend this approach, or is our problem that we're using TcpListener and no amount of rapid dequeueing is going to fix that?

    Read the article

  • How to check which point is the cause of a problem with MQ?

    - by Fuangwith S.
    I use MQ to send/receive messages between my system and another system. Sometimes I find that there is no response message in the response queue, even though the other system has already put a response message into the response queue (checked from its log). So, how do I check which point is the cause of the problem, and how do I prove that the message never arrived in my response queue? In addition, when a message arrives in my queue it is written to a log file.

    Read the article

  • How can I connect to MSMQ over a workgroup?

    - by cyclotis04
    I'm writing a simple console client-server app using MSMQ. I'm attempting to run it over the workgroup we have set up. They run just fine when run on the same computer, but I can't get them to connect over the network. I've tried adding Direct=, OS:, and a bunch of combinations of other prefaces, but I'm running out of ideas, and obviously don't know the right way to do it. My queue's don't have GUIDs, which is also slightly confusing. Whenever I attempt to connect to a remote machine, I get an invalid queue name message. What do I have to do to make this work? Server: class Program { static string _queue = @"\Private$\qim"; static MessageQueue _mq; static readonly object _mqLock = new object(); static void Main(string[] args) { _queue = Dns.GetHostName() + _queue; lock (_mqLock) { if (!MessageQueue.Exists(_queue)) _mq = MessageQueue.Create(_queue); else _mq = new MessageQueue(_queue); } Console.Write("Starting server at {0}:\n\n", _mq.Path); _mq.Formatter = new BinaryMessageFormatter(); _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); while (Console.ReadKey().Key != ConsoleKey.Escape) { } _mq.Close(); } static void OnReceive(IAsyncResult result) { Message msg; lock (_mqLock) { try { msg = _mq.EndReceive(result); Console.Write(msg.Body); } catch (Exception ex) { Console.Write("\n" + ex.Message + "\n"); } } _mq.BeginReceive(new TimeSpan(0, 1, 0), new object(), OnReceive); } } Client: class Program { static MessageQueue _mq; static void Main(string[] args) { string queue; while (_mq == null) { Console.Write("Enter the queue name:\n"); queue = Console.ReadLine(); //queue += @"\Private$\qim"; try { if (MessageQueue.Exists(queue)) _mq = new MessageQueue(queue); } catch (Exception ex) { Console.Write("\n" + ex.Message + "\n"); _mq = null; } } Console.Write("Connected. Begin typing.\n\n"); _mq.Formatter = new BinaryMessageFormatter(); ConsoleKeyInfo key = new ConsoleKeyInfo(); while (key.Key != ConsoleKey.Escape) { key = Console.ReadKey(); _mq.Send(key.KeyChar.ToString()); } } }

    Read the article

  • Seg Fault with malloc'd pointers

    - by anon
    I'm making a thread class to use as a wrapper for pthreads. I have a Queue class to use as a queue, but I'm having trouble with it. It seems to allocate and fill the queue struct fine, but when I try to get the data from it, it segfaults. http://pastebin.com/Bquqzxt0 (the printf's are for debugging; both throw seg faults) edit: the queue is stored in a dynamically allocated "struct queueset" array as a pointer to the data and an index for the data

    Read the article

  • Audio decoding delay when changing the audio language

    - by mahendiran.b
    My GStreamer pipeline looks like this. Approach 1: souphttpsrc -> tsdemux, with the audio branch tsdemux -> input-selector -> queue -> audio parser -> audio sink, and the video branch tsdemux -> queue -> video parser -> video sink. In Approach 1 there is a delay in audio decoding when I toggle between the various audio languages. Approach 2: souphttpsrc -> tsdemux -> multiqueue, with the audio branch multiqueue -> input-selector -> queue -> audio parser -> audio sink, and the video branch multiqueue -> queue -> video parser -> video sink. No such delay is observed in Approach 2. Can anyone please explain the reason behind this? What is special about multiqueue here?

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course. There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages. Splitter A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or the trading partner that the sub message is to be sent to. Aggregator An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application.
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
    LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed, the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.

    Read the article

  • Should vendors have an express queue for people who have a clue? What passes for support today?

    - by Greg Low
    It's good to see some airports that have queues for people that travel frequently and know what they're doing. But I'm left thinking that IT vendors need to have something similar. Bigpond (part of Telstra) in Australia have recently introduced new 42MB/sec modems on their 3G network. It's actually just a pair of 21MB/sec modems linked together but the idea is cute. Around most of the country, they work pretty well. In the middle of the CBD in Melbourne however, at present they just don't work. Having...(read more)

    Read the article

  • Why is Postfix trying to connect to other machines' SMTP port 25?

    - by TryTryAgain
    Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.101]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.101]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:25 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3085]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3086]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Jul 5 11:09:30 relay postfix/smtp[3086]: connect to ab.xyz.com[10.41.0.102]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3087]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3084]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3088]: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out Jul 5 11:09:55 relay postfix/smtp[3087]: connect to ab.xyz.com[10.41.0.135]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3084]: connect to ab.xyz.com[10.41.0.110]:25: Connection refused Jul 5 11:09:55 relay postfix/smtp[3088]: connect to ab.xyz.com[10.41.0.247]:25: Connection refused Is this a DNS thing, doubtful as I've changed from our local DNS to Google's..still Postfix will occasionally try and connect to ab.xyz.com from a variety of addresses that may or may not have port 25 open and act as mail servers to begin with. Why is Postfix attempting to connect to other machines as seen in the log? Mail is being sent properly, other than that, it appears all is good. Occasionally I'll also see: relay postfix/error[3090]: 3F1AB42132: to=, relay=none, delay=32754, delays=32724/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.102]:25: Connection refused) I have Postfix setup with very little restrictions: mynetworks = 127.0.0.0/8, 10.0.0.0/8 only. Like I said it appears all mail is getting passed through, but I hate seeing errors and it is confusing me as to why it would be attempting to connect to other machines as seen in the log. 
Some Output of cat /var/log/mail.log|grep 3F1AB42132 Jul 5 02:04:01 relay postfix/smtpd[1653]: 3F1AB42132: client=unknown[10.41.0.109] Jul 5 02:04:01 relay postfix/cleanup[1655]: 3F1AB42132: message-id= Jul 5 02:04:01 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:04:31 relay postfix/smtp[1634]: 3F1AB42132: to=, relay=none, delay=30, delays=0.02/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.110]:25: Connection refused) Jul 5 02:13:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:14:28 relay postfix/smtp[1681]: 3F1AB42132: to=, relay=none, delay=628, delays=598/0.01/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.247]:25: Connection refused) Jul 5 02:28:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:29:28 relay postfix/smtp[1684]: 3F1AB42132: to=, relay=none, delay=1527, delays=1497/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.135]:25: Connection refused) Jul 5 02:58:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 02:59:28 relay postfix/smtp[1739]: 3F1AB42132: to=, relay=none, delay=3327, delays=3297/0/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.40.40.130]:25: Connection timed out) Jul 5 03:58:58 relay postfix/qmgr[1588]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 03:59:28 relay postfix/smtp[1839]: 3F1AB42132: to=, relay=none, delay=6928, delays=6897/0.03/30/0, dsn=4.4.1, status=deferred (connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 04:11:03 relay postfix/qmgr[2039]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 04:11:33 relay postfix/error[2093]: 3F1AB42132: to=, relay=none, delay=7653, delays=7622/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 05:21:03 relay postfix/qmgr[2039]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 05:21:33 relay postfix/error[2217]: 3F1AB42132: to=, relay=none, delay=11853, delays=11822/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 06:29:25 relay postfix/qmgr[2420]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 06:29:55 relay postfix/error[2428]: 3F1AB42132: to=, relay=none, delay=15954, delays=15924/30/0/0.08, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.41.0.101]:25: Connection refused) Jul 5 07:39:24 relay postfix/qmgr[2885]: 3F1AB42132: from=, size=3404, nrcpt=1 (queue active) Jul 5 07:39:54 relay postfix/error[2936]: 3F1AB42132: to=, relay=none, delay=20153, delays=20123/30/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to ab.xyz.com[10.40.40.130]:25: Connection timed out)

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >