Search Results

Search found 3222 results on 129 pages for 'mercurial queue'.

Page 41/129 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • What happens to my PriorityQueue if my Comparator throws an exception while it's busy bubbling up or down?

    - by nieldw
    Hi, I'm trying to order pairs of integers in ascending order, where a pair is considered less than another pair if both its entries are strictly less than those of the other pair, and greater than the other pair if both its entries are strictly greater. All other cases are considered incomparable. The way I want to solve this is by defining a Comparator that implements the above but throws an exception for incomparable cases, and providing that to a PriorityQueue. Of course, while inserting a pair the priority queue performs several comparisons as it bubbles the new entry up to its correct position in the heap, and many of these will be comparable. But it may happen during the bubbling process that a pair is encountered with which the new pair is incomparable, and an exception will be thrown. If this happens, what will be the state of the PriorityQueue? Will the pair I was trying to insert sit in the heap at the last position it was in before the exception was thrown? If I use the PriorityQueue's remove(Object o) method, will the PriorityQueue be restored to a consistent state? Thanks
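    One way to see the failure mode being asked about is a small, self-contained sketch (the int[] pair representation and the exception type below are illustrative choices, not from the original post): the comparator throws for incomparable pairs, and offer() propagates the exception mid-sift, which is exactly when the heap's internal state becomes suspect.

    ```java
    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class PartialOrderExample {
        // Comparator over int[]{x, y} pairs implementing the partial order from
        // the question; incomparable pairs make it throw instead of guessing.
        static final Comparator<int[]> STRICT = (p, q) -> {
            if (p[0] < q[0] && p[1] < q[1]) return -1;
            if (p[0] > q[0] && p[1] > q[1]) return 1;
            if (p[0] == q[0] && p[1] == q[1]) return 0;
            throw new IllegalArgumentException("incomparable pair");
        };

        public static void main(String[] args) {
            PriorityQueue<int[]> pq = new PriorityQueue<>(STRICT);
            pq.offer(new int[]{1, 1});
            pq.offer(new int[]{2, 2});
            try {
                // {3, 0} is incomparable with {1, 1}: the comparator throws while
                // the new element is being sifted up, and offer() propagates it.
                pq.offer(new int[]{3, 0});
            } catch (IllegalArgumentException e) {
                // The heap's internal array may now hold the new element at an
                // arbitrary slot, so the queue is not guaranteed to be consistent;
                // rebuilding the queue is the safest recovery.
                System.out.println("offer failed: " + e.getMessage());
            }
        }
    }
    ```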

    Read the article

  • How to work with threading and ConcurrentQueue&lt;T&gt;?

    - by dboarman
    I am trying to figure out what the best way of working with a queue will be. I have a process that returns a DataTable. Each DataTable, in turn, is merged with the previous DataTable. There is one problem: too many records to hold until the final BulkCopy (OutOfMemory). So, I have determined that I should process each incoming DataTable immediately. I'm thinking about ConcurrentQueue<T>... but I don't see how the WriteQueuedData() method would know to dequeue a table and write it to the database. For instance: public class TableTransporter { private ConcurrentQueue<DataTable> tableQueue = new ConcurrentQueue<DataTable>(); public TableTransporter() { tableQueue.OnItemQueued += new EventHandler(WriteQueuedData); // no events available } public void ExtractData() { DataTable table; // perform data extraction tableQueue.Enqueue(table); } private void WriteQueuedData(object sender, EventArgs e) { BulkCopy(e.Table); } } My first question is, aside from the fact that I don't actually have any events to subscribe to, if I call ExtractData() asynchronously, will this be all that I need? Second, is there something I'm missing about the way ConcurrentQueue<T> functions, such that it needs some form of trigger to work asynchronously with the queued objects?

    Read the article

  • Trying to fadein divs in a sequence, over time, using JQuery

    - by user346602
    Hi, I'm trying to figure out how to make 4 images fade in sequentially when the page loads. The following is my (amateurish) code: Here is the HTML: <div id="outercorners"> <img id="corner1" src="images/corner1.gif" width="6" height="6" alt=""/> <img id="corner2" src="images/corner2.gif" width="6" height="6" alt=""/> <img id="corner3" src="images/corner3.gif" width="6" height="6" alt=""/> <img id="corner4" src="images/corner4.gif" width="6" height="6" alt=""/> </div><!-- end #outercorners--> Here is the JQuery: $(document).ready(function() { $("#corner1").fadeIn("2000", function(){ $("#corner3").fadeIn("4000", function(){ $("#corner2").fadeIn("6000", function(){ $("#corner4").fadeIn("8000", function(){ }); }); }); }); Here is the css: #outercorners { position: fixed; top:186px; left:186px; width:558px; height:372px; } #corner1 { position: fixed; top:186px; left:186px; display: none; } #corner2 { position: fixed; top:186px; left:744px; display: none; } #corner3 { position: fixed; top:558px; left:744px; display: none; } #corner4 { position: fixed; top:558px; left:186px; display: none; } They seem to just wink at me, rather than fade in in the order I've ascribed to them. Should I be using the queue() function? And, if so, how would I implement it in this case? Thank you for any assistance.

    Read the article

  • How do I view the job queue in lftp after it has moved to a background process?

    - by drpfenderson
    I've just started using lftp for transferring files remotely on my Raspberry Pi running Debian. I know how to transfer files, and how to use queue and jobs to add and view transferring files. However, I'm not actually sure how to view these transfers once lftp moves to the background. The lftp man page mentions that lftp moves to the background, but when I open a new instance of the program from the shell and type jobs, the queue is empty. However, I can clearly see using my file manager that the transfers are still happening, as the files are there and growing in size. I'm guessing that when I reopen lftp, it's just opening a new instance that isn't connected to the nohup-mode lftp that has the active queue. I've tried searching in various places, but no one else seems to have this particular issue. So, I guess what I'm asking is twofold: Is there a way to easily attach to the background lftp process to view the current jobs list? If not, is there a way to view this at all?

    Read the article

  • How to monitor an existing queue from WebSphere MQ?

    - by Jorge
    I have a .NET application that needs to monitor a queue in WebSphere MQ. I need to react to each message without impacting the current process. The client application can't explicitly send me the same message. Can I read a message without removing it from the queue? Can I be notified for each message? Can I configure MQ to duplicate the current queue? Is there another solution?
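    For the "read without removing" part, the messaging answer is browsing rather than consuming; the same browse concept exists in MQ's native APIs used from .NET. A minimal Java/JMS sketch of the idea, assuming the MQ connection factory and queue are already bound in JNDI (the lookup names are placeholders):

    ```java
    import java.util.Enumeration;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.Queue;
    import javax.jms.QueueBrowser;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class QueueBrowserSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // JNDI names are placeholders for the MQ-administered objects.
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MqConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/MonitoredQueue");

            Connection conn = cf.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                conn.start();

                // Browsing inspects messages without consuming them, so the
                // application that normally reads the queue is unaffected.
                QueueBrowser browser = session.createBrowser(queue);
                Enumeration<?> messages = browser.getEnumeration();
                while (messages.hasMoreElements()) {
                    Message m = (Message) messages.nextElement();
                    System.out.println("browsed: " + m.getJMSMessageID());
                }
                browser.close();
            } finally {
                conn.close();
            }
        }
    }
    ```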

    Read the article

  • Which JMS broker implementations allow resending messages saved in a dead message queue?

    - by marabol
    I wonder if there is a JMS broker that allows administrators to resend (via a GUI or any tool) messages saved in a dead message queue or dead letter queue, after the causing problem has been solved (e.g. the database was down, not enough space...). WebSphere provides a feature to resend messages saved in the dead letter queue. Glassfish 2.1.1 using Sun Java System Message Queue 4.4 has no feature to do this, I think. What are the options on other JMS brokers? Or is the best way not to use the DMQ/DLQ feature if you depend on a message? Thanks a lot
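    If a broker offers no resend tooling, one generic fallback is a small JMS client that drains the DLQ and republishes to the original destination. A sketch of that idea, assuming the factory and queues can be looked up via JNDI (the names are placeholders):

    ```java
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class DlqResender {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            // JNDI names are placeholders for your broker's DLQ and target queue.
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue dlq = (Queue) ctx.lookup("jms/DeadMessageQueue");
            Queue target = (Queue) ctx.lookup("jms/OriginalQueue");

            Connection conn = cf.createConnection();
            try {
                // A transacted session, so a message only leaves the DLQ once it
                // has been successfully re-sent to the original queue.
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(dlq);
                MessageProducer producer = session.createProducer(target);
                conn.start();

                Message m;
                while ((m = consumer.receive(1000)) != null) {
                    producer.send(m);
                    session.commit();
                }
            } finally {
                conn.close();
            }
        }
    }
    ```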

    Read the article

  • What happens on a JMS queue when onMessage() throws a JMSException?

    - by user311121
    Hi, I'm using Spring 2.5 with my custom class that implements MessageListener. If a JmsException is thrown in my onMessage() method, what happens to the state of the queue? Is the message considered "delivered" by the queue the moment onMessage is called? Or does the JmsException trigger some kind of rollback so that the message is put back on the queue? Thanks in advance!
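    Whether the message goes back on the queue depends on the session's acknowledgement/transaction settings rather than on the exception type alone. As a hedged sketch: with a transacted Spring listener container, an exception thrown from onMessage() rolls the session back and the broker redelivers the message (the queue name below is a placeholder).

    ```java
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import org.springframework.jms.listener.DefaultMessageListenerContainer;

    public class RedeliveryExample {
        // If the listener throws a RuntimeException, the container rolls the
        // transacted session back and the broker makes the message available again.
        static class MyListener implements MessageListener {
            public void onMessage(Message message) {
                throw new RuntimeException("simulate a processing failure");
            }
        }

        public static DefaultMessageListenerContainer container(ConnectionFactory cf) {
            DefaultMessageListenerContainer c = new DefaultMessageListenerContainer();
            c.setConnectionFactory(cf);
            c.setDestinationName("MY.QUEUE");   // placeholder queue name
            c.setMessageListener(new MyListener());
            c.setSessionTransacted(true);       // local JMS transaction per message
            c.afterPropertiesSet();
            c.start();
            return c;
        }
    }
    ```

    With a plain (non-Spring) consumer the same question comes down to AUTO_ACKNOWLEDGE vs CLIENT_ACKNOWLEDGE vs a transacted session: only the latter two give you a chance to have the message redelivered after a failure.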

    Read the article

  • Is it possible to put software in a queue to be installed later?

    - by Gourik Goswami
    Sometimes I find some software that is interesting and want to install it. At the office I use a pocket WiFi, so I do not want to download big files there. I want to somehow tag these files so I can install them later when I am at home and connected to my WiFi network (a much faster network). So is there any way that these software installations can be put in a queue, so that when I am at home they can start right away?

    Read the article

  • Help me upgrade my pf.conf for OpenBSD 4.7

    - by polemon
    I'm planning on upgrading my OpenBSD to 4.7 (from 4.6) and as you may or may not know, they changed the syntax for pf.conf. This is the relevant portion from the upgrade guide: pf(4) NAT syntax change As described in more detail in this mailing list post, PF's separate nat/rdr/binat (translation) rules have been replaced with actions on regular match/filter rules. Simple rulesets may be converted like this: nat on $ext_if from 10/8 -> ($ext_if) rdr on $ext_if to ($ext_if) -> 1.2.3.4 becomes match out on $ext_if from 10/8 nat-to ($ext_if) match in on $ext_if to ($ext_if) rdr-to 1.2.3.4 and... binat on $ext_if from $web_serv_int to any -> $web_serv_ext becomes match on $ext_if from $web_serv_int to any binat-to $web_serv_ext nat-anchor and/or rdr-anchor lines, e.g. for relayd(8), ftp-proxy(8) and tftp-proxy(8), are no longer used and should be removed from pf.conf(5), leaving only the anchor lines. Translation rules relating to these and spamd(8) will need to be adjusted as appropriate. N.B.: Previously, translation rules had "stop at first match" behaviour, with binat being evaluated first, followed by nat/rdr depending on direction of the packet. Now the filter rules are subject to the usual "last match" behaviour, so care must be taken with rule ordering when converting. pf(4) route-to/reply-to syntax change The route-to, reply-to, dup-to and fastroute options in pf.conf move to filteropts; pass in on $ext_if route-to (em1 192.168.1.1) from 10.1.1.1 pass in on $ext_if reply-to (em1 192.168.1.1) to 10.1.1.1 becomes pass in on $ext_if from 10.1.1.1 route-to (em1 192.168.1.1) pass in on $ext_if to 10.1.1.1 reply-to (em1 192.168.1.1) Now, this is my current pf.conf: # $OpenBSD: pf.conf,v 1.38 2009/02/23 01:18:36 deraadt Exp $ # # See pf.conf(5) for syntax and examples; this sample ruleset uses # require-order to permit mixing of NAT/RDR and filter rules. # Remember to set net.inet.ip.forwarding=1 and/or net.inet6.ip6.forwarding=1 # in /etc/sysctl.conf if packets are to be forwarded between interfaces. 
ext_if="pppoe0" int_if="nfe0" int_net="192.168.0.0/24" polemon="192.168.0.10" poletopw="192.168.0.12" segatop="192.168.0.20" table <leechers> persist set loginterface $ext_if set skip on lo match on $ext_if all scrub (no-df max-mss 1440) altq on $ext_if priq bandwidth 950Kb queue {q_pri, q_hi, q_std, q_low} queue q_pri priority 15 queue q_hi priority 10 queue q_std priority 7 priq(default) queue q_low priority 0 nat-anchor "ftp-proxy/*" rdr-anchor "ftp-proxy/*" nat on $ext_if from !($ext_if) -> ($ext_if) rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021 rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80 rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22 rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000 rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600 anchor "ftp-proxy/*" block pass on $int_if queue(q_hi, q_pri) pass out on $ext_if queue(q_std, q_pri) pass out on $ext_if proto icmp queue q_pri pass out on $ext_if proto {tcp, udp} to any port ssh queue(q_hi, q_pri) pass out on $ext_if proto {tcp, udp} to any port http queue(q_std, q_pri) #pass out on $ext_if proto {tcp, udp} all queue(q_low, q_hi) pass out on $ext_if proto {tcp, udp} from <leechers> queue(q_low, q_std) pass in on $ext_if proto tcp to ($ext_if) port ident queue(q_hi, q_pri) pass in on $ext_if proto tcp to ($ext_if) port ssh queue(q_hi, q_pri) pass in on $ext_if proto tcp to ($ext_if) port http queue(q_hi, q_pri) pass in on $ext_if inet proto icmp all icmp-type echoreq queue q_pri If someone has experience with porting the 4.6 pf.conf to 4.7, please help me do the correct changes. OK, this is how far I've got: I commented out nat-anchor and rdr-anchor, as describted in the guide: #nat-anchor "ftp-proxy/*" #rdr-anchor "ftp-proxy/*" And this is how I've "converted" the rdr rules: #nat on $ext_if from !($ext_if) -> ($ext_if) match out on $ext_if from !($ext_if) nat-to ($ext_if) #rdr pass on $int_if proto tcp to port ftp -> 127.0.0.1 port 8021 match in on $int_if proto tcp to port ftp rdr-to 127.0.0.1 port 8021 #rdr pass on $ext_if proto tcp to port 2080 -> $segatop port 80 match in on $ext_if proto tcp tp port 2080 rdr-to $segatop port 80 #rdr pass on $ext_if proto tcp to port 2022 -> $segatop port 22 match in on $ext_if proto tcp tp port 2022 rdr-to $segatop port 22 rdr pass on $ext_if proto tcp to port 4000 -> $polemon port 4000 match in on $ext_if proto tcp tp port 4000 rdr-to $polemon port 4000 rdr pass on $ext_if proto tcp to port 6600 -> $polemon port 6600 match in on $ext_if proto tcp tp port 6600 rdr-to $polemon port 6600 Did I miss anything? Is the anchor for ftp-proxy OK as it is now? Do I need to change something in the other pass in on... lines?

    Read the article

  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment where I have one Linux server, one OpenBSD router and one Linux client and I want to be able to limit how much bandwidth the client should be able to use. I've been performing these tests with "netcat" and "time" (using time to measure the time of the transfer with netcat), and what happens when trying these tests (using the TCP protocol, the queues will for some reason not work with UDP) is that the queues aren't exact at all. For example: when setting a bandwidth limit of 10mbit, the client cannot use more than five mbits, when setting a limit of 100mbit, the client cannot use more than around 50mbit. The config looks like (using a 100mbit limit in the example): #queue rules altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low } queue def bandwidth 0Mb cbq(default) queue low bandwidth 100Mb cbq(default) #Passrules test pass out quick from $int_if to $ext_if queue low pass in quick from $ext_if to $int_if queue low pass out quick from $ext_if to $int_if queue low pass in quick from $int_if to $ext_if queue low

    Read the article

  • How can I transfer files to a Kindle Fire with a Micro-USB cable?

    - by Jeff
    I'm running Ubuntu 11.10, and when I connect my Kindle Fire to my computer via micro usb, it is not recognized automatically. Other usb devices, such as my ipod and digital camera, are recognized just fine. It does not appear to be a usb power issue, since the Kindle Fire wakes up from sleeping when it is plugged in. I never get the message on the Kindle telling me it is ready to accept files from the computer, though. Here are the last 15 lines of dmesg after plugging the kindle in: jeff@prime:~$ dmesg | tail -n 15 [45918.269671] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement) [45929.072149] wlan0: no IPv6 routers present [46743.224217] usb 1-1: new high speed USB device number 5 using ehci_hcd [46743.364623] scsi8 : usb-storage 1-1:1.0 [46744.366102] scsi 8:0:0:0: Direct-Access Amazon Kindle 0001 PQ: 0 ANSI: 2 [46744.366356] scsi: killing requests for dead queue [46744.372494] scsi: killing requests for dead queue [46744.384510] scsi: killing requests for dead queue [46744.392348] scsi: killing requests for dead queue [46744.392731] scsi: killing requests for dead queue [46744.396853] scsi: killing requests for dead queue [46744.397214] scsi: killing requests for dead queue [46744.400795] scsi: killing requests for dead queue [46744.401589] sd 8:0:0:0: Attached scsi generic sg2 type 0 [46744.407520] sd 8:0:0:0: [sdb] Attached SCSI removable disk And here are my mounted filesystems: jeff@prime:~$ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 298594984 174663712 108763480 62% / udev 1407684 4 1407680 1% /dev tmpfs 566924 896 566028 1% /run none 5120 0 5120 0% /run/lock none 1417308 300 1417008 1% /run/shm /home/jeff/.Private 298594984 174663712 108763480 62% /home/jeff I should note that, since I got Dropbox working on my Kindle, the usb is no longer strictly necessary, but as a matter of principle I'd love to get it working.

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
    Looking for inputs to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast and Akka, what could be alternative choices? Currently considering ZooKeeper as the distributed locking mechanism. Since the recommendation is that a znode not exceed 1MB in size, the understanding is that ZooKeeper should not be used as a distributed queue, also per Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? I.e. the distributed queue will be stored in the cache and will be locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue. Each object will pass through a lifecycle which will determine its removal from the queue. There would be about 10k+ objects in the queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary, i.e. one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Sidenote: new to ZooKeeper; Redis etc.
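    For the locking half of this design, a hedged sketch of what the ZooKeeper-based cluster-wide lock might look like using Apache Curator's InterProcessMutex (the connection string and lock path are placeholders):

    ```java
    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class ClusterLockSketch {
        public static void main(String[] args) throws Exception {
            // Connection string and lock path are placeholders for your cluster.
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181,zk2:2181,zk3:2181",
                    new ExponentialBackoffRetry(1000, 3));
            client.start();

            InterProcessMutex lock = new InterProcessMutex(client, "/locks/work-queue");
            lock.acquire();
            try {
                // Mutate the shared data structure held in the distributed cache
                // (e.g. a Redis list used as the queue) while holding the lock.
            } finally {
                lock.release();
            }
            client.close();
        }
    }
    ```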

    Read the article

  • Announcing Upcoming SOA and JMS Introductory Blog Posts

    - by JuergenKress
    Beginning next week, SOA Proactive Support will begin posting a series of introductory blogs here on working with JMS in a SOA context. The posts will begin with how to set up JMS in WebLogic server, lead you through reading and writing to a JMS queue from the WLS Java samples, continue with how to access it from a SOA composite and, finally, describe how to set up and access AQ JMS (Advanced Queuing JMS) from a SOA/BPEL process. The posts will be of a tutorial nature and include step-by-step examples. Your questions and feedback are encouraged. The following topics are planned: How to Create a Simple JMS Queue in Weblogic Server 11g Using the QueueSend.java Sample Program to Send a Message to a JMS Queue Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes How to Write to an AQ JMS Queue from a BPEL Process How to Read from an AQ JMS Queue from a BPEL Process Read the full article SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Announcing Upcoming SOA and JMS Introductory Blog Posts

    - by John-Brown.Evans
    Announcing Upcoming SOA and JMS Introductory Blog Posts Beginning next week, SOA Proactive Support will begin posting a series of introductory blogs here on working with JMS in a SOA context. The posts will begin with how to set up JMS in WebLogic server, lead you through reading and writing to a JMS queue from the WLS Java samples, continue with how to access it from a SOA composite and, finally, describe how to set up and access AQ JMS (Advanced Queuing JMS) from a SOA/BPEL process. The posts will be of a tutorial nature and include step-by-step examples. Your questions and feedback are encouraged. The following topics are planned: How to Create a Simple JMS Queue in Weblogic Server 11g Using the QueueSend.java Sample Program to Send a Message to a JMS Queue Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes How to Write to an AQ JMS Queue from a BPEL Process How to Read from an AQ JMS Queue from a BPEL Process
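    As a taste of the planned QueueSend.java topic, a minimal standalone JMS sender against a WebLogic-hosted queue might look roughly like the sketch below; the provider URL and JNDI names are placeholders, not necessarily the ones the series will use.

    ```java
    import java.util.Hashtable;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.Context;
    import javax.naming.InitialContext;

    public class QueueSendSketch {
        public static void main(String[] args) throws Exception {
            Hashtable<String, String> env = new Hashtable<>();
            // Provider URL and JNDI names are placeholders for your WLS setup.
            env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
            env.put(Context.PROVIDER_URL, "t3://localhost:7001");
            InitialContext ctx = new InitialContext(env);

            QueueConnectionFactory cf = (QueueConnectionFactory) ctx.lookup("jms/TestConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/TestJMSQueue");

            QueueConnection conn = cf.createQueueConnection();
            try {
                QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);
                TextMessage msg = session.createTextMessage("hello from QueueSendSketch");
                sender.send(msg);
            } finally {
                conn.close();
            }
        }
    }
    ```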

    Read the article

  • Disable #pragma message("")

    - by Balls-of-steel
    Hi, I needed to include glut.h in my project, but there is a line in glut.h which is #pragma message("Note: including lib: glut32.lib\n") It is really annoying and I want to get rid of it when compiling. I could just remove the line from my glut.h, but I want my fix to be independent of glut.h. I have tried setting #pragma warnings to show only critical info, and I have also tried #pragma message disable, but nothing worked. Any help?

    Read the article

  • Advice on Python/Django and message queues

    - by Andy Hume
    I have an application in Django, that needs to send a large number of emails to users in various use cases. I don't want to handle this synchronously within the application for obvious reasons. Has anyone any recommendations for a message queuing server which integrates well with Python, or they have used on a Django project? The rest of my stack is Apache, mod_python, MySQL.

    Read the article

  • Java - sorted stack

    - by msr
    Hello, I need a sorted stack. I mean, the element removed from the stack must be the one with the greatest priority. The stack's size varies a lot (it grows very fast). I also need to search for elements in that stack. Does Java provide a good implementation for this? What class or algorithm do you suggest? I'm using a PriorityQueue right now, which I consider reasonable except for searching, so I'm wondering if I can use something better. Thanks in advance!
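    One option that keeps both requirements cheap is a NavigableSet such as TreeSet: pollFirst() gives removal in priority order and contains() gives O(log n) search. A small sketch (note the duplicate-element caveat in the comments):

    ```java
    import java.util.Comparator;
    import java.util.TreeSet;

    public class SortedStackSketch {
        public static void main(String[] args) {
            // Descending order, so pollFirst() always removes the element with
            // the greatest priority; contains() searches in O(log n).
            TreeSet<Integer> sorted = new TreeSet<>(Comparator.reverseOrder());

            sorted.add(42);
            sorted.add(7);
            sorted.add(99);

            System.out.println(sorted.contains(7));   // true
            System.out.println(sorted.pollFirst());   // 99, the highest priority
            System.out.println(sorted.pollFirst());   // 42

            // Caveat: a TreeSet silently drops duplicates. If equal priorities
            // must coexist, keep the PriorityQueue and maintain a separate
            // HashSet/HashMap index for fast membership checks instead.
        }
    }
    ```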

    Read the article

  • QueueConnectionFactory for MQSeries

    - by Pawel
    Does anyone know if there is an implementation of javax.jms.QueueConnectionFactory for MQSeries and where to get it? I Googled it and searched the IBM website but couldn't find anything. I don't want to retrieve the connection or factory from WebSphere MQ via JNDI; I need my own connection factory.
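    If JNDI is off the table, the WebSphere MQ classes for JMS ship their own factory implementation, com.ibm.mq.jms.MQQueueConnectionFactory, which can be configured programmatically. A sketch, assuming the MQ JMS jars are on the classpath (host, port, channel and queue manager names are placeholders):

    ```java
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import com.ibm.mq.jms.JMSC;
    import com.ibm.mq.jms.MQQueueConnectionFactory;

    public class MqFactorySketch {
        public static QueueConnectionFactory create() throws Exception {
            // Host, port, channel and queue manager names are placeholders.
            MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
            factory.setHostName("mqhost.example.com");
            factory.setPort(1414);
            factory.setQueueManager("QM1");
            factory.setChannel("SYSTEM.DEF.SVRCONN");
            factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP); // client (TCP) connection
            return factory;
        }

        public static void main(String[] args) throws Exception {
            QueueConnectionFactory cf = create();
            QueueConnection conn = cf.createQueueConnection();
            conn.close();
        }
    }
    ```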

    Read the article

  • What are all the differences between pipes and message queues?

    - by aks
    What are all the differences between pipes and message queues? Please explain both from the VxWorks and Unix perspectives. I think pipes are unidirectional but message queues aren't. But don't pipes internally use message queues? If so, how come pipes are unidirectional while message queues are not? What are the other differences you can think of (from a design, usage or other perspective)?

    Read the article

  • Django ORM and multiprocessing

    - by Ankur Gupta
    Hi, I am using the Django ORM in my Python script in a decoupled fashion, i.e. it's not running in the context of a normal Django project. I am also using the multiprocessing module, and the different processes in turn are making queries. The process ran successfully for an hour and then exited with the message "IOError: [Errno 32] Broken pipe". Upon further diagnosis and debugging, this error pops up when I call save() on the model instance. I am wondering: is the Django ORM process-safe? Why else would this error arise? Cheers, Ankur

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >