Search Results

Search found 1048 results on 42 pages for 'producer consumer'.

Page 7 of 42

  • Using RabbitMQ (Java client), is there a way to determine if the network connection is closed while consuming?

    - by MItch Branting
    I'm using RabbitMQ on RHEL 5.3 with the Java client. I have two nodes (machines). Node1 is consuming messages from a queue on Node2 using the Java helper class QueueingConsumer. QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("MyQueueOnNode2", noAck, consumer); while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(); ... Process message - delivery.getBody() } If the interface is brought down on Node1 or Node2 (e.g. ifconfig eth1 down), the client (above) never knows the network isn't there anymore. Does RabbitMQ provide some type of configuration on the Java client that can be used to determine whether the connection has gone away? Shutting down the RabbitMQ server on Node2 will trigger a ShutdownSignalException, which can be caught so the app can go into a reconnect loop. But bringing down the interface doesn't cause any type of exception, so the code will wait forever on consumer.nextDelivery(). I've also tried the timeout version of this call, e.g. QueueingConsumer consumer = new QueueingConsumer(channel); channel.basicConsume("MyQueueOnNode2", noAck, consumer); int timeout_ms = 30000; while (true) { QueueingConsumer.Delivery delivery = consumer.nextDelivery(timeout_ms); if (delivery == null) { if (channel.isOpen() == false) // Seems to always return true { throw new ShutdownSignalException(); } } else { ... Process message - delivery.getBody() } } but it appears that channel.isOpen() always returns true (even though the interface is down). I assume registering a ShutdownListener on the connection will yield the same results, but I haven't tried that yet. Is there a way to configure some sort of heartbeat, or do you just have to write custom lease logic (e.g. "I'm here now") to get this to work?
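    One option worth noting (my suggestion, not from the original post): the RabbitMQ Java client supports AMQP heartbeats, which make a dead TCP link surface as a ShutdownSignalException instead of blocking forever. A minimal sketch, assuming an amqp-client version whose ConnectionFactory exposes setRequestedHeartbeat and the same queue name as above:

        import com.rabbitmq.client.*;

        public class HeartbeatConsumer {
            public static void main(String[] args) throws Exception {
                ConnectionFactory factory = new ConnectionFactory();
                factory.setHost("node2");              // hypothetical host name
                factory.setRequestedHeartbeat(10);     // seconds; missed heartbeats close the connection
                Connection connection = factory.newConnection();
                Channel channel = connection.createChannel();
                QueueingConsumer consumer = new QueueingConsumer(channel);
                channel.basicConsume("MyQueueOnNode2", true, consumer);
                while (true) {
                    try {
                        QueueingConsumer.Delivery delivery = consumer.nextDelivery(30000);
                        if (delivery != null) {
                            // process delivery.getBody()
                        }
                    } catch (ShutdownSignalException e) {
                        // broker unreachable or shut down; drop out and reconnect
                        break;
                    }
                }
            }
        }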

    Read the article

  • What are alternatives to Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries like (named) events can. I have a scenario that is cross-process, cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to signaled state) by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable – but the producer does not need to know if that is the case or not. The scenario can be thought of as a “clock ticker” – i.e., the producer provides a semi-regular signal for zero or more consumers to count. And all consumers must have the correct count over any given period of time. No polling by consumers is allowed (performance reasons). The tick interval is just a few milliseconds (20 or so, and not perfectly regular). Raymond Chen (The Old New Thing) has a blog post pointing out the “fundamentally flawed” nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one? Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance in that consumers must be able to act within 10 ms of each event.
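    A direction that might fit the "no missed ticks" requirement (my sketch, not from the original question): give each consumer its own named semaphore and have the producer release every registered semaphore once per tick. A semaphore's count accumulates while a consumer is busy, so no tick is lost, and named kernel objects cross process boundaries. The registration mechanism and the "Global\ProducerTicks.Consumer1" name below are assumptions; the sketch is C#, but the same objects are available to native code via CreateSemaphore/ReleaseSemaphore.

        using System;
        using System.Threading;

        class TickConsumer
        {
            static void Main()
            {
                // Hypothetical name; each consumer would create and advertise its own
                using (var ticks = new Semaphore(0, int.MaxValue, @"Global\ProducerTicks.Consumer1"))
                {
                    long count = 0;
                    while (true)
                    {
                        ticks.WaitOne();   // one wait per tick; pending ticks are never lost
                        count++;
                        // react to the tick within the 10 ms budget
                    }
                }
            }
        }

        // Producer side (separate process), once per tick:
        //   foreach (string name in registeredConsumerNames)          // hypothetical registry
        //       using (var s = Semaphore.OpenExisting(name)) s.Release();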

    Read the article

  • In C#, what thread will Events be handled in?

    - by Ben
    Hi, I have attempted to implement a producer/consumer pattern in C#. I have a consumer thread that monitors a shared queue, and a producer thread that places items onto the shared queue. The producer thread is subscribed to receive data...that is, it has an event handler, and just sits around and waits for an OnData event to fire (the data is being sent from a third-party API). When it gets the data, it sticks it on the queue so the consumer can handle it. When the OnData event does fire in the producer, I had expected it to be handled by my producer thread. But that doesn't seem to be what is happening. The OnData event seems as if it's being handled on a new thread instead! Is this how .NET always works...are events handled on their own thread? Can I control what thread will handle events when they're raised? What if hundreds of events are raised near-simultaneously...would each have its own thread? Thanks in advance! Ben
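    For context (not from the original post): a .NET event handler runs synchronously on whichever thread raises the event, so the third-party API's internal or thread-pool thread is what executes OnData, not the thread that subscribed. A common way to keep processing on your own consumer thread is to have the handler do nothing but enqueue; a minimal sketch using BlockingCollection<T> (assumes .NET 4+; DataEventArgs and the api object are stand-ins for the real API):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class DataEventArgs : EventArgs { public byte[] Payload; }   // hypothetical event args

        class Pipeline
        {
            static readonly BlockingCollection<byte[]> queue = new BlockingCollection<byte[]>();

            // Runs on whatever thread the API raises the event from
            static void OnData(object sender, DataEventArgs e)
            {
                queue.Add(e.Payload);                 // cheap, thread-safe hand-off
            }

            static void ConsumerLoop()
            {
                foreach (var item in queue.GetConsumingEnumerable())
                {
                    // all real processing happens here, on the consumer thread
                }
            }

            static void Main()
            {
                var consumer = new Thread(ConsumerLoop) { IsBackground = true };
                consumer.Start();
                // api.OnData += OnData;              // hypothetical subscription to the API
                Console.ReadLine();
            }
        }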

    Read the article

  • Building an SSL server farm

    - by dan
    I'm interested in building the architecture in the article referenced below. I currently have a modestly-priced layer-4 load balancer and my application servers are the SSL endpoints. I want to put an SSL server farm in between my load balancer and my app servers. Then I will put another inexpensive load balancer between the SSL farm and my app servers, to do layer-7 routing. My web application has a fairly high amount of consumer traffic, which 6 servers can handle at about 50% capacity. Additionally, I have infrastructure traffic that is several orders of magnitude heavier than my consumer traffic. This is data coming in from all over the world that must integrate with my web application in real time. In total I have 18 app servers to handle all the traffic, plus 6 database servers. I will be adding 6 more app servers over the next 2 weeks and another 6 the 2 weeks after that. Conservatively, I estimate I will need to scale to 120 servers by the end of the year. My motivation right now is to separate the consumer traffic from the infrastructure traffic. The consumer traffic is higher priority than the infrastructure traffic and I cannot allow a stampede on the infrastructure side to take down my consumer-facing servers. Having a website that is always up is the top priority. However, if there is a failure in one of the consumer app servers, I want to route that traffic to the servers designated for infrastructure traffic. The complication is that all the traffic is addressed using the same hostname and is nearly 100% HTTPS. The only way in my case to distinguish infrastructure from consumer traffic is by URL (poor architecture I inherited), so I need a layer-7 load balancer to be able to route. However, for that to work I need either a fancy hardware-based SSL terminator or an SSL server farm as described above. Because my user base is rapidly scaling, I worry that if I go down the hardware path it will become very expensive very fast, especially since I will need 4 of everything for high availability (2 identical setups in 2 facilities). Meanwhile, the above diagram seems very flexible and more horizontally scalable. Has anyone built this before? Are there pre-built configurations? What considerations should I make and what software should I use (I've heard of people using Apache with mod_ssl, nginx, and stunnel)? Also, when does it make sense to buy an expensive load balancer vs. building an SSL server farm? http://1wt.eu/articles/2006_lb/index_05.html
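    For what it's worth, a software SSL-termination tier along the lines of that article can be sketched in nginx; this is only an illustration (the host names, the /api/ prefix used to mark infrastructure traffic, and the certificate paths are placeholders, not details from the question):

        # nginx on an SSL-farm node: terminate TLS, route by URL, forward plain HTTP
        upstream consumer_pool       { server app01:8080; server app02:8080; }
        upstream infrastructure_pool { server app07:8080; server app08:8080; }

        server {
            listen 443 ssl;
            server_name www.example.com;
            ssl_certificate     /etc/ssl/certs/example.pem;
            ssl_certificate_key /etc/ssl/private/example.key;

            location /api/ {                          # infrastructure traffic, identified by URL
                proxy_pass http://infrastructure_pool;
            }
            location / {                              # everything else is consumer traffic
                proxy_pass http://consumer_pool;
            }
        }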

    Read the article

  • .Net Thread Synchronization

    - by user209293
    Hello, I am planning to use an auto-reset event handle for inter-thread communication. EventWaitHandle handle = new EventWaitHandle(false, EventResetMode.AutoReset); My producer thread code looks like this: produceSomething(); handle.Set(); In the consumer thread, I have to download data every minute or whenever the producer calls the Set method: try { while(true) { handle.WaitOne(60000, false); doSomething(); // downloads data from the internet; takes a long time to complete } } catch(ThreadAbortException) { cleanup(); } My question is: if the consumer thread is running the doSomething function and the producer calls the Set method, what will the state of the auto-reset event object be? My requirement is that as soon as the producer calls Set, I have to download fresh data from the internet. If doSomething is already running when the producer calls Set, I have to interrupt it and call it again. Any help is appreciated. Regards, Raju
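    For reference (my notes, not part of the question): an auto-reset EventWaitHandle that is Set while nobody is waiting simply stays signaled, so the consumer's next WaitOne returns immediately; several Sets before the next wait collapse into one pending signal. Interrupting a download that is already running needs a separate mechanism; a rough sketch using CancellationTokenSource (assumes .NET 4+, the names are illustrative, and synchronization of the token swap is omitted):

        using System;
        using System.Threading;

        class Downloader
        {
            static readonly EventWaitHandle handle =
                new EventWaitHandle(false, EventResetMode.AutoReset);
            static CancellationTokenSource cts = new CancellationTokenSource();

            // Producer side
            static void Produce()
            {
                // produceSomething();
                cts.Cancel();        // abort a download that is already in progress
                handle.Set();        // wake the consumer, or leave the event signaled for its next wait
            }

            // Consumer side
            static void ConsumerLoop()
            {
                while (true)
                {
                    handle.WaitOne(60000, false);          // a pending Set makes this return immediately
                    cts = new CancellationTokenSource();   // fresh token for this download
                    try
                    {
                        DoSomething(cts.Token);
                    }
                    catch (OperationCanceledException)
                    {
                        // the producer signalled again mid-download; loop around and start over
                    }
                }
            }

            static void DoSomething(CancellationToken token)
            {
                // long-running download; call token.ThrowIfCancellationRequested() at safe points
            }
        }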

    Read the article

  • How to upload a file from PHP to AMQP

    - by user2462648
    I want to know how I can send a file to a RabbitMQ queue from PHP. I have gone through many examples; most of them didn't work. Below is a producer/consumer example which is close to working. Here is publisher.php: <?php require_once('../php-amqplib/amqp.inc'); include('../config.php'); $conn = new AMQPConnection(HOST, PORT, USER, PASS, VHOST); $channel = $conn->channel(); $channel->exchange_declare('upload-pictures','direct', false, true, false); $metadata = json_encode(array( 'image_id' => $argv[1], 'user_id' => $argv[2], 'image_path' => $argv[3] )); $msg = new AMQPMessage($metadata, array('content_type' => 'text/plain','delivery_mode' => 2)); $channel->basic_publish($msg, 'upload-pictures'); $channel->close(); $conn->close(); ?> And consumer.php: <?php require_once('../php-amqplib/amqp.inc'); include('../config.php'); $conn = new AMQPConnection(HOST, PORT, USER, PASS,VHOST); $channel = $conn->channel(); $queue = 'add-points'; $consumer_tag = 'consumer'; $channel->exchange_declare('upload-pictures','direct', false, true, false); $channel->queue_declare('add-points',false, true, false, false); $channel->queue_bind('add-points', 'upload-pictures'); $consumer = function($msg){}; $channel->basic_consume($queue,$consumer_tag,false,false,false,false,$consumer); ?> According to the example provided in the RabbitMQ manual, I need to run the consumer (php consumer.php) first in one terminal and the publisher (php publisher.php 1 2 file/path.png) in another terminal, and I should get an "Adding points to user: 2" message in the consumer terminal. I am not getting this message at all. Can you suggest where I am going wrong?
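    One observation of my own, not from the post: the consumer callback above is empty and the script never enters a wait loop, so nothing is printed and no message is ever acknowledged. A minimal consumer sketch using the same old php-amqplib API (the config.php constants are assumed to be the same):

        <?php
        require_once('../php-amqplib/amqp.inc');
        include('../config.php');

        $conn = new AMQPConnection(HOST, PORT, USER, PASS, VHOST);
        $channel = $conn->channel();

        $channel->exchange_declare('upload-pictures', 'direct', false, true, false);
        $channel->queue_declare('add-points', false, true, false, false);
        $channel->queue_bind('add-points', 'upload-pictures');

        $consumer = function($msg) {
            $meta = json_decode($msg->body, true);
            echo "Adding points to user: " . $meta['user_id'] . "\n";
            // acknowledge so the message leaves the queue
            $msg->delivery_info['channel']->basic_ack($msg->delivery_info['delivery_tag']);
        };

        $channel->basic_consume('add-points', 'consumer', false, false, false, false, $consumer);

        // block here and dispatch deliveries to the callback
        while (count($channel->callbacks)) {
            $channel->wait();
        }

        $channel->close();
        $conn->close();
        ?>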

    Read the article

  • Freshbooks oauth question

    - by Phil
    Very quick question about FreshBooks OAuth. When requesting a request token you need to provide (among others) the oauth_signature parameter. Is the signature the consumer key and the consumer secret separated by an ampersand? E.g. _consumer_key_%26_consumer_secret_, where _consumer_key_ is the consumer key, _consumer_secret_ is the consumer secret, and %26 is a URL-encoded ampersand.

    Read the article

  • How to write a custom (odd) authentication plugins for Wordpress, Joomla and MediaWiki

    - by Bart van Heukelom
    On our network (a group of related websites - not a LAN) we have a common authentication system which works like this: On a network site ("consumer") the user clicks on a login link. This redirects the user to a login page on our auth system ("RAS"). Upon successful login the user is directed back to the consumer site. Extra data is passed in the query string. This extra data does not include any information about the user yet. The consumer site's backend contacts RAS to get the information about the logged-in user. So as you can see, the consumer site knows nothing about the authentication method. It doesn't know if it's by username/password, fingerprint, smartcard, or winning a game of poker. This is the main problem I'm encountering when trying to find out how I could write custom authentication plugins for these packages, acting as consumer sites: WordPress, Joomla, and MediaWiki. For example, Joomla offers a pretty simple auth plugin system, but it depends on a username/password entered on the Joomla site. Any hints on where to start?

    Read the article

  • Allow connections to only a specific URL via HTTPS with iptables, -m recent (potentially) and -m string (definitely)

    - by The Consumer
    Hello. Let's say that, for example, I want to allow connections only to subdomain.mydomain.com; I have it partially working, but it sometimes gets into a freaky loop with the client key exchange once the Client Hello is allowed. Ah, to make it even more annoying, it's a self-signed certificate, the page requires authentication, and HTTPS is listening on a non-standard port... So the TCP/SSL handshake experience will differ greatly for many users. Is -m recent the right route? Is there a more graceful method to allow the complete TCP stream once the string is seen? Here's what I have so far: #iptables -N SSL #iptables -A INPUT -i eth0 -p tcp -j SSL #iptables -A SSL -m recent --set -p tcp --syn --dport 400 #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK SYN,ACK --sport 400 #iptables -A SSL -m recent --update -p tcp --tcp-flags PSH,SYN,ACK ACK --dport 400 #iptables -A SSL -m recent --remove -p tcp --tcp-flags PSH,ACK PSH,ACK --dport 400 -m string --algo kmp --string "subdomain.mydomain.com" -j ACCEPT Yes, I have tried to get around this with nginx tweaks, but I can't get nginx to return a 444 or do an abrupt disconnect before the Client Hello. If you can think of a way to achieve this instead, I'm all ears, err, eyes. (As suggested by a user, bringing this inquiry over from http://stackoverflow.com/questions/4628157/allow-connections-to-only-a-specific-url-via-https-with-iptables-m-recent-pote)

    Read the article

  • NRF Big Show 2011 -- Part 2

    - by David Dorf
    One of the things I love about attending NRF is visiting the smaller booths to see what new innovative ideas have sprung up. After all, by watching emerging technologies we can get a sense of how the retail experience might change. After NRF I'm hoping to write a post on what I found, if anything, so be sure to check back. At the Oracle Retail booth we'll be demonstrating some of the aspects of the changing retail experience. These demos use a mix of GA and experimental components. Here are some highlights: 1. Checkin We wrote a consumer iPhone app we call Store Gateway that lets consumers access information from the store. They'll start by doing a checkin when they arrive that will alert the store manager via another iPhone app we wrote called Mobile Manager. Additionally, we display a welcome message using Starmount's digital sign. 2. Receive Offers There are three interaction points where a store can easily make an offer to a consumer: checkin, product scans, and checkout. For this demo we're calling our Universal Offer Engine at checkin to determine the best offer for this particular consumer. This offer is then displayed on the consumer's phone as well as on the digital sign. 3. Scan Products To thwart consumers from scanning product barcodes, we used Store Inventory Management to print QR codes on shelf labels, then provided access to a scanner in the Store Gateway iPhone app. When the consumer scans the shelf label they are shown product information provided by the retailer. 4. Checkout While we don't have an NFC-enabled mobile phone, we have an NFC chip that can attach to a phone. We're using this to check out using a reader provided by ViVOTech. Tap the phone on the reader, and the POS accesses the customer #, coupons, and payment information. This really speeds the checkout process. 5. Digital Receipt After the transaction is complete, a digital copy of the receipt is sent to Intuit's QuickReceipts, where consumers can store all their digital receipts. There's even an iPhone app that provides easy access to the receipts. This covers about half of what we'll be showing, so be sure to stop by. I'll also be talking about how mobile is impacting the retail experience at the Wednesday morning session NRF Mobile Retail Initiative: a Blueprint for Action. See you at the Big Show!

    Read the article

  • Common Areas For Securing Web Services

    The only way to truly keep a web service secure is to host it on a web server and then turn off the server. In real life no web service is 100% secure, but there are methodologies for increasing the security around web services. Consumers of a web service must adhere to the service's Service-Level Agreement (SLA). An SLA is a digital contract between a web service and its consumer. This contract defines what methods and protocols must be used to access the web service, along with the defined data formats for sending and receiving data through the service. If either party does not abide by the contract, then the service will not be accessible for consumption. Common areas for securing web services: Universal Description, Discovery, and Integration (UDDI); Web Service Description Language (WSDL); the Application Level; and the Network Level. “UDDI is a specification for maintaining standardized directories of information about web services, recording their capabilities, location and requirements in a universally recognized format.” (UDDI, 2010) WSDL, on the other hand, is a standardized format for defining a web service. A WSDL describes the allowable methods for accessing the web service along with what operations it performs. Web services at the Application Level can control access to what data is available by implementing their own security through various methodologies, but the most common method is to have a consumer pass in a token along with a system identifier so that the system can validate the user's access to any data or actions that they may be requesting. Security restrictions can also be applied to the host web server of the service by restricting access to the site by IP address or login credentials. Furthermore, companies can also block access to a service by using firewall rules and only allowing access to specific services on certain ports coming from specific IP addresses. This last methodology may require consumers to obtain a static IP address and then register it with the web service host so that they will be provided access to the information they wish to obtain. It is important to note that these areas can be secured in any combination based on the security level tolerance dictated by the publisher of the web service. This being said, the bare minimum security implementation must be at the Application Level within the web service itself. Typically I create a security layer within a web service's exposed interface that requires a consumer identifier and a consumer token. This information is then used to authenticate the requesting consumer before the actual request is performed. References: UDDI. (2010). Retrieved November 13, 2011, from LooselyCoupled.com: http://www.looselycoupled.com/glossary/UDDI. Service-Level Agreement (SLA). (n.d.). Retrieved November 13, 2011, from SearchITChannel: http://searchitchannel.techtarget.com/definition/service-level-agreement
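    As a small illustration of the application-level identifier/token check described above (my own sketch, not tied to any particular stack; SecurityLayer and OrderRepository are hypothetical helpers):

        using System.Web.Services;
        using System.Web.Services.Protocols;

        public class OrderService : WebService
        {
            [WebMethod]
            public string GetOrders(string consumerId, string consumerToken, int customerId)
            {
                // Validate the caller before doing any work
                if (!SecurityLayer.IsAuthorized(consumerId, consumerToken))
                    throw new SoapException("Unknown or unauthorized consumer.",
                                            SoapException.ClientFaultCode);

                return OrderRepository.GetOrdersAsXml(customerId);
            }
        }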

    Read the article

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx I had a module dependency that I'm pulling down with RequireJS and that I needed to use and write tests against. In this case, I don't care about the actual implementation of the module (it's simple enough that I'm just avoiding some AJAX calls). EDIT: make sure you look at the bottom example after the edit before using the config.map approach. I found that there is an easier way. I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from “Artem Oboturov”. We can use the config.map from RequireJS to achieve this. Here is some code: A module example (“usefulModule” in Common/Modules/usefulModule.js): define([], function() { "use strict"; var testMethod = function() { ... }; // add more functionality of the module return { testMethod: testMethod }; }); A consumer of usefulModule example: define([ "Common/Modules/usefulModule" ], function(usefulModule) { "use strict"; var consumerModule = function(){ var self = this; // add functionality of the module } }); Using config.map in the html of the test runner page (and in your Karma config –> I'm still trying to figure this out): map: {'*': { // replace usefulModule with a mock 'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js' } } With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn't ready to introduce a new library at this time. That's all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of this approach. EDIT: After all that, here's another, probably better way: The consumer class, updated: define([ "Common/Modules/usefulModule" ], function(UsefulModule) { "use strict"; var consumerModule = function(){ var self = this; self.usefulModule = new UsefulModule(); // add functionality of the module } }); Jasmine test: define([ "consumerModule", "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js" ], function(consumerModule, UsefulModuleMock){ describe("when mocking out the module", function(){ it("should probably just override the property", function(){ var consumer = new consumerModule(); consumer.usefulModule = new UsefulModuleMock(); }); }); }); Thanks for letting me think out loud :-).
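    On the Karma side the same map can live in the RequireJS config of a test-main.js file; this is my guess at what that might look like for the layout above (the Spec file pattern and baseUrl are assumptions, not from the original post):

        // test-main.js, loaded by karma-requirejs
        var allTestFiles = [];
        Object.keys(window.__karma__.files).forEach(function (file) {
            if (/Spec\.js$/.test(file)) {
                allTestFiles.push(file);
            }
        });

        require.config({
            baseUrl: '/base',
            map: {
                '*': {
                    // replace usefulModule with the mock in every consumer
                    'Common/Modules/usefulModule': 'Tests/Specs/Common/usefulModuleMock'
                }
            },
            deps: allTestFiles,
            callback: window.__karma__.start
        });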

    Read the article

  • New Content: Customer Engagement & Oracle OpenWorld Preview

    - by user462779
    Two new bits of content are available on Profit Online: In A Cross-Channel Approach to Consumer Engagement, Cassandra Moren, senior director of consumer goods industry marketing at Oracle, shares her thoughts on how consumer goods manufacturers are reaping benefits from developing a direct relationship with customers: "Consumer goods manufacturers are starting to adapt in ways that mirror retailers. They are making investments in innovative technologies and processes to build the infrastructure to support the market demand. With advances in aspects like social networking, digital marketing and mobility fundamentally changing the way consumers behave, the door has opened to building a more direct relationship with their customers." We've also published a Special Report on Oracle OpenWorld that gives a great overview of recommendations for must-see sessions and insider advice from experienced attendees. For example, this tip from John Matelski, newly elected president of the Independent Oracle Users Group: “Based on developments of the last 12 months, I think big data is definitely going to be hot. The challenges and opportunities of data governance will be another biggie. And there will obviously be a big emphasis on Oracle Exadata and the other Oracle Engineered Systems, with more than 100 sessions.” More updates to come as we continue to add content to Profit Online on a regular basis. Thanks for reading!

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program to be friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication. The hard part: The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product needs to go from position 0 to position 10,000 to a tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose producer/consumer model where the producer thread loops through reading the sensors and sets the outputs. The consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress. Unsure where to start: Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
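    One possible shape for the acquisition/sequence split described above, sketched by me rather than taken from the question: the acquisition thread publishes a latest-value snapshot, and the sequence thread reads it inside its timed loop. The hardware calls are stand-ins that simulate motion.

        using System;
        using System.Diagnostics;
        using System.Threading;

        class Sample { public double Position; public double Current; public DateTime Time; }

        class TestBench
        {
            static volatile Sample latest = new Sample();   // latest-value exchange between threads
            static double simulatedPosition;                // stand-in for the real scale reading

            // Producer: poll every sensor on a fixed cadence and publish one snapshot
            static void AcquisitionLoop()
            {
                while (true)
                {
                    latest = new Sample
                    {
                        Position = ReadScale(),        // stand-in for the serial scale
                        Current  = ReadDaqCurrent(),   // stand-in for the NI DAQ analog input
                        Time     = DateTime.UtcNow
                    };
                    Thread.Sleep(10);                  // ~100 Hz polling
                }
            }

            // Consumer: the timed test sequence reads the latest snapshot
            static void RunTravelTest()
            {
                var sw = Stopwatch.StartNew();
                while (latest.Position < 10000)
                {
                    double p = latest.Position;
                    SetDaqLoad(p >= 6000 && p < 8000 ? (p - 6000) / 2000.0 : 0.0);  // ramp between 6,000 and 8,000
                    Thread.Sleep(5);
                }
                Console.WriteLine("Travel time: {0:F1} s", sw.Elapsed.TotalSeconds);
            }

            static void Main()
            {
                new Thread(AcquisitionLoop) { IsBackground = true }.Start();
                RunTravelTest();
            }

            static double ReadScale()          { return simulatedPosition += 25; }  // hypothetical hardware calls
            static double ReadDaqCurrent()     { return 0; }
            static void   SetDaqLoad(double v) { }
        }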

    Read the article

  • I never really understood: what is Application Binary Interface (ABI)?

    - by claws
    I never clearly understood what an ABI is. I'm sorry for such a lengthy question; I just want to clearly understand things. Please don't point me to the wiki article: if I could understand it, I wouldn't be here posting such a lengthy post. This is my mindset about different interfaces: a TV remote is an interface between the user and the TV. It is an existing entity, but useless (it doesn't provide any functionality) by itself. All the functionality for each of those buttons on the remote is implemented in the television set. Interface: it is an "existing entity" layer between the functionality and the consumer of that functionality. An interface by itself doesn't do anything; it just invokes the functionality lying behind it. Now, depending on who the user is, there are different types of interfaces. Command Line Interface (CLI): commands are the existing entities, the consumer is the user, and the functionality lies behind. functionality: my software functionality which solves some purpose for which we are describing this interface. existing entities: commands. consumer: user. Graphical User Interface (GUI): windows, buttons, etc. are the existing entities, again the consumer is the user, and the functionality lies behind. functionality: my software functionality which solves some purpose for which we are describing this interface. existing entities: windows, buttons, etc. consumer: user. Application Programming Interface (API): functions or, to be more correct, interfaces (in interface-based programming) are the existing entities, the consumer here is another program rather than a user, and again the functionality lies behind this layer. functionality: my software functionality which solves some purpose for which we are describing this interface. existing entities: functions, interfaces (arrays of functions). consumer: another program/application. Application Binary Interface (ABI): here is where my problem starts. functionality: ??? existing entities: ??? consumer: ??? I've written software in different languages and provided different kinds of interfaces (CLI, GUI, API), but I'm not sure if I ever provided any ABI. http://en.wikipedia.org/wiki/Application_binary_interface says: ABIs cover details such as data type, size, and alignment; the calling convention, which controls how functions' arguments are passed and return values retrieved; the system call numbers and how an application should make system calls to the operating system; other ABIs standardize details such as the C++ name mangling,[2] exception propagation,[3] and calling convention between compilers on the same platform, but do not require cross-platform compatibility. Who needs these details? Please don't say the OS. I know assembly programming. I know how linking and loading work. I know what exactly happens inside. Where did C++ name mangling come in? I thought we were talking at the binary level; where did languages come in? Anyway, I've downloaded the [PDF] System V Application Binary Interface Edition 4.1 (1997-03-18) to see what exactly it contains. Well, most of it didn't make any sense. Why does it contain two chapters (the 4th and 5th) which describe the ELF file format? In fact, these are the only two significant chapters of that specification; the rest of the chapters are "processor specific". Anyway, I thought that was a completely different topic. Please don't say that the ELF file format specs are the ABI; it doesn't qualify as an interface according to the definition. I know that since we are talking at such a low level it must be very specific, but I'm not sure how it is "Instruction Set Architecture (ISA)" specific. Where can I find MS Windows' ABI? These are the major queries that are bugging me.

    Read the article

  • Request/response with ActiveMQ - always sends a double response

    - by Chris Valley
    Hi, I'm new to ActiveMQ. I tried to create a simple request/response like this: public Listener(string destination) { // set factory ConnectionFactory factory = new ConnectionFactory(URL); IConnection connection; try { connection = factory.CreateConnection(); connection.Start(); ISession session = connection.CreateSession(); // create consumer for designated destination IMessageConsumer consumer = session.CreateConsumer(new Apache.NMS.ActiveMQ.Commands.ActiveMQQueue(destination)); consumer.Listener += new MessageListener(consumer_Listener); Console.ReadLine(); } catch (Exception ex) { Console.WriteLine(ex.ToString()); throw new Exception("Exception in Listening ", ex); } } The OnMessage handler: static void consumer_Listener(IMessage message) { IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616/"); using (IConnection connection = factory.CreateConnection()) { //Create the Session using (ISession session = connection.CreateSession()) { //Create the Producer for the topic/queue // IMessageProducer prod = session.CreateProducer(new Apache.NMS.ActiveMQ.Commands.ActiveMQTempQueue(message.NMSDestination)); IMessageProducer producer = session.CreateProducer(message.NMSDestination); // Create Response // IMessage response = session.CreateMessage(); ITextMessage response = producer.CreateTextMessage("Replied from VS2010 Test"); //response.NMSReplyTo = new Apache.NMS.ActiveMQ.Commands.ActiveMQQueue("testQ1"); response.NMSCorrelationID = message.NMSCorrelationID; if (message.NMSReplyTo != null) { producer.Send(message.NMSReplyTo, response); Console.WriteLine("Receive: " + ((ITextMessage)message).NMSCorrelationID); Console.WriteLine("Received from : " + message.NMSDestination.ToString()); Console.WriteLine("----------------------------------------------------"); } } } } Every time I send a request to the listener, the response is sent repeatedly. The first message has the NMSReplyTo property set while the other does not. My workaround to stop this is to check the NMSReplyTo property: if (message.NMSReplyTo != null) { producer.Send(message.NMSReplyTo, response); Console.WriteLine("Receive: " + ((ITextMessage)message).NMSCorrelationID); Console.WriteLine("Received from : " + message.NMSDestination.ToString()); Console.WriteLine("----------------------------------------------------"); } My understanding is that this happens because the listener's response is sent in a circle back to the same queue. Could you help me figure out how to fix this? Many thanks, Chris
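    For comparison (my sketch, not code from the post): a common shape for the requesting side is to create a temporary queue for replies, so the reply can never land back on the request queue and be consumed by the listener again. Using Apache.NMS, with "RequestQ" as a placeholder queue name:

        using Apache.NMS;
        using Apache.NMS.ActiveMQ;

        class Requester
        {
            static void Main()
            {
                IConnectionFactory factory = new ConnectionFactory("tcp://localhost:61616/");
                using (IConnection connection = factory.CreateConnection())
                using (ISession session = connection.CreateSession())
                {
                    connection.Start();

                    IDestination requestQueue = session.GetQueue("RequestQ");      // hypothetical queue name
                    ITemporaryQueue replyQueue = session.CreateTemporaryQueue();   // private reply destination

                    IMessageProducer producer = session.CreateProducer(requestQueue);
                    IMessageConsumer replies = session.CreateConsumer(replyQueue);

                    ITextMessage request = session.CreateTextMessage("ping");
                    request.NMSReplyTo = replyQueue;                 // reply goes here, not to RequestQ
                    request.NMSCorrelationID = System.Guid.NewGuid().ToString();
                    producer.Send(request);

                    ITextMessage reply = (ITextMessage)replies.Receive(System.TimeSpan.FromSeconds(10));
                    System.Console.WriteLine(reply != null ? reply.Text : "no reply");
                }
            }
        }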

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background: One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads, some designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading. What it is: The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher-priority threads first: it will also provide them with more exclusive access to hardware resources if they are available. How does it work? Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work. Best practices: If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads are critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking into that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.
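    To make that concrete, here is roughly what placing a process (or one of its threads) in the Fixed Priority class at priority 60 looks like with priocntl; the process and LWP IDs below are placeholders:

        # put process 1234 into the FX class at user priority 60
        priocntl -s -c FX -m 60 -p 60 -i pid 1234

        # or target just the producer thread (LWP 3 of process 1234)
        priocntl -s -c FX -m 60 -p 60 -i lwpid 1234/3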

    Read the article

  • SSD-HDD price parity

    - by jchang
    It is hard to believe, but we are essentially at SSD-HDD price parity. Of course I am comparing enterprise-class 10K/15K HDDs to consumer-grade SSDs. Below are prices I am seeing: 300GB 15K HDD, $370; 900GB 10K HDD, $600; 1TB 7200 HDD, $230 (less for consumer HDDs); 512GB SATA SSD, $400-600; Intel SSD DC S3700 400GB, $940. The 512GB SATA SSDs are consumer grade, MLC NAND, with only 7% over-provisioning. That is 512GB (1GB = 2^30) of NAND with 512GB (1GB = 10^9) of user capacity; 512 x 2^30 bytes is roughly 549.8 x 10^9 bytes, or about 7% more raw NAND than the user sees. Intel just announced the SSD...(read more)

    Read the article

  • Why does ActiveMQ hold messages that should be deleted from Topic?

    - by rauch
    I use ActiveMQ as a notification system (pub/sub model). On the server: if any data changes, the server sends the updated data (a file) to a topic using BlobMessages. A few clients subscribe to this topic and get the updated file if it exists there. The problem is that all of the BlobMessages sent to the topic are held by ActiveMQ indefinitely. this.producer = new ProducerTool.Builder("tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/", "ServerProdTopic").topic(true) .transacted(false).durable(false).timeToLive(10000L).build(); this.consumer = new ConsumerTool.Builder("tcp://localhost:61616", "ServerConsTopic").topic(true) .transacted(false).durable(false).build(); consumer.setMessageListener(this); The file is sent: connection = createConnection(); session = createSession(connection); producer = createProducer(session); BlobMessage blobMsg = ((ActiveMQSession) session).createBlobMessage(resource); blobMsg.setStringProperty("sourceName", resource.getName()); producer.send(blobMsg); if (transacted) { System.out.println("Producer Committing..."); session.commit(); } Where createConnection is: protected Connection createConnection() throws JMSException, Exception { ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url); //connectionFactory.getBlobTransferPolicy().setUploadUrl("http://localhost:8161/fileserver/"); Connection connection = connectionFactory.createConnection(); connection.start(); ((ActiveMQConnection) connection).setCopyMessageOnSend(false); return connection; } Everything that could be relevant I have set as needed: Session.AUTO_ACKNOWLEDGE; non-durable subscription; TimeToLive = 9000; JMSDeliveryMode = Non-Persistent. What I see at runtime: in the ActiveMQ directory ~/apache-activemq-5.3.0/webapps/fileserver/ there are all the files that were delivered and not delivered to subscribers. Why? Sometimes the server sends big files of about 1 GB, and even these files are held in that directory, even after stopping the subscribers (clients), the publisher (server), and the ActiveMQ broker.
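    For what it's worth (my reading of the situation, not an answer from the thread): with BlobMessages the payload lives on the HTTP fileserver rather than inside the broker, and nothing removes it automatically; the consumer can delete it once it has fetched the content, assuming a client version that exposes ActiveMQBlobMessage.deleteFile(). A sketch:

        import java.io.InputStream;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import org.apache.activemq.BlobMessage;
        import org.apache.activemq.command.ActiveMQBlobMessage;

        public class BlobCleanupListener implements MessageListener {
            public void onMessage(Message message) {
                try {
                    if (message instanceof BlobMessage) {
                        BlobMessage blob = (BlobMessage) message;
                        InputStream in = blob.getInputStream();
                        try {
                            // copy the stream to a local file here
                        } finally {
                            in.close();
                        }
                        // remove the uploaded blob from the fileserver once it has been consumed
                        ((ActiveMQBlobMessage) blob).deleteFile();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }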

    Read the article

  • OAuth for Google API example using Python / Django

    - by DrDee
    Hi, I am trying to get OAuth working with the Google API using Python. I have tried different OAuth libraries such as oauth, oauth2 and django-oauth, but I cannot get it to work (including the provided examples). For debugging OAuth I use Google's OAuth Playground, and I have studied the API and the OAuth documentation. With some libraries I am struggling to get the right signature; with other libraries I am struggling to convert the request token into an authorized token. What would really help me is if someone could show me a working example for the Google API using one of the above-mentioned libraries. EDIT: My initial question did not lead to any answers, so I have added my code. There are two possible causes of this code not working: 1) Google does not authorize my request token, but I am not quite sure how to detect this; 2) the signature for the access token is invalid, but then I would like to know which OAuth parameters Google is expecting, as I am able to generate a proper signature in the first phase. This is written using oauth2.py and for Django, hence the HttpResponseRedirect. REQUEST_TOKEN_URL = 'https://www.google.com/accounts/OAuthGetRequestToken' AUTHORIZATION_URL = 'https://www.google.com/accounts/OAuthAuthorizeToken' ACCESS_TOKEN_URL = 'https://www.google.com/accounts/OAuthGetAccessToken' CALLBACK = 'http://localhost:8000/mappr/mappr/oauth/' #will become real server when deployed OAUTH_CONSUMER_KEY = 'anonymous' OAUTH_CONSUMER_SECRET = 'anonymous' signature_method = oauth.SignatureMethod_HMAC_SHA1() consumer = oauth.Consumer(key=OAUTH_CONSUMER_KEY, secret=OAUTH_CONSUMER_SECRET) client = oauth.Client(consumer) request_token = oauth.Token('','') #hackish way to be able to access the token in different functions, I know this is bad, but I just want it to get working in the first place :) def authorize(request): if request.GET == {}: tokens = OAuthGetRequestToken() return HttpResponseRedirect(AUTHORIZATION_URL + '?' + tokens) elif request.GET['oauth_verifier'] != '': oauth_token = request.GET['oauth_token'] oauth_verifier = request.GET['oauth_verifier'] OAuthAuthorizeToken(oauth_token) OAuthGetAccessToken(oauth_token, oauth_verifier) #I need to add a Django return object but I am still debugging other phases. def OAuthGetRequestToken(): print '*** OUTPUT OAuthGetRequestToken ***' params = { 'oauth_consumer_key': OAUTH_CONSUMER_KEY, 'oauth_nonce': oauth.generate_nonce(), 'oauth_signature_method': 'HMAC-SHA1', 'oauth_timestamp': int(time.time()), #The timestamp should be expressed in number of seconds after January 1, 1970 00:00:00 GMT. 'scope': 'https://www.google.com/analytics/feeds/', 'oauth_callback': CALLBACK, 'oauth_version': '1.0' } # Sign the request. req = oauth.Request(method="GET", url=REQUEST_TOKEN_URL, parameters=params) req.sign_request(signature_method, consumer, None) tokens =client.request(req.to_url())[1] params = ConvertURLParamstoDictionary(tokens) request_token.key = params['oauth_token'] request_token.secret = params['oauth_token_secret'] return tokens def OAuthAuthorizeToken(oauth_token): print '*** OUTPUT OAuthAuthorizeToken ***' params ={ 'oauth_token' :oauth_token, 'hd': 'default' } req = oauth.Request(method="GET", url=AUTHORIZATION_URL, parameters=params) req.sign_request(signature_method, consumer, request_token) response =client.request(req.to_url()) print response #for debugging purposes def OAuthGetAccessToken(oauth_token, oauth_verifier): print '*** OUTPUT OAuthGetAccessToken ***' params = { 'oauth_consumer_key': OAUTH_CONSUMER_KEY, 'oauth_token': oauth_token, 'oauth_verifier': oauth_verifier, 'oauth_token_secret': request_token.secret, 'oauth_signature_method': 'HMAC-SHA1', 'oauth_timestamp': int(time.time()), 'oauth_nonce': oauth.generate_nonce(), 'oauth_version': '1.0', } req = oauth.Request(method="GET", url=ACCESS_TOKEN_URL, parameters=params) req.sign_request(signature_method, consumer, request_token) response =client.request(req.to_url()) print response return req def ConvertURLParamstoDictionary(tokens): params = {} tokens = tokens.split('&') for token in tokens: token = token.split('=') params[token[0]] = token[1] return params

    Read the article

  • Programmatically Set a QueryStringFilterWebPart / ExcelWebRenderer Connection

    - by user355972
    Code: public static void ChartPageConnector(SPWeb web, string pageURL, string providerWpId, string consumerWpId, string mappedName) { SPFile file = web.Files[pageURL]; SPLimitedWebPartManager mgr = file.GetLimitedWebPartManager(PersonalizationScope.Shared); TransformableFilterValuesToFilterValuesTransformer transformer = new TransformableFilterValuesToFilterValuesTransformer(); transformer.MappedConsumerParameterName = mappedName; //QueryStringFilterWebPart provider = (QueryStringFilterWebPart)mgr.WebParts[providerWpId]; //ExcelWebRenderer consumer = (ExcelWebRenderer)mgr.WebParts[consumerWpId]; JWP.WebPart provider = mgr.WebParts[providerWpId]; JWP.WebPart consumer = mgr.WebParts[consumerWpId]; ProviderConnectionPoint pcp = null; foreach (ProviderConnectionPoint ppoint in mgr.GetProviderConnectionPoints(provider)) { if (ppoint.InterfaceType == typeof(Microsoft.SharePoint.WebPartPages.ITransformableFilterValues)) { pcp = ppoint; break; } } ConsumerConnectionPoint ccp = null; foreach (ConsumerConnectionPoint cpoint in mgr.GetConsumerConnectionPoints(consumer)) { if (cpoint.InterfaceType == typeof(IFilterValues)) { ccp = cpoint; break; } } //mgr.SPConnectWebParts(provider, pcp, consumer, ccp); mgr.SPConnectWebParts(provider, pcp, consumer, ccp, transformer); file.Update(); web.Update(); } Error: The connection point "IFilterValues" on "g_33d68e82_6478_4629_a079_5a7e02ac4695" is disabled. Any Ideas Why?

    Read the article

  • OAuth gives me 401 error

    - by Radek
    I am trying to get the access key but I cannot make it work. request_token.get_access_token is giving me a 401 Unauthorized (OAuth::Unauthorized) error. I copy the authorize_url into my browser, allow the application, and receive some kind of PIN from Twitter, but after hitting enter in my script I always get the 401 error. I did some searching and found that this helped: access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier]) but it is giving me undefined local variable or method `params' for main:Object (NameError). The Ruby script is as follows (I was following this tutorial): gem 'oauth' require 'oauth/consumer' consumer_key = 'your key' consumer_secret ='your secret' consumer=OAuth::Consumer.new "consumer_key", "consumer_secret", {:site=>"http://twitter.com"} #{:site=>"https://agree2.com"} request_token = consumer.get_request_token puts request_token.token puts request_token.secret puts request_token.authorize_url puts "Hit enter when you have completed authorization." STDIN.gets access_token = request_token.get_access_token #access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier]) puts access_token.token puts access_token.secret puts puts access_token.inspect

    Read the article

  • [Rails] OAuth with Digg API

    - by Karl
    I'm attempting to get Rails to play nice with the Digg API's OAuth. I'm using the oauth gem (the Ruby one, not the Rails one). My code looks approximately like this: @consumer = OAuth::Consumer.new(API_KEY, API_SECRET, :scheme => :header, :http_method => :post, :oauth_callback => "http://locahost:3000", :request_token_url => 'http://services.digg.com/1.0/endpoint?method=oauth.getRequestToken', :access_token_url => 'http://services.digg.com/1.0/endpoint?method=oauth.getAccessToken', :authorize_url => 'http://digg.com/oauth/authorize') @request_token = @consumer.get_request_token session[:request_token] = @request_token.token session[:request_token_secret] = @request_token.secret redirect_to @request_token.authorize_url This is by the book in terms of what the gem documentation gave me. However, Digg spits a "400 Bad Request" error back at me when @consumer.get_request_token is called. I can't figure out what I'm doing wrong. Any ideas?

    Read the article

  • Inheriting Web Part from both ICellProvider and ICellConsumer

    - by tyumener
    Hi there. What I'm trying to accomplish is a series of three web parts: one that acts as a provider; a second that is a consumer of the first and at the same time a provider for a third; and a third that is a consumer of the second. When overriding the EnsureInterfaces method, the second parameter is InterfaceType, and I'm able to pass either InterfaceTypes.ICellConsumer or InterfaceTypes.ICellProvider. Is it possible to make the second web part act both as a provider and a consumer?

    Read the article

  • OAuth process for Twitter: the difference between client and web application

    - by Radek
    I managed to make the OAuth process work with the PIN kind of verification. My Twitter application is the client type. When I enter the authorize URL into a web browser and grant the application access, I then have to enter the PIN in my Ruby application. Can I finish the process of getting the access token without the PIN step? My current code is below. What changes do I need to make for it to work without the PIN? gem 'oauth' require 'oauth/consumer' consumer_key = 'w855B2MEJWQr0SoNDrnBKA' consumer_secret ='yLK3Nk1xCWX30p07Id1ahxlXULOkucq5Rve28pNVwE' consumer=OAuth::Consumer.new consumer_key, consumer_secret, {:site=>"http://twitter.com"} request_token = consumer.get_request_token puts request_token.authorize_url puts "Hit enter when you have completed authorization." pin = STDIN.readline.chomp access_token = request_token.get_access_token(:oauth_verifier => pin) puts puts access_token.token puts access_token.secret
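    For reference (my sketch, not from the post): the PIN step exists because desktop/client applications have no callback URL. If the application is registered as a web/browser application with a callback, Twitter redirects back with an oauth_verifier parameter, which takes the place of the PIN. A rough sketch with the oauth gem; the callback URL and session handling are illustrative:

        # in the action that starts the flow
        consumer = OAuth::Consumer.new(consumer_key, consumer_secret, :site => "http://twitter.com")
        request_token = consumer.get_request_token(:oauth_callback => "http://example.com/oauth/callback")
        session[:rtoken]  = request_token.token
        session[:rsecret] = request_token.secret
        redirect_to request_token.authorize_url

        # in the callback action Twitter redirects to
        consumer = OAuth::Consumer.new(consumer_key, consumer_secret, :site => "http://twitter.com")
        request_token = OAuth::RequestToken.new(consumer, session[:rtoken], session[:rsecret])
        access_token = request_token.get_access_token(:oauth_verifier => params[:oauth_verifier])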

    Read the article
