Search Results

Search found 2411 results on 97 pages for 'queue'.

Page 21/97 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty it can immediately return null, and if the queue is full it can immediately discard the entry being offered. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or does anyone know the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice versa.
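
    As a rough sketch (not from the original question), the non-blocking behaviour described above, returning null when empty and dropping the offer when full, maps naturally onto a bounded java.util.concurrent queue; the class name and capacity below are arbitrary:

        import java.util.concurrent.ArrayBlockingQueue;

        // Minimal single-producer/single-consumer wrapper with the semantics
        // described in the question: offer() drops the entry (returns false)
        // when the queue is full, poll() returns null when it is empty.
        public class SpscQueue<T> {
            private final ArrayBlockingQueue<T> queue;

            public SpscQueue(int capacity) {
                this.queue = new ArrayBlockingQueue<>(capacity);
            }

            public boolean tryPublish(T item) {   // producer side, never blocks
                return queue.offer(item);
            }

            public T tryConsume() {               // consumer side, never blocks
                return queue.poll();
            }

            public static void main(String[] args) {
                SpscQueue<String> q = new SpscQueue<>(1024);
                q.tryPublish("hello");
                System.out.println(q.tryConsume()); // "hello"
                System.out.println(q.tryConsume()); // null, queue is empty
            }
        }

    Note that ConcurrentLinkedQueue itself is unbounded and lock-free (it uses CAS rather than locks), so it cannot reject offers when "full"; a bounded structure such as ArrayBlockingQueue, or a dedicated single-producer/single-consumer ring buffer, is closer to the stated requirements.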

    Read the article

  • Upstart: cannot run as root

    - by Ronni Egeriis
    I have made this upstart script, which starts a Node.js service. But all of a sudden the service stopped, and upstart failed to restart it. Now when I try to start it manually, it fails to recognize my service: start: Unknown job: queue The script is properly placed in /etc/init, and should have the correct rights: -rw-r--r-- 1 root root 200 Aug 7 13:30 queue.conf When I check the config file with init-checkconf, however, it says that it is not able to run as root: root@production1:~# init-checkconf /etc/init/queue.conf ERROR: cannot run as root What causes this error and how do I solve it? Debug info: Ubuntu 12.04.3 LTS root@production1:~# service --version service ver. 0.91-ubuntu1 Edit: Here's queue.conf: description "Echo.it command queue" author "Ronni Egeriis Persson <[email protected]>" stop on shutdown respawn respawn 20 5 exec sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 The command sudo -u beanstalk /usr/bin/node /var/www/queue/index.js >> /var/log/queue.log 2>&1 works fine when run manually.

    Read the article

  • Connect to ibm mq with jms . Specify the channel and queue manager

    - by bhargav
    How do I specify which queue manager to connect to in my system properties? Here is the code: Properties properties = new Properties(); properties.setProperty("java.naming.factory.initial", "com.ibm.mq.jms.context.WMQInitialContextFactory"); properties.setProperty("java.naming.provider.url", "localhost:1414/SYSTEM.DEF.SVRCONN"); Context context = new InitialContext(properties); factory = (QueueConnectionFactory) context.lookup("TESTOUT"); The lookup always gets the TEST queue only; I am not able to connect to the TESTOUT queue.
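
    As a hedged sketch (not from the original question), one common alternative to the JNDI lookup is to configure the WebSphere MQ connection factory directly, which is where the queue manager and channel are specified; the host, port, channel, queue manager and queue names below are placeholders:

        import javax.jms.Queue;
        import javax.jms.QueueConnection;
        import javax.jms.QueueSession;
        import javax.jms.Session;
        import com.ibm.mq.jms.JMSC;
        import com.ibm.mq.jms.MQQueueConnectionFactory;

        public class MqConnectExample {
            public static void main(String[] args) throws Exception {
                MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
                factory.setHostName("localhost");
                factory.setPort(1414);
                factory.setTransportType(JMSC.MQJMS_TP_CLIENT_MQ_TCPIP); // client (TCP) connection
                factory.setQueueManager("MY.QUEUE.MANAGER");             // choose the queue manager here
                factory.setChannel("SYSTEM.DEF.SVRCONN");                // and the channel here

                QueueConnection connection = factory.createQueueConnection();
                QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("queue:///TESTOUT");   // target queue by name
                // ... create a sender or receiver on the queue as usual ...
                connection.close();
            }
        }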

    Read the article

  • Is there a better way to count the messages in an Message Queue (MSMQ)?

    - by Damovisa
    I'm currently doing it like this: MessageQueue queue = new MessageQueue(@".\Private$\myqueue"); MessageEnumerator messageEnumerator = queue.GetMessageEnumerator2(); int i = 0; while (messageEnumerator.MoveNext()) { i++; } return i; But for obvious reasons, it just feels wrong - I shouldn't have to iterate through every message just to get a count, should I? Is there a better way?

    Read the article

  • I don't receive mail notifications... Nagios Core 4

    - by alessio
    I have a problem with automatic mail notifications in Nagios Core 4, installed on an Ubuntu 12.04 LTS server... I have tried to send mail as the nagios user and as the root user with the command: echo "test" | mail -s "test mail" [email protected] and I received the mail correctly... but I don't receive any automatic mail notifications... I don't know what to do to resolve this issue! :( These are my configuration files (commands.cfg, contacts.cfg, nagios.log, mail.log):
    commands.cfg (the path /usr/bin/mail is the right path): # 'notify-host-by-email' command definition define command{ command_name notify-host-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nHost: $HOSTNAME$\nState: $HOSTSTATE$\nAddress: $HOSTADDRESS$\nInfo: $HOSTOUTPUT$\n\nDate/Time: $LONGDATETIME$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Host Alert: $HOSTNAME$ is $HOSTSTATE$ **" $CONTACTEMAIL$ } # 'notify-service-by-email' command definition define command{ command_name notify-service-by-email command_line /usr/bin/printf "%b" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\n\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\n\nDate/Time: $LONGDATETIME$\n\nAdditional Info:\n\n$SERVICEOUTPUT$\n" | /usr/bin/mail -s "** $NOTIFICATIONTYPE$ Service Alert: $HOSTALIAS$/$SERVICEDESC$ is $SERVICESTATE$ **" $CONTACTEMAIL$ } # 'process-host-perfdata' command definition define command{ command_name process-host-perfdata command_line /usr/bin/printf "%b" "$LASTHOSTCHECK$\t$HOSTNAME$\t$HOSTSTATE$\t$HOSTATTEMPT$\t$HOSTSTATETYPE$\t$HOSTEXECUTIONTIME$\t$HOSTOUTPUT$\t$HOSTPERFDATA$\n" >> /usr/local/nagios/var/host-perfdata.out } # 'process-service-perfdata' command definition define command{ command_name process-service-perfdata command_line /usr/bin/printf "%b" "$LASTSERVICECHECK$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICESTATE$\t$SERVICEATTEMPT$\t$SERVICESTATETYPE$\t$SERVICEEXECUTIONTIME$\t$SERVICELATENCY$\t$SERVICEOUTPUT$\t$SERVICEPERFDATA$\n" >> /usr/local/nagios/var/service-perfdata.out }
    contacts.cfg: define contact{ contact_name supporto alias Supporto Clienti DEA service_notification_period 24x7 host_notification_period 24x7 service_notification_options w,u,c,r host_notification_options d,r service_notification_commands notify-service-by-email host_notification_commands notify-host-by-email email [email protected] } define contactgroup{ contactgroup_name admins alias Nagios Administrators members supporto }
    nagios.log: [1401871412] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401871953] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 541 seconds ago [1401872133] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago [1401872321] SERVICE ALERT: posta;Swap Usage;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872322] SERVICE ALERT: fileserver;Current Users;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872420] SERVICE ALERT: archivio;Disk Space;CRITICAL;SOFT;1;CRITICAL - Plugin timed out after 10 seconds [1401872492] SERVICE ALERT: fileserver;Current Users;OK;SOFT;2;USERS OK - 1 users currently logged in [1401872492] SERVICE ALERT: posta;Swap Usage;OK;SOFT;2;SWAP OK: 100% free (1984 MB out of 1984 MB) [1401872590] SERVICE ALERT: archivio;Disk Space;OK;SOFT;2;DISK OK [1401872931] Auto-save of retention data completed successfully. [1401873333] SERVICE ALERT: backups;Nagios Status;WARNING;SOFT;1;NAGIOS WARNING: 36 processes, status log updated 402 seconds ago [1401873513] SERVICE ALERT: backups;Nagios Status;OK;SOFT;2;NAGIOS OK: 36 processes, status log updated 180 seconds ago
    mail.log (I think the problem is here, but I don't know how to resolve it): Jun 4 10:00:01 backups sm-msp-queue[6109]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:01:01 backups sm-msp-queue[6109]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:20:01 backups sm-msp-queue[7247]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:21:01 backups sm-msp-queue[7247]: unable to qualify my own domain name (backups) -- using short name Jun 4 10:40:01 backups sm-msp-queue[8327]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 10:41:01 backups sm-msp-queue[8327]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:00:01 backups sm-msp-queue[9549]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:01:01 backups sm-msp-queue[9549]: unable to qualify my own domain name (backups) -- using short name Jun 4 11:20:01 backups sm-msp-queue[10678]: My unqualified host name (backups) unknown; sleeping for retry Jun 4 11:21:01 backups sm-msp-queue[10678]: unable to qualify my own domain name (backups) -- using short name
    I'm at the last step and I want to finish this Nagios Core setup! :) Any help would be appreciated! :)
    host definition (this host has its disk almost full and is in a hard state, but there is no notification): define host{ use generic-host ; Name of host template to use host_name posta alias Server Posta ESA address 10.10.2.102 parents xen1, xen2 icon_image redhat.png statusmap_image redhat.gd2 }
    service definition: define service{ use generic-service host_name xen1, maestro, xen2, posta, nas002, serv2, esasrvmi02, esaubuntumi service_description Disk Space check_command ssh_all_disks!10%!5% }
    Notification is allowed for the contact definition you gave, but is it also allowed at the service level? Sorry, but I don't understand this. :(

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspxFollowing on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email address, mobile apps and SQS queues all in one go. As the topic owner, when you request a subscription on any channel, the owner needs to confirm they’re happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK. Let’s say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compare to the SQS class which is just called QueueClient): var request = new CreateTopicRequest(); request.Name = TopicName; var response = _snsClient.CreateTopic(request); TopicArn = response.TopicArn; In the response from AWS (which I’m assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request: var response = _snsClient.Subscribe(new SubscribeRequest { TopicArn = TopicArn, Protocol = "sqs", Endpoint = _queueClient.QueueArn }); That response will give you an ARN for the subscription, which you’ll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn’t give you a nice mechanism for doing that, so I’ve extended my AWS wrapper with a method that encapsulates it: internal void AllowSnsToSendMessages(TopicClient topicClient) { var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn); var request = new SetQueueAttributesRequest(); request.Attributes.Add("Policy", policy); request.QueueUrl = QueueUrl; var response = _sqsClient.SetQueueAttributes(request); } That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action: public const string AllowSendFormat= @"{ ""Statement"": [ { ""Sid"": ""MySQSPolicy001"", ""Effect"": ""Allow"", ""Principal"": { ""AWS"": ""*"" }, ""Action"": ""sqs:SendMessage"", ""Resource"": ""%QueueArn%"", ""Condition"": { ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" } } } ] }"; There’s a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. 
Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use: var topicClient = new TopicClient("BigNews", "ImListening"); And the topic client has a Subscribe() method, which calls into the message pump on the queue client: topicClient.Subscribe(x=>Log.Debug(x.Body)); var message = {}; //etc. topicClient.Publish(message); So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.

    Read the article

  • Does a high run queue length average result in poor performance for a web server?

    - by Domino
    I'm trying to narrow down the list of suspects for web servers that perform moderately well most of the time, with occasional bouts of poor performance. I'm analyzing the data collected and summarized by sar. One of the things I've noticed is a high number of tasks in the run queue.
    10:15:01 AM  runq-sz  plist-sz  ldavg-1  ldavg-5  ldavg-15  blocked
    10:25:01 AM        2       150     0.05     0.05      0.06        0
    10:35:01 AM        4       149     0.08     0.12      0.09        0
    10:45:01 AM        6       150     0.13     0.19      0.15        0
    10:55:01 AM        1       150     0.08     0.10      0.13        0
    11:05:01 AM        4       150     0.20     0.35      0.23        0
    11:15:01 AM        3       149     0.02     0.09      0.15        0
    11:25:01 AM        7       149     0.04     0.05      0.11        0
    11:35:01 AM        4       150     0.14     0.15      0.13        0
    11:45:01 AM        6       150     0.27     0.18      0.16        0
    11:55:01 AM        5       150     0.08     0.10      0.13        0
    12:05:01 PM        3       149     0.35     0.40      0.26        0
    12:15:01 PM       19       155     0.02     0.10      0.16        1
    12:25:01 PM        2       150     0.00     0.07      0.12        0
    12:35:02 PM        3       151     0.58     0.24      0.17        0
    12:45:01 PM        8       150     0.02     0.13      0.15        0
    12:55:01 PM        6       149     0.81     0.29      0.18        0
    01:05:01 PM        3       148     0.00     0.09      0.13        0
    01:15:01 PM        7       149     0.00     0.04      0.11        0
    I believe these are 10-minute averages. Is this an indicator that the web server is not performing as fast as it could if the average run queue length were lower?

    Read the article

  • BizTalk and SQL: Alternatives to the SQL receive adapter. Using Msmq to receive SQL data

    - by Leonid Ganeline
    If we have to get data from the SQL database, the standard way is to use a receive port with SQL adapter. SQL receive adapter is a solicit-response adapter. It periodically polls the SQL database with queries. That’s only way it can work. Sometimes it is undesirable. With new WCF-SQL adapter we can use the lightweight approach but still with the same principle, the WCF-SQL adapter periodically solicits the database with queries to check for the new records. Imagine the situation when the new records can appear in very broad time limits, some - in a second interval, others - in the several minutes interval. Our requirement is to process the new records ASAP. That means the polling interval should be near the shortest interval between the new records, a second interval. As a result the most of the poll queries would return nothing and would load the database without good reason. If the database is working under heavy payload, it is very undesirable. Do we have other choices? Sure. We can change the polling to the “eventing”. The good news is the SQL server could issue the event in case of new records with triggers. Got a new record –the trigger event is fired. No new records – no the trigger events – no excessive load to the database. The bad news is the SQL Server doesn’t have intrinsic methods to send the event data outside. For example, we would rather use the adapters that do listen for the data and do not solicit. There are several such adapters-listeners as File, Ftp, SOAP, WCF, and Msmq. But the SQL Server doesn’t have methods to create and save files, to consume the Web-services, to create and send messages in the queue, does it? Can we use the File, FTP, Msmq, WCF adapters to get data from SQL code? Yes, we can. The SQL Server 2005 and 2008 have the possibility to use .NET code inside SQL code. See the SQL Integration. How it works for the Msmq, for example: ·         New record is created, trigger is fired ·         Trigger calls the CLR stored procedure and passes the message parameters to it ·         The CLR stored procedure creates message and sends it to the outgoing queue in the SQL Server computer. ·         Msmq service transfers message to the queue in the BizTalk Server computer. ·         WCF-NetMsmq adapter receives the message from this queue. For the File adapter the idea is the same, the CLR stored procedure creates and stores the file with message, and then the File adapter picks up this file. Using WCF-NetMsmq adapter to get data from SQL I am describing the full set of the deployment and development steps for the case with the WCF-NetMsmq adapter. Development: 1.       Create the .NET code: project, class and method to create and send the message to the MSMQ queue. 2.       Create the SQL code in triggers to call the .NET code. Installation and Deployment: 1.       SQL Server: a.       Register the CLR assembly with .NET (CLR) code b.      Install the MSMQ Services 2.       BizTalk Server: a.       Install the MSMQ Services b.      Create the MSMQ queue c.       Create the WCF-NetMsmq receive port. The detailed description is below. Code .NET code … using System.Xml; using System.Xml.Linq; using System.Xml.Serialization;   //namespace MyCompany.MySolution.MyProject – doesn’t work. The assembly name is MyCompany.MySolution.MyProject // I gave up with the compound namespace. Seems the CLR Integration cannot work with it L. Maybe I’m wrong.     
public class Event     {         static public XElement CreateMsg(int par1, int par2, int par3)         {             XNamespace ns = "http://schemas.microsoft.com/Sql/2008/05/TypedPolling/my_storedProc";             XElement xdoc =                 new XElement(ns + "TypedPolling",                     new XElement(ns + "TypedPollingResultSet0",                         new XElement(ns + "TypedPollingResultSet0",                             new XElement(ns + "par1", par1),                             new XElement(ns + "par2", par2),                             new XElement(ns + "par3", par3),                         )                     )                 );             return xdoc;         }     }   //////////////////////////////////////////////////////////////////////// … using System.ServiceModel; using System.ServiceModel.Channels; using System.Transactions; using System.Data; using System.Data.Sql; using System.Data.SqlTypes;   public class MsmqHelper {     [Microsoft.SqlServer.Server.SqlProcedure]     // msmqAddress as "net.msmq://localhost/private/myapp.myqueue";     public static void SendMsg(string msmqAddress, string action, int par1, int par2, int par3)     {         using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Suppress))         {             NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);             binding.ExactlyOnce = true;             EndpointAddress address = new EndpointAddress(msmqAddress);               using (ChannelFactory<IOutputChannel> factory = new ChannelFactory<IOutputChannel>(binding, address))             {                 IOutputChannel channel = factory.CreateChannel();                 try                 {                     XElement xe = Event.CreateMsg(par1, par2, par3);                     XmlReader xr = xe.CreateReader();                     Message msg = Message.CreateMessage(MessageVersion.Default, action, xr);                     channel.Send(msg);                     //SqlContext.Pipe.Send(…); // to test                 }                 catch (Exception ex)                 { …                 }             }             scope.Complete();         }     }   SQL code in triggers   -- sp_SendMsg was registered as a name of the MsmqHelper.SendMsg() EXEC sp_SendMsg'net.msmq://biztalk_server_name/private/myapp.myqueue', 'Create', @par1, @par2, @par3   Installation and Deployment On the SQL Server Registering the CLR assembly 1.       Prerequisites: .NET 3.5 SP1 Framework. It could be the issue for the production SQL Server! 2.       For more information, please, see the link http://nielsb.wordpress.com/sqlclrwcf/ 3.       Copy files: >copy “\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” If your machine is a 64-bit, run two commands: >copy “\Windows\Microsoft.net\Framework\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” >copy “\Windows\Microsoft.net\Framework64\v3.0\Windows Communication Foundation\Microsoft.Transactions.Bridge.dll” “\Program Files\Reference Assemblies\Microsoft\Framework\v3.0 \Microsoft.Transactions.Bridge.dll” 4.       
Execute the SQL code to register the .NET assemblies: -- For x64 OS: CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\System.Web.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe CREATE ASSEMBLY [System.Xml.Linq] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5\System.Xml.Linq.dll' WITH permission_set = unsafe   -- For x32 OS: --CREATE ASSEMBLY SMdiagnostics AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v3.0\Windows Communication Foundation\SMdiagnostics.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Web] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Web.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.Messaging] AUTHORIZATION dbo FROM 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\System.Messaging.dll' WITH permission_set = unsafe --CREATE ASSEMBLY [System.ServiceModel] AUTHORIZATION dbo FROM 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.0\System.ServiceModel.dll' WITH permission_set = unsafe 5.       Register the assembly with the external stored procedure: CREATE ASSEMBLY [HelperClass] AUTHORIZATION dbo FROM ’<FilePath>MyCompany.MySolution.MyProject.dll' WITH permission_set = unsafe where the <FilePath> - the path of the file on this machine! 6. Create the external stored procedure CREATE PROCEDURE sp_SendMsg (        @msmqAddress nvarchar(100),        @Action NVARCHAR(50),        @par1 int,        @par2 int,        @par3 int ) AS EXTERNAL NAME HelperClear.MsmqHelper.SendMsg   Installing the MSMQ Services 1.       Check if the MSMQ service is NOT installed. To check:  Start / Administrative Tools / Computer Management, on the left pane open the “Services and Applications”, search to the “Message Queuing”. If you cannot see it, follow next steps. 2.       Start / Control Panel / Programs and Features 3.       Click “Turn Windows Features on or off” 4.       Click Features, click “Add Features” 5.       Scroll down the feature list; open the “Message Queuing” / “Message Queuing Services”; and check the “Message Queuing Server” option  6.       Click Next; Click Install; wait to the successful finish of the installation Creating the MSMQ queue We don’t need to create the queue on the “sender” side. On the BizTalk Server Installing the MSMQ Services The same is as for the SQL Server. Creating the MSMQ queue 1.       Start / Administrative Tools / Computer Management, on the left pane open the “Services and Applications”, open the “Message Queuing”, and open the “Private Queues”. 2.       Right-click the “Private Queues”; choose New; choose “Private Queue”. 3.       Type the Queue name as ’myapp.myqueue'; check the “Transactional” option. Creating the WCF-NetMsmq receive port I will not go through this step in all details. It is straightforward. URI for this receive location should be 'net.msmq://localhost/private/myapp.myqueue'. Notes ·         The biggest problem is usually on the step the “Registering the CLR assembly”. 
It is hard to predict where the assemblies in the assembly list are located and which version should be used, x86 or x64. It is a pity the integration of SQL with .NET is this rough. ·         In a couple of cases the new WCF-NetMsmq port was not able to work with the queue. Try replacing the WCF-NetMsmq port with a WCF-Custom port using netMsmqBinding; that worked fine for me. ·         To test how messages go through the queue you can turn on the Journal/Enabled option for the queue. I used the QueueExplorer utility to look at the messages in the Journal. Computer Management can also show the messages, but it shows only a small part of the message body and in an awkward format. QueueExplorer does a better job; it shows the whole body, and XML messages are displayed in a nicely colored format.

    Read the article

  • How to get local ActiveMQ broker to "mirror" a queue on a remote ActiveMQ broker?

    - by T.K.
    I have a local ActiveMQ broker which is on an unreliable internet connection, and also a remote ActiveMQ broker in a reliable datacenter. I have already sorted out a "store and forward" setup so that outgoing messages are sent to the remote broker when the Internet connection is available. That alone works great, but only for outbound messages. Now I have to do the reverse. Here is the scenario: A new message appears in the remote ActiveMQ broker. The message is put into a specific queue. In a few minutes, the Internet connection becomes available to the local ActiveMQ broker. The local broker should then be able to pull the message from the remote broker, and place it in its own local queue. Local consumers will then be able to see the message. So in essence, I need the local broker to become a subscribed consumer of the remote queue. I have looked through the ActiveMQ documentation but I can't find anything yet about how to do this in the .xml configuration file. Is this what I should be looking for? See: "ActiveMQ: JMS to JMS Bridge". Any advice and tips would be highly appreciated.
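
    As a hedged sketch of the usual approach (a network connector from the local broker to the remote one, which is what the JMS-to-JMS bridge documentation is getting at), the configuration is shown here programmatically on an embedded broker; in activemq.xml the same thing is expressed with a networkConnector element carrying the same uri and duplex settings. Broker names and URIs below are placeholders:

        import org.apache.activemq.broker.BrokerService;
        import org.apache.activemq.network.NetworkConnector;

        public class LocalBrokerWithBridge {
            public static void main(String[] args) throws Exception {
                BrokerService broker = new BrokerService();      // local broker on the unreliable side
                broker.setBrokerName("localBroker");
                broker.addConnector("tcp://0.0.0.0:61616");

                // Network connector to the remote broker in the datacenter.
                // duplex=true lets messages flow both ways over the single
                // outbound connection, so messages queued on the remote broker
                // are forwarded to local consumers whenever the link is up.
                NetworkConnector nc = broker.addNetworkConnector("static:(tcp://remote-broker:61616)");
                nc.setDuplex(true);

                broker.start();
                broker.waitUntilStopped();
            }
        }

    Note that a network connector forwards messages based on consumer demand, so a consumer must be subscribed to the queue on the local broker for the remote messages to be pulled across.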

    Read the article

  • How (and if) to write a single-consumer queue using the task parallel library?

    - by Eric
    I've heard a bunch of podcasts recently about the TPL in .NET 4.0. Most of them describe background activities like downloading images or doing a computation, using tasks so that the work doesn't interfere with a GUI thread. Most of the code I work on has more of a multiple-producer / single-consumer flavor, where work items from multiple sources must be queued and then processed in order. One example would be logging, where log lines from multiple threads are sequentialized into a single queue for eventual writing to a file or database. All the records from any single source must remain in order, and records from the same moment in time should be "close" to each other in the eventual output. So multiple threads or tasks or whatever are all invoking a queuer: lock( _queue ) // or use a lock-free queue! { _queue.enqueue( some_work ); _queueSemaphore.Release(); } And a dedicated worker thread processes the queue: while( _queueSemaphore.WaitOne() ) { lock( _queue ) { some_work = _queue.dequeue(); } deal_with( some_work ); } It's always seemed reasonable to dedicate a worker thread for the consumer side of these tasks. Should I write future programs using some construct from the TPL instead? Which one? Why?

    Read the article

  • Mocking successive calls of similar type via sequential mocking

    - by mehfuzh
    In this post, I show how you can benefit from the sequential mocking feature in JustMock for setting up expectations on successive calls of the same type. To start, let's first consider the following dummy database and entity class.
    public class Person {     public virtual string Name { get; set; }     public virtual int Age { get; set; } }   public interface IDataBase {     T Get<T>(); }
    Now, our test goal is to return a different entity for successive calls to IDataBase.Get<T>(). By default, the behavior in JustMock is override, which is similar to other popular mocking tools. Override means that the tool will always consider the latest user setup. Therefore, the first example will return the latest entity every time and will fail at line #12:
    Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Queue<Person> queue = new Queue<Person>();   Mock.Arrange(() => database.Get<Person>()).Returns(() => queue.Dequeue()); Mock.Arrange(() => database.Get<Person>()).Returns(person2);   // this will fail Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode());   Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());
    We can solve it in the following way, using a Queue that hands out the next item on each call:
    Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Queue<Person> queue = new Queue<Person>();   queue.Enqueue(person1); queue.Enqueue(person2);   Mock.Arrange(() => database.Get<Person>()).Returns(queue.Dequeue());   Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode()); Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());
    This will ensure that the right entity is returned, but it is not an elegant solution. So, in JustMock we introduced a new option that lets you set up your expectations sequentially, like this:
    Person person1 = new Person { Age = 30, Name = "Kosev" }; Person person2 = new Person { Age = 80, Name = "Mihail" };   var database = Mock.Create<IDataBase>();   Mock.Arrange(() => database.Get<Person>()).Returns(person1).InSequence(); Mock.Arrange(() => database.Get<Person>()).Returns(person2).InSequence();   Assert.Equal(person1.GetHashCode(), database.Get<Person>().GetHashCode()); Assert.Equal(person2.GetHashCode(), database.Get<Person>().GetHashCode());
    The "InSequence" modifier tells the mocking tool to return the expected results in the order specified by the user. The solution, though pretty simple, is neat (to me) and far simpler than using a collection to solve this type of case. Hope that helps. P.S. The example shown in my blog uses an interface, doesn't require a profiler, and you can even use Notepad to build it, referencing Telerik.JustMock.dll; run it with the GUI tools and it will work. But this feature also applies to concrete methods, which requires the JustMock profiler, and can be used for more complex scenarios.

    Read the article

  • How to get Messages Pending Count from a Queue using WLST?

    - by lmestre
    WLST is a scripting language that provides functionality similar to what you have in the WebLogic console, but in a command-line fashion. You can develop your WLST scripts using Eclipse OEPE; read more here: https://blogs.oracle.com/oepe/entry/new_oracle_enterprise_pack_for Finally, here is an example of getting the Messages Pending Count using WLST:
    . ./setDomainEnv.sh
    java weblogic.WLST
    connect('weblogic','welcome1','t3://localhost:7001')
    domainRuntime()
    jms = getMBean('ServerRuntimes/MyManagedServer/JMSRuntime/MyManagedServer.jms/JMSServers/MyJMSServer/Destinations/MyModule!MyQueue')
    jms.getMessagesPendingCount()
    Enjoy! WLST documentation: http://docs.oracle.com/middleware/1212/wls/WLSTG/index.html

    Read the article

  • How do you manage private queue permissions on a Windows Server 2008 cluster?

    - by David
    On Windows Server 2003 there was an option to allow a resource to interact with the desktop; this allowed you to run the Computer Management MMC snap-in against the virtual name of the cluster, allowing you to manage permissions on the private message queues on the cluster. Windows Server 2008 failover clustering has removed that checkbox, so applications can no longer interact with the desktop. My question, then, is how does one go about managing private queue permissions on the clustered (virtual name) server?

    Read the article

  • Server compromised. Bounce message contains many email addresses the message was not sent to

    - by Tim Duncklee
    This is not a dupe. Please read and understand the issue before marking this as a duplicate question that has been answered already. Several customers are reporting bounce messages like the one below. At first I thought their computers had a virus but then I received one that was server generated so the problem is with the server. I've inspected the logs and these email addresses do not appear in the logs. The only thing I see that I do not remember seeing in the past are entries like this: Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: hook_dir = '/var/qmail//handlers/before-queue' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: recipient[3] = '[email protected]' Apr 30 13:34:49 psa86 qmail-queue-handlers[20994]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' I've searched here and the web and maybe I'm just not entering the right search terms but I find nothing on this issue. Does anyone know how a hacker would attach additional email addresses to a message at the server and have them not appear in the logs? CentOS release 5.4, Plesk 8.6, QMail 1.03 Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 82.201.133.227 does not like recipient. Remote host said: 550 #5.1.0 Address rejected. Giving up on 82.201.133.227. <[email protected]>: 64.18.7.10 does not like recipient. Remote host said: 550 No such user - psmtp Giving up on 64.18.7.10. <[email protected]>: 173.194.68.27 does not like recipient. Remote host said: 550-5.1.1 The email account that you tried to reach does not exist. Please try 550-5.1.1 double-checking the recipient's email address for typos or 550-5.1.1 unnecessary spaces. Learn more at 550 5.1.1 http://support.google.com/mail/bin/answer.py?answer=6596 w8si1903qag.18 - gsmtp Giving up on 173.194.68.27. <[email protected]>: 207.115.36.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.23. <[email protected]>: 207.115.37.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.22. <[email protected]>: 207.115.37.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.20. <[email protected]>: 207.115.37.23 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.23. <[email protected]>: 207.115.36.22 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.22. <[email protected]>: 74.205.16.140 does not like recipient. Remote host said: 553 sorry, that domain isn't in my list of allowed rcpthosts; no valid cert for gatewaying (#5.7.1) Giving up on 74.205.16.140. <[email protected]>: 207.115.36.20 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.36.20. <[email protected]>: 207.115.37.21 does not like recipient. Remote host said: 550 5.2.1 <[email protected]>... Addressee unknown, relay=[174.142.62.210] Giving up on 207.115.37.21. <[email protected]>: 192.169.41.23 failed after I sent the message. 
Remote host said: 554 qq Sorry, no valid recipients (#5.1.3) --- Below this line is a copy of the message. Return-Path: <[email protected]> Received: (qmail 15962 invoked from network); 1 May 2013 06:49:34 -0400 Received: from exprod6mo107.postini.com (64.18.1.18) by psa.aaaaaa.com with (DHE-RSA-AES256-SHA encrypted) SMTP; 1 May 2013 06:49:34 -0400 Received: from aaaaaa.com (exprod6lut001.postini.com [64.18.1.199]) by exprod6mo107.postini.com (Postfix) with SMTP id 47F80B8CA4 for <[email protected]>; Wed, 1 May 2013 03:49:33 -0700 (PDT) From: "Support" <[email protected]> To: [email protected] Subject: Detected Potential Junk Mail Date: Wed, 1 May 2013 03:49:33 -0700 Dear [email protected], junk mail protection service has detected suspicious email message(s) since your last visit and directed them to your Message Center. You can inspect your suspicious email at: ... UPDATE: After not seeing this problem for a while, I personally sent a message and immediately got a bounce with several bad addresses that I know I did not send to. These are addresses that are not on my system or on the server. This problem happens with both Mac and Windows clients and with messages generated from Postini and sent to users on my system. This is NOT backscatter. If it was backscatter it would not have the contents of my message in it. UPDATE #2 Here is another bounce. This one was sent by me and the bounce came back immediately. Hi. This is the qmail-send program at psa.aaaaaa.com. I'm afraid I wasn't able to deliver your message to the following addresses. This is a permanent error; I've given up. Sorry it didn't work out. <[email protected]>: 71.74.56.227 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>... User unknown Giving up on 71.74.56.227. <[email protected]>: Connected to 208.34.236.3 but sender was rejected. Remote host said: 550 5.7.1 This system is configured to reject mail from 174.142.62.210 [174.142.62.210] (Host blacklisted - Found on Realtime Black List server 'bl.mailspike.net') <[email protected]>: 66.96.80.22 failed after I sent the message. Remote host said: 552 sorry, mailbox [email protected] is over quota temporarily (#5.1.1) <[email protected]>: 83.145.109.52 does not like recipient. Remote host said: 550 5.1.1 <[email protected]>: Recipient address rejected: User unknown in virtual mailbox table Giving up on 83.145.109.52. <[email protected]>: 69.49.101.234 does not like recipient. Remote host said: 550 5.7.1 <[email protected]>... H:M12 [174.142.62.210] Connection refused due to abuse. Please see http://mailspike.org/anubis/lookup.html or contact your E-mail provider. Giving up on 69.49.101.234. <[email protected]>: 212.55.154.36 does not like recipient. Remote host said: 550-The account has been suspended for inactivity 550 A conta do destinatario encontra-se suspensa por inactividade (#5.2.1) Giving up on 212.55.154.36. <[email protected]>: 199.168.90.102 failed after I sent the message. Remote host said: 552 Transaction [email protected] failed, remote said "550 No such user" (#5.1.1) <[email protected]>: 98.136.217.192 failed after I sent the message. Remote host said: 554 delivery error: dd Sorry your message to [email protected] cannot be delivered. This account has been disabled or discontinued [#102]. - mta1210.sbc.mail.gq1.yahoo.com --- Below this line is a copy of the message. 
Return-Path: <[email protected]> Received: (qmail 2618 invoked from network); 2 Jun 2013 22:32:51 -0400 Received: from 75-138-254-239.dhcp.jcsn.tn.charter.com (HELO ?192.168.0.66?) (75.138.254.239) by psa.aaaaaa.com with SMTP; 2 Jun 2013 22:32:48 -0400 User-Agent: Microsoft-Entourage/12.34.0.120813 Date: Sun, 02 Jun 2013 21:32:39 -0500 Subject: Refinance From: Tim Duncklee <[email protected]> To: Scott jones <[email protected]> Message-ID: <CDD16A79.67344%[email protected]> Thread-Topic: Reference Thread-Index: Ac5gAp2QmTs+LRv0SEOy7AJTX2DWzQ== Mime-version: 1.0 Content-type: multipart/mixed; boundary="B_3453053568_12034440" > This message is in MIME format. Since your mail reader does not understand this format, some or all of this message may not be legible. --B_3453053568_12034440 Content-type: multipart/related; boundary="B_3453053568_11982218" --B_3453053568_11982218 Content-type: multipart/alternative; boundary="B_3453053568_12000660" --B_3453053568_12000660 Content-type: text/plain; charset="ISO-8859-1" Content-transfer-encoding: quoted-printable Scott, ... email body here ... Here are the relevant log entries: Jun 2 22:32:50 psa qmail-queue[2616]: mail: all addreses are uncheckable - need to skip scanning (by deny mode) Jun 2 22:32:50 psa qmail-queue[2616]: scan: the message(drweb.tmp.i2SY0n) sent by [email protected] to [email protected] should be passed without checks, because contains uncheckable addresses Jun 2 22:32:50 psa qmail-queue-handlers[2617]: Handlers Filter before-queue for qmail started ... Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: [email protected] Jun 2 22:32:50 psa qmail-queue-handlers[2617]: hook_dir = '/var/qmail//handlers/before-queue' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: recipient[3] = '[email protected]' Jun 2 22:32:50 psa qmail-queue-handlers[2617]: handlers dir = '/var/qmail//handlers/before-queue/recipient/[email protected]' Jun 2 22:32:51 psa qmail: 1370226771.060211 starting delivery 57: msg 1540285 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.060402 status: local 0/10 remote 1/20 Jun 2 22:32:51 psa qmail: 1370226771.060556 new msg 4915232 Jun 2 22:32:51 psa qmail: 1370226771.060671 info msg 4915232: bytes 687899 from <[email protected]> qp 2618 uid 2020 Jun 2 22:32:51 psa qmail-remote-handlers[2619]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-queue-handlers[2617]: starter: submitter[2618] exited normally Jun 2 22:32:51 psa qmail-remote-handlers[2619]: from= Jun 2 22:32:51 psa qmail-remote-handlers[2619]: [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078732 starting delivery 58: msg 4915232 to remote [email protected] Jun 2 22:32:51 psa qmail: 1370226771.078825 status: local 0/10 remote 2/20 Jun 2 22:32:51 psa qmail-remote-handlers[2621]: Handlers Filter before-remote for qmail started ... Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected] Jun 2 22:32:51 psa qmail-remote-handlers[2621]: [email protected]

    Read the article

  • Tuxedo Load Balancing

    - by Todd Little
    A question I often receive is how does Tuxedo perform load balancing.  This is often asked by customers that see an imbalance in the number of requests handled by servers offering a specific service. First of all let me say that Tuxedo really does load or request optimization instead of load balancing.  What I mean by that is that Tuxedo doesn't attempt to ensure that all servers offering a specific service get the same number of requests, but instead attempts to ensure that requests are processed in the least amount of time.   Simple round robin "load balancing" can be employed to ensure that all servers for a particular service are given the same number of requests.  But the question I ask is, "to what benefit"?  Instead Tuxedo scans the queues (which may or may not correspond to servers based upon SSSQ - Single Server Single Queue or MSSQ - Multiple Server Single Queue) to determine on which queue a request should be placed.  The scan is always performed in the same order and during the scan if a queue is empty the request is immediately placed on that queue and request routing is done.  However, should all the queues be busy, meaning that requests are currently being processed, Tuxedo chooses the queue with the least amount of "work" queued to it where work is the sum of all the requests queued weighted by their "load" value as defined in the UBBCONFIG file.  What this means is that under light loads, only the first few queues (servers) process all the requests as an empty queue is often found before reaching the end of the scan.  Thus the first few servers in the queue handle most of the requests.  While this sounds non-optimal, in fact it capitalizes on the underlying operating systems and hardware behavior to produce the best possible performance.  Round Robin scheduling would spread the requests across all the available servers and thus require all of them to be in memory, and likely not share much in the way of hardware or memory caches.  Tuxedo's system maximizes the various caches and thus optimizes overall performance.  Hopefully this makes sense and now explains why you may see a few servers handling most of the requests.  Under heavy load, meaning enough load to keep all servers that can handle a request busy, you should see a relatively equal number of requests processed.  Next post I'll try and cover how this applies to servers in a clustered (MP) environment because the load balancing there is a little more complicated. Regards,Todd LittleOracle Tuxedo Chief Architect
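
    As a rough illustration of the scan described above (this is not Tuxedo source code, just a sketch of the stated rule), the routing logic amounts to: take the first empty queue in a fixed scan order, otherwise take the queue with the smallest sum of queued request loads:

        import java.util.List;

        class RequestRouter {
            static class ServerQueue {
                final int[] queuedLoads;                  // load values of requests already queued
                ServerQueue(int... loads) { this.queuedLoads = loads; }
                boolean isEmpty() { return queuedLoads.length == 0; }
                int totalLoad() {
                    int sum = 0;
                    for (int load : queuedLoads) sum += load;
                    return sum;
                }
            }

            // Returns the index of the queue that should receive the next request.
            static int chooseQueue(List<ServerQueue> queues) {
                int best = 0;
                int bestLoad = Integer.MAX_VALUE;
                for (int i = 0; i < queues.size(); i++) {
                    ServerQueue q = queues.get(i);
                    if (q.isEmpty()) {
                        return i;                         // idle queue found: route immediately
                    }
                    int load = q.totalLoad();
                    if (load < bestLoad) {                // otherwise remember the least-loaded queue
                        bestLoad = load;
                        best = i;
                    }
                }
                return best;
            }
        }

    Because the scan always starts at the same place, the first queues in the scan order absorb most of the traffic under light load, which is exactly the behaviour described above.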

    Read the article

  • More useful SQL Server Service Broker Queries

    - by ChrisD
    SELECT 'Checking Broker Service Status...' IF (select Top 1 is_broker_enabled from sys.databases where name = 'NWMESSAGE')=1     SELECT ' Broker Service IS Enabled'  -- Should return a 1. ELSE     SELECT '** Broker Service IS DISABLED ***' /* If Is_Broker_enabled returns 0, uncomment and run this code ALTER DATABASE NWMESSAGE SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO Alter Database NWMESSAGE Set enable_broker GO ALTER DATABASE NWDataChannel SET MULTI_USER GO */ SELECT 'Checking For Disabled Queues....' -- ensure the queues are enabled --  0 indicates the queue is disabled. Select '** Receive Queue Disabled: '+name from sys.service_queues where is_receive_enabled = 0 --select [name], is_receive_enabled from sys.service_queues; /*If the queue is disabled, to enable it alter queue QUEUENAME with status=on; – replace QUEUENAME with the name of your queue */ -- Get General information about the queues --select * from sys.service_queues -- Get the message counts in each queue SELECT 'Checking Message Count for each Queue...' select q.name, p.rows from sys.objects as o join sys.partitions as p on p.object_id = o.object_id join sys.objects as q on o.parent_object_id = q.object_id join sys.service_queues sq on sq.name = q.name where p.index_id = 1 -- Ensure all the queue activiation sprocs are present SELECT 'Checking for Activation Stored Procedures....' SELECT  '** Missing Procedure:  '+q.name  From sys.service_queues q Where NOT Exists(Select * from sysobjects where xtype='p' and name='activation_'+q.name) and q.activation_procedure is not null DECLARE @sprocs Table (Name Varchar(2000)) Insert into @sprocs Values ('Echo') Insert into @sprocs Values ('HTTP_POST') Insert into @sprocs Values ('InitializeRecipients') Insert into @sprocs Values ('sp_EnableRecipient') Insert into @sprocs Values ('sp_ProcessReceivedMessage') Insert into @sprocs Values ('sp_SendXmlMessage') SELECT 'Checking for required stored procedures...' SELECT  '** Missing Procedure:  '+s.name  From @sprocs s Where NOT Exists(Select * from sysobjects where xtype='p' and name=s.name) GO -- Check the services Select 'Checking Recipient Message Services...' Select '** Missing Message Service:' + r.RecipientName +'MessageService' From Recipient r Where not exists (Select * from sys.services s where  s.name  COLLATE SQL_Latin1_General_CP1_CI_AS= r.RecipientName+'MessageService') DECLARE @svcs Table (Name Varchar(2000)) Insert into @svcs Values ('XmlMessageSendingService') SELECT  '** Missing Service:  '+s.name  From @svcs s Where NOT Exists(Select * from sys.services where name=s.name COLLATE SQL_Latin1_General_CP1_CI_AS) GO /*** To Test a message send Run: sp_SendXmlMessage  'TSQLTEST', 'CommerceEngine','<Root><Text>Test</Text></Root>' */ Select CAST(message_body as XML) as xml, * From XmlMessageSendingQueue /*** clean out all queues declare @handle uniqueidentifier declare conv cursor for   select conversation_handle from sys.conversation_endpoints open conv fetch next from conv into @handle while @@FETCH_STATUS = 0 Begin    END Conversation @handle with cleanup    fetch next from conv into @handle End close conv deallocate conv ***********************

    Read the article

  • How to auto create a JMS topic/queue on JBoss in a portable and per-application way?

    - by Bozho
    It's simple: I have an MDB and an EJB that sends messages to a topic (or queue). JBoss complains that the topic is not bound in the JNDI context. I want the topic/queue to be created automatically at best, or at least to have a standard way to define it per application (say, in ejb-jar/META-INF). This question and this blog post show how to do it in an application-server-specific way. That surely works, but: I want to use the @MessageDriven annotation; I want the setting not to be global for the application server; I want the setting to be portable.
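
    For reference, a hedged sketch of the portable half of this, the MDB itself, is below; the destination name is a placeholder, and (as the question notes) actually creating that destination on JBoss still ends up in a server-specific descriptor:

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.Message;
        import javax.jms.MessageListener;

        // Standard EJB 3 message-driven bean bound to a topic via activation
        // config properties; "topic/MyTopic" is a hypothetical destination name.
        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
            @ActivationConfigProperty(propertyName = "destination", propertyValue = "topic/MyTopic")
        })
        public class MyTopicListener implements MessageListener {
            @Override
            public void onMessage(Message message) {
                // handle the message
            }
        }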

    Read the article

  • Server overloaded with log messages: tty_release_dev: pts0: read/write wait queue active!

    - by Raph
    In the logs, I have this (extract from the full kernel messages logges at 06:01:14): Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) And then the server logs flooded by this message: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active! In the end, 2 hours later, I had to reboot because it had become inaccessible: the load hat grown to 160%. The last command does not show anyone logged on pts0 at that time. I also don't know where this telnet process could come from.... This is an AWS instance running UBUNTU 10.04 LTS And here are the complete logs: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861007] IP: [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861019] PGD ee13d067 PUD f8698067 PMD 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861025] Oops: 0000 [#1] SMP Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861028] last sysfs file: /sys/devices/xen/vbd-2208/block/sdk/removable Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861032] CPU 0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861034] Modules linked in: ipv6 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861040] Pid: 20247, comm: telnet Not tainted 2.6.32-312-ec2 #24-Ubuntu Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861042] RIP: e030:[<ffffffff81363dde>] [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861047] RSP: e02b:ffff8800f8599d88 EFLAGS: 00010246 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861049] RAX: 0000000000000015 RBX: ffff8800f8598000 RCX: 0000000001aed069 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861052] RDX: 0000000000000000 RSI: ffff8800f8599e67 RDI: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861054] RBP: ffff8800f8599e98 R08: ffffffff8135eb10 R09: 7fffffffffffffff Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861057] R10: 0000000000000000 R11: 0000000000000246 R12: ffff8801dd833800 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861059] R13: 0000000000000000 R14: ffff8801dd833a68 R15: ffff8801dd833d1c Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861065] FS: 00007f90121f6720(0000) GS:ffff880002c40000(0000) knlGS:0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861068] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861070] CR2: 0000000000000015 CR3: 0000000032a59000 CR4: 0000000000002660 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861076] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500) Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861083] Stack: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861085] 0000000000000000 0000000001aed069 ffff8801dd8339c8 ffff8800024d4500 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861089] <0> ffff8801dd8339c0 ffff8801dd833c90 0000000001aed027 ffff8800024d4500 Apr 21 06:01:14 
ip-10-49-109-107 kernel: [233185.861094] <0> ffff8801dd8338d8 0000000000000000 ffff8800024d4500 0000000000000000 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861099] Call Trace: Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861107] [<ffffffff81034bc0>] ? default_wake_function+0x0/0x10 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861113] [<ffffffff8135ebb6>] tty_read+0xa6/0xf0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861118] [<ffffffff810ee7e5>] vfs_read+0xb5/0x1a0 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861122] [<ffffffff810ee91c>] sys_read+0x4c/0x80 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861127] [<ffffffff81009ba8>] system_call_fastpath+0x16/0x1b Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861131] [<ffffffff81009b40>] ? system_call+0x0/0x52 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861133] Code: 85 d2 0f 84 92 00 00 00 45 8b ac 24 5c 02 00 00 f0 45 0f b3 2e 45 19 ed 49 63 84 24 5c 02 00 00 49 8b 94 24 50 02 00 00 4c 89 ff <0f> be 1c 02 e8 a9 d3 14 00 41 8b 94 24 5c 02 00 00 41 83 ac 24 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88> Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861177] CR2: 0000000000000015 Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861205] ---[ end trace f10eee2057ff4f6b ]--- Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!

    Read the article

  • Exchange 2010: How to retain mail in the outgoing queue for a certain amount of time before it is sent

    - by Jeroen Landheer
    One of our clients asked us to configure Exchange 2010 to retain outgoing mail for a certain amount of time (independent of Outlook settings). The idea is that an administrator has about 10 minutes to take a message out of the queue before it is sent out of the organization. I know this can be configured in Outlook, but that is not a viable solution for us. I'm also aware that this causes queues to fill up; this is part of the consideration. Is there a way to configure this in Exchange 2010?

    Read the article

  • Why does Exim put emails on hold if there are frozen messages in the queue?

    - by user51932
    I have a CentOS server with cPanel working as an SMTP server, which currently uses 20 different hostnames and IP addresses to deliver email for an email newsletter service. However, it's extremely slow at sending emails: it's sending about 10 emails per minute, which I check by running the "exim -bpc" command. What could be affecting this? One thing I suspect is that there are frozen messages in the queue, which are slowing down sending until they're sent out and are putting new messages on hold. What are the most common reasons a message can get frozen? Also, would it be more efficient to use 20 different small VPSs to send out email rather than one large VPS with the 20 different hostnames and IPs on it?

    Read the article

  • Program to Queue Files for Copy/Move/Delete in Linux?

    - by laliga
    I've searched the net for Linux's answer to something like Teracopy (Windows)... but could not find anything suitable. The closest things I found are: Krusader, mentioned in their features but indicated as 'not implemented yet'; and MiniCopier, a Java-based app: http://a.courreges.free.fr/projets/minicopier/minicopier-en.php. rsync is not an option. Can someone recommend a simple file copy tool that can queue files for copy/move/delete? Preferably one where I can drag and drop from Nautilus. If something like this does not exist, can someone please tell me why? ...am I the only person that needs something like this?

    Read the article
