Search Results

Search found 12107 results on 485 pages for 'session timeout'.

  • Block SMTP session with a sender domain which doesn't itself accept SMTP connections.

    - by bignose
    I'm administrating a mail service for a small business. Their mail host's internet connection is an ADSL service with a permanent IP address. Unfortunately, many misconfigured mail systems will happily deliver to this host, but, when the host attempts to send mail back (e.g. a bounce notice, or a normal response from someone), the declared sender's domain has an MX which refuses to receive connections from this host. That misconfiguration makes their system a one-way mail sender, which is a problem. How can I configure Postfix on this customer's mail host to refuse SMTP sessions that declare a sender domain which itself refuses SMTP from this host? That is, if the SMTP client declares a domain that we can't make SMTP connections back to, then there's not much point accepting the incoming connection in the first place. Note that I'm not, as some commenters have assumed, talking about checking whether the SMTP client will receive messages. The check I want is whether the declared sender's domain (regardless of who the current SMTP client is) will accept SMTP connections from here. In other words: when we get around to sending a message back, we'll need the sender's domain to accept SMTP connections; I want to do that check before accepting the incoming session. I'm imagining a late check (after the low-cost checks to winnow most of the rubbish connections) that keeps the client on the other end while it attempts an SMTP client connection back to the declared domain of the sender. If that connection is rejected, the incoming one is also rejected. I'm also open to other suggestions for how this problem might be addressed (short of not using this mail host at all, which isn't an option).
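
    One approach worth noting (not from the original question, and the exact policy should be treated as a sketch to adapt): Postfix's built-in sender address verification probes the declared sender address by making an SMTP callout towards the sender domain's MX from this host, so a domain whose MX refuses connections from here will fail the probe and the incoming session can be rejected. A minimal main.cf fragment, assuming the default verification settings:

        smtpd_sender_restrictions =
            permit_mynetworks,
            reject_unknown_sender_domain,
            reject_unverified_sender
        unverified_sender_reject_code = 550

    Setting unverified_sender_reject_code to 550 makes the rejection permanent instead of the default 450 temporary failure. Verification results are cached, and the probe tests slightly more than the question asks for (the remote MX must also accept the probed address), so treat this as a starting point rather than an exact match for the requirement.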

    Read the article

  • Constant CMS Session Expiry On 1&1 Cloud Server?

    - by leen3o
    I have a couple of 1&1's 'Dynamic Cloud Servers' and running Win2008R2 and they are setup as web servers, I have a number of Umbraco CMS installs on them and they have been running fine for over a year. On Saturday on BOTH servers, a very strange thing happened - As soon as I login to the CMS/Umbraco admin I am logged out with about 5 seconds? It's as if my session expires the moment I login? I have checked everything I can as I'm not really a server admin, and everything seems to be exactly as it was last week? Like I say this has happened EXACTLY the same time (Saturday) on TWO different servers? I'm just looking for ideas of what I should be looking for? Also the front end of the sites seem fine... Its only the backend when I login. I have gone to 1&1 about this, and as usual they have washed their hands saying its nothing to do with them - When I am certain it is. How can this happen on two different servers, and affect the same sites in exactly the same way? Any help, tips, things to try would be greatly appreciated.

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the “Windows Azure Service Bus Developer Guide”, along with some other patterns I am working on. I’ve taken the pattern descriptions from the book “Enterprise Integration Patterns” by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, “Enterprise Integration Patterns: Past, Present and Future”, which is well worth a look. I’ll be covering more patterns in the coming weeks; I’m currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my “SOA, Connectivity and Integration using the Windows Azure Service Bus” course. There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages. Splitter A message splitter takes a message and splits the message into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item. The Enterprise Integration Patterns website provides a description of the Splitter pattern here. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or the trading partner that the sub message is to be sent to. Aggregator An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages. The Enterprise Integration Patterns website provides a description of the Aggregator pattern here. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. 
The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration. Scenario The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus currently supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations. Implementation The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class. A class diagram showing the two classes is shown below. Implementing the Splitter The splitter will take a large brokered message, and split the messages into a sequence of smaller sub-messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation. public class LargeMessageSender {     private static int SubMessageBodySize = 192 * 1024;     private QueueClient m_QueueClient;       public LargeMessageSender(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public void Send(BrokeredMessage message)     {         // Calculate the number of sub messages required.         long messageBodySize = message.Size;         int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);         if (messageBodySize % SubMessageBodySize != 0)         {             nrSubMessages++;         }           // Create a unique session Id.         string sessionId = Guid.NewGuid().ToString();         Console.WriteLine("Message session Id: " + sessionId);         Console.Write("Sending {0} sub-messages", nrSubMessages);           Stream bodyStream = message.GetBody<Stream>();         for (int streamOffest = 0; streamOffest < messageBodySize;             streamOffest += SubMessageBodySize)         {                                     // Get the stream chunk from the large message             long arraySize = (messageBodySize - streamOffest) > SubMessageBodySize                 ? 
SubMessageBodySize : messageBodySize - streamOffest;             byte[] subMessageBytes = new byte[arraySize];             int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);             MemoryStream subMessageStream = new MemoryStream(subMessageBytes);               // Create a new message             BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);             subMessage.SessionId = sessionId;               // Send the message             m_QueueClient.Send(subMessage);             Console.Write(".");         }         Console.WriteLine("Done!");     }} The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session, this session Id will be used for correlation in the aggregator. A for loop in then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true. Implementing the Aggregator The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.   public class LargeMessageReceiver {     private QueueClient m_QueueClient;       public LargeMessageReceiver(QueueClient queueClient)     {         m_QueueClient = queueClient;     }       public BrokeredMessage Receive()     {         // Create a memory stream to store the large message body.         MemoryStream largeMessageStream = new MemoryStream();           // Accept a message session from the queue.         MessageSession session = m_QueueClient.AcceptMessageSession();         Console.WriteLine("Message session Id: " + session.SessionId);         Console.Write("Receiving sub messages");           while (true)         {             // Receive a sub message             BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));               if (subMessage != null)             {                 // Copy the sub message body to the large message stream.                 Stream subMessageStream = subMessage.GetBody<Stream>();                 subMessageStream.CopyTo(largeMessageStream);                   // Mark the message as complete.                 subMessage.Complete();                 Console.Write(".");             }             else             {                 // The last message in the sequence is our completeness criteria.                 Console.WriteLine("Done!");                 break;             }         }                     // Create an aggregated message from the large message stream.         BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);         return largeMessage;     } }   The LargeMessageReceiver initialized using a QueueClient that is created by the receiving application. The receive method creates a memory stream that will be used to aggregate the large message body. 
The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As the AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session as accepted, the sub messages in the session are received, and their message body streams copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message, that is then returned to the receiving application. Testing the Implementation The splitter and aggregator are tested by creating a message sender and message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/, the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver console is fairly basic. The implementation of the main method of the sending application is shown below.   static void Main(string[] args) {     // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Open the input file.     FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);       // Create a BrokeredMessage for the file.     BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);       Console.WriteLine("Sending: " + AccountDetails.TestFile);     Console.WriteLine("Message body size: " + largeMessage.Size);     Console.WriteLine();         // Send the message with a LargeMessageSender     LargeMessageSender sender = new LargeMessageSender(queueClient);     sender.Send(largeMessage);       // Close the messaging facory.     factory.Close();  } The implementation of the main method of the receiving application is shown below. static void Main(string[] args) {       // Create a token provider with the relevant credentials.     TokenProvider credentials =         TokenProvider.CreateSharedSecretTokenProvider         (AccountDetails.Name, AccountDetails.Key);       // Create a URI for the serivce bus.     Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri         ("sb", AccountDetails.Namespace, string.Empty);       // Create the MessagingFactory     MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);       // Use the MessagingFactory to create a queue client     QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);       // Create a LargeMessageReceiver and receive the message.     
LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);     BrokeredMessage largeMessage = receiver.Receive();       Console.WriteLine("Received message");     Console.WriteLine("Message body size: " + largeMessage.Size);       string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");     Console.WriteLine("Saving file: " + testFile);       // Save the message body as a file.     Stream largeMessageStream = largeMessage.GetBody<Stream>();     largeMessageStream.Seek(0, SeekOrigin.Begin);     FileStream fileOut = new FileStream(testFile, FileMode.Create);     largeMessageStream.CopyTo(fileOut);     fileOut.Close();       Console.WriteLine("Done!"); } In order to test the application, the sending application is executed, which will use the LargeMessageSender class to split the message and place it on the queue. The output of the sender console is shown below. The console shows that the body size of the large message was 9,929,365 bytes, and the message was sent as a sequence of 51 sub messages. When the receiving application is executed the results are shown below. The console application shows that the aggregator has received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file. Improvements to the Implementation The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario there are a number of improvements that could be made to the design. Copying Message Header Properties When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process. Using Asynchronous Methods The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence. Handling Exceptions In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario exceptions should be handled accordingly.
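
    As a rough illustration of the first improvement above (a sketch, not code from the article; the helper name is an assumption), the custom header properties could be copied with a small extension method: the splitter would call it when building the first sub message, and the aggregator would capture the properties from that first sub message and apply them to the re-assembled large message before returning it.

        using System.Collections.Generic;
        using Microsoft.ServiceBus.Messaging;

        // Sketch: copy the user-defined header properties from one brokered message to another.
        public static class BrokeredMessageExtensions
        {
            public static void CopyPropertiesTo(this BrokeredMessage source, BrokeredMessage target)
            {
                foreach (KeyValuePair<string, object> property in source.Properties)
                {
                    target.Properties[property.Key] = property.Value;
                }
            }
        }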

    Read the article

  • Why such a long time span in creating the Session Factory?

    - by vijay.shad
    Hi My project is web application running in the tomcat container. This application is a spring framework based hibernate application. The problem with this is it takes a lot of time when creates session factory. here is the logs 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] Session factory constructed with filter configurations : {} 2010-04-15 23:05:28,053 DEBUG [SessionFactoryImpl] instantiating session factory with properties: {java.vendor=Sun Microsystems Inc., sun.java.launcher=SUN_STANDARD, catalina.base=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.management.compiler=HotSpot Tiered Compilers, catalina.useNaming=true, os.name=Linux, sun.boot.class.path=/usr/java/jdk1.6.0_17/jre/lib/resources.jar:/usr/java/jdk1.6.0_17/jre/lib/rt.jar:/usr/java/jdk1.6.0_17/jre/lib/sunrsasign.jar:/usr/java/jdk1.6.0_17/jre/lib/jsse.jar:/usr/java/jdk1.6.0_17/jre/lib/jce.jar:/usr/java/jdk1.6.0_17/jre/lib/charsets.jar:/usr/java/jdk1.6.0_17/jre/classes, java.util.logging.config.file=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/conf/logging.properties, java.vm.specification.vendor=Sun Microsystems Inc., hibernate.generate_statistics=true, java.runtime.version=1.6.0_17-b04, hibernate.cache.provider_class=org.hibernate.cache.EhCacheProvider, user.name=root, shared.loader=, tomcat.util.buf.StringCache.byte.enabled=true, hibernate.connection.release_mode=auto, user.language=en, java.naming.factory.initial=org.apache.naming.java.javaURLContextFactory, sun.boot.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386, java.version=1.6.0_17, java.util.logging.manager=org.apache.juli.ClassLoaderLogManager, user.timezone=Canada/Pacific, sun.arch.data.model=32, java.endorsed.dirs=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/endorsed, sun.cpu.isalist=, sun.jnu.encoding=UTF-8, file.encoding.pkg=sun.io, package.access=sun.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper.,sun.beans., file.separator=/, java.specification.name=Java Platform API Specification, java.class.version=50.0, user.country=US, java.home=/usr/java/jdk1.6.0_17/jre, java.vm.info=mixed mode, os.version=2.6.18-128.el5, path.separator=:, java.vm.version=14.3-b01, hibernate.jdbc.batch_size=25, java.awt.printerjob=sun.print.PSPrinterJob, sun.io.unicode.encoding=UnicodeLittle, package.definition=sun.,java.,org.apache.catalina.,org.apache.coyote.,org.apache.tomcat.,org.apache.jasper., java.naming.factory.url.pkgs=org.apache.naming, sun.rmi.dgc.client.gcInterval=3600000, user.home=/root, java.specification.vendor=Sun Microsystems Inc., java.library.path=/usr/java/jdk1.6.0_17/jre/lib/i386/server:/usr/java/jdk1.6.0_17/jre/lib/i386:/usr/java/jdk1.6.0_17/jre/../lib/i386:/usr/java/packages/lib/i386:/lib:/usr/lib, java.vendor.url=http://java.sun.com/, java.vm.vendor=Sun Microsystems Inc., hibernate.dialect=org.hibernate.dialect.MySQL5InnoDBDialect, sun.rmi.dgc.server.gcInterval=3600000, common.loader=${catalina.home}/lib,${catalina.home}/lib/*.jar, java.runtime.name=Java(TM) SE Runtime Environment, java.class.path=:/usr/local/InstalledPrograms/apache-tomcat-6.0.20/bin/bootstrap.jar, hibernate.bytecode.use_reflection_optimizer=false, java.vm.specification.name=Java Virtual Machine Specification, java.vm.specification.version=1.0, catalina.home=/usr/local/InstalledPrograms/apache-tomcat-6.0.20, sun.cpu.endian=little, sun.os.patch.level=unknown, hibernate.cache.use_query_cache=true, hibernate.connection.provider_class=org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider, 
java.io.tmpdir=/usr/local/InstalledPrograms/apache-tomcat-6.0.20/temp, java.vendor.url.bug=http://java.sun.com/cgi-bin/bugreport.cgi, server.loader=, os.arch=i386, java.awt.graphicsenv=sun.awt.X11GraphicsEnvironment, java.ext.dirs=/usr/java/jdk1.6.0_17/jre/lib/ext:/usr/java/packages/lib/ext, user.dir=/, line.separator= , java.vm.name=Java HotSpot(TM) Server VM, hibernate.cache.use_second_level_cache=true, file.encoding=UTF-8, java.specification.version=1.6, hibernate.show_sql=true} 2010-04-15 23:08:53,516 DEBUG [AbstractEntityPersister] Static SQL for entity: com.vsd.model.Order There you can see a time delay of more than 3 minutes between these two log entries. My database is MySQL and the database server is running on the local machine only. The container environment is a CentOS Linux system. I am clueless about why it takes that much time to execute these steps, but when I do the same task from under Eclipse it does not take that much time. The development environment is Windows.

    Read the article

  • Does Entity Framework or the MySQL provider swallow timeout exceptions on enumeration of results?!

    - by Freddy Rios
    I'm trying to make sense of a situation I have using Entity Framework on .NET 3.5 SP1 + MySQL 6.1.2.0 as the provider. It involves the following code: Response.Write("Products: " + plist.Count() + "<br />"); var total = 0; foreach (var p in plist) { //... some actions total++; //... other actions } Response.Write("Total Products Checked: " + total + "<br />"); Basically the total products count varies on each run, and it isn't matching the full total in plist. It varies widely, from ~ 1/5th to half. There isn't any control-flow code inside the foreach, i.e. no break, continue, try/catch, or conditions around total++ - nothing that could affect the count. As confirmation, there are other totals captured inside the loop related to the actions, and those match the lower and higher total runs. I don't find any reason for the above, other than something in Entity Framework or the MySQL provider that causes it to end the foreach when retrieving an item. The body of the foreach can have some good variation in time, as the actions involve file & network access; my best guess at this point is that when it takes beyond a certain threshold there is some type of timeout in the underlying framework/provider and, instead of causing an exception, it silently reports no more items for enumeration. Can anyone shed some light on the above scenario and/or confirm whether the Entity Framework/MySQL provider has the above behavior?
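
    One way to test the silent-timeout theory (a sketch, not from the original question; "MyEntities" and "Products" are hypothetical names for the generated context and entity set): materialize the whole result set before the slow per-item work so the underlying data reader is not held open across the file and network access, and raise the command timeout while diagnosing. If the counts then match, the enumeration was being cut short while the reader was open.

        using System;
        using System.Linq;

        public static class EnumerationCheck
        {
            public static void Run()
            {
                using (var context = new MyEntities())
                {
                    // Generous command timeout (in seconds) while diagnosing.
                    context.CommandTimeout = 300;

                    // Materialize up front so the reader is closed before the slow loop.
                    var products = context.Products.ToList();
                    Console.WriteLine("Products: " + products.Count);

                    var total = 0;
                    foreach (var p in products)
                    {
                        // ... slow per-item actions (file & network access) ...
                        total++;
                    }
                    Console.WriteLine("Total products checked: " + total);
                }
            }
        }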

    Read the article

  • Developer Developer Developer Scotland 2010

    - by Chris Hardy (ChrisNTR)
    This past weekend, I headed up to Glasgow - thanks to Plip for driving and Dave Sussman for some light entertainment - to do a session on C# on the iPhone with MonoTouch. I had already presented a session similar to this one at DDD8 in Reading, which you can watch on Vimeo ( http://vimeo.com/9150434 ), but in this session I covered more topics, such as the new 3.3.1 section of the terms of service Apple released. I also showed a Twitter example written in MonoTouch, which was reused from the DDD8 session...(read more)

    Read the article

  • An XEvent a Day (5 of 31) - Targets Week – ring_buffer

    - by Jonathan Kehayias
    Yesterday’s post, Querying the Session Definition and Active Session DMV’s , showed how to find information about the Event Sessions that exist inside a SQL Server and how to find information about the Active Event Sessions that are running inside a SQL Server using the Session Definition and Active Session DMV’s.  With the background information now out of the way, and since this post falls on the start of a new week I’ve decided to make this Targets Week, where each day we’ll look at a different...(read more)

    Read the article

  • Can Windows log CryptoAPI CRL timeouts?

    - by makerofthings7
    We have several .NET applications that occasionally "act slow" with no CPU or disk access. I suspect that they are hung up on authentication when trying to validate a certificate, since the timeout is almost 20 seconds. As per this MSFT article: Most applications do not specify to CryptoAPI to use a cumulative time-out. If the cumulative time-out option is not enabled, CryptoAPI uses the CryptoAPI default setting, which is a time-out of 15 seconds per URL. If the cumulative time-out option is specified by the application, then CryptoAPI will use a default setting of 20 seconds as the cumulative timeout. The first URL receives a maximum timeout of 10 seconds. Each subsequent URL timeout is half of the remaining balance in the cumulative timeout value. Since this is a service, how can I detect and log CryptoAPI hangs for applications I have source code to, and also 3rd-party applications?
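
    Two options worth noting (not from the original question): system-wide, the CAPI2 operational log in Event Viewer (Applications and Services Logs > Microsoft > Windows > CAPI2) can be enabled to record certificate retrieval activity for any process; for applications you own, a rough sketch like the one below (the threshold, logging sink, and method name are assumptions) times a revocation-checking chain build directly and logs it when it is slow.

        using System;
        using System.Diagnostics;
        using System.Security.Cryptography.X509Certificates;

        public static class RevocationLatencyCheck
        {
            public static void Check(X509Certificate2 certificate)
            {
                var chain = new X509Chain();
                chain.ChainPolicy.RevocationMode = X509RevocationMode.Online;
                chain.ChainPolicy.RevocationFlag = X509RevocationFlag.EntireChain;
                chain.ChainPolicy.UrlRetrievalTimeout = TimeSpan.FromSeconds(20);

                var stopwatch = Stopwatch.StartNew();
                bool valid = chain.Build(certificate);
                stopwatch.Stop();

                if (stopwatch.Elapsed > TimeSpan.FromSeconds(5))
                {
                    // Swap in the application's own logging here.
                    Trace.TraceWarning("Slow revocation check: {0} ms, valid={1}, subject={2}",
                        stopwatch.ElapsedMilliseconds, valid, certificate.Subject);
                }
            }
        }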

    Read the article

  • SQLAuthority News – Presented at Bangalore DevCon August 4, 2012

    - by pinaldave
    Bangalore DevCon 2012 was great fun. Earlier this month I was fortunate to be invited to present at DevCon. The event was very well planned and had an excellent response. There were more than 140 attendees at any time in the sessions. There were two tracks, and both tracks ran parallel to each other in the Microsoft Bangalore building. The venue is fantastic and the enthusiasm of the community is impeccable. We had a total of 12 sessions during the day. I had decided to attend each session if I could. We had so many fantastic speakers and I did not want to miss any of the sessions. As the sessions were running in parallel, I attended every session for 30 minutes and then switched to another session. I had fun doing this experiment, as it gave me a good idea about every session. I personally presented the session on SQL Tips and Tricks for Web Developers. DBA is a very common word, and every time we say SQL Server lots of people think of a DBA; however, SQL Server is used by many developers as well. In this session I tried to cover a few of the simple concepts where developers must pay special attention while writing T-SQL code. Sometimes a very small mistake can be fatal to performance in the future. Here are a few of the photos of the event. Btw, the two sessions which clearly stood out were Vinod Kumar's session on Leadership and Lohith's session on Visual Studio Tips and Tricks. Additional Read: Following are the blog posts by the community on the Bangalore DevCon experience. I encourage you to read them all and leave a comment on which one you liked the most. http://abhishekbhat.wordpress.com/2012/08/07/devcon-2012-experience/ http://praveenprajapati.wordpress.com/2012/08/07/devcon-2012-part-2/ http://tomsblogsspot.blogspot.in/2012/08/devcon-2012.html?view=classic https://manasdash.wordpress.com/2012/08/06/devcon-2012-by-bdotnet-4th-august-2012/ http://www.jagan-bhathri.com/2012/08/bangalore-developer-conference-2012-by.html Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • GlassFish Back from Devoxx 2011 - Mature Java EE 6 and EE 7 well on its way

    - by alexismp
    I'm back from my 8th (!) Devoxx conference (I don't think I've missed one since 2004) and this conference keeps delivering on the promise of a Java developer paradise week. GlassFish was covered in many different ways and I was not involved in a good number of them which can only be a good sign! Several folks asked me when my Java EE 6 session with Antonio Goncalves was scheduled (we've been covering this for the past two years in University sessions, hands-on labs and regular sessions). It turns out we didn't team up this year (Antonio was crazy busy preparing for Devoxx France) and I had a regular GlassFish session. Instead, this year, Bert Ertman and Paul Bakker covered the 3-hour Java EE 6 University session ("Duke’s Duct Tape Adventures") on the very first day (using GlassFish) with great success it seems. The Java EE 6 lab was also a hit with a full room of folks covering a lot of technical ground in 2.5 hours (with GlassFish of course). GlassFish was also mentioned during Cameron Purdy's keynote (pretty natural even if that surprised a number of folks that had not been closely following GlassFish) but also in Stephan Janssen's Keynote as the engine powering Parleys.com. In fact Stephan was a speaker in the GlassFish session describing how they went from a single-instance Tomcat setup to a clustered GlassFish + MQ environment. Also in the session was Johan Vos (of Mollom fame, along other things). Both of these customer testimonials were made possible because GlassFish has been delivering full Java EE 6 implementations for almost two years now which is plenty of time to see serious production deployments on it. The Java EE Gathering (BOF) was very well attended and very lively with many spec leads participating and discussing progress and also pain points with folks in the room. Thanks to all those attending this session, a good number of RFE's, and priority points came out of this. While this wasn't a GlassFish session by any means, it's great to have the current RESTful Admin and upcoming Java EE 7 planned features be a satisfactory answer to some of the requests from the attendance. Last but certainly not least, the GlassFish team is busy with Java EE 7 and version 4 of the product. This was discussed and shown during the Java EE keynote and in greater details in Jerome Dochez' session. If any indication, the tweets on his demo (virtualization, provisioning, etc...) were very encouraging. Java EE 6 adoption is doing great and GlassFish, being a production-quality reference implementation, is one of the first to benefit from this. And with GlassFish 4.0, we're looking at increasing the product and community adoption by offering a pragmatic technical solution to Java EE PaaS deployments. Stay tuned ! (the impatient in you is encouraged to grab a 4.0 build and provide feedback).

    Read the article

  • What is the purpose of netcat's "-w timeout" option when ssh tunneling?

    - by jrdioko
    I am in the exact same situation as the person who posted another question: I am trying to tunnel ssh connections through a gateway server instead of having to ssh into the gateway and manually ssh again to the destination server from there. I am trying to set up the solution given in the accepted answer there, a ~/.ssh/config that includes:
        host foo
            User webby
            ProxyCommand ssh a nc -w 3 %h %p
        host a
            User johndoe
    However, when I try to ssh foo, my connection stays alive for 3 seconds and then dies with a Write failed: Broken pipe error. Removing the -w 3 option solves the problem. What is the purpose of that -w 3 in the original solution, and why is it causing a Broken pipe error when I use it? What is the harm in omitting it?

    Read the article

  • Keep getting "SSH timeout" errors on our AWS instance - how do I diagnose?

    - by ming yeow
    I am befuddled by this error. We keep failing to SSH into our AWS instance, whether it is during deployment or via the console. I have tried rebooting a few times, but it does not seem to be helping. Here are a couple of error messages I keep getting: connection failed for: HOST.NAME.amazonaws.com (Errno::ETIMEDOUT: Operation timed out - connect(2)) 111.222.333.444: ssh connection failed at 2010-07-02 03:39:37 I also SSHed in when it was up, and monitored "top" when SSH timed out. Looking at the memory logs, it does not look like any program was hogging memory.

    Read the article

  • Microsoft BI Conference 2010 Recap & books promo

    - by Marco Russo (SQLBI)
    Last week I was at the Microsoft BI Conference and I presented an interactive session about PowerPivot DAX Patterns. Unfortunately only the breakout sessions were recorded and made available on TechEd Online. The room was full and there were probably many other people in an overflow room. I would like to thank all the attendees of my session, and you can write to me (marco dot russo [at] sqlbi dot com) if you have other questions and/or feedback about the session. The interest in PowerPivot (especially...(read more)

    Read the article

  • Cisco ASA - Enable communication between same security level

    - by Conor
    I have recently inherited a network with a Cisco ASA (running version 8.2). I am trying to configure it to allow communication between two interfaces configured with the same security level (DMZ-DMZ) "same-security-traffic permit inter-interface" has been set, but hosts are unable to communicate between the interfaces. I am assuming that some NAT settings are causing my issue. Below is my running config: ASA Version 8.2(3) ! hostname asa enable password XXXXXXXX encrypted passwd XXXXXXXX encrypted names ! interface Ethernet0/0 switchport access vlan 400 ! interface Ethernet0/1 switchport access vlan 400 ! interface Ethernet0/2 switchport access vlan 420 ! interface Ethernet0/3 switchport access vlan 420 ! interface Ethernet0/4 switchport access vlan 450 ! interface Ethernet0/5 switchport access vlan 450 ! interface Ethernet0/6 switchport access vlan 500 ! interface Ethernet0/7 switchport access vlan 500 ! interface Vlan400 nameif outside security-level 0 ip address XX.XX.XX.10 255.255.255.248 ! interface Vlan420 nameif public security-level 20 ip address 192.168.20.1 255.255.255.0 ! interface Vlan450 nameif dmz security-level 50 ip address 192.168.10.1 255.255.255.0 ! interface Vlan500 nameif inside security-level 100 ip address 192.168.0.1 255.255.255.0 ! ftp mode passive clock timezone JST 9 same-security-traffic permit inter-interface same-security-traffic permit intra-interface object-group network DM_INLINE_NETWORK_1 network-object host XX.XX.XX.11 network-object host XX.XX.XX.13 object-group service ssh_2220 tcp port-object eq 2220 object-group service ssh_2251 tcp port-object eq 2251 object-group service ssh_2229 tcp port-object eq 2229 object-group service ssh_2210 tcp port-object eq 2210 object-group service DM_INLINE_TCP_1 tcp group-object ssh_2210 group-object ssh_2220 object-group service zabbix tcp port-object range 10050 10051 object-group service DM_INLINE_TCP_2 tcp port-object eq www group-object zabbix object-group protocol TCPUDP protocol-object udp protocol-object tcp object-group service http_8029 tcp port-object eq 8029 object-group network DM_INLINE_NETWORK_2 network-object host 192.168.20.10 network-object host 192.168.20.30 network-object host 192.168.20.60 object-group service imaps_993 tcp description Secure IMAP port-object eq 993 object-group service public_wifi_group description Service allowed on the Public Wifi Group. Allows Web and Email. 
service-object tcp-udp eq domain service-object tcp-udp eq www service-object tcp eq https service-object tcp-udp eq 993 service-object tcp eq imap4 service-object tcp eq 587 service-object tcp eq pop3 service-object tcp eq smtp access-list outside_access_in remark http traffic from outside access-list outside_access_in extended permit tcp any object-group DM_INLINE_NETWORK_1 eq www access-list outside_access_in remark ssh from outside to web1 access-list outside_access_in extended permit tcp any host XX.XX.XX.11 object-group ssh_2251 access-list outside_access_in remark ssh from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group ssh_2229 access-list outside_access_in remark http from outside to penguin access-list outside_access_in extended permit tcp any host XX.XX.XX.10 object-group http_8029 access-list outside_access_in remark ssh from outside to internal hosts access-list outside_access_in extended permit tcp any host XX.XX.XX.13 object-group DM_INLINE_TCP_1 access-list outside_access_in remark dns service to internal host access-list outside_access_in extended permit object-group TCPUDP any host XX.XX.XX.13 eq domain access-list dmz_access_in extended permit ip 192.168.10.0 255.255.255.0 any access-list dmz_access_in extended permit tcp any host 192.168.10.29 object-group DM_INLINE_TCP_2 access-list public_access_in remark Web access to DMZ websites access-list public_access_in extended permit object-group TCPUDP any object-group DM_INLINE_NETWORK_2 eq www access-list public_access_in remark General web access. (HTTP, DNS & ICMP and Email) access-list public_access_in extended permit object-group public_wifi_group any any pager lines 24 logging enable logging asdm informational mtu outside 1500 mtu public 1500 mtu dmz 1500 mtu inside 1500 no failover icmp unreachable rate-limit 1 burst-size 1 no asdm history enable arp timeout 60 global (outside) 1 interface global (dmz) 2 interface nat (public) 1 0.0.0.0 0.0.0.0 nat (dmz) 1 0.0.0.0 0.0.0.0 nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 2229 192.168.0.29 2229 netmask 255.255.255.255 static (inside,outside) tcp interface 8029 192.168.0.29 www netmask 255.255.255.255 static (dmz,outside) XX.XX.XX.13 192.168.10.10 netmask 255.255.255.255 dns static (dmz,outside) XX.XX.XX.11 192.168.10.30 netmask 255.255.255.255 dns static (dmz,inside) 192.168.0.29 192.168.10.29 netmask 255.255.255.255 static (dmz,public) 192.168.20.30 192.168.10.30 netmask 255.255.255.255 dns static (dmz,public) 192.168.20.10 192.168.10.10 netmask 255.255.255.255 dns static (inside,dmz) 192.168.10.0 192.168.0.0 netmask 255.255.255.0 dns access-group outside_access_in in interface outside access-group public_access_in in interface public access-group dmz_access_in in interface dmz route outside 0.0.0.0 0.0.0.0 XX.XX.XX.9 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy http server enable http 192.168.0.0 255.255.255.0 inside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 
telnet timeout 5 ssh 192.168.0.0 255.255.255.0 inside ssh timeout 20 console timeout 0 dhcpd dns 61.122.112.97 61.122.112.1 dhcpd auto_config outside ! dhcpd address 192.168.20.200-192.168.20.254 public dhcpd enable public ! dhcpd address 192.168.0.200-192.168.0.254 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics host threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 130.54.208.201 source public webvpn ! class-map inspection_default match default-inspection-traffic ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum client auto message-length maximum 512 policy-map global_policy class inspection_default inspect dns preset_dns_map inspect ftp inspect h323 h225 inspect h323 ras inspect ip-options inspect netbios inspect rsh inspect rtsp inspect skinny inspect esmtp inspect sqlnet inspect sunrpc inspect tftp inspect sip inspect xdmcp !

    Read the article

  • Asynchronous connectToServer

    - by Pavel Bucek
    Users of JSR-356 – Java API for WebSocket are probably familiar with the WebSocketContainer#connectToServer method. This article will be about its usage and an improvement which was introduced in a recent Tyrus release. WebSocketContainer#connectToServer does what it says: it connects to a WebSocketServerEndpoint deployed on some compliant container. It has two or three parameters (depending on which representation of the client endpoint you are providing) and returns a Session. The returned Session represents the WebSocket connection and you are instantly able to send messages, register MessageHandlers, etc. An issue might appear when you are trying to create a responsive user interface and use this method – its execution blocks until the Session is created, which usually means some container needs to be started, DNS queried, a connection created (it’s even more complicated when there is some proxy on the way), etc., so nothing which might really be considered responsive. The trivial and correct solution is to do this in another thread and monitor the result, but... why should users do that? :-) Tyrus now provides async* versions of all connectToServer methods, which perform only a simple (=fast) check in the same thread and then fire a new one and perform all other tasks there. The return type of these methods is Future<Session>. List of added methods:
    public Future<Session> asyncConnectToServer(Class<?> annotatedEndpointClass, URI path)
    public Future<Session> asyncConnectToServer(Class<? extends Endpoint> endpointClass, ClientEndpointConfig cec, URI path)
    public Future<Session> asyncConnectToServer(Endpoint endpointInstance, ClientEndpointConfig cec, URI path)
    public Future<Session> asyncConnectToServer(Object obj, URI path)
    As you can see, all connectToServer variants have their async* alternative. All these methods do throw DeploymentException, same as the synchronous variants, but some of these errors cannot be thrown as a result of the first method call, so you might get it as the cause of the ExecutionException thrown when Future<Session>.get() is called. Please let us know if you find these newly added methods useful or if you would like to change something (signature, functionality, …) – you can send us a comment to [email protected] or ping me personally. Related links: https://tyrus.java.net https://java.net/jira/browse/TYRUS/ https://github.com/tyrus-project/tyrus

    Read the article

  • Installed Ubuntu 14.04LTS

    - by user291729
    On my laptop which came pre-installed with Windows 8.1. Felt I needed to see the competition for myself to establish which was a better OS. So I followed the channels to dual boot. All seemed fine and I accessed Ubuntu with no issues after selecting this from the menu to select the OS. I should add that the boot method was changed to legacy. However, since using Ubuntu, I no longer have the ability to select the OS. The laptop simply logs straight into Ubuntu. I therefore attempted to access the recovery options, only it appears the Windows 8 bootloader has somehow been corrupted as I am now told to use the Windows 8 recovery disc (which, as this was pre-installed - I do not have). Left with no other alternative, I have scoured these forums without success, and so I am hoping someone in the know (or who has experienced similar) can help. I have tried boot repair again without success. On rebooting I am only presented with a basic splash screen asking me to select Ubuntu, Memtest, Windows 8 Recovery or Windows 8 Bootloader (The bootloaders again require I insert the disc). I have tried Code: cat /boot/grub/grub.cfg df -h sudo fdisk -l cat /proc/partitions # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=800x600 load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ] ; then set timeout=-1 else if [ x$feature_timeout_style = xy ] ; then set timeout_style=menu set timeout=20 # Fallback normal timeout code in case the timeout_style feature is # unavailable. 
else set timeout=20 fi fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff initrd /boot/initrd.img-3.13.0-29-generic } submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { menuentry 'Ubuntu, with Linux 3.13.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-29-generic-recovery-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-29-generic ...' linux /boot/vmlinuz-3.13.0-29-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro recovery nomodeset vga=789 quiet echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.13.0-29-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-advanced-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video gfxmode $linux_gfx_mode insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-24-generic ...' linux /boot/vmlinuz-3.13.0-24-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro vga=789 quiet quiet splash $vt_handoff echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } menuentry 'Ubuntu, with Linux 3.13.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-24-generic-recovery-d2f10f36-e3bb-4d83-a9b8-5d456fc454ad' { recordfail load_video insmod gzio insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi echo 'Loading Linux 3.13.0-24-generic ...' linux /boot/vmlinuz-3.13.0-24-generic root=UUID=d2f10f36-e3bb-4d83-a9b8-5d456fc454ad ro recovery nomodeset vga=789 quiet echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.13.0-24-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry 'Memory test (memtest86+)' { insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi knetbsd /boot/memtest86+.elf } menuentry 'Memory test (memtest86+, serial console 115200)' { insmod part_gpt insmod ext2 set root='hd0,gpt9' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt9 --hint-efi=hd0,gpt9 --hint-baremetal=ahci0,gpt9 d2f10f36-e3bb-4d83-a9b8-5d456fc454ad else search --no-floppy --fs-uuid --set=root d2f10f36-e3bb-4d83-a9b8-5d456fc454ad fi linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry 'Windows Recovery Environment (loader) (on /dev/sda2)' --class windows --class os $menuentry_id_option 'osprober-chain-7A6A69D66A698FA5' { insmod part_gpt insmod ntfs set root='hd0,gpt2' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 7A6A69D66A698FA5 else search --no-floppy --fs-uuid --set=root 7A6A69D66A698FA5 fi drivemap -s (hd0) ${root} chainloader +1 } menuentry 'Windows 8 (loader) (on /dev/sda3)' --class windows --class os $menuentry_id_option 'osprober-chain-8C88-80F7' { insmod part_gpt insmod fat set root='hd0,gpt3' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 
--hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3 8C88-80F7 else search --no-floppy --fs-uuid --set=root 8C88-80F7 fi drivemap -s (hd0) ${root} chainloader +1 } set timeout_style=menu if [ "${timeout}" = 0 ]; then set timeout=10 fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###
--hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3 8C88-80F7 else search --no-floppy --fs-uuid --set=root 8C88-80F7 fi drivemap -s (hd0) ${root} chainloader +1 } set timeout_style=menu if [ "${timeout}" = 0 ]; then set timeout=10 fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/30_uefi-firmware ### ### END /etc/grub.d/30_uefi-firmware ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### john@john-SVE1713Y1EB:~$ ^C john@john-SVE1713Y1EB:~$ ^C john@john-SVE1713Y1EB:~$ df -h Filesystem Size Used Avail Use% Mounted on /dev/sda9 84G 7.1G 73G 9% / none 4.0K 0 4.0K 0% /sys/fs/cgroup udev 3.9G 4.0K 3.9G 1% /dev tmpfs 794M 1.4M 793M 1% /run none 5.0M 0 5.0M 0% /run/lock none 3.9G 80K 3.9G 1% /run/shm none 100M 52K 100M 1% /run/user /dev/sdc1 7.5G 2.2G 5.4G 29% /media/john/DYLANMUSIC /dev/sr0 964M 964M 0 100% /media/john/Ubuntu 14.04 LTS amd64 /dev/sdb1 1.9T 892G 972G 48% /media/john/Storage Main WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted. Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x4e2ccf75 Device Boot Start End Blocks Id System /dev/sda1 1 1953525167 976762583+ ee GPT Partition 1 does not start on physical sector boundary. Disk /dev/sdc: 8011 MB, 8011120640 bytes 41 heads, 41 sectors/track, 9307 cylinders, total 15646720 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc3072e18 Device Boot Start End Blocks Id System /dev/sdc1 8064 15646719 7819328 b W95 FAT32 Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc7d968ff Device Boot Start End Blocks Id System /dev/sdb1 64 3907029119 1953514528 7 HPFS/NTFS/exFAT major minor #blocks name 8 0 976762584 sda 8 1 266240 sda1 8 2 1509376 sda2 8 3 266240 sda3 8 4 131072 sda4 8 5 841012780 sda5 8 6 358400 sda6 8 7 35376128 sda7 8 8 1024 sda8 8 9 89501696 sda9 8 10 8337408 sda10 11 0 987136 sr0 8 32 7823360 sdc 8 33 7819328 sdc1 8 16 1953514584 sdb 8 17 1953514528 sdb1 I am no expert on this and I'm at a loss as how to correct this without having to re-format everything and reinstall Windows 8. However, if I'm to try using Ubuntu again then there is the risk this problem may come back. Again, I did not do anything manually - the installer did everything (with the exception of changing the boot to Legacy to allow the booting of another bootloader). LiveCD works but doesn't give me the options that I've seen here and as mentioned earlier, only boot recovery only gives me the options as mentioned earlier. 
Also, this fails to load via USB (possibly because the HDD comes before USB in the boot order?). Being used to a Windows environment, the Ubuntu (and Linux) environment is a dive at a less than comfortable depth at present (but one I fully intend to get to grips with, especially since so much is done through Terminal commands). I very much appreciate the help with this, guys.
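A small, hedged follow-up for anyone reading along: the fdisk warning above is expected on a GPT disk, so the partition layout is better inspected with GPT-aware tools. A minimal sketch (parted ships with Ubuntu; gdisk is assumed to be installed separately, and efibootmgr only reports anything useful when the system is booted in UEFI mode):

    sudo parted -l                  # list every disk with its GPT partition table
    sudo parted /dev/sda print      # show the layout of the 1 TB GPT disk
    sudo gdisk -l /dev/sda          # alternative GPT-aware view (package: gdisk)
    sudo efibootmgr -v              # list the firmware boot entries when booted via UEFI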

    Read the article

  • SQLPASS Summit 2011 -- I'm going but not as a speaker

    - by NeilHambly
    This post is about my attempt, and slight failure, @ getting a presenting session @ this year’s SQLPASS Summit 2011. For the 1st time I had submitted 2 sessions (I think we had a max of 4 we could enter, but I was happy to go with just 2 this time: 1 I had already presented & 1 was nearly completed). My general session (75 minutes) was the same session on “Waits” I had done @ SQLBits 8 back in Brighton last April, and the new 1/2 day (3.5 hours) format was a session I’m completing on the SQLOS layer. Well...(read more)

    Read the article

  • RDA Health Checks for SOA

    - by ShawnBailey
    What is a health check in RDA? A health check evaluates something in your environment to determine whether a change needs to be considered in order to avoid a problem or optimize functionality. Examples of what this 'something' might be are: Configuration Parameters, JVM Options, Runtime Statistics.

    What have we done for SOA? In the latest release of RDA, 4.30, we have added a Rule Set for SOA called 'Oracle SOA 11g (11.1.1) Post Installation (Generic)'. This Rule Set contains 14 SOA-related health checks. These checks were all derived from common issues / solutions we see in support of the SOA product. Many of the recommendations come from the product documentation while others are covered in the SOA Knowledge Base. Our goal is that you will be able to easily identify the areas of concern and understand the guidance available from the output of the Rule Set.

    Running the health checks for SOA: The rules that the checks use are installed with RDA and bundled by product or functional area into what are called 'Rule Sets'. To view the available Rule Sets simply run the command from the RDA home location: rda.cmd (or .sh) -dT hcve. This will bring up a list of the available HCVE (Health Check / Verification Engine) Rule Sets. Each Rule Set contains a group of related rules that are used for evaluation and display of results. A rule can be considered synonymous with a single health check, and rules are assigned an ID, Name and Description that can be seen when they are executed. The Rule Set for SOA is option number 11 and you just enter this selection at the prompt. The Rule Set will then execute to completion. After running an HCVE Rule Set the tool will write the output to the RDA_HOME/output folder. The simplest way to view the output is to drag the .htm file to a browser, but of course it can also be uploaded to a Service Request for evaluation by Oracle Support. Many of the Rule Sets will prompt you for information before they can execute their rules, but the SOA Rule Set will identify the SOA domains configured in your RDA setup.cfg file. This means that you don't need to answer all of the questions again about where stuff is, but it also means that you must have configured RDA for SOA.

    To run the Rule Set (see the terminal sketch after the check list below):
    1. Download the latest version of RDA from MOS Doc ID 314422.1
    2. Configure RDA for your SOA domains. Detailed steps can be found here. In its simplest form the command is 'rda.cmd (.sh) -S SOA'
    3. Go to the RDA home location and enter the command 'rda.cmd (or .sh) -dT hcve'
    4. Select option '11'

    It should be noted that this is our first release of a SOA Rule Set, so there will probably be some things we need to clean up or fix. None of these rules will actually modify anything on your system as they are read only and do the evaluations internally. Please let us know if you have any issues with the rules or ideas for new ones so we can make them as useful as possible.

    The Checks: Here is a list of the SOA health checks by ID, Name and Description.
    A00100 - SOA Domain Homes: Lists the SOA domains that were identified from the RDA setup.cfg file
    A00200 - Coherence Protocol Conflict: Checks to see if you have both Unicast and Multicast configured in the same domain. Checks both the setDomainEnv and config.xml entries (if it exists). We recommend Unicast with fully qualified host names or IP addresses.
    A00210 - Coherence Fully Qualified Host: Checks that the host names are fully qualified or that IP addresses are used. Will fail if unqualified host names are detected.
    A00220 - Unicast Local Host: Checks that the Coherence localhost is specified for use with Unicast
    A00300 - JTA Timeout: Checks that the JTA timeout is configured for the domain and lists the value. The bundled rule will only list the current values of the JTA timeout for each SOA Domain. In the future the rule will fail with a warning if the value is 300 seconds or lower. It is recommended that timeouts follow the pattern 'syncMaxWaitTime' < EJB Timeouts < JTA Timeout. The 300 second value is important because the EJB Timeouts default to 300 seconds. Additional information can be found in MOS Doc ID 880313.1.
    A00310 - XA Max Time: Checks that the JTA Maximum XA call time is set for the domain. Fails if it is not explicitly set or if the value is less than or equal to the default of 12000 ms.
    A00320 - XA Timeout: Checks that the XA timeout is enabled and that the value is '0' for the SOA Data Source (SOADataSource-jdbc.xml)
    A00330 - JDBC Statement Timeout: Checks that the Statement Timeout is set for all SOA Data Sources. Fails if the value is not set or if it is set to the default of -1.
    A00400 - XA Driver: Checks that the SOA Data Source is configured to use an XA driver. Fails if it is not.
    A00410 - JDBC Capacity Settings: Checks that the minimum and maximum capacity are equal for all SOA Data Sources. Fails if they are not and lists specifically which data sources failed.
    A00500 - SOA Roles: Checks that the default SOA roles 'SOAAdmin' and 'SOAOperator' are configured for the soa-infra application in the file system-jazn-data.xml. Fails if they are not.
    A00700 - SOA-INFRA Deployment: Checks that the soa-infra application is deployed to either a cluster, all members of a cluster or a stand alone server.
    A00710 - SOA Deployments: Checks that the SOA related applications are deployed to the same domain members as soa-infra.
    A00720 - SOA Library Deployments: Checks that the SOA related libraries are deployed to the same domain members as soa-infra.
    A00730 - Data Source Deployments: Checks that the SOA Data Sources are all targeted to the same domain members as soa-infra
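    As a concrete terminal sketch of the run described above (the RDA home path is a placeholder; the commands and option number are quoted from the post, shown here in their .sh form):

        cd /path/to/rda                  # RDA home location (placeholder path)
        ./rda.sh -S SOA                  # configure RDA for the SOA domains (simplest form)
        ./rda.sh -dT hcve                # list the available HCVE Rule Sets
        # at the prompt, select option 11: 'Oracle SOA 11g (11.1.1) Post Installation (Generic)'
        # results land in RDA_HOME/output; drag the .htm file into a browser to view them

    On Windows, the same steps use rda.cmd instead of rda.sh.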

    Read the article

  • Speaking - Automate Your ETL Infrastructure with SSIS and PowerShell

    - by AllenMWhite
    Today at 4:45PM EDT I'm presenting a new session using PowerShell to auto-generate SSIS packages via the BIML language. The really cool thing is that this session will be live broadcast on PASS TV! You can view the session by clicking on this link . If you have questions for me during the session, you can send them to me via Twitter using this hashtag: #posh2biml Brian Davis, my good friend from the Ohio North SQL Server Users Group, will be monitoring that hashtag and feeding me the questions that...(read more)

    Read the article

  • What does a connection timeout indicate when performing an NFS mount?

    - by DeeDee
    We have a shiny new QNAP NAS (TS-879U-RP), and I'm trying to mount it to our big ol' RHEL server in the same manner as our other two QNAP NAS devices. The IT department won't give me the root privileges to the NAS, so I can't SSH in (I know, I know). The first thing I did was to, via the QNAP web admin interface, create a network share named "Runs." I then added the IP of the RHEL server to the permissions list: On the RHEL server, I then added the following line to /etc/fstab: [IP of NAS]:/Runs /mnt/gsrnas3 nfs defaults 0 0 Aside from the IP and the specific mount directory name, this is how I mounted the other two NAS devices. I then created the gsrnas3 directory under /mnt/, and then ran `mount /mnt/gsrnas3' I got the following error: mount.nfs: Connection timed out My first thought is that it's a ports issue, but I don't have enough specific experience with this issue to know for sure. I have two other NAS devices by the same manufacturer already mounted to this RHEL server, so that leads me to believe the configuration issue is on the NAS side of things. I can ping the NAS device successfully from the RHEL server. Not being able to SSH into said NAS is a huge hassle, though. Any ideas?
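    A minimal sketch of the mount and a couple of read-only checks that can be run from the RHEL side (the NAS address is a placeholder; showmount and rpcinfo are standard NFS client utilities):

        # /etc/fstab entry as described above
        # <NAS-IP>:/Runs  /mnt/gsrnas3  nfs  defaults  0 0

        showmount -e <NAS-IP>       # does the NAS actually export /Runs to this host?
        rpcinfo -p <NAS-IP>         # are rpcbind/mountd/nfs registered and reachable?
        mount -v /mnt/gsrnas3       # verbose mount attempt, more detail than 'timed out'

    As a rule of thumb, a connection timeout at mount time means the NFS ports (111 for rpcbind, 2049 for nfs) never answered at all, which usually points at the NFS service or a firewall setting on the NAS rather than at the fstab entry.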

    Read the article

  • Enable grub boot menu on new system

    - by Remus Rigo
    I have installed Ubuntu 11.04 and I would like to see the boot menu when the system starts (by default it is hidden or the timeout=0) # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 insmod jpeg if background_image /boot/grub/boot.jpg; then true else set menu_color_normal=white/black set menu_color_highlight=black/light-gray fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux /boot/vmlinuz-2.6.38-8-generic root=UUID=44f311b4-0b40-4d10-b004-78108539fc39 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic } menuentry 'Ubuntu, with Linux 2.6.38-8-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 echo 'Loading Linux 2.6.38-8-generic ...' linux /boot/vmlinuz-2.6.38-8-generic root=UUID=44f311b4-0b40-4d10-b004-78108539fc39 ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.38-8-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root 44f311b4-0b40-4d10-b004-78108539fc39 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ###
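    For reference, a minimal sketch of the usual way to surface the menu on a generated grub.cfg like the one above; the settings live in the template file rather than in the generated output (option names as shipped with GRUB 2 on Ubuntu 11.04):

        # /etc/default/grub
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0           # comment this out so the menu is not hidden
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10                  # show the menu for 10 seconds

        # then regenerate /boot/grub/grub.cfg
        sudo update-grub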

    Read the article

  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices but it doesn't seem that DNS is resolving their hostnames. ipconfig/ all is showing that I am pointing to the correct dns server. I can "ping "dnsname"" and it will resolve but it wont resolve any other names. Split tunnel is set up so outside DNS is resolving fine So one issue might be DNS but I have the IP address of the server share so I figure I could just get to it that way. example: \10.0.0.1\ well I can't get to it that way either and I get "the specified network name is no longer available" I can ping it but I can't open the share. Below is the ASA config : ASA Version 8.2(1) ! hostname KG-ASA domain-name example.com names ! interface Vlan1 nameif inside security-level 100 ip address 10.0.0.253 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns domain-lookup outside dns server-group DefaultDNS name-server 10.0.0.101 domain-name blah.com access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902 access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0 access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list NONAT nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255 static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255 static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authentication enable console LOCAL aaa authentication http console LOCAL aaa authentication serial console LOCAL aaa authentication ssh console LOCAL aaa authentication telnet console LOCAL http server enable http 10.0.0.0 255.255.0.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set myset esp-aes esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map dynmap 1 set transform-set myset crypto dynamic-map dynmap 1 set reverse-route crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap crypto map IPSEC-MAP interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 crypto isakmp policy 65535 
authentication pre-share encryption aes hash sha group 2 lifetime 86400 telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 0.0.0.0 0.0.0.0 inside ssh 70.60.228.0 255.255.255.0 outside ssh 74.102.150.0 255.255.254.0 outside ssh 74.122.164.0 255.255.252.0 outside ssh timeout 5 console timeout 0 dhcpd dns 10.0.0.101 dhcpd lease 7200 dhcpd domain blah.com ! dhcpd address 10.0.0.110-10.0.0.170 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 63.111.165.21 webvpn enable outside svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1 svc enable group-policy EASYVPN internal group-policy EASYVPN attributes dns-server value 10.0.0.101 vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn split-tunnel-policy tunnelspecified split-tunnel-network-list value SPLIT-TUNNEL-VPN ! tunnel-group client type remote-access tunnel-group client general-attributes address-pool (inside) IPSECVPN-POOL address-pool IPSECVPN-POOL default-group-policy EASYVPN dhcp-server 10.0.0.253 tunnel-group client ipsec-attributes pre-shared-key * tunnel-group CLIENTVPN type ipsec-l2l tunnel-group CLIENTVPN ipsec-attributes pre-shared-key * ! class-map inspection_default match default-inspection-traffic ! ! policy-map global_policy class inspection_default inspect icmp ! service-policy global_policy global prompt hostname context I'm not sure where I should go next with troubleshooting nslookup result: Default Server: blahname.blah.lan Address: 10.0.0.101
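    One hedged observation on the configuration above: the EASYVPN group-policy pushes a DNS server but no split-dns list, so a split-tunnel client may keep sending internal name queries to its local resolver. A sketch of the attributes that would steer those lookups through the tunnel (the domain is taken from the config's own placeholder; verify the syntax against the ASA 8.2 command reference):

        group-policy EASYVPN attributes
         split-dns value blah.com
         default-domain value blah.com

    For the share itself, reaching \\10.0.0.1 by IP already takes DNS out of the picture, so it is also worth confirming that TCP 445 from the VPN pool (10.0.1.0/24) is allowed end to end on the server side.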

    Read the article

  • How do I start "Ubuntu classic desktop" (no effects) from the command line

    - by Andrew Stern
    I am able to run sessions over an ssh connection, but I'd rather use the "Ubuntu classic desktop (no effects)" version on Ubuntu 11.04 instead of the new Unity, since I don't have 3D support on the laptop I'm using to display the graphical user interface. How can I start up the older gnome-session without the 3D effects? I tried gnome-session, but it seems to be the option with the 3D effects, and I want a more stripped-down session over my ssh session.
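    A minimal sketch, with the caveat that session names vary by release, of how the installed sessions could be located and the classic pieces started by hand over the forwarded X connection (metacity is the non-compositing window manager behind the classic no-effects desktop; gnome-panel provides the classic panels):

        ls /usr/share/xsessions/                    # session entries offered at the login screen
        grep Exec /usr/share/xsessions/*.desktop    # the command each entry actually runs
        # a stripped-down classic desktop can usually be assembled manually:
        metacity &
        gnome-panel &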

    Read the article

  • iSCSI: Ethernet cable maximum length vs. SCSI command timeout

    - by Jeremy Hajek
    I have a question about a non-optimal setup and its practical implications. Ideally you would place the ESXi server right in the same room as the FreeNAS whitebox, end of question. My situation is this: I have a run of ~125 ft of Cat 5e connecting an ESXi server to a FreeNAS whitebox in the server room. I know the cable length is within the maximum distance for Ethernet traffic, but I have two questions... Can Cat 5e support gigabit speeds at that distance if the switch on the back end is a Linksys SRW-2048? Should I be concerned about the distance causing data read and write timeouts in the SCSI portion (disk operations of the ESXi)?
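    A hedged sanity check: 125 ft is well inside the 100 m copper limit, so the practical question is whether both ends actually negotiated 1000 Mb/s full duplex. A minimal sketch (the first command assumes the classic ESXi 4.x CLI; the second assumes a Linux box on the same switch, since FreeNAS itself would report the link via ifconfig):

        esxcfg-nics -l                  # on the ESXi host: negotiated speed/duplex per vmnic
        ethtool eth0 | grep -i speed    # on a Linux host: confirm 1000Mb/s was negotiated

    On the timeout side, the extra propagation delay of roughly 40 m of cable is measured in nanoseconds, so cable length on its own should not push iSCSI anywhere near a SCSI command timeout.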

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >