Search Results

Search found 11100 results on 444 pages for 'xt 20'.


  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however the proper interfaces to make that happen are not easy to discover and are for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have built-in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and Created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request; the server can store it and check for it before running another request, thus ensuring that a request is not replayed with exactly the same values.
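To make the replay-protection idea concrete, here is a minimal sketch of what a receiving service might do with the nonce and created values. This is purely illustrative and is not part of the target service, of WCF, or of the solution below; the NonceReplayCache class and its members are hypothetical.

using System;
using System.Collections.Concurrent;

// Hypothetical server-side helper: remembers nonces seen inside a freshness window
// so that a replayed request (same nonce) can be rejected.
public class NonceReplayCache
{
    private readonly ConcurrentDictionary<string, DateTime> seenNonces =
        new ConcurrentDictionary<string, DateTime>();
    private readonly TimeSpan window;

    public NonceReplayCache(TimeSpan window)
    {
        this.window = window;
    }

    // Returns true if the request may proceed; false if the Created value is stale
    // or the nonce has already been used (a likely replay).
    public bool TryRegister(string nonce, DateTime createdUtc)
    {
        // drop entries that have aged out of the replay window
        foreach (var entry in seenNonces)
        {
            if (DateTime.UtcNow - entry.Value > window)
            {
                DateTime removed;
                seenNonces.TryRemove(entry.Key, out removed);
            }
        }

        // reject requests whose Created timestamp is outside the window
        if (DateTime.UtcNow - createdUtc > window)
            return false;

        // TryAdd fails if the same nonce was already recorded
        return seenNonces.TryAdd(nonce, createdUtc);
    }
}

A real service would also have to share this state across servers and bound its memory use, but the essence is simply: remember each nonce for the duration of the freshness window and refuse duplicates.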
Basic SvcUtil Import - not much Luck The first thing I did was to simply import the service as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security configuration to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If I run this as is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport level security to be applied, rather than message level security.
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that omits the timestamp field which can cause problems for some services. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately.
This protocol does allow the Nonce and Created timestamp to be omitted (which effectively makes those keys optional). For some services I tried that requested a Nonce, just using this protocol actually worked where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option. The nonce has to be there. Creating Custom ClientCredentials As it turns out WCF doesn't have support for the Digest Nonce as part of WS-Security, and so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear and I ended up using bits and pieces of several of them to arrive at a working solution in the end. http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right so I'll recount the process here maybe a bit more comprehensively. In order for this to work a number of classes have to be overridden: ClientCredentials, ClientCredentialsSecurityTokenManager and WSSecurityTokenSerializer. The idea is that we need to create a custom ClientCredential class to hold the custom properties so they can be set from the UI or via configuration settings. The TokenManager and TokenSerializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:public class CustomCredentials : ClientCredentials { public CustomCredentials() { } protected CustomCredentials(CustomCredentials cc) : base(cc) { } public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager() { return new CustomSecurityTokenManager(this); } protected override ClientCredentials CloneCore() { return new CustomCredentials(this); } } public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager { public CustomSecurityTokenManager(CustomCredentials cred) : base(cred) { } public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version) { return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11); } } public class CustomTokenSerializer : WSSecurityTokenSerializer { public CustomTokenSerializer(SecurityVersion sv) : base(sv) { } protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.UtcNow; string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); // in this case password is plain text // for digest mode password needs to be encoded as: // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password)) // and profile needs to change to //string password = GetSHA1String(nonce + createdStr + userToken.Password); string password = userToken.Password; writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } protected string GetSHA1String(string phrase) { SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider(); byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase)); return Convert.ToBase64String(hashedDataBytes); } } Realistically only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low level XML semantics by writing output into an XML writer. I can't take credit for this code - most of the code comes from the MSDN forum post mentioned earlier - I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can be potentially guessed are highly exaggerated to say the least IMHO.
To satisfy even that aspect though I added the SHA1 hashing and Base64 encoding of the binary hash to give a more random value that would be impossible to 'guess'. The original example from the forum post used another level of encoding and decoding to string in between - but that really didn't accomplish anything but extra overhead. The header output generated from this looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password> <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce> <u:Created>2012-11-23T07:10:04.670Z</u:Created> </o:UsernameToken> </o:Security> </s:Header> which is exactly as it should be. Password Digest? In my case the password is passed in plain text over an SSL connection, so there's no digest required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the code for the digest implementation, but here is how this is likely to work. If you need to pass a digest encoded password things are a little bit trickier. The password type namespace needs to change to: http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest and then the password value needs to be encoded. The format for password digest encoding is this: Base64(SHA-1(Nonce + Created + Password)) and it can be handled in the code above with this code (that's commented in the snippet above): string password = GetSHA1String(nonce + createdStr + userToken.Password); The entire WriteTokenCore method for digest code looks like this:protected override void WriteTokenCore(System.Xml.XmlWriter writer, System.IdentityModel.Tokens.SecurityToken token) { UserNameSecurityToken userToken = token as UserNameSecurityToken; string tokennamespace = "o"; DateTime created = DateTime.UtcNow; string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ"); // unique Nonce value - encode with SHA-1 for 'randomness' // in theory the nonce could just be the GUID by itself string phrase = Guid.NewGuid().ToString(); var nonce = GetSHA1String(phrase); string password = GetSHA1String(nonce + createdStr + userToken.Password); writer.WriteRaw(string.Format( "<{0}:UsernameToken u:Id=\"" + token.Id + "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" + "<{0}:Username>" + userToken.UserName + "</{0}:Username>" + "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" + password + "</{0}:Password>" + "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" + nonce + "</{0}:Nonce>" + "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace)); } I had no service to connect to in order to try out Digest auth - if you end up needing it and get it to work please drop a comment…
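Since I couldn't test the digest variant myself, here is a small sketch of the opposite side of the exchange - how a receiving service could check an incoming PasswordDigest against the cleartext password it has on file. This is an assumption about how such a check might look, not code from the target service; note that, to match the client code above, it hashes the Base64 text of the nonce, whereas the WS-Security spec strictly calls for the raw nonce bytes.

// Hypothetical server-side check for a PasswordDigest value.
public static bool IsValidPasswordDigest(string nonceBase64, string createdStr,
                                         string knownPassword, string receivedDigest)
{
    using (var sha1 = new System.Security.Cryptography.SHA1CryptoServiceProvider())
    {
        // Base64(SHA-1(Nonce + Created + Password)) - the same concatenation the client uses
        byte[] hash = sha1.ComputeHash(
            System.Text.Encoding.UTF8.GetBytes(nonceBase64 + createdStr + knownPassword));

        // a production check would also use a constant-time comparison and
        // verify the Created timestamp and nonce freshness
        return Convert.ToBase64String(hash) == receivedDigest;
    }
}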
How to use the custom Credentials The easiest way to use the custom credentials is to create the client in code. Here's a factory method I use to create an instance of my service client:  public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url, string username, string password) { if (string.IsNullOrEmpty(url)) url = "https://notrealurl.com:443/cows/services/RealTimeOnline"; CustomBinding binding = new CustomBinding(); var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement(); security.IncludeTimestamp = false; security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256; security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10; var encoding = new TextMessageEncodingBindingElement(); encoding.MessageVersion = MessageVersion.Soap11; var transport = new HttpsTransportBindingElement(); transport.MaxReceivedMessageSize = 20000000; // 20 megs binding.Elements.Add(security); binding.Elements.Add(encoding); binding.Elements.Add(transport); RealTimeOnlineClient client = new RealTimeOnlineClient(binding, new EndpointAddress(url)); // to use full client credential with Nonce uncomment this code: // it looks like this might not be required - the service seems to work without it client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>(); client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials()); client.ClientCredentials.UserName.UserName = username; client.ClientCredentials.UserName.Password = password; return client; } This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read only and this is the way it has to be added.
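For completeness, here's a hypothetical example of calling that factory method - the commented GetQuote() call is a placeholder, since the available operations depend entirely on the imported service contract.

// illustrative only - operation names depend on the imported contract
var client = CreateRealTimeOnlineProxy(
    "https://notrealurl.com:443/cows/services/RealTimeOnline",
    "TheUsername",
    "ThePassword");

try
{
    // var quotes = client.GetQuote("MSFT");   // hypothetical service operation
    client.Close();
}
catch
{
    client.Abort();   // don't Close() a channel that may have faulted
    throw;
}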
Summary It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services I suspect) that use this protocol. I've run into this twice now and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place. Again I marvel at WCF and the breadth of protocol features it supports in a single tool. And even when it can't handle something there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team or at least being very, very intricately involved in the innards of WCF to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online and I was able to cobble together a solution from the online content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell… © Rick Strahl, West Wind Technologies, 2005-2012. Posted in WCF, Web Services.

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the following result (shown with a tooltip example): That concludes this portion of our show but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy! Additional Resources: USGS Earthquake Data Feeds; Brad Abrams shows how RIA Services and MVVM can work together.

    Read the article

  • Camera for 2.5D Game

    - by me--
    I'm hoping someone can explain this to me like I'm 5, because I've been struggling with this for hours and simply cannot understand what I'm doing wrong. I've written a Camera class for my 2.5D game. The intention is to support world and screen spaces like this: The camera is the black thing on the right. The +Z axis is upwards in that image, with -Z heading downwards. As you can see, both world space and screen space have (0, 0) at their top-left. I started writing some unit tests to prove that my camera was working as expected, and that's where things started getting...strange. My tests plot coordinates in world, view, and screen spaces. Eventually I will use image comparison to assert that they are correct, but for now my test just displays the result. The render logic uses Camera.ViewMatrix to transform world space to view space, and Camera.WorldPointToScreen to transform world space to screen space. Here is an example test: [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render(camera, out worldRender, out viewRender, out screenRender, new Vector3(30, 0, 0), new Vector3(30, 40, 0)); this.ShowRenders(camera, worldRender, viewRender, screenRender); } And here's what pops up when I run this test: World space looks OK, although I suspect the z axis is going into the screen instead of towards the viewer. View space has me completely baffled. I was expecting the camera to be sitting above (0, 0) and looking towards the center of the scene. Instead, the z axis seems to be the wrong way around, and the camera is positioned in the opposite corner to what I expect! I suspect screen space will be another thing altogether, but can anyone explain what I'm doing wrong in my Camera class? UPDATE I made some progress in terms of getting things to look visually as I expect, but only through intuition: not an actual understanding of what I'm doing. Any enlightenment would be greatly appreciated. I realized that my view space was flipped both vertically and horizontally compared to what I expected, so I changed my view matrix to scale accordingly: this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom, this.zoom, 1) * Matrix.CreateScale(-1, -1, 1); I could combine the two CreateScale calls, but have left them separate for clarity. Again, I have no idea why this is necessary, but it fixed my view space: But now my screen space needs to be flipped vertically, so I modified my projection matrix accordingly: this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); And this results in what I was expecting from my first attempt: I have also just tried using Camera to render sprites via a SpriteBatch to make sure everything works there too, and it does. But the question remains: why do I need to do all this flipping of axes to get the space coordinates the way I expect? UPDATE 2 I've since improved my rendering logic in my test suite so that it supports geometries and so that lines get lighter the further away they are from the camera. I wanted to do this to avoid optical illusions and to further prove to myself that I'm looking at what I think I am. Here is an example: In this case, I have 3 geometries: a cube, a sphere, and a polyline on the top face of the cube. 
Notice how the darkening and lightening of the lines correctly identifies those portions of the geometries closer to the camera. If I remove the negative scaling I had to put in, I see: So you can see I'm still in the same boat - I still need those vertical and horizontal flips in my matrices to get things to appear correctly. In the interests of giving people a repro to play with, here is the complete code needed to generate the above. If you want to run via the test harness, just install the xunit package: Camera.cs: using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Graphics; using System.Diagnostics; public sealed class Camera { private readonly Viewport viewport; private readonly Matrix projectionMatrix; private Matrix? viewMatrix; private Vector3 location; private Vector3 target; private Vector3 up; private float zoom; public Camera(Viewport viewport) { this.viewport = viewport; // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.projectionMatrix = Matrix.CreatePerspectiveFieldOfView(0.7853982f, viewport.AspectRatio, 1, 2) * Matrix.CreateScale(1, -1, 1); // defaults this.location = new Vector3(this.viewport.Width / 2, this.viewport.Height, 100); this.target = new Vector3(this.viewport.Width / 2, this.viewport.Height / 2, 0); this.up = new Vector3(0, 0, 1); this.zoom = 1; } public Viewport Viewport { get { return this.viewport; } } public Vector3 Location { get { return this.location; } set { this.location = value; this.viewMatrix = null; } } public Vector3 Target { get { return this.target; } set { this.target = value; this.viewMatrix = null; } } public Vector3 Up { get { return this.up; } set { this.up = value; this.viewMatrix = null; } } public float Zoom { get { return this.zoom; } set { this.zoom = value; this.viewMatrix = null; } } public Matrix ProjectionMatrix { get { return this.projectionMatrix; } } public Matrix ViewMatrix { get { if (this.viewMatrix == null) { // for an explanation of the negative scaling, see: http://gamedev.stackexchange.com/questions/63409/ this.viewMatrix = Matrix.CreateLookAt(this.location, this.target, this.up) * Matrix.CreateScale(this.zoom) * Matrix.CreateScale(-1, -1, 1); } return this.viewMatrix.Value; } } public Vector2 WorldPointToScreen(Vector3 point) { var result = viewport.Project(point, this.ProjectionMatrix, this.ViewMatrix, Matrix.Identity); return new Vector2(result.X, result.Y); } public void WorldPointsToScreen(Vector3[] points, Vector2[] destination) { Debug.Assert(points != null); Debug.Assert(destination != null); Debug.Assert(points.Length == destination.Length); for (var i = 0; i < points.Length; ++i) { destination[i] = this.WorldPointToScreen(points[i]); } } } CameraFixture.cs: using Microsoft.Xna.Framework.Graphics; using System; using System.Collections.Generic; using System.Linq; using System.Windows; using System.Windows.Controls; using System.Windows.Media; using Xunit; using XNA = Microsoft.Xna.Framework; public sealed class CameraFixture { [Fact] public void foo() { var camera = new Camera(new Viewport(0, 0, 250, 100)); DrawingVisual worldRender; DrawingVisual viewRender; DrawingVisual screenRender; this.Render( camera, out worldRender, out viewRender, out screenRender, new Sphere(30, 15) { WorldMatrix = XNA.Matrix.CreateTranslation(155, 50, 0) }, new Cube(30) { WorldMatrix = XNA.Matrix.CreateTranslation(75, 60, 15) }, new PolyLine(new XNA.Vector3(0, 0, 0), new XNA.Vector3(10, 10, 0), new XNA.Vector3(20, 0, 0), new XNA.Vector3(0, 0, 0)) { WorldMatrix = 
XNA.Matrix.CreateTranslation(65, 55, 30) }); this.ShowRenders(worldRender, viewRender, screenRender); } #region Supporting Fields private static readonly Pen xAxisPen = new Pen(Brushes.Red, 2); private static readonly Pen yAxisPen = new Pen(Brushes.Green, 2); private static readonly Pen zAxisPen = new Pen(Brushes.Blue, 2); private static readonly Pen viewportPen = new Pen(Brushes.Gray, 1); private static readonly Pen nonScreenSpacePen = new Pen(Brushes.Black, 0.5); private static readonly Color geometryBaseColor = Colors.Black; #endregion #region Supporting Methods private void Render(Camera camera, out DrawingVisual worldRender, out DrawingVisual viewRender, out DrawingVisual screenRender, params Geometry[] geometries) { var worldDrawingVisual = new DrawingVisual(); var viewDrawingVisual = new DrawingVisual(); var screenDrawingVisual = new DrawingVisual(); const int axisLength = 15; using (var worldDrawingContext = worldDrawingVisual.RenderOpen()) using (var viewDrawingContext = viewDrawingVisual.RenderOpen()) using (var screenDrawingContext = screenDrawingVisual.RenderOpen()) { // draw lines around the camera's viewport var viewportBounds = camera.Viewport.Bounds; var viewportLines = new Tuple<int, int, int, int>[] { Tuple.Create(viewportBounds.Left, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Top), Tuple.Create(viewportBounds.Left, viewportBounds.Top, viewportBounds.Right, viewportBounds.Top), Tuple.Create(viewportBounds.Right, viewportBounds.Top, viewportBounds.Right, viewportBounds.Bottom), Tuple.Create(viewportBounds.Right, viewportBounds.Bottom, viewportBounds.Left, viewportBounds.Bottom) }; foreach (var viewportLine in viewportLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item1, viewportLine.Item2, 0)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(viewportLine.Item3, viewportLine.Item4, 0)); worldDrawingContext.DrawLine(viewportPen, new Point(viewportLine.Item1, viewportLine.Item2), new Point(viewportLine.Item3, viewportLine.Item4)); viewDrawingContext.DrawLine(viewportPen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(viewportPen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // draw axes var axisLines = new Tuple<int, int, int, int, int, int, Pen>[] { Tuple.Create(0, 0, 0, axisLength, 0, 0, xAxisPen), Tuple.Create(0, 0, 0, 0, axisLength, 0, yAxisPen), Tuple.Create(0, 0, 0, 0, 0, axisLength, zAxisPen) }; foreach (var axisLine in axisLines) { var viewStart = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3), camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6), camera.ViewMatrix); var screenStart = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item1, axisLine.Item2, axisLine.Item3)); var screenEnd = camera.WorldPointToScreen(new XNA.Vector3(axisLine.Item4, axisLine.Item5, axisLine.Item6)); worldDrawingContext.DrawLine(axisLine.Item7, new Point(axisLine.Item1, axisLine.Item2), new Point(axisLine.Item4, axisLine.Item5)); viewDrawingContext.DrawLine(axisLine.Item7, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); screenDrawingContext.DrawLine(axisLine.Item7, new 
Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } // for all points in all geometries to be rendered, find the closest and furthest away from the camera so we can lighten lines that are further away var distancesToAllGeometrySections = from geometry in geometries let geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix from section in geometry.Sections from point in new XNA.Vector3[] { section.Item1, section.Item2 } let viewPoint = XNA.Vector3.Transform(point, geometryViewMatrix) select viewPoint.Length(); var furthestDistance = distancesToAllGeometrySections.Max(); var closestDistance = distancesToAllGeometrySections.Min(); var deltaDistance = Math.Max(0.000001f, furthestDistance - closestDistance); // draw each geometry for (var i = 0; i < geometries.Length; ++i) { var geometry = geometries[i]; // there's probably a more correct name for this, but basically this gets the geometry relative to the camera so we can check how far away each point is from the camera var geometryViewMatrix = geometry.WorldMatrix * camera.ViewMatrix; // we order roughly by those sections furthest from the camera to those closest, so that the closer ones "overwrite" the ones further away var orderedSections = from section in geometry.Sections let startPointRelativeToCamera = XNA.Vector3.Transform(section.Item1, geometryViewMatrix) let endPointRelativeToCamera = XNA.Vector3.Transform(section.Item2, geometryViewMatrix) let startPointDistance = startPointRelativeToCamera.Length() let endPointDistance = endPointRelativeToCamera.Length() orderby (startPointDistance + endPointDistance) descending select new { Section = section, DistanceToStart = startPointDistance, DistanceToEnd = endPointDistance }; foreach (var orderedSection in orderedSections) { var start = XNA.Vector3.Transform(orderedSection.Section.Item1, geometry.WorldMatrix); var end = XNA.Vector3.Transform(orderedSection.Section.Item2, geometry.WorldMatrix); var viewStart = XNA.Vector3.Transform(start, camera.ViewMatrix); var viewEnd = XNA.Vector3.Transform(end, camera.ViewMatrix); worldDrawingContext.DrawLine(nonScreenSpacePen, new Point(start.X, start.Y), new Point(end.X, end.Y)); viewDrawingContext.DrawLine(nonScreenSpacePen, new Point(viewStart.X, viewStart.Y), new Point(viewEnd.X, viewEnd.Y)); // screen rendering is more complicated purely because I wanted geometry to fade the further away it is from the camera // otherwise, it's very hard to tell whether the rendering is actually correct or not var startDistanceRatio = (orderedSection.DistanceToStart - closestDistance) / deltaDistance; var endDistanceRatio = (orderedSection.DistanceToEnd - closestDistance) / deltaDistance; // lerp towards white based on distance from camera, but only to a maximum of 90% var startColor = Lerp(geometryBaseColor, Colors.White, startDistanceRatio * 0.9f); var endColor = Lerp(geometryBaseColor, Colors.White, endDistanceRatio * 0.9f); var screenStart = camera.WorldPointToScreen(start); var screenEnd = camera.WorldPointToScreen(end); var brush = new LinearGradientBrush { StartPoint = new Point(screenStart.X, screenStart.Y), EndPoint = new Point(screenEnd.X, screenEnd.Y), MappingMode = BrushMappingMode.Absolute }; brush.GradientStops.Add(new GradientStop(startColor, 0)); brush.GradientStops.Add(new GradientStop(endColor, 1)); var pen = new Pen(brush, 1); brush.Freeze(); pen.Freeze(); screenDrawingContext.DrawLine(pen, new Point(screenStart.X, screenStart.Y), new Point(screenEnd.X, screenEnd.Y)); } } } worldRender = worldDrawingVisual; 
viewRender = viewDrawingVisual; screenRender = screenDrawingVisual; } private static float Lerp(float start, float end, float amount) { var difference = end - start; var adjusted = difference * amount; return start + adjusted; } private static Color Lerp(Color color, Color to, float amount) { var sr = color.R; var sg = color.G; var sb = color.B; var er = to.R; var eg = to.G; var eb = to.B; var r = (byte)Lerp(sr, er, amount); var g = (byte)Lerp(sg, eg, amount); var b = (byte)Lerp(sb, eb, amount); return Color.FromArgb(255, r, g, b); } private void ShowRenders(DrawingVisual worldRender, DrawingVisual viewRender, DrawingVisual screenRender) { var itemsControl = new ItemsControl(); itemsControl.Items.Add(new HeaderedContentControl { Header = "World", Content = new DrawingVisualHost(worldRender)}); itemsControl.Items.Add(new HeaderedContentControl { Header = "View", Content = new DrawingVisualHost(viewRender) }); itemsControl.Items.Add(new HeaderedContentControl { Header = "Screen", Content = new DrawingVisualHost(screenRender) }); var window = new Window { Title = "Renders", Content = itemsControl, ShowInTaskbar = true, SizeToContent = SizeToContent.WidthAndHeight }; window.ShowDialog(); } #endregion #region Supporting Types // stupidly simple 3D geometry class, consisting of a series of sections that will be connected by lines private abstract class Geometry { public abstract IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get; } public XNA.Matrix WorldMatrix { get; set; } } private sealed class Line : Geometry { private readonly XNA.Vector3 magnitude; public Line(XNA.Vector3 magnitude) { this.magnitude = magnitude; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { yield return Tuple.Create(XNA.Vector3.Zero, this.magnitude); } } } private sealed class PolyLine : Geometry { private readonly XNA.Vector3[] points; public PolyLine(params XNA.Vector3[] points) { this.points = points; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { if (this.points.Length < 2) { yield break; } var end = this.points[0]; for (var i = 1; i < this.points.Length; ++i) { var start = end; end = this.points[i]; yield return Tuple.Create(start, end); } } } } private sealed class Cube : Geometry { private readonly float size; public Cube(float size) { this.size = size; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var halfSize = this.size / 2; var frontBottomLeft = new XNA.Vector3(-halfSize, halfSize, -halfSize); var frontBottomRight = new XNA.Vector3(halfSize, halfSize, -halfSize); var frontTopLeft = new XNA.Vector3(-halfSize, halfSize, halfSize); var frontTopRight = new XNA.Vector3(halfSize, halfSize, halfSize); var backBottomLeft = new XNA.Vector3(-halfSize, -halfSize, -halfSize); var backBottomRight = new XNA.Vector3(halfSize, -halfSize, -halfSize); var backTopLeft = new XNA.Vector3(-halfSize, -halfSize, halfSize); var backTopRight = new XNA.Vector3(halfSize, -halfSize, halfSize); // front face yield return Tuple.Create(frontBottomLeft, frontBottomRight); yield return Tuple.Create(frontBottomLeft, frontTopLeft); yield return Tuple.Create(frontTopLeft, frontTopRight); yield return Tuple.Create(frontTopRight, frontBottomRight); // left face yield return Tuple.Create(frontTopLeft, backTopLeft); yield return Tuple.Create(backTopLeft, backBottomLeft); yield return Tuple.Create(backBottomLeft, frontBottomLeft); // right face yield return Tuple.Create(frontTopRight, backTopRight); yield return Tuple.Create(backTopRight, 
backBottomRight); yield return Tuple.Create(backBottomRight, frontBottomRight); // back face yield return Tuple.Create(backBottomLeft, backBottomRight); yield return Tuple.Create(backTopLeft, backTopRight); } } } private sealed class Sphere : Geometry { private readonly float radius; private readonly int subsections; public Sphere(float radius, int subsections) { this.radius = radius; this.subsections = subsections; } public override IEnumerable<Tuple<XNA.Vector3, XNA.Vector3>> Sections { get { var latitudeLines = this.subsections; var longitudeLines = this.subsections; // see http://stackoverflow.com/a/4082020/5380 var results = from latitudeLine in Enumerable.Range(0, latitudeLines) from longitudeLine in Enumerable.Range(0, longitudeLines) let latitudeRatio = latitudeLine / (float)latitudeLines let longitudeRatio = longitudeLine / (float)longitudeLines let nextLatitudeRatio = (latitudeLine + 1) / (float)latitudeLines let nextLongitudeRatio = (longitudeLine + 1) / (float)longitudeLines let z1 = Math.Cos(Math.PI * latitudeRatio) let z2 = Math.Cos(Math.PI * nextLatitudeRatio) let x1 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y1 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Cos(Math.PI * 2 * longitudeRatio) let y2 = Math.Sin(Math.PI * nextLatitudeRatio) * Math.Sin(Math.PI * 2 * longitudeRatio) let x3 = Math.Sin(Math.PI * latitudeRatio) * Math.Cos(Math.PI * 2 * nextLongitudeRatio) let y3 = Math.Sin(Math.PI * latitudeRatio) * Math.Sin(Math.PI * 2 * nextLongitudeRatio) let start = new XNA.Vector3((float)x1 * radius, (float)y1 * radius, (float)z1 * radius) let firstEnd = new XNA.Vector3((float)x2 * radius, (float)y2 * radius, (float)z2 * radius) let secondEnd = new XNA.Vector3((float)x3 * radius, (float)y3 * radius, (float)z1 * radius) select new { First = Tuple.Create(start, firstEnd), Second = Tuple.Create(start, secondEnd) }; foreach (var result in results) { yield return result.First; yield return result.Second; } } } } #endregion }

    Read the article

  • Improving Partitioned Table Join Performance

    - by Paul White
The query optimizer does not always choose an optimal strategy when joining partitioned tables. This post looks at an example, showing how a manual rewrite of the query can almost double performance, while reducing the memory grant to almost nothing. Test Data The two tables in this example use a common partitioning scheme. The partition function uses 41 equal-size partitions: CREATE PARTITION FUNCTION PFT (integer) AS RANGE RIGHT FOR VALUES ( 125000, 250000, 375000, 500000, 625000, 750000, 875000, 1000000, 1125000, 1250000, 1375000, 1500000, 1625000, 1750000, 1875000, 2000000, 2125000, 2250000, 2375000, 2500000, 2625000, 2750000, 2875000, 3000000, 3125000, 3250000, 3375000, 3500000, 3625000, 3750000, 3875000, 4000000, 4125000, 4250000, 4375000, 4500000, 4625000, 4750000, 4875000, 5000000 ); GO CREATE PARTITION SCHEME PST AS PARTITION PFT ALL TO ([PRIMARY]); The two tables are: CREATE TABLE dbo.T1 ( TID integer NOT NULL IDENTITY(0,1), Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T1 PRIMARY KEY CLUSTERED (TID) ON PST (TID) );   CREATE TABLE dbo.T2 ( TID integer NOT NULL, Column1 integer NOT NULL, Padding binary(100) NOT NULL DEFAULT 0x,   CONSTRAINT PK_T2 PRIMARY KEY CLUSTERED (TID, Column1) ON PST (TID) ); The next script loads 5 million rows into T1 with a pseudo-random value between 1 and 5 for Column1. The table is partitioned on the IDENTITY column TID: INSERT dbo.T1 WITH (TABLOCKX) (Column1) SELECT (ABS(CHECKSUM(NEWID())) % 5) + 1 FROM dbo.Numbers AS N WHERE n BETWEEN 1 AND 5000000; In case you don’t already have an auxiliary table of numbers lying around, here’s a script to create one with 10 million rows: CREATE TABLE dbo.Numbers (n bigint PRIMARY KEY);   WITH L0 AS(SELECT 1 AS c UNION ALL SELECT 1), L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B), L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B), L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B), L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B), L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B), Nums AS(SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n FROM L5) INSERT dbo.Numbers WITH (TABLOCKX) SELECT TOP (10000000) n FROM Nums ORDER BY n OPTION (MAXDOP 1); Table T1 contains data like this: Next we load data into table T2. The relationship between the two tables is that table 2 contains ‘n’ rows for each row in table 1, where ‘n’ is determined by the value in Column1 of table T1. There is nothing particularly special about the data or distribution, by the way. INSERT dbo.T2 WITH (TABLOCKX) (TID, Column1) SELECT T.TID, N.n FROM dbo.T1 AS T JOIN dbo.Numbers AS N ON N.n >= 1 AND N.n <= T.Column1; Table T2 ends up containing about 15 million rows: The primary key for table T2 is a combination of TID and Column1. The data is partitioned according to the value in column TID alone. Partition Distribution The following query shows the number of rows in each partition of table T1: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T1 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are 40 partitions containing 125,000 rows (40 * 125k = 5m rows). The rightmost partition remains empty. The next query shows the distribution for table 2: SELECT PartitionID = CA1.P, NumRows = COUNT_BIG(*) FROM dbo.T2 AS T CROSS APPLY (VALUES ($PARTITION.PFT(TID))) AS CA1 (P) GROUP BY CA1.P ORDER BY CA1.P; There are roughly 375,000 rows in each partition (the rightmost partition is also empty): Ok, that’s the test data done. 
Test Query and Execution Plan The task is to count the rows resulting from joining tables 1 and 2 on the TID column: SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; The optimizer chooses a plan using parallel hash join, and partial aggregation: The Plan Explorer plan tree view shows accurate cardinality estimates and an even distribution of rows across threads (click to enlarge the image): With a warm data cache, the STATISTICS IO output shows that no physical I/O was needed, and all 41 partitions were touched: Running the query without actual execution plan or STATISTICS IO information for maximum performance, the query returns in around 2600ms. Execution Plan Analysis The first step toward improving on the execution plan produced by the query optimizer is to understand how it works, at least in outline. The two parallel Clustered Index Scans use multiple threads to read rows from tables T1 and T2. Parallel scan uses a demand-based scheme where threads are given page(s) to scan from the table as needed. This arrangement has certain important advantages, but does result in an unpredictable distribution of rows amongst threads. The point is that multiple threads cooperate to scan the whole table, but it is impossible to predict which rows end up on which threads. For correct results from the parallel hash join, the execution plan has to ensure that rows from T1 and T2 that might join are processed on the same thread. For example, if a row from T1 with join key value ‘1234’ is placed in thread 5’s hash table, the execution plan must guarantee that any rows from T2 that also have join key value ‘1234’ probe thread 5’s hash table for matches. The way this guarantee is enforced in this parallel hash join plan is by repartitioning rows to threads after each parallel scan. The two repartitioning exchanges route rows to threads using a hash function over the hash join keys. The two repartitioning exchanges use the same hash function so rows from T1 and T2 with the same join key must end up on the same hash join thread. Expensive Exchanges This business of repartitioning rows between threads can be very expensive, especially if a large number of rows is involved. The execution plan selected by the optimizer moves 5 million rows through one repartitioning exchange and around 15 million across the other. As a first step toward removing these exchanges, consider the execution plan selected by the optimizer if we join just one partition from each table, disallowing parallelism: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = 1 AND $PARTITION.PFT(T2.TID) = 1 OPTION (MAXDOP 1); The optimizer has chosen a (one-to-many) merge join instead of a hash join. The single-partition query completes in around 100ms. If everything scaled linearly, we would expect that extending this strategy to all 40 populated partitions would result in an execution time around 4000ms. Using parallelism could reduce that further, perhaps to be competitive with the parallel hash join chosen by the optimizer. This raises a question. If the most efficient way to join one partition from each of the tables is to use a merge join, why does the optimizer not choose a merge join for the full query? 
Forcing a Merge Join Let’s force the optimizer to use a merge join on the test query using a hint: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN); This is the execution plan selected by the optimizer: This plan results in the same number of logical reads reported previously, but instead of 2600ms the query takes 5000ms. The natural explanation for this drop in performance is that the merge join plan is only using a single thread, whereas the parallel hash join plan could use multiple threads. Parallel Merge Join We can get a parallel merge join plan using the same query hint as before, and adding trace flag 8649: SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (MERGE JOIN, QUERYTRACEON 8649); The execution plan is: This looks promising. It uses a similar strategy to distribute work across threads as seen for the parallel hash join. In practice though, performance is disappointing. On a typical run, the parallel merge plan runs for around 8400ms; slower than the single-threaded merge join plan (5000ms) and much worse than the 2600ms for the parallel hash join. We seem to be going backwards! The logical reads for the parallel merge are still exactly the same as before, with no physical IOs. The cardinality estimates and thread distribution are also still very good (click to enlarge): A big clue to the reason for the poor performance is shown in the wait statistics (captured by Plan Explorer Pro): CXPACKET waits require careful interpretation, and are most often benign, but in this case excessive waiting occurs at the repartitioning exchanges. Unlike the parallel hash join, the repartitioning exchanges in this plan are order-preserving ‘merging’ exchanges (because merge join requires ordered inputs): Parallelism works best when threads can just grab any available unit of work and get on with processing it. Preserving order introduces inter-thread dependencies that can easily lead to significant waits occurring. In extreme cases, these dependencies can result in an intra-query deadlock, though the details of that will have to wait for another time to explore in detail. The potential for waits and deadlocks leads the query optimizer to cost parallel merge join relatively highly, especially as the degree of parallelism (DOP) increases. This high costing resulted in the optimizer choosing a serial merge join rather than parallel in this case. The test results certainly confirm its reasoning. Collocated Joins In SQL Server 2008 and later, the optimizer has another available strategy when joining tables that share a common partition scheme. This strategy is a collocated join, also known as as a per-partition join. It can be applied in both serial and parallel execution plans, though it is limited to 2-way joins in the current optimizer. Whether the optimizer chooses a collocated join or not depends on cost estimation. The primary benefits of a collocated join are that it eliminates an exchange and requires less memory, as we will see next. Costing and Plan Selection The query optimizer did consider a collocated join for our original query, but it was rejected on cost grounds. The parallel hash join with repartitioning exchanges appeared to be a cheaper option. There is no query hint to force a collocated join, so we have to mess with the costing framework to produce one for our test query. 
Pretending that IOs cost 50 times more than usual is enough to convince the optimizer to use collocated join with our test query: -- Pretend IOs are 50x cost temporarily DBCC SETIOWEIGHT(50);   -- Co-located hash join SELECT COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID OPTION (RECOMPILE);   -- Reset IO costing DBCC SETIOWEIGHT(1); Collocated Join Plan The estimated execution plan for the collocated join is: The Constant Scan contains one row for each partition of the shared partitioning scheme, from 1 to 41. The hash repartitioning exchanges seen previously are replaced by a single Distribute Streams exchange using Demand partitioning. Demand partitioning means that the next partition id is given to the next parallel thread that asks for one. My test machine has eight logical processors, and all are available for SQL Server to use. As a result, there are eight threads in the single parallel branch in this plan, each processing one partition from each table at a time. Once a thread finishes processing a partition, it grabs a new partition number from the Distribute Streams exchange…and so on until all partitions have been processed. It is important to understand that the parallel scans in this plan are different from the parallel hash join plan. Although the scans have the same parallelism icon, tables T1 and T2 are not being co-operatively scanned by multiple threads in the same way. Each thread reads a single partition of T1 and performs a hash match join with the same partition from table T2. The properties of the two Clustered Index Scans show a Seek Predicate (unusual for a scan!) limiting the rows to a single partition: The crucial point is that the join between T1 and T2 is on TID, and TID is the partitioning column for both tables. A thread that processes partition ‘n’ is guaranteed to see all rows that can possibly join on TID for that partition. In addition, no other thread will see rows from that partition, so this removes the need for repartitioning exchanges. CPU and Memory Efficiency Improvements The collocated join has removed two expensive repartitioning exchanges and added a single exchange processing 41 rows (one for each partition id). Remember, the parallel hash join plan exchanges had to process 5 million and 15 million rows. The amount of processor time spent on exchanges will be much lower in the collocated join plan. In addition, the collocated join plan has a maximum of 8 threads processing single partitions at any one time. The 41 partitions will all be processed eventually, but a new partition is not started until a thread asks for it. Threads can reuse hash table memory for the new partition. The parallel hash join plan also had 8 hash tables, but with all 5,000,000 build rows loaded at the same time. The collocated plan needs memory for only 8 * 125,000 = 1,000,000 rows at any one time. Collocated Hash Join Performance The collocated join plan has disappointing performance in this case. The query runs for around 25,300ms despite the same IO statistics as usual. This is much the worst result so far, so what went wrong? It turns out that cardinality estimation for the single partition scans of table T1 is slightly low. The properties of the Clustered Index Scan of T1 (graphic immediately above) show the estimation was for 121,951 rows. 
This is a small shortfall compared with the 125,000 rows actually encountered, but it was enough to cause the hash join to spill to physical tempdb: A level 1 spill doesn’t sound too bad, until you realize that the spill to tempdb probably occurs for each of the 41 partitions. As a side note, the cardinality estimation error is a little surprising because the system tables accurately show there are 125,000 rows in every partition of T1. Unfortunately, the optimizer uses regular column and index statistics to derive cardinality estimates here rather than system table information (e.g. sys.partitions). Collocated Merge Join We will never know how well the collocated parallel hash join plan might have worked without the cardinality estimation error (and the resulting 41 spills to tempdb) but we do know: Merge join does not require a memory grant; and Merge join was the optimizer’s preferred join option for a single partition join Putting this all together, what we would really like to see is the same collocated join strategy, but using merge join instead of hash join. Unfortunately, the current query optimizer cannot produce a collocated merge join; it only knows how to do collocated hash join. So where does this leave us? CROSS APPLY sys.partitions We can try to write our own collocated join query. We can use sys.partitions to find the partition numbers, and CROSS APPLY to get a count per partition, with a final step to sum the partial counts. The following query implements this idea: SELECT row_count = SUM(Subtotals.cnt) FROM ( -- Partition numbers SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1 ) AS P CROSS APPLY ( -- Count per collocated join SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals; The estimated plan is: The cardinality estimates aren’t all that good here, especially the estimate for the scan of the system table underlying the sys.partitions view. Nevertheless, the plan shape is heading toward where we would like to be. Each partition number from the system table results in a per-partition scan of T1 and T2, a one-to-many Merge Join, and a Stream Aggregate to compute the partial counts. The final Stream Aggregate just sums the partial counts. Execution time for this query is around 3,500ms, with the same IO statistics as always. This compares favourably with 5,000ms for the serial plan produced by the optimizer with the OPTION (MERGE JOIN) hint. This is another case of the sum of the parts being less than the whole – summing 41 partial counts from 41 single-partition merge joins is faster than a single merge join and count over all partitions. Even so, this single-threaded collocated merge join is not as quick as the original parallel hash join plan, which executed in 2,600ms. On the positive side, our collocated merge join uses only one logical processor and requires no memory grant. The parallel hash join plan used 16 threads and reserved 569 MB of memory:   Using a Temporary Table Our collocated merge join plan should benefit from parallelism. The reason parallelism is not being used is that the query references a system table. 
We can work around that by writing the partition numbers to a temporary table (or table variable): SET STATISTICS IO ON; DECLARE @s datetime2 = SYSUTCDATETIME();   CREATE TABLE #P ( partition_number integer PRIMARY KEY);   INSERT #P (partition_number) SELECT p.partition_number FROM sys.partitions AS p WHERE p.[object_id] = OBJECT_ID(N'T1', N'U') AND p.index_id = 1;   SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals;   DROP TABLE #P;   SELECT DATEDIFF(Millisecond, @s, SYSUTCDATETIME()); SET STATISTICS IO OFF; Using the temporary table adds a few logical reads, but the overall execution time is still around 3500ms, indistinguishable from the same query without the temporary table. The problem is that the query optimizer still doesn’t choose a parallel plan for this query, though the removal of the system table reference means that it could if it chose to: In fact the optimizer did enter the parallel plan phase of query optimization (running search 1 for a second time): Unfortunately, the parallel plan found seemed to be more expensive than the serial plan. This is a crazy result, caused by the optimizer’s cost model not reducing operator CPU costs on the inner side of a nested loops join. Don’t get me started on that, we’ll be here all night. In this plan, everything expensive happens on the inner side of a nested loops join. Without a CPU cost reduction to compensate for the added cost of exchange operators, candidate parallel plans always look more expensive to the optimizer than the equivalent serial plan. Parallel Collocated Merge Join We can produce the desired parallel plan using trace flag 8649 again: SELECT row_count = SUM(Subtotals.cnt) FROM #P AS p CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: One difference between this plan and the collocated hash join plan is that a Repartition Streams exchange operator is used instead of Distribute Streams. The effect is similar, though not quite identical. The Repartition uses round-robin partitioning, meaning the next partition id is pushed to the next thread in sequence. The Distribute Streams exchange seen earlier used Demand partitioning, meaning the next partition id is pulled across the exchange by the next thread that is ready for more work. There are subtle performance implications for each partitioning option, but going into that would again take us too far off the main point of this post. Performance The important thing is the performance of this parallel collocated merge join – just 1350ms on a typical run. The list below shows all the alternatives from this post (all timings include creation, population, and deletion of the temporary table where appropriate) from quickest to slowest:
Collocated parallel merge join: 1350ms
Parallel hash join: 2600ms
Collocated serial merge join: 3500ms
Serial merge join: 5000ms
Parallel merge join: 8400ms
Collocated parallel hash join: 25,300ms (hash spill per partition)
The parallel collocated merge join requires no memory grant (aside from a paltry 1.2MB used for exchange buffers). 
This plan uses 16 threads at DOP 8; but 8 of those are (rather pointlessly) allocated to the parallel scan of the temporary table. These are minor concerns, but it turns out there is a way to address them if it bothers you. Parallel Collocated Merge Join with Demand Partitioning This final tweak replaces the temporary table with a hard-coded list of partition ids (dynamic SQL could be used to generate this query from sys.partitions): SELECT row_count = SUM(Subtotals.cnt) FROM ( VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9),(10), (11),(12),(13),(14),(15),(16),(17),(18),(19),(20), (21),(22),(23),(24),(25),(26),(27),(28),(29),(30), (31),(32),(33),(34),(35),(36),(37),(38),(39),(40),(41) ) AS P (partition_number) CROSS APPLY ( SELECT cnt = COUNT_BIG(*) FROM dbo.T1 AS T1 JOIN dbo.T2 AS T2 ON T2.TID = T1.TID WHERE $PARTITION.PFT(T1.TID) = p.partition_number AND $PARTITION.PFT(T2.TID) = p.partition_number ) AS SubTotals OPTION (QUERYTRACEON 8649); The actual execution plan is: The parallel collocated hash join plan is reproduced below for comparison: The manual rewrite has another advantage that has not been mentioned so far: the partial counts (per partition) can be computed earlier than the partial counts (per thread) in the optimizer’s collocated join plan. The earlier aggregation is performed by the extra Stream Aggregate under the nested loops join. The performance of the parallel collocated merge join is unchanged at around 1350ms. Final Words It is a shame that the current query optimizer does not consider a collocated merge join (Connect item closed as Won’t Fix). The example used in this post showed an improvement in execution time from 2600ms to 1350ms using a modestly-sized data set and limited parallelism. In addition, the memory requirement for the query was almost completely eliminated  – down from 569MB to 1.2MB. The problem with the parallel hash join selected by the optimizer is that it attempts to process the full data set all at once (albeit using eight threads). It requires a large memory grant to hold all 5 million rows from table T1 across the eight hash tables, and does not take advantage of the divide-and-conquer opportunity offered by the common partitioning. The great thing about the collocated join strategies is that each parallel thread works on a single partition from both tables, reading rows, performing the join, and computing a per-partition subtotal, before moving on to a new partition. From a thread’s point of view… If you have trouble visualizing what is happening from just looking at the parallel collocated merge join execution plan, let’s look at it again, but from the point of view of just one thread operating between the two Parallelism (exchange) operators. Our thread picks up a single partition id from the Distribute Streams exchange, and starts a merge join using ordered rows from partition 1 of table T1 and partition 1 of table T2. By definition, this is all happening on a single thread. As rows join, they are added to a (per-partition) count in the Stream Aggregate immediately above the Merge Join. Eventually, either T1 (partition 1) or T2 (partition 1) runs out of rows and the merge join stops. The per-partition count from the aggregate passes on through the Nested Loops join to another Stream Aggregate, which is maintaining a per-thread subtotal. Our same thread now picks up a new partition id from the exchange (say it gets id 9 this time). 
The count in the per-partition aggregate is reset to zero, and the processing of partition 9 of both tables proceeds just as it did for partition 1, and on the same thread. Each thread picks up a single partition id and processes all the data for that partition, completely independently from other threads working on other partitions. One thread might eventually process partitions (1, 9, 17, 25, 33, 41) while another is concurrently processing partitions (2, 10, 18, 26, 34) and so on for the other six threads at DOP 8. The point is that all 8 threads can execute independently and concurrently, continuing to process new partitions until the wider job (of which the thread has no knowledge!) is done. This divide-and-conquer technique can be much more efficient than simply splitting the entire workload across eight threads all at once. Related Reading Understanding and Using Parallelism in SQL Server Parallel Execution Plans Suck © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi

    Read the article

  • A Taxonomy of Numerical Methods v1

    - by JoshReuben
Numerical Analysis – When, What, (but not how) Once you understand the Math & know C++, Numerical Methods are basically blocks of iterative & conditional math code. I found the real trick was seeing the forest for the trees – knowing which method to use for which situation. It’s pretty easy to get lost in the details – so I’ve tried to organize these methods in a way that I can quickly look this up. I’ve included links to detailed explanations and to C++ code examples. I’ve tried to classify Numerical methods in the following broad categories: Solving Systems of Linear Equations Solving Non-Linear Equations Iteratively Interpolation Curve Fitting Optimization Numerical Differentiation & Integration Solving ODEs Boundary Problems Solving EigenValue problems Enjoy – I did! Solving Systems of Linear Equations Overview Solve sets of algebraic equations with x unknowns The set is commonly in matrix form Gauss-Jordan Elimination http://en.wikipedia.org/wiki/Gauss%E2%80%93Jordan_elimination C++: http://www.codekeep.net/snippets/623f1923-e03c-4636-8c92-c9dc7aa0d3c0.aspx Produces solution of the equations & the coefficient matrix Efficient, stable 2 steps: · Forward Elimination – matrix decomposition: reduce set to triangular form (0s below the diagonal) or row echelon form. If degenerate, then there is no solution · Backward Elimination – write the original matrix as the product of its inverse matrix & its reduced row-echelon matrix → reduce set to row canonical form & use back-substitution to find the solution to the set Elementary ops for matrix decomposition: · Row multiplication · Row switching · Add multiples of rows to other rows Use pivoting to ensure rows are ordered for achieving triangular form LU Decomposition http://en.wikipedia.org/wiki/LU_decomposition C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-lu-decomposition-for-solving.html Represent the matrix as a product of lower & upper triangular matrices A modified version of GJ Elimination Advantage – can easily apply forward & backward elimination to solve triangular matrices Techniques: · Doolittle Method – sets the L matrix diagonal to unity · Crout Method - sets the U matrix diagonal to unity Note: both the L & U matrices share the same unity diagonal & can be stored compactly in the same matrix Gauss-Seidel Iteration http://en.wikipedia.org/wiki/Gauss%E2%80%93Seidel_method C++: http://www.nr.com/forum/showthread.php?t=722 Transform the linear set of equations into a single equation & then use numerical integration (as integration formulas have Sums, it is implemented iteratively). 
An optimization of Gauss-Jacobi: 1.5 times faster, requires about 0.25 times the iterations to achieve the same tolerance Solving Non-Linear Equations Iteratively find roots of polynomials – there may be 0, 1 or n solutions for an n order polynomial use iterative techniques Iterative methods · used when there are no known analytical techniques · Requires set functions to be continuous & differentiable · Requires an initial seed value – choice is critical to convergence → conduct multiple runs with different starting points & then select best result · Systematic - iterate until diminishing returns, tolerance or max iteration conditions are met · bracketing techniques will always yield convergent solutions, non-bracketing methods may fail to converge Incremental method if a nonlinear function has opposite signs at 2 ends of a small interval x1 & x2, then there is likely to be a solution in their interval – solutions are detected by evaluating a function over interval steps, for a change in sign, adjusting the step size dynamically. Limitations – can miss closely spaced solutions in large intervals, cannot detect degenerate (coinciding) solutions, limited to functions that cross the x-axis, gives false positives for singularities Fixed point method http://en.wikipedia.org/wiki/Fixed-point_iteration C++: http://books.google.co.il/books?id=weYj75E_t6MC&pg=PA79&lpg=PA79&dq=fixed+point+method++c%2B%2B&source=bl&ots=LQ-5P_taoC&sig=lENUUIYBK53tZtTwNfHLy5PEWDk&hl=en&sa=X&ei=wezDUPW1J5DptQaMsIHQCw&redir_esc=y#v=onepage&q=fixed%20point%20method%20%20c%2B%2B&f=false Algebraically rearrange a solution to isolate a variable then apply incremental method Bisection method http://en.wikipedia.org/wiki/Bisection_method C++: http://numericalcomputing.wordpress.com/category/algorithms/ Bracketed - Select an initial interval, keep bisecting it at the midpoint into sub-intervals and then apply incremental method on smaller & smaller intervals – zoom in Adv: unaffected by function gradient → reliable Disadv: slow convergence False Position Method http://en.wikipedia.org/wiki/False_position_method C++: http://www.dreamincode.net/forums/topic/126100-bisection-and-false-position-methods/ Bracketed - Select an initial interval, & use the relative value of function at interval end points to select next sub-intervals (estimate how far between the end points the solution might be & subdivide based on this) Newton-Raphson method http://en.wikipedia.org/wiki/Newton's_method C++: http://www-users.cselabs.umn.edu/classes/Summer-2012/csci1113/index.php?page=./newt3 Also known as Newton's method Convenient, efficient Not bracketed – only a single initial guess is required to start iteration – requires an analytical expression for the first derivative of the function as input. Evaluates the function & its derivative at each step. Can be extended to the Newton MultiRoot method for solving multiple roots Can be easily applied to an n-coupled set of non-linear equations – conduct a Taylor Series expansion of a function, dropping terms of order n, rewrite as a Jacobian matrix of PDs & convert to simultaneous linear equations!!! Secant Method http://en.wikipedia.org/wiki/Secant_method C++: http://forum.vcoderz.com/showthread.php?p=205230 Unlike N-R, can estimate first derivative from an initial interval (does not require root to be bracketed) instead of inputting it Since derivative is approximated, may converge slower. Is fast in practice as it does not have to evaluate the derivative at each step. 
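To make the bracketed bisection method described above concrete, here is a minimal C++ sketch (mine, not from the original article); the sample function f(x) = x^3 - x - 2, the bracket [1, 2], the tolerance and the iteration cap are all arbitrary example choices:

#include <cmath>
#include <cstdio>

// Example function whose root we want: f(x) = x^3 - x - 2 (real root near x = 1.521).
static double f(double x) { return x * x * x - x - 2.0; }

// Bracketed bisection: requires f(lo) and f(hi) to have opposite signs.
static double bisect(double lo, double hi, double tol, int maxIter)
{
    double fLo = f(lo);
    for (int i = 0; i < maxIter && (hi - lo) > tol; ++i)
    {
        double mid = 0.5 * (lo + hi);
        double fMid = f(mid);
        if (fMid == 0.0) return mid;                  // exact hit
        if ((fLo < 0.0) != (fMid < 0.0)) hi = mid;    // root lies in [lo, mid]
        else { lo = mid; fLo = fMid; }                // root lies in [mid, hi]
    }
    return 0.5 * (lo + hi);
}

int main()
{
    // [1, 2] brackets the root because f(1) = -2 and f(2) = 4.
    std::printf("root ~ %.6f\n", bisect(1.0, 2.0, 1e-6, 100));
    return 0;
}

The same driver works for the false position method by replacing the midpoint with the secant-style estimate between the bracket end points.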
Similar implementation to False Position method Birge-Vieta Method http://mat.iitm.ac.in/home/sryedida/public_html/caimna/transcendental/polynomial%20methods/bv%20method.html C++: http://books.google.co.il/books?id=cL1boM2uyQwC&pg=SA3-PA51&lpg=SA3-PA51&dq=Birge-Vieta+Method+c%2B%2B&source=bl&ots=QZmnDTK3rC&sig=BPNcHHbpR_DKVoZXrLi4nVXD-gg&hl=en&sa=X&ei=R-_DUK2iNIjzsgbE5ID4Dg&redir_esc=y#v=onepage&q=Birge-Vieta%20Method%20c%2B%2B&f=false combines Horner's method of polynomial evaluation (transforming into lesser degree polynomials that are more computationally efficient to process) with Newton-Raphson to provide a computational speed-up Interpolation Overview Construct new data points for as close as possible fit within range of a discrete set of known points (that were obtained via sampling, experimentation) Use Taylor Series Expansion of a function f(x) around a specific value for x Linear Interpolation http://en.wikipedia.org/wiki/Linear_interpolation C++: http://www.hamaluik.com/?p=289 Straight line between 2 points → concatenate interpolants between each pair of data points Bilinear Interpolation http://en.wikipedia.org/wiki/Bilinear_interpolation C++: http://supercomputingblog.com/graphics/coding-bilinear-interpolation/2/ Extension of the linear function for interpolating functions of 2 variables – perform linear interpolation first in 1 direction, then in another. Used in image processing – e.g. texture mapping filter. Uses 4 vertices to interpolate a value within a unit cell. Lagrange Interpolation http://en.wikipedia.org/wiki/Lagrange_polynomial C++: http://www.codecogs.com/code/maths/approximation/interpolation/lagrange.php For polynomials Requires recomputation for all terms for each distinct x value – can only be applied for small number of nodes Numerically unstable Barycentric Interpolation http://epubs.siam.org/doi/pdf/10.1137/S0036144502417715 C++: http://www.gamedev.net/topic/621445-barycentric-coordinates-c-code-check/ Rearrange the terms in the equation of the Lagrange interpolation by defining weight functions that are independent of the interpolated value of x Newton Divided Difference Interpolation http://en.wikipedia.org/wiki/Newton_polynomial C++: http://jee-appy.blogspot.co.il/2011/12/newton-divided-difference-interpolation.html Hermite Divided Differences: Interpolation polynomial approximation for a given set of data points in the NR form - divided differences are used to approximately calculate the various differences. 
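As a small illustration of the Lagrange interpolation entry above, here is a C++ sketch (again mine, not from the article); the three nodes are sampled from y = x^2 purely as an example, and the point 2.5 is an arbitrary evaluation point:

#include <cstdio>
#include <vector>

// Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x.
// Every basis term is recomputed for each new x, which is why the article
// recommends this form only for a small number of nodes.
static double lagrange(const std::vector<double>& xs, const std::vector<double>& ys, double x)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < xs.size(); ++i)
    {
        double basis = 1.0;                                   // L_i(x)
        for (std::size_t j = 0; j < xs.size(); ++j)
            if (j != i) basis *= (x - xs[j]) / (xs[i] - xs[j]);
        sum += ys[i] * basis;
    }
    return sum;
}

int main()
{
    // Three nodes taken from y = x^2; interpolating at 2.5 should print 6.25.
    std::vector<double> xs = {1.0, 2.0, 3.0};
    std::vector<double> ys = {1.0, 4.0, 9.0};
    std::printf("p(2.5) = %.4f\n", lagrange(xs, ys, 2.5));
    return 0;
}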
For a given set of 3 data points, fit a quadratic interpolant through the data Bracketed functions allow Newton divided differences to be calculated recursively Difference table Cubic Spline Interpolation http://en.wikipedia.org/wiki/Spline_interpolation C++: https://www.marcusbannerman.co.uk/index.php/home/latestarticles/42-articles/96-cubic-spline-class.html Spline is a piecewise polynomial Provides smoothness – for interpolations with significantly varying data Use weighted coefficients to bend the function to be smooth & its 1st & 2nd derivatives are continuous through the edge points in the interval Curve Fitting A generalization of interpolating whereby given data points may contain noise → the curve does not necessarily pass through all the points Least Squares Fit http://en.wikipedia.org/wiki/Least_squares C++: http://www.ccas.ru/mmes/educat/lab04k/02/least-squares.c Residual – difference between observed value & expected value Model function is often chosen as a linear combination of the specified functions Determines: A) The model instance in which the sum of squared residuals has the least value B) param values for which model best fits data Straight Line Fit Linear correlation between independent variable and dependent variable Linear Regression http://en.wikipedia.org/wiki/Linear_regression C++: http://www.oocities.org/david_swaim/cpp/linregc.htm Special case of statistically exact extrapolation Leverage least squares Given a basis function, the sum of the residuals is determined and the corresponding gradient equation is expressed as a set of normal linear equations in matrix form that can be solved (e.g. using LU Decomposition) Can be weighted - Drop the assumption that all errors have the same significance → confidence of accuracy is different for each data point. Fit the function closer to points with higher weights Polynomial Fit - use a polynomial basis function Moving Average http://en.wikipedia.org/wiki/Moving_average C++: http://www.codeproject.com/Articles/17860/A-Simple-Moving-Average-Algorithm Used for smoothing (cancel fluctuations to highlight longer-term trends & cycles), time series data analysis, signal processing filters Replace each data point with average of neighbors. Can be simple (SMA), weighted (WMA), exponential (EMA). Lags behind latest data points – extra weight can be given to more recent data points. Weights can decrease arithmetically or exponentially according to distance from point. Parameters: smoothing factor, period, weight basis Optimization Overview Given function with multiple variables, find Min (or max by minimizing –f(x)) Iterative approach Efficient, but not necessarily reliable Conditions: noisy data, constraints, non-linear models Detection via sign of first derivative - Derivative of saddle points will be 0 Local minima Bisection method Similar method for finding a root for a non-linear equation Start with an interval that contains a minimum Golden Search method http://en.wikipedia.org/wiki/Golden_section_search C++: http://www.codecogs.com/code/maths/optimization/golden.php Bisect intervals according to golden ratio 0.618.. 
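For the straight line fit / linear regression entries above, the normal equations collapse to two sums; here is a minimal C++ sketch (mine, not from the article) with made-up sample data roughly following y = 2x + 1:

#include <cstdio>
#include <vector>

// Fit y = a + b*x by minimizing the sum of squared residuals.
static void fitLine(const std::vector<double>& x, const std::vector<double>& y,
                    double& a, double& b)
{
    const double n = static_cast<double>(x.size());
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < x.size(); ++i)
    {
        sx += x[i]; sy += y[i];
        sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // slope
    a = (sy - b * sx) / n;                           // intercept
}

int main()
{
    // Noisy samples roughly following y = 2x + 1.
    std::vector<double> x = {0, 1, 2, 3, 4};
    std::vector<double> y = {1.1, 2.9, 5.2, 6.8, 9.1};
    double a, b;
    fitLine(x, y, a, b);
    std::printf("y ~ %.3f + %.3f * x\n", a, b);
    return 0;
}

A weighted fit, as mentioned above, just multiplies each term in the sums by that point's weight.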
Achieves reduction by evaluating a single function instead of 2 Newton-Raphson Method Brent method http://en.wikipedia.org/wiki/Brent's_method C++: http://people.sc.fsu.edu/~jburkardt/cpp_src/brent/brent.cpp Based on quadratic or parabolic interpolation – if the function is smooth & parabolic near to the minimum, then a parabola fitted through any 3 points should approximate the minima – fails when the 3 points are collinear , in which case the denominator is 0 Simplex Method http://en.wikipedia.org/wiki/Simplex_algorithm C++: http://www.codeguru.com/cpp/article.php/c17505/Simplex-Optimization-Algorithm-and-Implemetation-in-C-Programming.htm Find the global minima of any multi-variable function Direct search – no derivatives required At each step it maintains a non-degenerative simplex – a convex hull of n+1 vertices. Obtains the minimum for a function with n variables by evaluating the function at n-1 points, iteratively replacing the point of worst result with the point of best result, shrinking the multidimensional simplex around the best point. Point replacement involves expanding & contracting the simplex near the worst value point to determine a better replacement point Oscillation can be avoided by choosing the 2nd worst result Restart if it gets stuck Parameters: contraction & expansion factors Simulated Annealing http://en.wikipedia.org/wiki/Simulated_annealing C++: http://code.google.com/p/cppsimulatedannealing/ Analogy to heating & cooling metal to strengthen its structure Stochastic method – apply random permutation search for global minima - Avoid entrapment in local minima via hill climbing Heating schedule - Annealing schedule params: temperature, iterations at each temp, temperature delta Cooling schedule – can be linear, step-wise or exponential Differential Evolution http://en.wikipedia.org/wiki/Differential_evolution C++: http://www.amichel.com/de/doc/html/ More advanced stochastic methods analogous to biological processes: Genetic algorithms, evolution strategies Parallel direct search method against multiple discrete or continuous variables Initial population of variable vectors chosen randomly – if weighted difference vector of 2 vectors yields a lower objective function value then it replaces the comparison vector Many params: #parents, #variables, step size, crossover constant etc Convergence is slow – many more function evaluations than simulated annealing Numerical Differentiation Overview 2 approaches to finite difference methods: · A) approximate function via polynomial interpolation then differentiate · B) Taylor series approximation – additionally provides error estimate Finite Difference methods http://en.wikipedia.org/wiki/Finite_difference_method C++: http://www.wpi.edu/Pubs/ETD/Available/etd-051807-164436/unrestricted/EAMPADU.pdf Find differences between high order derivative values - Approximate differential equations by finite differences at evenly spaced data points Based on forward & backward Taylor series expansion of f(x) about x plus or minus multiples of delta h. Forward / backward difference - the sums of the series contains even derivatives and the difference of the series contains odd derivatives – coupled equations that can be solved. 
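To illustrate the finite difference approximations discussed in this section, here is a tiny C++ sketch (not from the article); the function sin(x), the evaluation point and the step size h are arbitrary example choices:

#include <cmath>
#include <cstdio>

static double f(double x) { return std::sin(x); }   // example function; f'(x) = cos(x)

int main()
{
    const double x = 1.0;
    const double h = 1e-3;

    // One-sided forward difference (first-order accurate).
    double forward = (f(x + h) - f(x)) / h;

    // Central difference (second-order accurate).
    double central = (f(x + h) - f(x - h)) / (2.0 * h);

    std::printf("exact   : %.8f\n", std::cos(x));
    std::printf("forward : %.8f\n", forward);
    std::printf("central : %.8f\n", central);
    return 0;
}

Running it shows the central difference agreeing with cos(1) to several more digits than the forward difference for the same h, which is the point the text makes about accuracy orders.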
Provide an approximation of the derivative within a O(h^2) accuracy There is also central difference & extended central difference which has a O(h^4) accuracy Richardson Extrapolation http://en.wikipedia.org/wiki/Richardson_extrapolation C++: http://mathscoding.blogspot.co.il/2012/02/introduction-richardson-extrapolation.html A sequence acceleration method applied to finite differences Fast convergence, high accuracy O(h^4) Derivatives via Interpolation Cannot apply Finite Difference method to discrete data points at uneven intervals – so need to approximate the derivative of f(x) using the derivative of the interpolant via 3 point Lagrange Interpolation Note: the higher the order of the derivative, the lower the approximation precision Numerical Integration Estimate finite & infinite integrals of functions More accurate procedure than numerical differentiation Use when it is not possible to obtain an integral of a function analytically or when the function is not given, only the data points are Newton Cotes Methods http://en.wikipedia.org/wiki/Newton%E2%80%93Cotes_formulas C++: http://www.siafoo.net/snippet/324 For equally spaced data points Computationally easy – based on local interpolation of n rectangular strip areas that is piecewise fitted to a polynomial to get the sum total area Evaluate the integrand at n+1 evenly spaced points – approximate definite integral by Sum Weights are derived from Lagrange Basis polynomials Leverage Trapezoidal Rule for default 2 point formulas, Simpson 1/3 Rule for substituting 3 point formulas, Simpson 3/8 Rule for 4 point formulas. For 5 point formulas use Bode's Rule. Higher orders obtain more accurate results Trapezoidal Rule uses simple area, Simpson's Rule replaces the integrand f(x) with a quadratic polynomial p(x) that uses the same values as f(x) for its end points, but adds a midpoint Romberg Integration http://en.wikipedia.org/wiki/Romberg's_method C++: http://code.google.com/p/romberg-integration/downloads/detail?name=romberg.cpp&can=2&q= Combines trapezoidal rule with Richardson Extrapolation Evaluates the integrand at equally spaced points The integrand must have continuous derivatives Each R(n,m) extrapolation uses a higher order integrand polynomial replacement rule (zeroth starts with trapezoidal) → a lower triangular matrix set of equation coefficients where the bottom right term has the most accurate approximation. The process continues until the difference between 2 successive diagonal terms becomes sufficiently small. Gaussian Quadrature http://en.wikipedia.org/wiki/Gaussian_quadrature C++: http://www.alglib.net/integration/gaussianquadratures.php Data points are chosen to yield best possible accuracy – requires fewer evaluations Ability to handle singularities, functions that are difficult to evaluate The integrand can include a weighting function determined by a set of orthogonal polynomials. 
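Stepping back to the Newton–Cotes formulas above, here is a minimal C++ sketch (mine, not from the article) of the composite trapezoidal and Simpson 1/3 rules; the integrand exp(-x^2), the interval [0, 1] and the strip count are illustrative only:

#include <cmath>
#include <cstdio>

static double f(double x) { return std::exp(-x * x); }   // example integrand

// Composite trapezoidal rule over [a, b] with n strips.
static double trapezoid(double a, double b, int n)
{
    const double h = (b - a) / n;
    double sum = 0.5 * (f(a) + f(b));
    for (int i = 1; i < n; ++i) sum += f(a + i * h);
    return sum * h;
}

// Composite Simpson 1/3 rule over [a, b]; n must be even.
static double simpson(double a, double b, int n)
{
    const double h = (b - a) / n;
    double sum = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        sum += f(a + i * h) * ((i % 2) ? 4.0 : 2.0);
    return sum * h / 3.0;
}

int main()
{
    std::printf("trapezoid: %.8f\n", trapezoid(0.0, 1.0, 100));
    std::printf("simpson  : %.8f\n", simpson(0.0, 1.0, 100));
    return 0;
}

Romberg integration, as described above, would take the trapezoid estimates at successively halved h and combine them via Richardson extrapolation.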
Points & weights are selected so that the integrand yields the exact integral if f(x) is a polynomial of degree <= 2n+1 Techniques (basically different weighting functions): · Gauss-Legendre Integration w(x)=1 · Gauss-Laguerre Integration w(x)=e^-x · Gauss-Hermite Integration w(x)=e^-x^2 · Gauss-Chebyshev Integration w(x)= 1 / Sqrt(1-x^2) Solving ODEs Use when high order differential equations cannot be solved analytically Evaluated under boundary conditions RK for systems – a high order differential equation can always be transformed into a coupled first order system of equations Euler method http://en.wikipedia.org/wiki/Euler_method C++: http://rosettacode.org/wiki/Euler_method First order Runge–Kutta method. Simple recursive method – given an initial value, calculate derivative deltas. Unstable & not very accurate (O(h) error) – not used in practice A first-order method - the local error (truncation error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size In evolving solution between data points xn & xn+1, only evaluates derivatives at beginning of interval xn → asymmetric at boundaries Higher order Runge Kutta http://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods C++: http://www.dreamincode.net/code/snippet1441.htm 2nd & 4th order RK - Introduces parameterized midpoints for more symmetric solutions → accuracy at higher computational cost Adaptive RK – RK-Fehlberg – estimate the truncation error at each integration step & automatically adjust the step size to keep error within prescribed limits. At each step 2 approximations are compared – if in disagreement to a specific accuracy, the step size is reduced Boundary Value Problems Where solutions of differential equations are located at 2 different values of the independent variable x → more difficult, because cannot just start at point of initial value – there may not be enough starting conditions available at the end points to produce a unique solution An n-order equation will require n boundary conditions – need to determine the missing n-1 conditions which cause the given conditions at the other boundary to be satisfied Shooting Method http://en.wikipedia.org/wiki/Shooting_method C++: http://ganeshtiwaridotcomdotnp.blogspot.co.il/2009/12/c-c-code-shooting-method-for-solving.html Iteratively guess the missing values for one end & integrate, then inspect the discrepancy with the boundary values of the other end to adjust the estimate Given the starting boundary values u1 & u2 which contain the root u, solve u given the false position method (solving the differential equation as an initial value problem via 4th order RK), then use u to solve the differential equations. Finite Difference Method For linear & non-linear systems Higher order derivatives require more computational steps – some combinations for boundary conditions may not work though Improve the accuracy by increasing the number of mesh points Solving EigenValue Problems An eigenvalue can substitute a matrix when doing matrix multiplication → convert matrix multiplication into a polynomial EigenValue For a given set of equations in matrix form, determine the solution eigenvalues & eigenvectors Similar Matrices - have same eigenvalues. 
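To make the Euler vs. higher-order Runge-Kutta comparison above concrete, here is a C++ sketch (mine, not from the article) that integrates dy/dx = y, y(0) = 1 from 0 to 1 with both forward Euler and classic 4th order RK; the equation, step size and range are example choices:

#include <cmath>
#include <cstdio>

// dy/dx = y, y(0) = 1, exact solution y = e^x.
static double dydx(double /*x*/, double y) { return y; }

int main()
{
    const double h = 0.1;
    const int steps = 10;                      // integrate from x = 0 to x = 1
    double xe = 0.0, ye = 1.0;                 // Euler state
    double xr = 0.0, yr = 1.0;                 // RK4 state

    for (int i = 0; i < steps; ++i)
    {
        // Forward Euler: evaluates the derivative only at the start of the interval.
        ye += h * dydx(xe, ye);
        xe += h;

        // Classic 4th order Runge-Kutta: weighted derivative samples across the interval.
        double k1 = dydx(xr, yr);
        double k2 = dydx(xr + 0.5 * h, yr + 0.5 * h * k1);
        double k3 = dydx(xr + 0.5 * h, yr + 0.5 * h * k2);
        double k4 = dydx(xr + h, yr + h * k3);
        yr += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
        xr += h;
    }

    std::printf("exact y(1) = %.6f\n", std::exp(1.0));
    std::printf("euler y(1) = %.6f\n", ye);
    std::printf("rk4   y(1) = %.6f\n", yr);
    return 0;
}

The Euler result visibly undershoots e while the RK4 result matches it to several decimal places at the same step size, which is the accuracy/cost trade-off described above.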
Use orthogonal similarity transforms to reduce a matrix to diagonal form from which eigenvalue(s) & eigenvectors can be computed iteratively Jacobi method http://en.wikipedia.org/wiki/Jacobi_method C++: http://people.sc.fsu.edu/~jburkardt/classes/acs2_2008/openmp/jacobi/jacobi.html Robust but Computationally intense – use for small matrices < 10x10 Power Iteration http://en.wikipedia.org/wiki/Power_iteration For any given real symmetric matrix, generate the largest single eigenvalue & its eigenvectors Simplest method – does not compute matrix decomposition → suitable for large, sparse matrices Inverse Iteration Variation of power iteration method – generates the smallest eigenvalue from the inverse matrix Rayleigh Method http://en.wikipedia.org/wiki/Rayleigh's_method_of_dimensional_analysis Variation of power iteration method Rayleigh Quotient Method Variation of inverse iteration method Matrix Tri-diagonalization Method Use the Householder algorithm to reduce an NxN symmetric matrix to a tridiagonal real symmetric matrix via N-2 orthogonal transforms What's Next Outside of Numerical Methods there are lots of different types of algorithms that I’ve learned over the decades: Data Mining – (I covered this briefly in a previous post: http://geekswithblogs.net/JoshReuben/archive/2007/12/31/ssas-dm-algorithms.aspx ) Search & Sort Routing Problem Solving Logical Theorem Proving Planning Probabilistic Reasoning Machine Learning Solvers (eg MIP) Bioinformatics (Sequence Alignment, Protein Folding) Quant Finance (I read Wilmott’s books – interesting) Sooner or later, I’ll cover the above topics as well.
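As a closing illustration of the Power Iteration entry above, here is a C++ sketch (mine, not from the article) on a tiny hard-coded symmetric matrix; the matrix, the starting vector and the iteration count are arbitrary example choices:

#include <cmath>
#include <cstdio>

// Power iteration: repeatedly multiply a vector by the matrix and normalize;
// the vector converges to the dominant eigenvector and the Rayleigh quotient
// to the largest-magnitude eigenvalue.
int main()
{
    const double A[2][2] = { {2.0, 1.0}, {1.0, 3.0} };
    double v[2] = {1.0, 1.0};
    double lambda = 0.0;

    for (int iter = 0; iter < 100; ++iter)
    {
        double w[2] = { A[0][0] * v[0] + A[0][1] * v[1],
                        A[1][0] * v[0] + A[1][1] * v[1] };
        double norm = std::sqrt(w[0] * w[0] + w[1] * w[1]);
        v[0] = w[0] / norm;
        v[1] = w[1] / norm;
        // Rayleigh quotient v'Av (v is already unit length).
        lambda = v[0] * (A[0][0] * v[0] + A[0][1] * v[1])
               + v[1] * (A[1][0] * v[0] + A[1][1] * v[1]);
    }
    std::printf("dominant eigenvalue ~ %.6f\n", lambda);   // exact value is (5 + sqrt(5)) / 2 ~ 3.618
    return 0;
}

Inverse iteration, as noted above, applies the same loop to the inverse (or to a shifted, factored system) to pull out the smallest eigenvalue instead.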

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
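To make the CRIME/BREACH compression-length side channel summarized above more tangible, here is a toy C++ sketch (mine, not from the talk) using zlib's compress(); the "secret", the reflected-parameter format and the guesses are invented for the demo, and on a typical zlib build the guess that matches the secret's prefix compresses a few bytes smaller than an equal-length wrong guess:

#include <cstdio>
#include <cstring>
#include <vector>
#include <zlib.h>

// Compressed size of a body that contains both the secret and an
// attacker-supplied guess, mimicking a reflected parameter in a response.
static unsigned long compressedSize(const char* guess)
{
    char body[256];
    std::snprintf(body, sizeof(body), "secret=S3cretValue; reflected=%s", guess);
    uLongf destLen = compressBound(static_cast<uLong>(std::strlen(body)));
    std::vector<Bytef> dest(destLen);
    compress(dest.data(), &destLen,
             reinterpret_cast<const Bytef*>(body),
             static_cast<uLong>(std::strlen(body)));
    return static_cast<unsigned long>(destLen);
}

int main()
{
    // Both guesses have the same length, so any size difference comes from
    // how well the guess matches (and therefore back-references) the secret.
    std::printf("wrong guess  : %lu bytes\n", compressedSize("secret=Qwzjk"));
    std::printf("right prefix : %lu bytes\n", compressedSize("secret=S3cre"));
    return 0;
}

Repeating this one character at a time, and keeping the guess that yields the shortest ciphertext, is essentially the byte-at-a-time recovery loop described in the talk (build with -lz).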
Running at 99%: Surviving an Application DoS

Ryan Huber, Risk I/O

Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets, or to download large files. Apache is not well suited to handling a large number of connections, but one can put something in front of it, or use Apache alternatives such as nginx.

How to identify malicious hosts (a simple scoring sketch over these heuristics follows below):
short, sudden web requests
user-agent is obvious (curl, python)
same URL requested repeatedly
no web page referer (not normal)
hidden links: hide a link and see if a bot gets it
restricted access if not your geo IP (unless the website is global)
missing common headers in the request
regular timing
first-seen IP at the beginning of an attack
count of requests per host (usually a very large number)

Use of captcha can mitigate attacks, but you'll lose a lot of genuine users.

Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer

Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (it is not nice about it—no proper connection close). An aggregator collects requests and controls your web proxies. You need NTP on the front-end web servers to get clean data for use by Bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitasking, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture; Bouncer is designed to make it easier to detect and defend against DoS—it is not a complete cure.
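As promised above, here is a minimal Python sketch that scores hosts using a few of the heuristics Ryan listed (scripted user agents, missing referer, hidden trap links, request volume). The log tuple format, trap path, and threshold are assumptions for illustration; this is not how Bouncer itself is implemented.

from collections import Counter

SCRIPTED_AGENTS = ("curl", "python-requests", "wget")
RATE_THRESHOLD = 100   # requests per host in the window; tune for your traffic

def suspicious_hosts(log_lines):
    """Score hosts from (ip, user_agent, referer, path) tuples parsed from an access log."""
    per_host = Counter()
    flagged = {}
    for ip, user_agent, referer, path in log_lines:
        per_host[ip] += 1
        reasons = flagged.setdefault(ip, set())
        if any(agent in user_agent.lower() for agent in SCRIPTED_AGENTS):
            reasons.add("scripted user-agent")
        if not referer:
            reasons.add("no referer")
        if path == "/hidden-trap-link":          # hidden link only a bot would fetch
            reasons.add("followed hidden link")
    for ip, count in per_host.items():
        if count > RATE_THRESHOLD:
            flagged[ip].add("high request rate")
    return {ip: reasons for ip, reasons in flagged.items() if reasons}

# Example: one noisy curl client and one normal browser request.
log = [("203.0.113.9", "curl/7.68.0", "", "/slow-report")] * 150
log += [("198.51.100.7", "Mozilla/5.0", "https://example.com/", "/index.html")]
print(suspicious_hosts(log))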
Security Response in the Age of Mass Customized Attacks

Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/

Peleus and Karthik talked about responding to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to a concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins and allow new modules. Exploit kits are cheap and customizable, and organized gangs use them. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits.

Characteristics of mass malware: potent, resilient, relatively low cost. Technical characteristics: multiple OSes, multiple payloads, multiple scenarios, multiple languages, obfuscation. Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now, so the drive with malware is towards mass customized exploits, to avoid detection. There's plenty of evidence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of Reader; support all versions of Windows; support all versions of Flash; support all browsers; write large, complex, difficult-to-maintain code (8,750 lines of JavaScript, for example). Exploits have "loose coupling" of multiple versions of software (Adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software, and also allows exploits of more obscure software/OS/browsers and obscure versions. They gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and JavaScript. Conclusion: the coming trend is that mass malware with mass zero-day attacks will result in mass customization of attacks.

x86 Rewriting: Defeating RoP and other Shinanighans

Richard Wartell

The attack vector being addressed here is: first, some malware causes a buffer overflow. The malware has no program access, but it has input access, and it overflows a buffer to put code onto the stack. Later the stack became non-executable; the workaround malware used was to write a bogus return address to the stack, jumping to the malware. Later came ASLR (Address Space Layout Randomization) to randomize the memory layout and make addresses non-deterministic; the workaround malware used was to jump to existing code segments in the program that can be used in bad ways. "RoP" is Return-oriented Programming: RoP attacks use your own code, writing return addresses on the stack that point to (existing) exploitable code found in the program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack.

One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR randomizes the base of the address space, but not the location of "gadgets" within it. IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transforming Instruction Relocation). STIR disassembles the binary and operates on "basic blocks" of code; the STIR disassembler is conservative in what it disassembles. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of the "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to the new code, and the original code sections in the file are marked non-executable. STIR has better entropy than ASLR in the location of code, which makes brute force attacks much harder. STIR runs on MS Windows (PE) and Linux (ELF). It eliminated 99.96% or more of the "gadgets" (i.e., moved their addresses). Overhead is usually 5-10% on MS Windows and about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is that it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk.

Clowntown Express: interesting bugs and running a bug bounty program

Collin Greene, Facebook

Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there are lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended up getting 20+ good bugs in the first 24 hours, and good submissions continue to come in. Bug bounties bring people in with different perspectives, and they are paid only for success. A bug bounty is a better use of a fixed amount of time and money than just code review or static code analysis. The bounty program started July 2011 and has paid out $1.5 million to date.
14% of the submissions have been high-priority problems that needed to be fixed immediately. The best bugs come from a small percentage of submitters (as with everything else)—the top submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest submitter was 13. Some submitters have been hired. Bug bounties also let you see bugs that were missed by tools or reviews, allowing improvement of the process. Bug bounties might not work for traditional software companies where the product has a release cycle or is not on the Internet.

Active Fingerprinting of Encrypted VPNs

Anna Shubina, Dartmouth Institute for Security, Technology, and Society

(I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later.) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (for example, DES-CBC versus AES-CBC). One can tell a lot about VPNs just from ping round trips (such as what router is used). Delayed packets are not informative about a network, especially if you are far away from the network. More exploration is needed of how TCP works in real life with respect to timing.

Making Attacks Go Backwards

FuzzyNop, Mandiant

This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who is making these malware threats? It's not a single person or group—they have diverse skill levels, and there are a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories; creation of unnamed scheduled tasks; obvious names of files and syscalls ("ClearEventLog"); uncleared event logs. Clearing the event log in itself, and the time of clearing, is a red flag and a good first clue to look for on a suspect system. Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built into a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" or "[RightShift]") or time stamp printf strings; look for these in files (a toy scan along these lines is sketched below). Presence is not proof they are used; absence is not proof they are not used. Java exploits: one can parse the jar file with idxparser.py and decompile the Java file. Java is typically used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Malware also typically needs command and control.
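The toy scan referenced above: a short Python walk over a directory tree looking for the kind of printable key-name and timestamp-format strings FuzzyNop described. The marker list is illustrative only, and, as the talk cautions, a hit proves nothing by itself.

import sys
from pathlib import Path

# Strings that often show up in key logger output routines (illustrative list only).
MARKERS = [b"[Ctrl]", b"[RightShift]", b"[Backspace]", b"%Y-%m-%d %H:%M:%S"]

def scan(path: Path) -> None:
    data = path.read_bytes()
    hits = [m.decode() for m in MARKERS if m in data]
    if hits:
        print(f"{path}: {', '.join(hits)}")

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for p in root.rglob("*"):
        if p.is_file():
            try:
                scan(p)
            except OSError:
                pass   # unreadable files are skipped; they are not worth stopping for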
Application of Artificial Intelligence in Ad-Hoc Static Code Analysis

John Ashaman, Security Innovation

Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. He also tried using grep, but this fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph and then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointers. First the tool generated an Abstract Syntax Tree (AST), with the nodes created from method declarations and the edges created from method use. Then the tool generated a control flow graph, with the goal of finding a path through the AST (a maze) from source to sink. The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts on the Security Innovation blog and NRefactory on GitHub.

Mask Your Checksums—The Gorry Details

Eric (XlogicX) Davisson

Sometimes when emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found on stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from the checksum (that is, it leaks data). Other parts of the packet may leak more data about the IP. The TCP and IP checksums both depend on the same data (the TCP checksum covers the IP addresses through its pseudo-header), so one can get more bits of information out of using both checksums than just one. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x for each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, one can import them as CSV into a spreadsheet, correlate with geo data, and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached; he was able to find the exact IP address, given the geo and university of the sender. The point is: if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, so as not to create a false impression of security.
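Eric's point can be illustrated with the IPv4 header checksum, which is just a ones'-complement sum over the header words: if you mask the source address but leave the original checksum in place, the stale checksum constrains what the masked bytes could have been. A small Python sketch, using hypothetical header values:

import struct

def ipv4_checksum(header: bytes) -> int:
    # Standard ones'-complement sum of 16-bit words, with the checksum field zeroed beforehand.
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    total = (total & 0xFFFF) + (total >> 16)    # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# 20-byte IPv4 header with the checksum field (bytes 10-11) zeroed.
header = bytearray(struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 40, 0x1234, 0, 64, 6, 0,            # version/IHL, TOS, len, ID, frag, TTL, proto, csum=0
    bytes([198, 51, 100, 7]),                    # original source IP
    bytes([203, 0, 113, 9]),                     # destination IP
))
print(hex(ipv4_checksum(bytes(header))))

# "Masking" the source IP without touching the checksum leaves the packet inconsistent,
# so the original address can be narrowed down from the stale checksum value.
header[12:16] = bytes([10, 0, 0, 1])
print(hex(ipv4_checksum(bytes(header))))         # differs: the old checksum no longer matches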
Adventures with weird machines thirty years after "Reflections on Trusting Trust"

Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present)

"Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper: "You can't trust code that you did not totally create yourself." There are invisible links in the chain of trust, such as "well-installed microcode bugs", bugs in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose there are no bugs and you trust the author—can you trust the code? Hell no! There are too many factors; it's Babylonian in nature. Why not? Input is not well-defined/recognized, so the code's assumptions about "checked" input will be violated (a bug/vulnerability). For example, HTML is recursive, but regex checking is not recursive. Input can be well-formed but so complex there's no telling what it does; for example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of the program or toolchain. Any input is a program: input executes on input handlers (it drives state changes and transitions), and only a well-defined execution model can be trusted (regex/DFA, PDA, CFG). An input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age. The ELF ABI (UNIX/Linux executable file format) is a case study: problems can arise from these steps (without planting bugs): compiler, linker, loader, ld.so/rtld, relocator, DWARF (debugger info), exceptions. The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable).

The only solution is to freeze code and sign it. But you can't freeze everything! You can't freeze ASLR or loading—you must have tables and metadata. Any sufficiently complex input data is the same as VM byte code. For example, ELF relocation entries + dynamic symbols == a Turing-complete machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file; for more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata". @bxsays did the same thing with Mach-O bytecode. Likewise, DWARF exception handling data: .eh_frame + glibc == a Turing machine. The x86 MMU (IDT, GDT, TSS): address translation was used to create a Turing machine. The page handler reads and writes memory (on page fault), using a page table, which can be used as Turing machine byte code. There is an example on GitHub using this TM that will fly a glider across the screen.

Next Sergey talked about "Parser Differentials": having one input format but two parsers creates confusion and opportunity for exploitation. For example, CSRs are parsed during creation by the cert requestor and again by another parser at the CA. Another example is ELF—there are several parsers in the OS tool chain, which are all different. You can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs; the second PHDR can completely transform the executable. This is described in a paper in the first issue of the International Journal of PoC.

Conclusions: trusting computers is not only about bugs! Bugs are part of the problem, but by far not all of it. Complex data formats mean bugs. There is no "chain of trust" in Babylon (that is, with parser differentials). We need to squeeze complexity out of data until data stops being "code equivalent". For further information, see langsec.org and USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • The broken Promise of the Mobile Web

    - by Rick Strahl
    High end mobile devices have been with us now for almost 7 years and they have utterly transformed the way we access information. Mobile phones and smartphones that have access to the Internet and host smart applications are in the hands of a large percentage of the population of the world. In many places, even very remote ones, cell phones and even smart phones are a common sight. I'll never forget when I was in India in 2011, up in the Southern Indian mountains riding an elephant out of a tiny local village, with an elephant herder riding atop the elephant in front of us. He was dressed in traditional garb with the loin wrap and head cloth/turban, as were quite a few of the locals in this small, out of the way and not so touristy village. So we're slowly trundling along in the forest and he's lazily using his stick to guide the elephant and… 10 minutes in he pulls out his cell phone from his sash and starts texting. In the middle of texting a huge pig jumps out from the side of the trail and he takes a picture of it running across our path in the jungle! So yeah, mobile technology is very pervasive and it's reached into even very buried and unexpected parts of this world.

Apps are still King

Apps currently rule the roost when it comes to mobile devices and the applications that run on them. If there's something that you need on your mobile device your first step usually is to look for an app, not use your browser. But native app development remains a pain in the butt, with the requirement to support 2 or 3 completely separate platforms. There are solutions that try to bridge that gap. Xamarin is on a tear at the moment, providing their cross-device toolkit to build applications using C#. While Xamarin tools are impressive – and also *very* expensive – they only address part of the development madness that is app development. There are still specific device integration issues, dealing with the different developer programs, security and certificate setups and all that other noise that surrounds app development. There's also PhoneGap/Cordova which provides a hybrid solution that involves creating local HTML/CSS/JavaScript based applications, and then packaging them to run in a specialized App container that can run on most mobile device platforms using a WebView interface. This allows for the use of HTML technology, but it also still requires all the setup, configuration of APIs, security keys and certification and the submission and deployment process, just like native applications – you actually lose many of the benefits that Web based apps bring. The big selling point of Cordova is that you get to use HTML and have the ability to build your UI once for all platforms and run across all of them – but the rest of the app process remains in place. Apps can be a big pain to create and manage, especially when we are talking about specialized or vertical business applications that aren't geared at the mainstream market and that don't fit the 'store' model. If you're building a small intra-department application you don't want to deal with multiple device platforms and certification etc. for various public or corporate app stores. That model is simply not a good fit, both from the development and the deployment perspective.
Even for commercial, big ticket apps, HTML as a UI platform offers many advantages over native: from write-once run-anywhere, to remote maintenance, to a single point of management and failure, to having full control over the application as opposed to having the app store overlords censor you. In a lot of ways Web based HTML/CSS/JavaScript applications have so much potential for building better solutions based on existing Web technologies, for the very same reasons a lot of content years ago moved off the desktop to the Web. To me the Web as a mobile platform makes perfect sense, but the reality of today's Mobile Web unfortunately looks a little different…

Where's the Love for the Mobile Web?

Yet here we are in the middle of 2014, nearly 7 years after the first iPhone was released and brought the promise of rich interactive information at your fingertips, and we still don't really have a solid mobile Web platform. I know what you're thinking: "But we have lots of HTML/JavaScript/CSS features that allow us to build nice mobile interfaces". I agree to a point – it's actually quite possible to build nice looking, rich and capable Web UI today. We have media queries to deal with varied display sizes, CSS transforms for smooth animations and transitions, tons of CSS improvements in CSS 3 that facilitate rich layout, a host of APIs geared towards mobile device features and lately even a number of JavaScript framework choices that facilitate development of multi-screen apps in a consistent manner. Personally I've been working a lot with AngularJS and heavily modified Bootstrap themes to build mobile first UIs, and that's been working very well to provide highly usable and attractive UI for typical mobile business applications. From the pure UI perspective things actually look very good.

Not just about the UI

But it's not just about the UI - it's also about integration with the mobile device. When it comes to putting all those pieces together into what amounts to a consolidated platform to build mobile Web applications, I think we still have a ways to go… there are a lot of missing pieces to make it all work together and integrate with the device more smoothly, and more importantly to make it work uniformly across the majority of devices. I think there are a number of reasons for this.

Slow Standards Adoption

HTML standards implementation and ratification have been dreadfully slow, and browser vendors all seem to pick and choose different pieces of the technology they implement. The end result is that we have a capable UI platform that's missing some of the infrastructure pieces to make it whole on mobile devices. There's lots of potential, but it's lacking that final 10% needed to build truly compelling mobile applications that can compete favorably with native applications. Some of it is the fragmentation of browsers and the slow evolution of the mobile specific HTML APIs. A host of mobile standards exist, but many of them are in the early review stage, have been stuck there for long periods of time, and seem to move at a glacial pace. Browser vendors seem even slower to implement them, and for good reason – non-ratified standards mean that implementations may change, and vendor implementations tend to be experimental and likely have to be changed later. Neither vendors nor developers are keen on changing standards. This is the typical chicken and egg scenario, but without some forward momentum from some party we end up stuck in the mud.
It seems that either the standards bodies or the vendors need to carry the torch forward, and that doesn't seem to be happening quickly enough.

Mobile Device Integration just isn't good enough

Current standards are not far reaching enough to address a number of the use case scenarios necessary for many mobile applications. While not every application needs access to all mobile device features, almost every mobile application could benefit from some integration with other parts of the mobile device platform. Integration with the GPS, phone, media, messaging, notifications, linking and contacts systems are benefits that are unique to mobile applications and could be widely used, but they are mostly (with the exception of GPS) inaccessible for Web based applications today. Unfortunately, trying to do most of this today only with a mobile Web browser is a losing battle. Aside from PhoneGap/Cordova's app centric model with its own custom API accessing mobile device features, and the token exception of the GeoLocation API, most device integration features are not widely supported by the current crop of mobile browsers. For example, there's no usable messaging API that allows access to SMS or contacts from HTML. Even obvious components like the Media Capture API are only partially implemented by mobile devices. There are alternatives and workarounds for some of these interfaces by using browser specific code, but that's mighty ugly and something that I thought we were trying to leave behind with newer browser standards. But it's not quite working out that way. It's utterly perplexing to me that mobile standards like Media Capture and Streams, Media Gallery Access, Responsive Images, Messaging API, and Contacts Manager API have only minimal or no traction at all today. Keep in mind we've had mobile browsers for nearly 7 years now, and yet we still have to think about how to get access to an image from the image gallery or the camera on some devices? Heck, Windows Phone IE Mobile just gained the ability to upload images recently in the Windows 8.1 Update – that's a feature that HTML has had for 20 years! These are simple concepts and common problems that should have been solved a long time ago. It's extremely frustrating to build 90% of a mobile Web app with relative ease and then hit a brick wall for the remaining 10%, which often includes show stoppers. The remaining 10% has to do with platform integration, browser differences and working around the limitations that browsers and 'pinned' applications impose on HTML applications. The maddening part is that these limitations seem arbitrary, as they could easily work on all mobile platforms. For example, SMS has a URL Moniker interface that sort of works on Android, works badly on iOS (only if the address is already in the contact list) and not at all on Windows Phone. There's no reason this shouldn't work universally using the same interface – after all, all phones have supported SMS since before the year 2000!

But, it doesn't have to be this way

Change can happen very quickly. Take the GeoLocation API for example. Geolocation took off at the very beginning of the mobile device era, and today it works well, provides the necessary security (a big concern for many mobile APIs), and is supported by just about all major mobile and even desktop browsers. It handles security concerns via prompts to avoid unwanted access, which is a model that would work for most other device APIs in a similar fashion.
One time approval and occasional re-approval if code changes or caches expire. Simple and only slightly intrusive. It all works well, even though GeoLocation actually has some physical limitations, such as representing the current location when no GPS device is present. Yet this is a solved problem, where other APIs that are conceptually much simpler to implement have failed to gain any traction at all. Technically none of these APIs should be a problem to implement, but it appears that the momentum is just not there.

Inadequate Web Application Linking and Activation

Another important piece of the puzzle missing is the integration of HTML based Web applications. Today HTML based applications are not first class citizens on mobile operating systems. When talking about HTML based content there's a big difference between content and applications. Content is great for search engine discovery and plain browser usage. Content is usually accessed intermittently and permanent linking is not so critical for this type of content. But applications have different needs. Applications need to start up quickly and must be easily switchable to support a multi-tasking user workflow. Therefore, it's pretty crucial that mobile Web apps are integrated into the underlying mobile OS and work with the standard task management features. Unfortunately this integration is not as smooth as it should be. It starts with actually trying to find mobile Web applications, and with 'installing' them onto a phone in an easily accessible manner in a prominent position. The experience of discovering a Mobile Web 'App' and making it sticky is by no means easy or satisfying. Today the way you'd go about this is:

Open the browser
Search for a Web site in the browser with your search engine of choice
Hope that you find the right site
Hope that you actually find a site that works for your mobile device
Click on the link and run the app in a fully chrome'd browser instance (read: tiny surface area)
Pin the app to the home screen (with all the limitations outlined above)
Hope you pointed at the right URL when you pinned

Even for you and me as developers, there are a few steps in there that are painful and annoying, but think about the average user. First figuring out how to search for a specific site or URL? And then pinning the app, and hopefully from the right location? You've probably lost more than half of your audience at that point. This experience sucks. For developers too this process is painful, since app developers can't control the shortcut creation directly. This problem often gets solved by crazy coding schemes, with annoying pop-ups that try to get people to create shortcuts via fancy animations that are both annoying and add overhead to each and every application that implements this sort of thing differently. And that's not the end of it - getting the link onto the home screen with an application icon varies quite a bit between browsers. Apple's non-standard meta tags are prominent and they work with iOS and Android (only more recent versions), but not on Windows Phone. Windows Phone instead requires an actual screen, or rather a partial screen, to be captured for a shortcut in the tile manager. Who had that brilliant idea, I wonder? Surprisingly, Chrome on recent Android versions seems to actually get it right – icons use PNGs, pinning is easy, and pinned applications properly behave like standalone apps and retain the browser's active page state and content.
Each of the platforms has a different way to specify icons (WP doesn't allow you to use an icon image at all), and the most widely used interface today is a bunch of Apple specific meta tags that other browsers choose to support. The question is: why is there no standard implementation for installing shortcuts across mobile platforms using an official format rather than a proprietary one? Then there's iOS and the crazy way it treats home screen linked URLs, using a hybrid format that is neither as capable as a Web app running in Safari nor as a WebView hosted application. Moving off the Web 'app' link when switching to another app actually causes the browser preview to 'blank out' the Web application in the Task View (see screenshot on the right). Then, when the 'app' is reactivated, it ends up completely restarting the browser with the original link. This is crazy behavior that you can't easily work around. In some situations you might be able to store the application state and restore it using LocalStorage, but for many scenarios that involve complex data sources (like, say, Google Maps) that's not a possibility. The only reason for this screwed up behavior I can think of is that it is deliberate, to make Web apps a pain in the butt to use and force users through the App Store/PhoneGap/Cordova route. App linking and management is a very basic problem – something that we have essentially solved in every desktop browser – yet on mobile devices, where it arguably matters a lot more to have easy access to web content, we have to jump through hoops to have even a remotely decent linking/activation experience across browsers.

Where's the Money?

It's not surprising that device home screen integration and Mobile Web support in general are in such dismal shape – the mobile OS vendors benefit financially from App store sales and have little to gain from Web based applications that bypass the App store and the cash cow that it presents. On top of that, platform specific vendor lock-in of both end users and developers who have invested in hardware, apps and consumables is something that mobile platform vendors actually aspire to. Web based interfaces that are cross-platform are the antithesis of that, and so again it's no surprise that the mobile Web is on a struggling path. But – that may be changing. More and more we're seeing operations shifting to services that are subscription based or otherwise collect money for usage, and that may drive more progress in the Web direction in the end. Nothing like the almighty dollar to drive innovation forward.

Do we need a Mobile Web App Store?

As much as I dislike moderated experiences in today's massive App Stores, they do at least provide one single place to look for apps for your device. I think we could really use some sort of registry that could provide something akin to an app store for mobile Web apps, to make it easier to actually find mobile applications. This could take the form of a specialized search engine, or maybe a more formal store/registry like structure. Something like apt-get/chocolatey for Web apps. It could be curated and provide at least some feedback and reviews that might help with the integrity of applications. Coupled to that could be a native application on each platform that would allow searching and browsing of the registry, and that would also handle installation in the form of providing the home screen linking, plus maybe an initial security configuration that determines what features the app is allowed to access.
I'm not holding my breath. In order for this sort of thing to take off and gain widespread appeal, a lot of coordination would be required. And in order to get enough traction it would have to come from a well known entity – a mobile Web app store from a no name source is unlikely to gain high enough usage numbers to make a difference. In a way this would eliminate some of the freedom of the Web, but of course it would also be an optional search path in addition to the standard open Web search mechanisms used to find and access content today.

Security

Security is a big deal, and one of the perceived reasons why so many IT professionals appear to be willing to go back to the walled garden of deployed apps is that apps are perceived as safe due to the official review and curation of the App stores. Curated stores are supposed to protect you from malware and illegal and misleading content. It doesn't always work out that way, and all the major vendors have had issues with security and the review process at some time or another. Security is critical, but I also think that Web applications in general pose less of a security threat than native applications, by nature of the sandboxed browser and JavaScript environments. Web applications run completely externally, in the HTML and JavaScript sandboxes, with only a very few controlled APIs allowing access to device specific features. And as discussed earlier – security for any device interaction can be granted to mobile applications running in a Web browser the same way it can for native applications, either via explicit policies loaded from the Web, or via prompting as GeoLocation does today. Security is important, but it's certainly a solvable problem for Web applications, even those that need to access device hardware. Security shouldn't be a reason for Web apps not to be an equal player in mobile applications.

Apps are winning, but haven't we been here before?

So now we're finding ourselves back in an era of installed apps, rather than Web based and managed apps. Only it's even worse today than with desktop applications, in that the apps are going through a gatekeeper that charges a toll and censors what you can and can't do in your apps. Frankly it's a mystery to me why anybody would buy into this model and why it's lasted this long when we've already been through this process. It's crazy… It's really a shame that this regression is happening. We have the technology to make mobile Web apps much more prominent, yet we're basically held back by what seems little more than bureaucracy, partisan bickering and the self interest of the major parties involved. Back in the day of the desktop it was Internet Explorer's 98+% market share holding back the Web from improvements for many years – now it's the combined mobile OS market in control of the mobile browsers. If mobile Web apps were allowed to be treated the same as native apps, with simple ways to install and run them consistently and persistently, that would go a long way to making mobile applications much more usable and seriously viable alternatives to native apps. But as it is, mobile Web apps have a severe disadvantage in placement and operation. There are a few bright spots in all of this. Mozilla's Firefox OS is embracing the Web for its mobile OS by essentially building every app out of HTML and JavaScript based content.
It supports both packaged and certified package modes (that can be put into the app store), and Open Web apps that are loaded and run completely off the Web and can also cache locally for offline operation using a manifest. Open Web apps are treated as first class citizens in Firefox OS and run using the same mechanism as installed apps. Unfortunately Firefox OS is off to a slow start, with minimal device support and a specific focus on the low end market. We can hope that this approach will change and catch on with other vendors, but that's also an uphill battle given the conflict of interest with the platform lock-in that it represents. Recent versions of Android also seem to be working reasonably well with mobile application integration onto the desktop and activation out of the box. Although Android still uses the Apple meta tags to find icons and behavior settings, everything at least works as you would expect – icons go to the desktop on pinning, there is WebView based full screen activation, and application persistence is reliable, as the browser/app is treated like a real application. Hopefully iOS will at some point provide this same level of rudimentary Web app support. What's also interesting to me is that Microsoft hasn't picked up on the obvious need for a solid Web App platform. Being a distant third in the mobile OS war, Microsoft certainly has nothing to lose and everything to gain by using fresh ideas and expanding into areas that the other major vendors are neglecting. But instead Microsoft is trying to beat the market leaders at their own game, fighting on their adversary's terms instead of taking a new tack. Providing a kick ass mobile Web platform that takes the lead on some of the proposed mobile APIs would be something positive that Microsoft could do to improve its miserable position in the mobile device market.

Where are we at with Mobile Web?

It sure sounds like I'm really down on the Mobile Web, right? I've built a number of mobile apps in the last year, and while the overall result and response to what we were able to accomplish in terms of UI have been very positive, getting that final 10% that required device integration dialed in was an absolute nightmare on every single one of them. Big compromises had to be made and some features were left out or had to be modified for some devices. In two cases we opted to go the Cordova route in order to get the integration we needed, along with the extra pain involved in that process. Unless you're not integrating with device features and you don't care deeply about a smooth integration with the mobile desktop, mobile Web development is fraught with frustration. So, yes, I'm frustrated! But it's not for lack of wanting the mobile Web to succeed. I am still a firm believer that we will eventually arrive at a much more functional mobile Web platform that allows access to the most common device features in a sensible way. It wouldn't be difficult for device platform vendors to make Web based applications first class citizens on mobile devices. But unfortunately it looks like it will still be some time before this happens. So, what's your experience building mobile Web apps? Are you finding similar issues? Just giving up on raw Web applications and building PhoneGap apps instead? Completely skipping the Web and going native? Leave a comment for discussion.
Resources: Rick Strahl on DotNet Rocks talking about Mobile Web
© Rick Strahl, West Wind Technologies, 2005-2014
Posted in HTML5  Mobile

    Read the article

  • Errors when installing Open Office

    - by user109036
    I followed the first set of instructions on this page to install Open Office: How to install Open Office? However, the last step which says to change the CHMOD of a folder, I got an error saying that the directory does not exist. Open Office now appears in my Ubuntu start menu, but clicking on it does nothing. I tried a reboot. Below is what I could copy from my terminal. I am running the latest Ubuntu. I have not uninstalled Libreoffice as suggested somewhere. The reason is that in the Ubuntu software centre, Libre office appears to be made up of several components and I don't know which ones to remove (or all maybe?). They are Libreoffice Draw, Math, Writer, Calc. After this operation, 480 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-lib all 6b24-1.11.5-0ubuntu1~12.10.1 [6,135 kB] Get:2 http://ppa.launchpad.net/upubuntu-com/office/ubuntu/ quantal/main openoffice amd64 3.4~oneiric [321 MB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ca-certificates-java all 20120721 [13.2 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main tzdata-java all 2012e-0ubuntu2 [140 kB] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main java-common all 0.43ubuntu3 [61.7 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre-headless amd64 6b24-1.11.5-0ubuntu1~12.10.1 [25.4 MB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgif4 amd64 4.1.6-9.1ubuntu1 [31.3 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jre amd64 6b24-1.11.5-0ubuntu1~12.10.1 [234 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java all 0.30.4-0ubuntu4 [29.8 kB] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libatk-wrapper-java-jni amd64 0.30.4-0ubuntu4 [31.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xorg-sgml-doctools all 1:1.10-1 [12.0 kB] Get:12 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-core-dev all 7.0.23-1 [744 kB] Get:13 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libice-dev amd64 2:1.0.8-2 [57.6 kB] Get:14 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0 amd64 0.3-3 [3,258 B] Get:15 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libpthread-stubs0-dev amd64 0.3-3 [2,866 B] Get:16 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libsm-dev amd64 2:1.2.1-2 [19.9 kB] Get:17 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau-dev amd64 1:1.0.7-1 [10.2 kB] Get:18 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp-dev amd64 1:1.1.1-1 [26.9 kB] Get:19 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-input-dev all 2.2-1 [133 kB] Get:20 http://gb.archive.ubuntu.com/ubuntu/ quantal/main x11proto-kb-dev all 1.0.6-2 [269 kB] Get:21 http://gb.archive.ubuntu.com/ubuntu/ quantal/main xtrans-dev all 1.2.7-1 [84.3 kB] Get:22 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1-dev amd64 1.8.1-1ubuntu1 [82.6 kB] Get:23 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-dev amd64 2:1.5.0-1 [912 kB] Get:24 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-doc all 2:1.5.0-1 [2,460 kB] Get:25 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxt-dev amd64 1:1.1.3-1 [492 kB] Get:26 http://gb.archive.ubuntu.com/ubuntu/ quantal/main ttf-dejavu-extra all 2.33-2ubuntu1 [3,420 kB] Get:27 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-cacao amd64 6b24-1.11.5-0ubuntu1~12.10.1 [417 kB] 
Get:28 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe icedtea-6-jre-jamvm amd64 6b24-1.11.5-0ubuntu1~12.10.1 [581 kB] Get:29 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx-common all 1.3-1ubuntu1.1 [617 kB] Get:30 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/main icedtea-netx amd64 1.3-1ubuntu1.1 [16.2 kB] Get:31 http://gb.archive.ubuntu.com/ubuntu/ quantal-updates/universe openjdk-6-jdk amd64 6b24-1.11.5-0ubuntu1~12.10.1 [11.1 MB] Fetched 374 MB in 9min 18s (671 kB/s) Extract templates from packages: 100% Selecting previously unselected package openjdk-6-jre-lib. (Reading database ... 143191 files and directories currently installed.) Unpacking openjdk-6-jre-lib (from .../openjdk-6-jre-lib_6b24-1.11.5-0ubuntu1~12.10.1_all.deb) ... Selecting previously unselected package ca-certificates-java. Unpacking ca-certificates-java (from .../ca-certificates-java_20120721_all.deb) ... Selecting previously unselected package tzdata-java. Unpacking tzdata-java (from .../tzdata-java_2012e-0ubuntu2_all.deb) ... Selecting previously unselected package java-common. Unpacking java-common (from .../java-common_0.43ubuntu3_all.deb) ... Selecting previously unselected package openjdk-6-jre-headless:amd64. Unpacking openjdk-6-jre-headless:amd64 (from .../openjdk-6-jre-headless_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libgif4:amd64. Unpacking libgif4:amd64 (from .../libgif4_4.1.6-9.1ubuntu1_amd64.deb) ... Selecting previously unselected package openjdk-6-jre:amd64. Unpacking openjdk-6-jre:amd64 (from .../openjdk-6-jre_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package libatk-wrapper-java. Unpacking libatk-wrapper-java (from .../libatk-wrapper-java_0.30.4-0ubuntu4_all.deb) ... Selecting previously unselected package libatk-wrapper-java-jni:amd64. Unpacking libatk-wrapper-java-jni:amd64 (from .../libatk-wrapper-java-jni_0.30.4-0ubuntu4_amd64.deb) ... Selecting previously unselected package xorg-sgml-doctools. Unpacking xorg-sgml-doctools (from .../xorg-sgml-doctools_1%3a1.10-1_all.deb) ... Selecting previously unselected package x11proto-core-dev. Unpacking x11proto-core-dev (from .../x11proto-core-dev_7.0.23-1_all.deb) ... Selecting previously unselected package libice-dev:amd64. Unpacking libice-dev:amd64 (from .../libice-dev_2%3a1.0.8-2_amd64.deb) ... Selecting previously unselected package libpthread-stubs0:amd64. Unpacking libpthread-stubs0:amd64 (from .../libpthread-stubs0_0.3-3_amd64.deb) ... Selecting previously unselected package libpthread-stubs0-dev:amd64. Unpacking libpthread-stubs0-dev:amd64 (from .../libpthread-stubs0-dev_0.3-3_amd64.deb) ... Selecting previously unselected package libsm-dev:amd64. Unpacking libsm-dev:amd64 (from .../libsm-dev_2%3a1.2.1-2_amd64.deb) ... Selecting previously unselected package libxau-dev:amd64. Unpacking libxau-dev:amd64 (from .../libxau-dev_1%3a1.0.7-1_amd64.deb) ... Selecting previously unselected package libxdmcp-dev:amd64. Unpacking libxdmcp-dev:amd64 (from .../libxdmcp-dev_1%3a1.1.1-1_amd64.deb) ... Selecting previously unselected package x11proto-input-dev. Unpacking x11proto-input-dev (from .../x11proto-input-dev_2.2-1_all.deb) ... Selecting previously unselected package x11proto-kb-dev. Unpacking x11proto-kb-dev (from .../x11proto-kb-dev_1.0.6-2_all.deb) ... Selecting previously unselected package xtrans-dev. Unpacking xtrans-dev (from .../xtrans-dev_1.2.7-1_all.deb) ... Selecting previously unselected package libxcb1-dev:amd64. 
Unpacking libxcb1-dev:amd64 (from .../libxcb1-dev_1.8.1-1ubuntu1_amd64.deb) ... Selecting previously unselected package libx11-dev:amd64. Unpacking libx11-dev:amd64 (from .../libx11-dev_2%3a1.5.0-1_amd64.deb) ... Selecting previously unselected package libx11-doc. Unpacking libx11-doc (from .../libx11-doc_2%3a1.5.0-1_all.deb) ... Selecting previously unselected package libxt-dev:amd64. Unpacking libxt-dev:amd64 (from .../libxt-dev_1%3a1.1.3-1_amd64.deb) ... Selecting previously unselected package ttf-dejavu-extra. Unpacking ttf-dejavu-extra (from .../ttf-dejavu-extra_2.33-2ubuntu1_all.deb) ... Selecting previously unselected package icedtea-6-jre-cacao:amd64. Unpacking icedtea-6-jre-cacao:amd64 (from .../icedtea-6-jre-cacao_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-6-jre-jamvm:amd64. Unpacking icedtea-6-jre-jamvm:amd64 (from .../icedtea-6-jre-jamvm_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package icedtea-netx-common. Unpacking icedtea-netx-common (from .../icedtea-netx-common_1.3-1ubuntu1.1_all.deb) ... Selecting previously unselected package icedtea-netx:amd64. Unpacking icedtea-netx:amd64 (from .../icedtea-netx_1.3-1ubuntu1.1_amd64.deb) ... Selecting previously unselected package openjdk-6-jdk:amd64. Unpacking openjdk-6-jdk:amd64 (from .../openjdk-6-jdk_6b24-1.11.5-0ubuntu1~12.10.1_amd64.deb) ... Selecting previously unselected package openoffice. Unpacking openoffice (from .../openoffice_3.4~oneiric_amd64.deb) ... Processing triggers for doc-base ... Processing 2 added doc-base files... Processing triggers for man-db ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for hicolor-icon-theme ... Processing triggers for fontconfig ... Processing triggers for gnome-icon-theme ... Processing triggers for shared-mime-info ... Setting up tzdata-java (2012e-0ubuntu2) ... Setting up java-common (0.43ubuntu3) ... Setting up libgif4:amd64 (4.1.6-9.1ubuntu1) ... Setting up xorg-sgml-doctools (1:1.10-1) ... Setting up x11proto-core-dev (7.0.23-1) ... Setting up libice-dev:amd64 (2:1.0.8-2) ... Setting up libpthread-stubs0:amd64 (0.3-3) ... Setting up libpthread-stubs0-dev:amd64 (0.3-3) ... Setting up libsm-dev:amd64 (2:1.2.1-2) ... Setting up libxau-dev:amd64 (1:1.0.7-1) ... Setting up libxdmcp-dev:amd64 (1:1.1.1-1) ... Setting up x11proto-input-dev (2.2-1) ... Setting up x11proto-kb-dev (1.0.6-2) ... Setting up xtrans-dev (1.2.7-1) ... Setting up libxcb1-dev:amd64 (1.8.1-1ubuntu1) ... Setting up libx11-dev:amd64 (2:1.5.0-1) ... Setting up libx11-doc (2:1.5.0-1) ... Setting up libxt-dev:amd64 (1:1.1.3-1) ... Setting up ttf-dejavu-extra (2.33-2ubuntu1) ... Setting up icedtea-netx-common (1.3-1ubuntu1.1) ... Setting up openjdk-6-jre-lib (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up openjdk-6-jre-headless:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/keytool to provide /usr/bin/keytool (keytool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/pack200 to provide /usr/bin/pack200 (pack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmid to provide /usr/bin/rmid (rmid) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/rmiregistry to provide /usr/bin/rmiregistry (rmiregistry) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/unpack200 to provide /usr/bin/unpack200 (unpack200) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/orbd to provide /usr/bin/orbd (orbd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/servertool to provide /usr/bin/servertool (servertool) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/tnameserv to provide /usr/bin/tnameserv (tnameserv) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/lib/jexec to provide /usr/bin/jexec (jexec) in auto mode Setting up ca-certificates-java (20120721) ... Adding debian:Deutsche_Telekom_Root_CA_2.pem Adding debian:Comodo_Trusted_Services_root.pem Adding debian:Certum_Trusted_Network_CA.pem Adding debian:thawte_Primary_Root_CA_-_G2.pem Adding debian:UTN_USERFirst_Hardware_Root_CA.pem Adding debian:AddTrust_Low-Value_Services_Root.pem Adding debian:Microsec_e-Szigno_Root_CA.pem Adding debian:SwissSign_Silver_CA_-_G2.pem Adding debian:ComSign_Secured_CA.pem Adding debian:Buypass_Class_2_CA_1.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certum_Root_CA.pem Adding debian:AddTrust_External_Root.pem Adding debian:Chambers_of_Commerce_Root_-_2008.pem Adding debian:Starfield_Root_Certificate_Authority_-_G2.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Visa_eCommerce_Root.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_3.pem Adding debian:AC_Raíz_Certicámara_S.A..pem Adding debian:NetLock_Arany_=Class_Gold=_Fotanúsítvány.pem Adding debian:Taiwan_GRCA.pem Adding debian:Camerfirma_Chambers_of_Commerce_Root.pem Adding debian:Juur-SK.pem Adding debian:Entrust.net_Premium_2048_Secure_Server_CA.pem Adding debian:XRamp_Global_CA_Root.pem Adding debian:Security_Communication_RootCA2.pem Adding debian:AddTrust_Qualified_Certificates_Root.pem Adding debian:NetLock_Qualified_=Class_QA=_Root.pem Adding debian:TC_TrustCenter_Class_2_CA_II.pem Adding debian:DST_ACES_CA_X6.pem Adding debian:thawte_Primary_Root_CA.pem Adding debian:thawte_Primary_Root_CA_-_G3.pem Adding debian:GeoTrust_Universal_CA_2.pem Adding debian:ACEDICOM_Root.pem Adding debian:Security_Communication_EV_RootCA1.pem Adding debian:America_Online_Root_Certification_Authority_2.pem Adding debian:TC_TrustCenter_Universal_CA_I.pem Adding debian:SwissSign_Platinum_CA_-_G2.pem Adding debian:Global_Chambersign_Root_-_2008.pem Adding debian:SecureSign_RootCA11.pem Adding debian:GeoTrust_Global_CA_2.pem Adding debian:Buypass_Class_3_CA_1.pem Adding debian:Baltimore_CyberTrust_Root.pem Adding debian:UbuntuOne-Go_Daddy_Class_2_CA.pem Adding debian:Equifax_Secure_eBusiness_CA_1.pem Adding debian:SwissSign_Gold_CA_-_G2.pem Adding debian:AffirmTrust_Premium_ECC.pem Adding 
debian:TC_TrustCenter_Universal_CA_III.pem Adding debian:ca.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G2.pem Adding debian:NetLock_Express_=Class_C=_Root.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G5.pem Adding debian:Firmaprofesional_Root_CA.pem Adding debian:Comodo_Secure_Services_root.pem Adding debian:cacert.org.pem Adding debian:GeoTrust_Primary_Certification_Authority.pem Adding debian:RSA_Security_2048_v3.pem Adding debian:Staat_der_Nederlanden_Root_CA.pem Adding debian:Cybertrust_Global_Root.pem Adding debian:DigiCert_High_Assurance_EV_Root_CA.pem Adding debian:TDC_OCES_Root_CA.pem Adding debian:A-Trust-nQual-03.pem Adding debian:Equifax_Secure_CA.pem Adding debian:Digital_Signature_Trust_Co._Global_CA_1.pem Adding debian:GeoTrust_Global_CA.pem Adding debian:Starfield_Class_2_CA.pem Adding debian:ApplicationCA_-_Japanese_Government.pem Adding debian:Swisscom_Root_CA_1.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G2.pem Adding debian:Camerfirma_Global_Chambersign_Root.pem Adding debian:QuoVadis_Root_CA_3.pem Adding debian:QuoVadis_Root_CA.pem Adding debian:Comodo_AAA_Services_root.pem Adding debian:ComSign_CA.pem Adding debian:AddTrust_Public_Services_Root.pem Adding debian:DigiCert_Assured_ID_Root_CA.pem Adding debian:UTN_DATACorp_SGC_Root_CA.pem Adding debian:CA_Disig.pem Adding debian:E-Guven_Kok_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:GlobalSign_Root_CA_-_R3.pem Adding debian:QuoVadis_Root_CA_2.pem Adding debian:Entrust_Root_Certification_Authority.pem Adding debian:GTE_CyberTrust_Global_Root.pem Adding debian:ValiCert_Class_1_VA.pem Adding debian:Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G2.pem Adding debian:spi-ca-2003.pem Adding debian:America_Online_Root_Certification_Authority_1.pem Adding debian:AffirmTrust_Premium.pem Adding debian:Sonera_Class_1_Root_CA.pem Adding debian:Verisign_Class_2_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Certplus_Class_2_Primary_CA.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_2.pem Adding debian:Network_Solutions_Certificate_Authority.pem Adding debian:Go_Daddy_Class_2_CA.pem Adding debian:StartCom_Certification_Authority.pem Adding debian:Hongkong_Post_Root_CA_1.pem Adding debian:Hellenic_Academic_and_Research_Institutions_RootCA_2011.pem Adding debian:Thawte_Premium_Server_CA.pem Adding debian:EBG_Elektronik_Sertifika_Hizmet_Saglayicisi.pem Adding debian:TURKTRUST_Certificate_Services_Provider_Root_1.pem Adding debian:NetLock_Business_=Class_B=_Root.pem Adding debian:Microsec_e-Szigno_Root_CA_2009.pem Adding debian:DigiCert_Global_Root_CA.pem Adding debian:VeriSign_Class_3_Public_Primary_Certification_Authority_-_G4.pem Adding debian:IGC_A.pem Adding debian:TWCA_Root_Certification_Authority.pem Adding debian:S-TRUST_Authentication_and_Encryption_Root_CA_2005_PN.pem Adding debian:VeriSign_Universal_Root_Certification_Authority.pem Adding debian:DST_Root_CA_X3.pem Adding debian:Verisign_Class_1_Public_Primary_Certification_Authority.pem Adding debian:Root_CA_Generalitat_Valenciana.pem Adding debian:UTN_USERFirst_Email_Root_CA.pem Adding debian:ssl-cert-snakeoil.pem Adding debian:Starfield_Services_Root_Certificate_Authority_-_G2.pem Adding debian:GeoTrust_Primary_Certification_Authority_-_G3.pem Adding debian:Certinomis_-_Autorité_Racine.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority.pem 
Adding debian:TDC_Internet_Root_CA.pem Adding debian:UbuntuOne-ValiCert_Class_2_VA.pem Adding debian:AffirmTrust_Commercial.pem Adding debian:spi-cacert-2008.pem Adding debian:Izenpe.com.pem Adding debian:EC-ACC.pem Adding debian:Go_Daddy_Root_Certificate_Authority_-_G2.pem Adding debian:COMODO_ECC_Certification_Authority.pem Adding debian:CNNIC_ROOT.pem Adding debian:NetLock_Notary_=Class_A=_Root.pem Adding debian:Equifax_Secure_eBusiness_CA_2.pem Adding debian:Verisign_Class_3_Public_Primary_Certification_Authority_-_G3.pem Adding debian:Secure_Global_CA.pem Adding debian:UbuntuOne-Go_Daddy_CA.pem Adding debian:GeoTrust_Universal_CA.pem Adding debian:Wells_Fargo_Root_CA.pem Adding debian:Thawte_Server_CA.pem Adding debian:WellsSecure_Public_Root_Certificate_Authority.pem Adding debian:TC_TrustCenter_Class_3_CA_II.pem Adding debian:COMODO_Certification_Authority.pem Adding debian:Equifax_Secure_Global_eBusiness_CA.pem Adding debian:Security_Communication_Root_CA.pem Adding debian:GlobalSign_Root_CA_-_R2.pem Adding debian:TÜBITAK_UEKAE_Kök_Sertifika_Hizmet_Saglayicisi_-_Sürüm_3.pem Adding debian:Verisign_Class_4_Public_Primary_Certification_Authority_-_G3.pem Adding debian:certSIGN_ROOT_CA.pem Adding debian:RSA_Root_Certificate_1.pem Adding debian:ePKI_Root_Certification_Authority.pem Adding debian:Entrust.net_Secure_Server_CA.pem Adding debian:OISTE_WISeKey_Global_Root_GA_CA.pem Adding debian:Sonera_Class_2_Root_CA.pem Adding debian:Certigna.pem Adding debian:AffirmTrust_Networking.pem Adding debian:ValiCert_Class_2_VA.pem Adding debian:GlobalSign_Root_CA.pem Adding debian:Staat_der_Nederlanden_Root_CA_-_G2.pem Adding debian:SecureTrust_CA.pem done. Setting up openjdk-6-jre:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/policytool to provide /usr/bin/policytool (policytool) in auto mode Setting up libatk-wrapper-java (0.30.4-0ubuntu4) ... Setting up icedtea-6-jre-cacao:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-6-jre-jamvm:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... Setting up icedtea-netx:amd64 (1.3-1ubuntu1.1) ... update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/javaws to provide /usr/bin/javaws (javaws) in auto mode update-alternatives: using /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/itweb-settings to provide /usr/bin/itweb-settings (itweb-settings) in auto mode Setting up openjdk-6-jdk:amd64 (6b24-1.11.5-0ubuntu1~12.10.1) ... 
update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/appletviewer to provide /usr/bin/appletviewer (appletviewer) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/extcheck to provide /usr/bin/extcheck (extcheck) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/idlj to provide /usr/bin/idlj (idlj) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jar to provide /usr/bin/jar (jar) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jarsigner to provide /usr/bin/jarsigner (jarsigner) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javadoc to provide /usr/bin/javadoc (javadoc) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javah to provide /usr/bin/javah (javah) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/javap to provide /usr/bin/javap (javap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jconsole to provide /usr/bin/jconsole (jconsole) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jdb to provide /usr/bin/jdb (jdb) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jhat to provide /usr/bin/jhat (jhat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jinfo to provide /usr/bin/jinfo (jinfo) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jmap to provide /usr/bin/jmap (jmap) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jps to provide /usr/bin/jps (jps) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jrunscript to provide /usr/bin/jrunscript (jrunscript) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jsadebugd to provide /usr/bin/jsadebugd (jsadebugd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstack to provide /usr/bin/jstack (jstack) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstat to provide /usr/bin/jstat (jstat) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/jstatd to provide /usr/bin/jstatd (jstatd) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/native2ascii to provide /usr/bin/native2ascii (native2ascii) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/rmic to provide /usr/bin/rmic (rmic) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/schemagen to provide /usr/bin/schemagen (schemagen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/serialver to provide /usr/bin/serialver (serialver) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsgen to provide /usr/bin/wsgen (wsgen) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/wsimport to provide /usr/bin/wsimport (wsimport) in auto mode update-alternatives: using /usr/lib/jvm/java-6-openjdk-amd64/bin/xjc to provide /usr/bin/xjc (xjc) in auto mode Setting up openoffice (3.4~oneiric) ... Setting up libatk-wrapper-java-jni:amd64 (0.30.4-0ubuntu4) ... Processing triggers for libc-bin ... 
ldconfig deferred processing now taking place philip@X301-2:~$ sudo apt-get install libxrandr2:i386 libxinerama1:i386 Reading package lists... Done Building dependency tree Reading state information... Done The following package was automatically installed and is no longer required: linux-headers-3.5.0-17 Use 'apt-get autoremove' to remove it. The following extra packages will be installed: gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxrender1:i386 Suggested packages: glibc-doc:i386 locales:i386 The following NEW packages will be installed gcc-4.7-base:i386 libc6:i386 libgcc1:i386 libx11-6:i386 libxau6:i386 libxcb1:i386 libxdmcp6:i386 libxext6:i386 libxinerama1:i386 libxrandr2:i386 libxrender1:i386 0 upgraded, 11 newly installed, 0 to remove and 93 not upgraded. Need to get 4,936 kB of archives. After this operation, 11.9 MB of additional disk space will be used. Do you want to continue [Y/n]? y Get:1 http://gb.archive.ubuntu.com/ubuntu/ quantal/main gcc-4.7-base i386 4.7.2-2ubuntu1 [15.5 kB] Get:2 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libc6 i386 2.15-0ubuntu20 [3,940 kB] Get:3 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libgcc1 i386 1:4.7.2-2ubuntu1 [53.5 kB] Get:4 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxau6 i386 1:1.0.7-1 [8,582 B] Get:5 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxdmcp6 i386 1:1.1.1-1 [13.1 kB] Get:6 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxcb1 i386 1.8.1-1ubuntu1 [48.7 kB] Get:7 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libx11-6 i386 2:1.5.0-1 [776 kB] Get:8 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxext6 i386 2:1.3.1-2 [33.9 kB] Get:9 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxinerama1 i386 2:1.1.2-1 [8,118 B] Get:10 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrender1 i386 1:0.9.7-1 [20.1 kB] Get:11 http://gb.archive.ubuntu.com/ubuntu/ quantal/main libxrandr2 i386 2:1.4.0-1 [18.8 kB] Fetched 4,936 kB in 30s (161 kB/s) Preconfiguring packages ... Selecting previously unselected package gcc-4.7-base:i386. (Reading database ... 146005 files and directories currently installed.) Unpacking gcc-4.7-base:i386 (from .../gcc-4.7-base_4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libc6:i386. Unpacking libc6:i386 (from .../libc6_2.15-0ubuntu20_i386.deb) ... Selecting previously unselected package libgcc1:i386. Unpacking libgcc1:i386 (from .../libgcc1_1%3a4.7.2-2ubuntu1_i386.deb) ... Selecting previously unselected package libxau6:i386. Unpacking libxau6:i386 (from .../libxau6_1%3a1.0.7-1_i386.deb) ... Selecting previously unselected package libxdmcp6:i386. Unpacking libxdmcp6:i386 (from .../libxdmcp6_1%3a1.1.1-1_i386.deb) ... Selecting previously unselected package libxcb1:i386. Unpacking libxcb1:i386 (from .../libxcb1_1.8.1-1ubuntu1_i386.deb) ... Selecting previously unselected package libx11-6:i386. Unpacking libx11-6:i386 (from .../libx11-6_2%3a1.5.0-1_i386.deb) ... Selecting previously unselected package libxext6:i386. Unpacking libxext6:i386 (from .../libxext6_2%3a1.3.1-2_i386.deb) ... Selecting previously unselected package libxinerama1:i386. Unpacking libxinerama1:i386 (from .../libxinerama1_2%3a1.1.2-1_i386.deb) ... Selecting previously unselected package libxrender1:i386. Unpacking libxrender1:i386 (from .../libxrender1_1%3a0.9.7-1_i386.deb) ... Selecting previously unselected package libxrandr2:i386. Unpacking libxrandr2:i386 (from .../libxrandr2_2%3a1.4.0-1_i386.deb) ... 
Setting up gcc-4.7-base:i386 (4.7.2-2ubuntu1) ... Setting up libc6:i386 (2.15-0ubuntu20) ... Setting up libgcc1:i386 (1:4.7.2-2ubuntu1) ... Setting up libxau6:i386 (1:1.0.7-1) ... Setting up libxdmcp6:i386 (1:1.1.1-1) ... Setting up libxcb1:i386 (1.8.1-1ubuntu1) ... Setting up libx11-6:i386 (2:1.5.0-1) ... Setting up libxext6:i386 (2:1.3.1-2) ... Setting up libxinerama1:i386 (2:1.1.2-1) ... Setting up libxrender1:i386 (1:0.9.7-1) ... Setting up libxrandr2:i386 (2:1.4.0-1) ... Processing triggers for libc-bin ... ldconfig deferred processing now taking place $ sudo chmod a+rx /opt/openoffice.org3/share/uno_packages/cache/uno_packages chmod: cannot access `/opt/openoffice.org3/share/uno_packages/cache/uno_packages': No such file or directory
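A note on that last error: the chmod fails simply because /opt/openoffice.org3/share/uno_packages/cache/uno_packages does not exist on this system. A minimal way to locate where the uno_packages cache actually ended up before re-running the chmod (a sketch only; the openoffice package name comes from the install log above, while the exact on-disk layout is an assumption to verify locally):

$ dpkg -L openoffice | grep uno_packages                    # list files the package claims to own
$ sudo find /opt -maxdepth 5 -type d -name uno_packages     # or search the /opt tree directly
$ sudo chmod a+rx <path-found-above>                         # hypothetical: substitute the real path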

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ... Unpacking replacement kde-runtime-data ... dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack): trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2 dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe) dpkg-deb: error: subprocess <decompress> returned error exit status 2 Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb 
/var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb /var/cache/apt/archives/software-center_5.2.4_all.deb /var/cache/apt/archives/xdiagnose_2.5_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) maura@pandora:~$ ^C maura@pandora:~$
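Two separate problems are visible in that transcript: every Python-based maintainer script (pyclean, pycompile) fails because the interpreter cannot import standard-library modules such as logging and pyexpat, and kde-runtime-data additionally hits a file conflict with sound-theme-freedesktop over /usr/share/sounds. A hedged starting point, assuming the Python 2.7 installation is damaged rather than deliberately relocated (the package names are standard Ubuntu 12.04 packages and the .deb path is taken from the log; the reinstall step may need to come after the conflict is cleared, so adjust the order as needed):

$ python2.7 -c 'import logging, pyexpat'     # confirm whether the standard library is really missing
$ echo $PYTHONHOME                           # a stray PYTHONHOME would explain the "<exec_prefix>" message
$ sudo apt-get install --reinstall python2.7 python2.7-minimal
$ sudo dpkg -i --force-overwrite /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb
$ sudo apt-get -f install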

  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes them, using material originally written during their development, the PSARC case, and the Solaris Linker and Libraries Manual. ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow them to instead go into a separate file. There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64-bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited, but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files. The best, and ultimately only, solution to overly large objects is to transition to 64-bits. However, consider environments where: Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise). Complex, finely tuned code, where the original authors may no longer be available. Critical production code that was expensive to qualify and bring online, and which is otherwise serving its intended purpose without issue. Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway. Design The design of ancillary objects is intended to be simple, both to help human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them. The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files). A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other. Allocable sections go in the primary object, and non-allocable ones go into the ancillary object. A small set of non-allocable sections, notably the symbol table, are copied into both objects. As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary. If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
This is important, because they exist to serve a small subset of our users, and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost. The primary and ancillary object together represent a logical single object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files). Among the benefits of this approach are: There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used. The section contents are identical in either case — there is no need to alter data to accommodate multiple files. It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification. The limit of a 4GB 32-bit output object is now raised to 4GB of code, and 4GB of debug data. There is also the future possibility (not currently supported) to support multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted however that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful. Using Ancillary Objects (From the Solaris Linker and Libraries Guide) By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so, they have no impact on memory use or other aspects of runtime performance no matter their size. For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections. To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks. To support fine grained debugging of highly optimized code requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes. In very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object. To limit the exposure of internal implementation details. Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option. $ ld ... 
-z ancillary[=outfile] ...By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be provided by providing an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions. All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object. All remaining non-allocable sections are written to the ancillary object. The following non-allocable sections are written to both the primary object and ancillary object. .shstrtab The section name string table. .symtab The full non-dynamic symbol table. .symtab_shndx The symbol table extended index section associated with .symtab. .strtab The non-dynamic string table associated with .symtab. .SUNW_ancillary Contains the information required to identify the primary and ancillary objects, and to identify the object being examined. The primary object and all ancillary objects contain the same array of sections headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either of the primary or ancillary objects. The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc. $ cat hello.c #include <stdio.h> int main(int argc, char **argv) { (void) printf("hello, world\n"); return (0); } $ cc -g -zancillary hello.c $ file a.out a.out.anc a.out: ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out $ ./a.out hello worldThe resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects, and then stripped of non-allocable content using the strip or mcs commands. As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example. 
Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
  13   .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
  20   .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
  21   .symtab          SYMTAB          0                 0                             0x450         0x450
  22   .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
  24   .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
  28   .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
  29   .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30
The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab. It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.
$ elfdump -T SUNW_ancillary a.out a.out.anc
a.out:
Ancillary Section:  .SUNW_ancillary
     index  tag                 value
       [0]  ANC_SUNW_CHECKSUM   0x8724
       [1]  ANC_SUNW_MEMBER     0x1        a.out
       [2]  ANC_SUNW_CHECKSUM   0x8724
       [3]  ANC_SUNW_MEMBER     0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM   0xfbe2
       [5]  ANC_SUNW_NULL       0
a.out.anc:
Ancillary Section:  .SUNW_ancillary
     index  tag                 value
       [0]  ANC_SUNW_CHECKSUM   0xfbe2
       [1]  ANC_SUNW_MEMBER     0x1        a.out
       [2]  ANC_SUNW_CHECKSUM   0x8724
       [3]  ANC_SUNW_MEMBER     0x1a3      a.out.anc
       [4]  ANC_SUNW_CHECKSUM   0xfbe2
       [5]  ANC_SUNW_NULL       0
The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out, and has a checksum of 0x8724. The ancillary object is a.out.anc, and has a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined. The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section from any one of the files. It is possible to determine which file is being examined from the larger set of files by comparing the first checksum value to the checksum of each member that follows. Debugger Access and Use of Ancillary Objects Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and a single symbol table. The following steps can be used by a debugger to assemble the information contained in these files.
Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, contains information that can be used to obtain a complete list of the files and determine which of those files is the one currently being examined. Create a section header array in memory, using the section header array from the object being examined as an initial template. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set. The result will be a complete in-memory copy of the section headers with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single file case, to access and control the running program. Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects. ELF Implementation Details (From the Solaris Linker and Libraries Guide) To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual. Part IV ELF Application Binary Interface Chapter 13: Object File Format Object File Format Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type. e_type Identifies the object file type, as listed in the following table. NameValueMeaning ET_NONE0No file type ET_REL1Relocatable file ET_EXEC2Executable file ET_DYN3Shared object file ET_CORE4Core file ET_LOSUNW0xfefeStart operating system specific range ET_SUNW_ANCILLARY0xfefeAncillary object file ET_HISUNW0xfefdEnd operating system specific range ET_LOPROC0xff00Start processor-specific range ET_HIPROC0xffffEnd processor-specific range Sections Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section. ... sh_type Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5. sh_flags Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8. ... Table 13-5 ELF Section Types, sh_type NameValue . . . SHT_LOSUNW0x6fffffee SHT_SUNW_ancillary0x6fffffee . . . ... SHT_LOSUNW - SHT_HISUNW Values in this inclusive range are reserved for Oracle Solaris OS semantics. SHT_SUNW_ANCILLARY Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section. ... Table 13-8 ELF Section Attribute Flags NameValue . . . 
SHF_MASKOS0x0ff00000 SHF_SUNW_NODISCARD0x00100000 SHF_SUNW_ABSENT0x00200000 SHF_SUNW_PRIMARY0x00400000 SHF_MASKPROC0xf0000000 . . . ... SHF_SUNW_ABSENT Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects, will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files. SHF_SUNW_PRIMARY The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one more input section with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status. ... Two members in the section header, sh_link, and sh_info, hold special information, depending on section type. Table 13-9 ELF sh_link and sh_info Interpretation sh_typesh_linksh_info . . . SHT_SUNW_ANCILLARY The section header index of the associated string table. 0 . . . Special Sections Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section. Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes. Table 13-10 ELF Special Sections NameTypeAttribute . . . .SUNW_ancillarySHT_SUNW_ancillaryNone . . . ... .SUNW_ancillary Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details. ... Ancillary Section Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF. In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h. 
typedef struct {
        Elf32_Word      a_tag;
        union {
                Elf32_Word      a_val;
                Elf32_Addr      a_ptr;
        } a_un;
} Elf32_Ancillary;

typedef struct {
        Elf64_Xword     a_tag;
        union {
                Elf64_Xword     a_val;
                Elf64_Addr      a_ptr;
        } a_un;
} Elf64_Ancillary;
For each object with this type, a_tag controls the interpretation of a_un.
a_val These objects represent integer values with various interpretations.
a_ptr These objects represent file offsets or addresses.
The following ancillary tags exist.
Table 13-NEW1 ELF Ancillary Array Tags
Name               Value  a_un
ANC_SUNW_NULL      0      Ignored
ANC_SUNW_CHECKSUM  1      a_val
ANC_SUNW_MEMBER    2      a_ptr
ANC_SUNW_NULL Marks the end of the ancillary section.
ANC_SUNW_CHECKSUM Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.
ANC_SUNW_MEMBER Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.
An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:
Tag                Meaning
ANC_SUNW_CHECKSUM  Checksum of this object
ANC_SUNW_MEMBER    Name of object #1
ANC_SUNW_CHECKSUM  Checksum for object #1
. . .
ANC_SUNW_MEMBER    Name of object N
ANC_SUNW_CHECKSUM  Checksum for object N
ANC_SUNW_NULL
An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match. Related Other Work The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor. It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach. The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.
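To make the identification step concrete, here is a small sketch of how a tool might walk a .SUNW_ancillary section using libelf, printing each member name and checksum. It assumes a Solaris build environment where <sys/elf.h> defines SHT_SUNW_ANCILLARY, the Elf32_Ancillary structure, and the ANC_SUNW_* tags exactly as laid out above, and that the object being examined is ELFCLASS32 (the motivating case for the feature). It is an illustrative sketch under those assumptions, not the dbx or elfdump implementation.

/*
 * Sketch: locate the .SUNW_ancillary section of an ELF file and print the
 * member/checksum pairs it contains, so the primary and ancillary files can
 * be identified.  Error handling is reduced to a minimum.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <libelf.h>
#include <gelf.h>
#include <sys/elf.h>

int
main(int argc, char **argv)
{
        int             fd;
        Elf             *elf;
        Elf_Scn         *scn = NULL;
        GElf_Shdr       shdr;

        if (argc != 2) {
                (void) fprintf(stderr, "usage: %s file\n", argv[0]);
                return (1);
        }
        if (elf_version(EV_CURRENT) == EV_NONE ||
            (fd = open(argv[1], O_RDONLY)) < 0 ||
            (elf = elf_begin(fd, ELF_C_READ, NULL)) == NULL) {
                (void) fprintf(stderr, "cannot open %s\n", argv[1]);
                return (1);
        }

        /* Walk the section headers looking for the ancillary section. */
        while ((scn = elf_nextscn(elf, scn)) != NULL) {
                if (gelf_getshdr(scn, &shdr) == NULL)
                        continue;
                if (shdr.sh_type != SHT_SUNW_ANCILLARY)
                        continue;

                /* sh_link names the string table used by ANC_SUNW_MEMBER. */
                Elf_Data        *data = elf_getdata(scn, NULL);
                Elf32_Ancillary *anc = (Elf32_Ancillary *)data->d_buf;
                size_t          n = shdr.sh_size / sizeof (Elf32_Ancillary);

                for (size_t i = 0; i < n; i++) {
                        switch (anc[i].a_tag) {
                        case ANC_SUNW_CHECKSUM:
                                (void) printf("checksum 0x%x\n",
                                    anc[i].a_un.a_val);
                                break;
                        case ANC_SUNW_MEMBER:
                                (void) printf("member   %s\n",
                                    elf_strptr(elf, shdr.sh_link,
                                    anc[i].a_un.a_ptr));
                                break;
                        case ANC_SUNW_NULL:
                                i = n;          /* end of the array */
                                break;
                        }
                }
        }
        (void) elf_end(elf);
        (void) close(fd);
        return (0);
}

Run against either a.out or a.out.anc from the earlier example, this would list both members with their checksums, mirroring the elfdump -T SUNW_ancillary output shown above; comparing the leading checksum against the ones that follow tells the tool which member it is currently looking at.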

  • Unable to transfer data to or from mounted hard drive

    - by user210335
So usually I'm good at sorting out issues, but this one has me at a loss! This issue has occurred since upgrading my Ubuntu, so it was working prior to the upgrade. I use mounted hard drives to manage my downloads, which are then copied over accordingly by a Python-based app. I found it was having issues with permissions to create anything on these mounted hard drives. I'm able to play and access the content of these drives, so they're not faulty. My mount script looks like the following: rw,user,exec,auto. I really am stuck. Could anyone shed any light on how to fix this and allow me to access it? I've checked the properties and all groups should have read and write access, so I'm very confused! Thanks. Edit: here's the output of my mount options:
/dev/sda2 on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
none on /sys/firmware/efi/efivars type efivarfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
/dev/sda1 on /boot/efi type vfat (rw)
/dev/sdc1 on /mnt/tv type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
/dev/sdb1 on /mnt/B88A30E88A30A4B2 type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096)
systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=simon)
/dev/sdd1 on /media/simon/New Volume3 type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)
The main mount in question is: /dev/sdc1 on /mnt/tv type fuseblk (rw,nosuid,nodev,allow_other,blksize=4096). Here's my dmesg output; I tried changing permissions in a terminal and I got an I/O error.
[52803.343417] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.343420] sd 2:0:0:0: [sdc] CDB: [52803.343422] Read(10): 28 00 00 60 9e 3f 00 00 08 00 [52803.343805] sd 2:0:0:0: [sdc] Unhandled error code [52803.343808] sd 2:0:0:0: [sdc] [52803.343810] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.343812] sd 2:0:0:0: [sdc] CDB: [52803.343813] Read(10): 28 00 00 67 64 67 00 00 08 00 [52803.344389] sd 2:0:0:0: [sdc] Unhandled error code [52803.344392] sd 2:0:0:0: [sdc] [52803.344394] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344396] sd 2:0:0:0: [sdc] CDB: [52803.344397] Read(10): 28 00 09 bd e7 6f 00 00 08 00 [52803.344584] sd 2:0:0:0: [sdc] Unhandled error code [52803.344587] sd 2:0:0:0: [sdc] [52803.344589] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344591] sd 2:0:0:0: [sdc] CDB: [52803.344592] Read(10): 28 00 07 3a cf b7 00 00 08 00 [52803.344776] sd 2:0:0:0: [sdc] Unhandled error code [52803.344779] sd 2:0:0:0: [sdc] [52803.344781] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344783] sd 2:0:0:0: [sdc] CDB: [52803.344784] Read(10): 28 00 09 bd e7 97 00 00 08 00 [52803.344973] sd 2:0:0:0: [sdc] Unhandled error code [52803.344976] sd 2:0:0:0: [sdc] [52803.344978] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.344980] sd 2:0:0:0: [sdc] CDB: [52803.344981] Read(10): 28 00 08 dd 57 ef 00 00 08 00 [52803.346745] sd 2:0:0:0: [sdc] Unhandled error code [52803.346748] sd 2:0:0:0: [sdc] [52803.346750] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.346752] sd 2:0:0:0: [sdc] CDB: [52803.346754] Read(10): 28 00 07 1a c1 0f 00 00 08 00 [52803.349939] sd 2:0:0:0: [sdc] Unhandled error code [52803.349942] sd 2:0:0:0: [sdc] [52803.349944] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.349946] sd 2:0:0:0: [sdc] CDB: [52803.349948] Read(10): 28 00 00 67 64 9f 00 00 08 00 [52803.350147] sd 2:0:0:0: [sdc] Unhandled error code [52803.350150] sd 2:0:0:0: [sdc] [52803.350152] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.350154] sd 2:0:0:0: [sdc] CDB: [52803.350155] Read(10): 28 00 00 67 64 97 00 00 08 00 [52803.351302] sd 2:0:0:0: [sdc] Unhandled error code [52803.351305] sd 2:0:0:0: [sdc] [52803.351307] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.351309] sd 2:0:0:0: [sdc] CDB: [52803.351311] Read(10): 28 00 00 a4 1d cf 00 00 08 00 [52803.351894] sd 2:0:0:0: [sdc] Unhandled error code [52803.351897] sd 2:0:0:0: [sdc] [52803.351899] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.351901] sd 2:0:0:0: [sdc] CDB: [52803.351902] Read(10): 28 00 00 67 67 3f 00 00 08 00 [52803.353163] sd 2:0:0:0: [sdc] Unhandled error code [52803.353166] sd 2:0:0:0: [sdc] [52803.353168] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.353170] sd 2:0:0:0: [sdc] CDB: [52803.353172] Read(10): 28 00 00 67 64 ef 00 00 08 00 [52803.353917] sd 2:0:0:0: [sdc] Unhandled error code [52803.353920] sd 2:0:0:0: [sdc] [52803.353922] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.353924] sd 2:0:0:0: [sdc] CDB: [52803.353925] Read(10): 28 00 00 67 65 17 00 00 08 00 [52803.354484] sd 2:0:0:0: [sdc] Unhandled error code [52803.354487] sd 2:0:0:0: [sdc] [52803.354489] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.354491] sd 2:0:0:0: [sdc] CDB: [52803.354492] Read(10): 28 00 07 1a d8 9f 00 00 08 00 [52803.355005] sd 2:0:0:0: [sdc] Unhandled error code [52803.355010] sd 2:0:0:0: [sdc] [52803.355013] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355017] 
sd 2:0:0:0: [sdc] CDB: [52803.355019] Read(10): 28 00 00 67 65 3f 00 00 08 00 [52803.355293] sd 2:0:0:0: [sdc] Unhandled error code [52803.355298] sd 2:0:0:0: [sdc] [52803.355301] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355305] sd 2:0:0:0: [sdc] CDB: [52803.355308] Read(10): 28 00 00 a4 20 27 00 00 08 00 [52803.355575] sd 2:0:0:0: [sdc] Unhandled error code [52803.355580] sd 2:0:0:0: [sdc] [52803.355583] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.355587] sd 2:0:0:0: [sdc] CDB: [52803.355589] Read(10): 28 00 00 5d dc 67 00 00 08 00 [52803.356647] sd 2:0:0:0: [sdc] Unhandled error code [52803.356650] sd 2:0:0:0: [sdc] [52803.356652] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.356654] sd 2:0:0:0: [sdc] CDB: [52803.356655] Read(10): 28 00 07 1a dd 3f 00 00 08 00 [52803.357108] sd 2:0:0:0: [sdc] Unhandled error code [52803.357111] sd 2:0:0:0: [sdc] [52803.357113] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.357115] sd 2:0:0:0: [sdc] CDB: [52803.357116] Read(10): 28 00 00 67 65 97 00 00 08 00 [52803.357298] sd 2:0:0:0: [sdc] Unhandled error code [52803.357300] sd 2:0:0:0: [sdc] [52803.357302] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.357304] sd 2:0:0:0: [sdc] CDB: [52803.357306] Read(10): 28 00 07 1a 04 d7 00 00 08 00 [52803.360374] sd 2:0:0:0: [sdc] Unhandled error code [52803.360377] sd 2:0:0:0: [sdc] [52803.360379] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.360382] sd 2:0:0:0: [sdc] CDB: [52803.360383] Read(10): 28 00 00 67 65 b7 00 00 08 00 [52803.360581] sd 2:0:0:0: [sdc] Unhandled error code [52803.360584] sd 2:0:0:0: [sdc] [52803.360586] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.360588] sd 2:0:0:0: [sdc] CDB: [52803.360589] Read(10): 28 00 00 67 65 c7 00 00 08 00 [52803.361352] sd 2:0:0:0: [sdc] Unhandled error code [52803.361355] sd 2:0:0:0: [sdc] [52803.361357] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.361359] sd 2:0:0:0: [sdc] CDB: [52803.361360] Read(10): 28 00 09 bd e1 af 00 00 08 00 [52803.362096] sd 2:0:0:0: [sdc] Unhandled error code [52803.362099] sd 2:0:0:0: [sdc] [52803.362101] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362103] sd 2:0:0:0: [sdc] CDB: [52803.362104] Read(10): 28 00 07 0a 64 e7 00 00 08 00 [52803.362555] sd 2:0:0:0: [sdc] Unhandled error code [52803.362558] sd 2:0:0:0: [sdc] [52803.362560] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362562] sd 2:0:0:0: [sdc] CDB: [52803.362563] Read(10): 28 00 00 67 65 d7 00 00 08 00 [52803.362747] sd 2:0:0:0: [sdc] Unhandled error code [52803.362750] sd 2:0:0:0: [sdc] [52803.362752] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362754] sd 2:0:0:0: [sdc] CDB: [52803.362755] Read(10): 28 00 01 4c 12 6f 00 00 08 00 [52803.362977] sd 2:0:0:0: [sdc] Unhandled error code [52803.362980] sd 2:0:0:0: [sdc] [52803.362982] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.362984] sd 2:0:0:0: [sdc] CDB: [52803.362985] Read(10): 28 00 03 85 43 7f 00 00 08 00 [52803.365197] sd 2:0:0:0: [sdc] Unhandled error code [52803.365200] sd 2:0:0:0: [sdc] [52803.365202] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.365204] sd 2:0:0:0: [sdc] CDB: [52803.365206] Read(10): 28 00 07 15 46 4f 00 00 08 00 [52803.365524] sd 2:0:0:0: [sdc] Unhandled error code [52803.365527] sd 2:0:0:0: [sdc] [52803.365528] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.365531] sd 2:0:0:0: [sdc] CDB: [52803.365532] Read(10): 28 00 07 11 78 8f 00 00 08 00 
[52803.369355] sd 2:0:0:0: [sdc] Unhandled error code [52803.369360] sd 2:0:0:0: [sdc] [52803.369362] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.369365] sd 2:0:0:0: [sdc] CDB: [52803.369366] Read(10): 28 00 09 bd e2 8f 00 00 08 00 [52803.370806] sd 2:0:0:0: [sdc] Unhandled error code [52803.370809] sd 2:0:0:0: [sdc] [52803.370811] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.370814] sd 2:0:0:0: [sdc] CDB: [52803.370815] Read(10): 28 00 07 1a c6 37 00 00 08 00 [52803.371630] sd 2:0:0:0: [sdc] Unhandled error code [52803.371634] sd 2:0:0:0: [sdc] [52803.371636] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.371639] sd 2:0:0:0: [sdc] CDB: [52803.371640] Read(10): 28 00 00 67 66 57 00 00 08 00 [52803.371863] sd 2:0:0:0: [sdc] Unhandled error code [52803.371867] sd 2:0:0:0: [sdc] [52803.371868] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.371871] sd 2:0:0:0: [sdc] CDB: [52803.371872] Read(10): 28 00 00 64 0b df 00 00 08 00 [52803.373467] sd 2:0:0:0: [sdc] Unhandled error code [52803.373470] sd 2:0:0:0: [sdc] [52803.373472] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.373474] sd 2:0:0:0: [sdc] CDB: [52803.373476] Read(10): 28 00 00 60 83 7f 00 00 08 00 [52803.373655] sd 2:0:0:0: [sdc] Unhandled error code [52803.373658] sd 2:0:0:0: [sdc] [52803.373660] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.373662] sd 2:0:0:0: [sdc] CDB: [52803.373663] Read(10): 28 00 00 60 83 7f 00 00 08 00 [52803.374063] sd 2:0:0:0: [sdc] Unhandled error code [52803.374066] sd 2:0:0:0: [sdc] [52803.374068] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.374070] sd 2:0:0:0: [sdc] CDB: [52803.374071] Read(10): 28 00 08 db d5 5f 00 00 08 00 [52803.374602] sd 2:0:0:0: [sdc] Unhandled error code [52803.374605] sd 2:0:0:0: [sdc] [52803.374607] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.374609] sd 2:0:0:0: [sdc] CDB: [52803.374611] Read(10): 28 00 07 1a bf a7 00 00 08 00 [52803.375259] sd 2:0:0:0: [sdc] Unhandled error code [52803.375264] sd 2:0:0:0: [sdc] [52803.375267] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.375270] sd 2:0:0:0: [sdc] CDB: [52803.375272] Read(10): 28 00 00 67 66 87 00 00 08 00 [52803.375515] sd 2:0:0:0: [sdc] Unhandled error code [52803.375520] sd 2:0:0:0: [sdc] [52803.375522] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.375526] sd 2:0:0:0: [sdc] CDB: [52803.375527] Read(10): 28 00 00 62 54 8f 00 00 08 00 [52803.378506] sd 2:0:0:0: [sdc] Unhandled error code [52803.378513] sd 2:0:0:0: [sdc] [52803.378516] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.378520] sd 2:0:0:0: [sdc] CDB: [52803.378522] Read(10): 28 00 00 67 66 bf 00 00 08 00 [52803.381048] sd 2:0:0:0: [sdc] Unhandled error code [52803.381054] sd 2:0:0:0: [sdc] [52803.381057] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381061] sd 2:0:0:0: [sdc] CDB: [52803.381063] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381238] sd 2:0:0:0: [sdc] Unhandled error code [52803.381242] sd 2:0:0:0: [sdc] [52803.381245] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381248] sd 2:0:0:0: [sdc] CDB: [52803.381250] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381382] sd 2:0:0:0: [sdc] Unhandled error code [52803.381386] sd 2:0:0:0: [sdc] [52803.381388] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381392] sd 2:0:0:0: [sdc] CDB: [52803.381394] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.381569] sd 2:0:0:0: [sdc] Unhandled error code [52803.381573] sd 2:0:0:0: 
[sdc] [52803.381575] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.381579] sd 2:0:0:0: [sdc] CDB: [52803.381581] Read(10): 28 00 00 60 ae 77 00 00 08 00 [52803.382295] sd 2:0:0:0: [sdc] Unhandled error code [52803.382300] sd 2:0:0:0: [sdc] [52803.382302] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382306] sd 2:0:0:0: [sdc] CDB: [52803.382307] Read(10): 28 00 00 67 6a 87 00 00 08 00 [52803.382552] sd 2:0:0:0: [sdc] Unhandled error code [52803.382556] sd 2:0:0:0: [sdc] [52803.382558] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382562] sd 2:0:0:0: [sdc] CDB: [52803.382564] Read(10): 28 00 00 67 6a af 00 00 08 00 [52803.382794] sd 2:0:0:0: [sdc] Unhandled error code [52803.382798] sd 2:0:0:0: [sdc] [52803.382801] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.382804] sd 2:0:0:0: [sdc] CDB: [52803.382806] Read(10): 28 00 00 67 6a c7 00 00 08 00 [52803.383269] sd 2:0:0:0: [sdc] Unhandled error code [52803.383274] sd 2:0:0:0: [sdc] [52803.383277] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.383280] sd 2:0:0:0: [sdc] CDB: [52803.383282] Read(10): 28 00 00 67 6a f7 00 00 08 00 [52803.383556] sd 2:0:0:0: [sdc] Unhandled error code [52803.383560] sd 2:0:0:0: [sdc] [52803.383563] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.383566] sd 2:0:0:0: [sdc] CDB: [52803.383568] Read(10): 28 00 00 67 6b 2f 00 00 08 00 [52803.386185] sd 2:0:0:0: [sdc] Unhandled error code [52803.386191] sd 2:0:0:0: [sdc] [52803.386194] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.386198] sd 2:0:0:0: [sdc] CDB: [52803.386200] Read(10): 28 00 01 4c 1b bf 00 00 08 00 [52803.386454] sd 2:0:0:0: [sdc] Unhandled error code [52803.386458] sd 2:0:0:0: [sdc] [52803.386461] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.386465] sd 2:0:0:0: [sdc] CDB: [52803.386467] Read(10): 28 00 07 1a b4 1f 00 00 08 00 [52803.388320] sd 2:0:0:0: [sdc] Unhandled error code [52803.388324] sd 2:0:0:0: [sdc] [52803.388326] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.388328] sd 2:0:0:0: [sdc] CDB: [52803.388329] Read(10): 28 00 09 bd de 17 00 00 08 00 [52803.388836] sd 2:0:0:0: [sdc] Unhandled error code [52803.388838] sd 2:0:0:0: [sdc] [52803.388839] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.388841] sd 2:0:0:0: [sdc] CDB: [52803.388842] Read(10): 28 00 07 57 9f ff 00 00 08 00 [52803.389124] sd 2:0:0:0: [sdc] Unhandled error code [52803.389126] sd 2:0:0:0: [sdc] [52803.389127] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.389129] sd 2:0:0:0: [sdc] CDB: [52803.389130] Read(10): 28 00 00 67 6b 8f 00 00 08 00 [52803.389244] sd 2:0:0:0: [sdc] Unhandled error code [52803.389246] sd 2:0:0:0: [sdc] [52803.389248] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.389249] sd 2:0:0:0: [sdc] CDB: [52803.389250] Read(10): 28 00 07 e9 ee ff 00 00 08 00 [52803.390386] sd 2:0:0:0: [sdc] Unhandled error code [52803.390389] sd 2:0:0:0: [sdc] [52803.390390] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.390392] sd 2:0:0:0: [sdc] CDB: [52803.390393] Read(10): 28 00 07 1a be 0f 00 00 08 00 [52803.390682] sd 2:0:0:0: [sdc] Unhandled error code [52803.390685] sd 2:0:0:0: [sdc] [52803.390686] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.390688] sd 2:0:0:0: [sdc] CDB: [52803.390689] Read(10): 28 00 00 67 6b e7 00 00 08 00 [52803.390804] sd 2:0:0:0: [sdc] Unhandled error code [52803.390806] sd 2:0:0:0: [sdc] [52803.390808] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK 
[52803.390809] sd 2:0:0:0: [sdc] CDB: [52803.390810] Read(10): 28 00 07 ed 17 bf 00 00 08 00 [52803.391449] sd 2:0:0:0: [sdc] Unhandled error code [52803.391451] sd 2:0:0:0: [sdc] [52803.391452] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.391454] sd 2:0:0:0: [sdc] CDB: [52803.391455] Read(10): 28 00 09 bd e5 9f 00 00 08 00 [52803.391956] sd 2:0:0:0: [sdc] Unhandled error code [52803.391958] sd 2:0:0:0: [sdc] [52803.391960] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.391961] sd 2:0:0:0: [sdc] CDB: [52803.391962] Read(10): 28 00 00 b5 86 a7 00 00 08 00 [52803.392293] sd 2:0:0:0: [sdc] Unhandled error code [52803.392295] sd 2:0:0:0: [sdc] [52803.392296] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.392298] sd 2:0:0:0: [sdc] CDB: [52803.392299] Read(10): 28 00 07 18 bf bf 00 00 08 00 [52803.392843] sd 2:0:0:0: [sdc] Unhandled error code [52803.392845] sd 2:0:0:0: [sdc] [52803.392846] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.392848] sd 2:0:0:0: [sdc] CDB: [52803.392849] Read(10): 28 00 00 60 b3 1f 00 00 08 00 [52803.392929] sd 2:0:0:0: [sdc] Unhandled error code [52803.392931] sd 2:0:0:0: [sdc] [52803.392932] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.392934] sd 2:0:0:0: [sdc] CDB: [52803.392935] Read(10): 28 00 00 60 b3 1f 00 00 08 00 [52803.393057] sd 2:0:0:0: [sdc] Unhandled error code [52803.393059] sd 2:0:0:0: [sdc] [52803.393060] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393062] sd 2:0:0:0: [sdc] CDB: [52803.393063] Read(10): 28 00 00 60 83 9f 00 00 08 00 [52803.393286] sd 2:0:0:0: [sdc] Unhandled error code [52803.393288] sd 2:0:0:0: [sdc] [52803.393289] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393291] sd 2:0:0:0: [sdc] CDB: [52803.393292] Read(10): 28 00 00 67 6b bf 00 00 08 00 [52803.393720] sd 2:0:0:0: [sdc] Unhandled error code [52803.393722] sd 2:0:0:0: [sdc] [52803.393723] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393725] sd 2:0:0:0: [sdc] CDB: [52803.393725] Read(10): 28 00 00 60 b2 17 00 00 08 00 [52803.393806] sd 2:0:0:0: [sdc] Unhandled error code [52803.393808] sd 2:0:0:0: [sdc] [52803.393809] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393810] sd 2:0:0:0: [sdc] CDB: [52803.393811] Read(10): 28 00 00 60 b2 17 00 00 08 00 [52803.393892] sd 2:0:0:0: [sdc] Unhandled error code [52803.393894] sd 2:0:0:0: [sdc] [52803.393895] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393896] sd 2:0:0:0: [sdc] CDB: [52803.393897] Read(10): 28 00 00 60 b2 17 00 00 08 00 [52803.393974] sd 2:0:0:0: [sdc] Unhandled error code [52803.393976] sd 2:0:0:0: [sdc] [52803.393977] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.393978] sd 2:0:0:0: [sdc] CDB: [52803.393979] Read(10): 28 00 00 60 b2 17 00 00 08 00 [52803.394298] sd 2:0:0:0: [sdc] Unhandled error code [52803.394300] sd 2:0:0:0: [sdc] [52803.394302] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.394303] sd 2:0:0:0: [sdc] CDB: [52803.394304] Read(10): 28 00 00 5d a6 a7 00 00 08 00 [52803.395577] sd 2:0:0:0: [sdc] Unhandled error code [52803.395580] sd 2:0:0:0: [sdc] [52803.395582] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.395584] sd 2:0:0:0: [sdc] CDB: [52803.395585] Read(10): 28 00 00 00 01 9f 00 00 08 00 [52803.395721] sd 2:0:0:0: [sdc] Unhandled error code [52803.395724] sd 2:0:0:0: [sdc] [52803.395725] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.395726] sd 2:0:0:0: [sdc] CDB: [52803.395727] Read(10): 28 00 00 00 01 67 
00 00 08 00 [52803.395843] sd 2:0:0:0: [sdc] Unhandled error code [52803.395845] sd 2:0:0:0: [sdc] [52803.395846] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.395847] sd 2:0:0:0: [sdc] CDB: [52803.395848] Read(10): 28 00 02 a8 33 77 00 00 08 00 [52803.395960] sd 2:0:0:0: [sdc] Unhandled error code [52803.395962] sd 2:0:0:0: [sdc] [52803.395963] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.395965] sd 2:0:0:0: [sdc] CDB: [52803.395965] Read(10): 28 00 00 b5 ae 7f 00 00 08 00 [52803.396077] sd 2:0:0:0: [sdc] Unhandled error code [52803.396079] sd 2:0:0:0: [sdc] [52803.396080] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396082] sd 2:0:0:0: [sdc] CDB: [52803.396083] Read(10): 28 00 00 63 64 bf 00 00 08 00 [52803.396193] sd 2:0:0:0: [sdc] Unhandled error code [52803.396195] sd 2:0:0:0: [sdc] [52803.396196] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396198] sd 2:0:0:0: [sdc] CDB: [52803.396199] Read(10): 28 00 07 1a e2 e7 00 00 08 00 [52803.396313] sd 2:0:0:0: [sdc] Unhandled error code [52803.396315] sd 2:0:0:0: [sdc] [52803.396316] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396318] sd 2:0:0:0: [sdc] CDB: [52803.396319] Read(10): 28 00 07 1a b9 87 00 00 08 00 [52803.396435] sd 2:0:0:0: [sdc] Unhandled error code [52803.396437] sd 2:0:0:0: [sdc] [52803.396438] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396439] sd 2:0:0:0: [sdc] CDB: [52803.396441] Read(10): 28 00 02 ce 8e df 00 00 08 00 [52803.396555] sd 2:0:0:0: [sdc] Unhandled error code [52803.396557] sd 2:0:0:0: [sdc] [52803.396558] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396560] sd 2:0:0:0: [sdc] CDB: [52803.396561] Read(10): 28 00 0e 66 6d f7 00 00 08 00 [52803.396769] sd 2:0:0:0: [sdc] Unhandled error code [52803.396770] sd 2:0:0:0: [sdc] [52803.396772] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396773] sd 2:0:0:0: [sdc] CDB: [52803.396774] Read(10): 28 00 07 1a e4 2f 00 00 08 00 [52803.396886] sd 2:0:0:0: [sdc] Unhandled error code [52803.396888] sd 2:0:0:0: [sdc] [52803.396889] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.396890] sd 2:0:0:0: [sdc] CDB: [52803.396891] Read(10): 28 00 00 63 d4 3f 00 00 08 00 [52803.397002] sd 2:0:0:0: [sdc] Unhandled error code [52803.397004] sd 2:0:0:0: [sdc] [52803.397005] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.397007] sd 2:0:0:0: [sdc] CDB: [52803.397007] Read(10): 28 00 07 1a e4 1f 00 00 08 00 [52803.400074] sd 2:0:0:0: [sdc] Unhandled error code [52803.400078] sd 2:0:0:0: [sdc] [52803.400079] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.400081] sd 2:0:0:0: [sdc] CDB: [52803.400082] Read(10): 28 00 07 16 c7 5f 00 00 08 00 [52803.400318] sd 2:0:0:0: [sdc] Unhandled error code [52803.400320] sd 2:0:0:0: [sdc] [52803.400322] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.400323] sd 2:0:0:0: [sdc] CDB: [52803.400324] Read(10): 28 00 00 60 01 87 00 00 08 00 [52803.400408] sd 2:0:0:0: [sdc] Unhandled error code [52803.400410] sd 2:0:0:0: [sdc] [52803.400412] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.400413] sd 2:0:0:0: [sdc] CDB: [52803.400414] Read(10): 28 00 00 60 01 0f 00 00 08 00 [52803.400564] sd 2:0:0:0: [sdc] Unhandled error code [52803.400566] sd 2:0:0:0: [sdc] [52803.400568] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.400569] sd 2:0:0:0: [sdc] CDB: [52803.400570] Read(10): 28 00 00 5d d1 d7 00 00 08 00 [52803.400841] sd 2:0:0:0: [sdc] Unhandled error code [52803.400843] 
sd 2:0:0:0: [sdc] [52803.400844] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.400846] sd 2:0:0:0: [sdc] CDB: [52803.400847] Read(10): 28 00 07 1a e3 47 00 00 08 00 [52803.401151] sd 2:0:0:0: [sdc] Unhandled error code [52803.401153] sd 2:0:0:0: [sdc] [52803.401155] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.401156] sd 2:0:0:0: [sdc] CDB: [52803.401157] Read(10): 28 00 07 1a b9 1f 00 00 08 00 [52803.401310] sd 2:0:0:0: [sdc] Unhandled error code [52803.401312] sd 2:0:0:0: [sdc] [52803.401313] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.401315] sd 2:0:0:0: [sdc] CDB: [52803.401316] Read(10): 28 00 00 a4 1b 57 00 00 08 00 [52803.401877] sd 2:0:0:0: [sdc] Unhandled error code [52803.401879] sd 2:0:0:0: [sdc] [52803.401880] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.401881] sd 2:0:0:0: [sdc] CDB: [52803.401882] Read(10): 28 00 0e 66 35 47 00 00 08 00 [52803.402032] sd 2:0:0:0: [sdc] Unhandled error code [52803.402033] sd 2:0:0:0: [sdc] [52803.402034] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402036] sd 2:0:0:0: [sdc] CDB: [52803.402037] Read(10): 28 00 06 30 69 ff 00 00 08 00 [52803.402148] sd 2:0:0:0: [sdc] Unhandled error code [52803.402150] sd 2:0:0:0: [sdc] [52803.402151] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402153] sd 2:0:0:0: [sdc] CDB: [52803.402154] Read(10): 28 00 09 bd d8 77 00 00 08 00 [52803.402263] sd 2:0:0:0: [sdc] Unhandled error code [52803.402265] sd 2:0:0:0: [sdc] [52803.402266] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402267] sd 2:0:0:0: [sdc] CDB: [52803.402268] Read(10): 28 00 00 5d ff 77 00 00 08 00 [52803.402376] sd 2:0:0:0: [sdc] Unhandled error code [52803.402378] sd 2:0:0:0: [sdc] [52803.402379] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402381] sd 2:0:0:0: [sdc] CDB: [52803.402382] Read(10): 28 00 00 5d ff 7f 00 00 08 00 [52803.402490] sd 2:0:0:0: [sdc] Unhandled error code [52803.402492] sd 2:0:0:0: [sdc] [52803.402493] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402495] sd 2:0:0:0: [sdc] CDB: [52803.402496] Read(10): 28 00 00 00 01 2f 00 00 08 00 [52803.402602] sd 2:0:0:0: [sdc] Unhandled error code [52803.402604] sd 2:0:0:0: [sdc] [52803.402605] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402607] sd 2:0:0:0: [sdc] CDB: [52803.402608] Read(10): 28 00 00 b5 ac 8f 00 00 08 00 [52803.402715] sd 2:0:0:0: [sdc] Unhandled error code [52803.402717] sd 2:0:0:0: [sdc] [52803.402719] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402720] sd 2:0:0:0: [sdc] CDB: [52803.402721] Read(10): 28 00 00 e1 18 ff 00 00 08 00 [52803.402829] sd 2:0:0:0: [sdc] Unhandled error code [52803.402831] sd 2:0:0:0: [sdc] [52803.402833] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.402834] sd 2:0:0:0: [sdc] CDB: [52803.402835] Read(10): 28 00 09 bd ea cf 00 00 08 00 [52803.403999] sd 2:0:0:0: [sdc] Unhandled error code [52803.404001] sd 2:0:0:0: [sdc] [52803.404003] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52803.404005] sd 2:0:0:0: [sdc] CDB: [52803.404006] Read(10): 28 00 07 1a b8 f7 00 00 08 00 [52832.950225] sd 2:0:0:0: [sdc] Unhandled error code [52832.950230] sd 2:0:0:0: [sdc] [52832.950233] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52832.950235] sd 2:0:0:0: [sdc] CDB: [52832.950237] Write(10): 2a 00 00 60 bf 7f 00 00 08 00 [52832.950247] blk_update_request: 1077 callbacks suppressed [52832.950250] end_request: I/O error, dev sdc, sector 6340479 [52832.950253] 
quiet_error: 1077 callbacks suppressed [52832.950256] Buffer I/O error on device sdc1, logical block 792552 [52832.950258] lost page write due to I/O error on sdc1 [52832.950269] sd 2:0:0:0: [sdc] Unhandled error code [52832.950272] sd 2:0:0:0: [sdc] [52832.950273] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK [52832.950276] sd 2:0:0:0: [sdc] CDB: [52832.950277] Write(10): 2a 00 01 a5 f1 4f 00 00 08 00 [52832.950285] end_request: I/O error, dev sdc, sector 27652431 [52832.950287] Buffer I/O error on device sdc1, logical block 3456546 [52832.950289] lost page write due to I/O error on sdc1

    Read the article

  • Help with Collision Resolution?

    - by Milo
    I'm trying to learn about physics by trying to make a simplified GTA 2 clone. My only problem is collision resolution. Everything else works great. I have a rigid body class and from there cars and a wheel class: class RigidBody extends Entity { //linear private Vector2D velocity = new Vector2D(); private Vector2D forces = new Vector2D(); private OBB2D predictionRect = new OBB2D(new Vector2D(), 1.0f, 1.0f, 0.0f); private float mass; private Vector2D deltaVec = new Vector2D(); private Vector2D v = new Vector2D(); //angular private float angularVelocity; private float torque; private float inertia; //graphical private Vector2D halfSize = new Vector2D(); private Bitmap image; private Matrix mat = new Matrix(); private float[] Vector2Ds = new float[2]; private Vector2D tangent = new Vector2D(); private static Vector2D worldRelVec = new Vector2D(); private static Vector2D relWorldVec = new Vector2D(); private static Vector2D pointVelVec = new Vector2D(); public RigidBody() { //set these defaults so we don't get divide by zeros mass = 1.0f; inertia = 1.0f; setLayer(LAYER_OBJECTS); } protected void rectChanged() { if(getWorld() != null) { getWorld().updateDynamic(this); } } //intialize out parameters public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //store physical parameters this.halfSize = halfSize; this.mass = mass; image = bitmap; inertia = (1.0f / 20.0f) * (halfSize.x * halfSize.x) * (halfSize.y * halfSize.y) * mass; RectF rect = new RectF(); float scalar = 10.0f; rect.left = (int)-halfSize.x * scalar; rect.top = (int)-halfSize.y * scalar; rect.right = rect.left + (int)(halfSize.x * 2.0f * scalar); rect.bottom = rect.top + (int)(halfSize.y * 2.0f * scalar); setRect(rect); predictionRect.set(rect); } public void setLocation(Vector2D position, float angle) { getRect().set(position, getWidth(), getHeight(), angle); rectChanged(); } public void setPredictionLocation(Vector2D position, float angle) { getPredictionRect().set(position, getWidth(), getHeight(), angle); } public void setPredictionCenter(Vector2D center) { getPredictionRect().moveTo(center); } public void setPredictionAngle(float angle) { predictionRect.setAngle(angle); } public Vector2D getPosition() { return getRect().getCenter(); } public OBB2D getPredictionRect() { return predictionRect; } @Override public void update(float timeStep) { doUpdate(false,timeStep); } public void doUpdate(boolean prediction, float timeStep) { //integrate physics //linear Vector2D acceleration = Vector2D.scalarDivide(forces, mass); if(prediction) { Vector2D velocity = Vector2D.add(this.velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); c = Vector2D.add(getRect().getCenter(), Vector2D.scalarMultiply(velocity , timeStep)); setPredictionCenter(c); //forces = new Vector2D(0,0); //clear forces } else { velocity.x += (acceleration.x * timeStep); velocity.y += (acceleration.y * timeStep); //velocity = Vector2D.add(velocity, Vector2D.scalarMultiply(acceleration, timeStep)); Vector2D c = getRect().getCenter(); v.x = getRect().getCenter().getX() + (velocity.x * timeStep); v.y = getRect().getCenter().getY() + (velocity.y * timeStep); deltaVec.x = v.x - c.x; deltaVec.y = v.y - c.y; deltaVec.normalize(); setCenter(v.x, v.y); forces.x = 0; //clear forces forces.y = 0; } //angular float angAcc = torque / inertia; if(prediction) { float angularVelocity = this.angularVelocity + angAcc * timeStep; setPredictionAngle(getAngle() + angularVelocity * timeStep); //torque = 0; //clear torque } else { 
angularVelocity += angAcc * timeStep; setAngle(getAngle() + angularVelocity * timeStep); torque = 0; //clear torque } } public void updatePrediction(float timeStep) { doUpdate(true, timeStep); } //take a relative Vector2D and make it a world Vector2D public Vector2D relativeToWorld(Vector2D relative) { mat.reset(); Vector2Ds[0] = relative.x; Vector2Ds[1] = relative.y; mat.postRotate(JMath.radToDeg(getAngle())); mat.mapVectors(Vector2Ds); relWorldVec.x = Vector2Ds[0]; relWorldVec.y = Vector2Ds[1]; return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //take a world Vector2D and make it a relative Vector2D public Vector2D worldToRelative(Vector2D world) { mat.reset(); Vector2Ds[0] = world.x; Vector2Ds[1] = world.y; mat.postRotate(JMath.radToDeg(-getAngle())); mat.mapVectors(Vector2Ds); return new Vector2D(Vector2Ds[0], Vector2Ds[1]); } //velocity of a point on body public Vector2D pointVelocity(Vector2D worldOffset) { tangent.x = -worldOffset.y; tangent.y = worldOffset.x; return Vector2D.add( Vector2D.scalarMultiply(tangent, angularVelocity) , velocity); } public void applyForce(Vector2D worldForce, Vector2D worldOffset) { //add linear force forces.x += worldForce.x; forces.y += worldForce.y; //add associated torque torque += Vector2D.cross(worldOffset, worldForce); } @Override public void draw( GraphicsContext c) { c.drawRotatedScaledBitmap(image, getPosition().x, getPosition().y, getWidth(), getHeight(), getAngle()); } public Vector2D getVelocity() { return velocity; } public void setVelocity(Vector2D velocity) { this.velocity = velocity; } public Vector2D getDeltaVec() { return deltaVec; } } Vehicle public class Wheel { private Vector2D forwardVec; private Vector2D sideVec; private float wheelTorque; private float wheelSpeed; private float wheelInertia; private float wheelRadius; private Vector2D position = new Vector2D(); public Wheel(Vector2D position, float radius) { this.position = position; setSteeringAngle(0); wheelSpeed = 0; wheelRadius = radius; wheelInertia = (radius * radius) * 1.1f; } public void setSteeringAngle(float newAngle) { Matrix mat = new Matrix(); float []vecArray = new float[4]; //forward Vector vecArray[0] = 0; vecArray[1] = 1; //side Vector vecArray[2] = -1; vecArray[3] = 0; mat.postRotate(newAngle / (float)Math.PI * 180.0f); mat.mapVectors(vecArray); forwardVec = new Vector2D(vecArray[0], vecArray[1]); sideVec = new Vector2D(vecArray[2], vecArray[3]); } public void addTransmissionTorque(float newValue) { wheelTorque += newValue; } public float getWheelSpeed() { return wheelSpeed; } public Vector2D getAnchorPoint() { return position; } public Vector2D calculateForce(Vector2D relativeGroundSpeed, float timeStep, boolean prediction) { //calculate speed of tire patch at ground Vector2D patchSpeed = Vector2D.scalarMultiply(Vector2D.scalarMultiply( Vector2D.negative(forwardVec), wheelSpeed), wheelRadius); //get velocity difference between ground and patch Vector2D velDifference = Vector2D.add(relativeGroundSpeed , patchSpeed); //project ground speed onto side axis Float forwardMag = new Float(0.0f); Vector2D sideVel = velDifference.project(sideVec); Vector2D forwardVel = velDifference.project(forwardVec, forwardMag); //calculate super fake friction forces //calculate response force Vector2D responseForce = Vector2D.scalarMultiply(Vector2D.negative(sideVel), 2.0f); responseForce = Vector2D.subtract(responseForce, forwardVel); float topSpeed = 500.0f; //calculate torque on wheel wheelTorque += forwardMag * wheelRadius; //integrate total torque into wheel wheelSpeed += 
wheelTorque / wheelInertia * timeStep; //top speed limit (kind of a hack) if(wheelSpeed > topSpeed) { wheelSpeed = topSpeed; } //clear our transmission torque accumulator wheelTorque = 0; //return force acting on body return responseForce; } public void setTransmissionTorque(float newValue) { wheelTorque = newValue; } public float getTransmissionTourque() { return wheelTorque; } public void setWheelSpeed(float speed) { wheelSpeed = speed; } } //our vehicle object public class Vehicle extends RigidBody { private Wheel [] wheels = new Wheel[4]; private boolean throttled = false; public void initialize(Vector2D halfSize, float mass, Bitmap bitmap) { //front wheels wheels[0] = new Wheel(new Vector2D(halfSize.x, halfSize.y), 0.45f); wheels[1] = new Wheel(new Vector2D(-halfSize.x, halfSize.y), 0.45f); //rear wheels wheels[2] = new Wheel(new Vector2D(halfSize.x, -halfSize.y), 0.75f); wheels[3] = new Wheel(new Vector2D(-halfSize.x, -halfSize.y), 0.75f); super.initialize(halfSize, mass, bitmap); } public void setSteering(float steering) { float steeringLock = 0.13f; //apply steering angle to front wheels wheels[0].setSteeringAngle(steering * steeringLock); wheels[1].setSteeringAngle(steering * steeringLock); } public void setThrottle(float throttle, boolean allWheel) { float torque = 85.0f; throttled = true; //apply transmission torque to back wheels if (allWheel) { wheels[0].addTransmissionTorque(throttle * torque); wheels[1].addTransmissionTorque(throttle * torque); } wheels[2].addTransmissionTorque(throttle * torque); wheels[3].addTransmissionTorque(throttle * torque); } public void setBrakes(float brakes) { float brakeTorque = 15.0f; //apply brake torque opposing wheel vel for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); wheel.addTransmissionTorque(-wheelVel * brakeTorque * brakes); } } public void doUpdate(float timeStep, boolean prediction) { for (Wheel wheel : wheels) { float wheelVel = wheel.getWheelSpeed(); //apply negative force to naturally slow down car if(!throttled && !prediction) wheel.addTransmissionTorque(-wheelVel * 0.11f); Vector2D worldWheelOffset = relativeToWorld(wheel.getAnchorPoint()); Vector2D worldGroundVel = pointVelocity(worldWheelOffset); Vector2D relativeGroundSpeed = worldToRelative(worldGroundVel); Vector2D relativeResponseForce = wheel.calculateForce(relativeGroundSpeed, timeStep,prediction); Vector2D worldResponseForce = relativeToWorld(relativeResponseForce); applyForce(worldResponseForce, worldWheelOffset); } //no throttling yet this frame throttled = false; if(prediction) { super.updatePrediction(timeStep); } else { super.update(timeStep); } } @Override public void update(float timeStep) { doUpdate(timeStep,false); } public void updatePrediction(float timeStep) { doUpdate(timeStep,true); } public void inverseThrottle() { float scalar = 0.2f; for(Wheel wheel : wheels) { wheel.setTransmissionTorque(-wheel.getTransmissionTourque() * scalar); wheel.setWheelSpeed(-wheel.getWheelSpeed() * 0.1f); } } } And my big hack collision resolution: private void update() { camera.setPosition((vehicle.getPosition().x * camera.getScale()) - ((getWidth() ) / 2.0f), (vehicle.getPosition().y * camera.getScale()) - ((getHeight() ) / 2.0f)); //camera.move(input.getAnalogStick().getStickValueX() * 15.0f, input.getAnalogStick().getStickValueY() * 15.0f); if(input.isPressed(ControlButton.BUTTON_GAS)) { vehicle.setThrottle(1.0f, false); } if(input.isPressed(ControlButton.BUTTON_STEAL_CAR)) { vehicle.setThrottle(-1.0f, false); } 
if(input.isPressed(ControlButton.BUTTON_BRAKE)) { vehicle.setBrakes(1.0f); } vehicle.setSteering(input.getAnalogStick().getStickValueX()); //vehicle.update(16.6666666f / 1000.0f); boolean colided = false; vehicle.updatePrediction(16.66666f / 1000.0f); List<Entity> buildings = world.queryStaticSolid(vehicle,vehicle.getPredictionRect()); if(buildings.size() > 0) { colided = true; } if(!colided) { vehicle.update(16.66f / 1000.0f); } else { Vector2D delta = vehicle.getDeltaVec(); vehicle.setVelocity(Vector2D.negative(vehicle.getVelocity().multiply(0.2f)). add(delta.multiply(-1.0f))); vehicle.inverseThrottle(); } } Here is OBB public class OBB2D { // Corners of the box, where 0 is the lower left. private Vector2D corner[] = new Vector2D[4]; private Vector2D center = new Vector2D(); private Vector2D extents = new Vector2D(); private RectF boundingRect = new RectF(); private float angle; //Two edges of the box extended away from corner[0]. private Vector2D axis[] = new Vector2D[2]; private double origin[] = new double[2]; public OBB2D(Vector2D center, float w, float h, float angle) { set(center,w,h,angle); } public OBB2D(float left, float top, float width, float height) { set(new Vector2D(left + (width / 2), top + (height / 2)),width,height,0.0f); } public void set(Vector2D center,float w, float h,float angle) { Vector2D X = new Vector2D( (float)Math.cos(angle), (float)Math.sin(angle)); Vector2D Y = new Vector2D((float)-Math.sin(angle), (float)Math.cos(angle)); X = X.multiply( w / 2); Y = Y.multiply( h / 2); corner[0] = center.subtract(X).subtract(Y); corner[1] = center.add(X).subtract(Y); corner[2] = center.add(X).add(Y); corner[3] = center.subtract(X).add(Y); computeAxes(); extents.x = w / 2; extents.y = h / 2; computeDimensions(center,angle); } private void computeDimensions(Vector2D center,float angle) { this.center.x = center.x; this.center.y = center.y; this.angle = angle; boundingRect.left = Math.min(Math.min(corner[0].x, corner[3].x), Math.min(corner[1].x, corner[2].x)); boundingRect.top = Math.min(Math.min(corner[0].y, corner[1].y),Math.min(corner[2].y, corner[3].y)); boundingRect.right = Math.max(Math.max(corner[1].x, corner[2].x), Math.max(corner[0].x, corner[3].x)); boundingRect.bottom = Math.max(Math.max(corner[2].y, corner[3].y),Math.max(corner[0].y, corner[1].y)); } public void set(RectF rect) { set(new Vector2D(rect.centerX(),rect.centerY()),rect.width(),rect.height(),0.0f); } // Returns true if other overlaps one dimension of this. private boolean overlaps1Way(OBB2D other) { for (int a = 0; a < axis.length; ++a) { double t = other.corner[0].dot(axis[a]); // Find the extent of box 2 on axis a double tMin = t; double tMax = t; for (int c = 1; c < corner.length; ++c) { t = other.corner[c].dot(axis[a]); if (t < tMin) { tMin = t; } else if (t > tMax) { tMax = t; } } // We have to subtract off the origin // See if [tMin, tMax] intersects [0, 1] if ((tMin > 1 + origin[a]) || (tMax < origin[a])) { // There was no intersection along this dimension; // the boxes cannot possibly overlap. return false; } } // There was no dimension along which there is no intersection. // Therefore the boxes overlap. return true; } //Updates the axes after the corners move. Assumes the //corners actually form a rectangle. private void computeAxes() { axis[0] = corner[1].subtract(corner[0]); axis[1] = corner[3].subtract(corner[0]); // Make the length of each axis 1/edge length so we know any // dot product must be less than 1 to fall within the edge. 
for (int a = 0; a < axis.length; ++a) { axis[a] = axis[a].divide((axis[a].length() * axis[a].length())); origin[a] = corner[0].dot(axis[a]); } } public void moveTo(Vector2D center) { Vector2D centroid = (corner[0].add(corner[1]).add(corner[2]).add(corner[3])).divide(4.0f); Vector2D translation = center.subtract(centroid); for (int c = 0; c < 4; ++c) { corner[c] = corner[c].add(translation); } computeAxes(); computeDimensions(center,angle); } // Returns true if the intersection of the boxes is non-empty. public boolean overlaps(OBB2D other) { if(right() < other.left()) { return false; } if(bottom() < other.top()) { return false; } if(left() > other.right()) { return false; } if(top() > other.bottom()) { return false; } if(other.getAngle() == 0.0f && getAngle() == 0.0f) { return true; } return overlaps1Way(other) && other.overlaps1Way(this); } public Vector2D getCenter() { return center; } public float getWidth() { return extents.x * 2; } public float getHeight() { return extents.y * 2; } public void setAngle(float angle) { set(center,getWidth(),getHeight(),angle); } public float getAngle() { return angle; } public void setSize(float w,float h) { set(center,w,h,angle); } public float left() { return boundingRect.left; } public float right() { return boundingRect.right; } public float bottom() { return boundingRect.bottom; } public float top() { return boundingRect.top; } public RectF getBoundingRect() { return boundingRect; } public boolean overlaps(float left, float top, float right, float bottom) { if(right() < left) { return false; } if(bottom() < top) { return false; } if(left() > right) { return false; } if(top() > bottom) { return false; } return true; } }; What I do is when I predict a hit on the car, I force it back. It does not work that well and seems like a bad idea. What could I do to have more proper collision resolution. Such that if I hit a wall I will never get stuck in it and if I hit the side of a wall I can steer my way out of it. Thanks I found this nice ppt. It talks about pulling objects apart and calculating new velocities. How could I calc new velocities in my case? http://www.google.ca/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CC8QFjAB&url=http%3A%2F%2Fcoitweb.uncc.edu%2F~tbarnes2%2FGameDesignFall05%2FSlides%2FCh4.2-CollDet.ppt&ei=x4ucULy5M6-N0QGRy4D4Cg&usg=AFQjCNG7FVDXWRdLv8_-T5qnFyYld53cTQ&cad=rja
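For reference, the standard alternative to predicting a hit and forcing the car back is the approach the linked slides describe: first separate the bodies along the collision normal (positional correction), then change only the velocity component along that normal (an impulse), leaving the tangential component alone so the car can still slide and steer along the wall. Below is a minimal, hypothetical sketch written against the RigidBody/Vector2D classes in the question; the collision normal and penetration depth are assumed inputs (the OBB2D overlap test above does not compute them; the smallest-overlap axis from the SAT test is the usual source), and the restitution value is arbitrary.

// Minimal sketch of "pull apart, then fix velocity" against a static wall.
// Assumes: 'normal' is a unit vector pointing from the wall toward the car,
// 'penetration' is how far the car's OBB overlaps the wall. Neither value is
// produced by the OBB2D code above; track the smallest-overlap SAT axis to get them.
void resolveAgainstStaticWall(RigidBody body, Vector2D normal, float penetration) {
    // 1. Positional correction: move the body out of the wall so it cannot sink in.
    Vector2D c = body.getRect().getCenter();
    body.setCenter(c.x + normal.x * penetration, c.y + normal.y * penetration);

    // 2. Velocity correction: remove (and partially reflect) only the component of
    //    velocity that points into the wall; the tangential part is untouched, so
    //    steering along the wall still works.
    Vector2D v = body.getVelocity();
    float vn = v.x * normal.x + v.y * normal.y;    // negative while moving into the wall
    if (vn < 0) {
        float restitution = 0.2f;                  // 0 = dead stop, 1 = perfect bounce (arbitrary)
        float dv = -(1.0f + restitution) * vn;     // change in normal velocity for an immovable wall
        body.setVelocity(new Vector2D(v.x + dv * normal.x, v.y + dv * normal.y));
    }
}

With two dynamic bodies you would split dv between them in inverse proportion to their masses, and you could apply the impulse at the contact point (applyForce-style) to make the car spin realistically, but the static-wall case above is enough to stop the car getting stuck while still letting it slide free.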

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL, CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ;

Test 4 – MAXOOR

We can now re-run our test on the MAXOOR (MAX out of row) table:

DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ;

MAXOOR Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning

The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

The Hidden Problem

There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row; it only takes effect as data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan: Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I'll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea. Conclusion This post is not about tuning queries that access columns containing big strings. It isn't about the internal differences between TEXT and MAX data types either. It isn't even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn't that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for 'just being slow' – who knows. 5 seconds isn't awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production that's 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that's not the point either. The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won't get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won't get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi
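A practical footnote to the Hidden Problem section above: although the optimizer keeps no statistics about how much MAX or TEXT data is currently stored in-row versus off-row, the physical split can still be inspected by hand. A minimal sketch using sys.dm_db_index_physical_stats against the TestMAXOOR table (DETAILED mode reads every page, so treat it as an occasional diagnostic rather than part of a workload):

SELECT  alloc_unit_type_desc,   -- IN_ROW_DATA, LOB_DATA or ROW_OVERFLOW_DATA
        page_count,
        record_count
FROM    sys.dm_db_index_physical_stats
        (DB_ID(), OBJECT_ID(N'dbo.TestMAXOOR'), 1, NULL, 'DETAILED') ;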

    Read the article

  • XNA Screen Manager problem with transitions

    - by NexAddo
I'm having issues using the game state management example in the game I am developing. I have no issues with my first three screens transitioning between one another. I have a main menu screen, a splash screen and a high score screen that cycle: mainMenuScreen->splashScreen->highScoreScreen->mainMenuScreen The screens change every 15 seconds. Transition times public MainMenuScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.0); currentCreditAmount = Global.CurrentCredits; } public SplashScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } public HighScoreScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } public GamePlayScreen() { TransitionOnTime = TimeSpan.FromSeconds(0.5); TransitionOffTime = TimeSpan.FromSeconds(0.5); } When a user inserts credits they can play the game after pressing start mainMenuScreen->splashScreen->highScoreScreen->(loops forever) || || || ===========Credits In============= || Start || \/ LoadingScreen || Start || \/ GamePlayScreen During each of these transitions, between screens, the same code is used, which exits (removes) all current active screens and respects transitions, then adds the new screen to the screen manager: foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); //AddScreen takes a new screen to manage and the controlling player ScreenManager.AddScreen(new NameOfScreenHere(), null); Each screen is removed from the ScreenManager with ExitScreen() and, using this function, each screen transition is respected. The problem I am having is with my gamePlayScreen. When the current game is finished and the transition is complete for the gamePlayScreen, it should be removed and the next screens should be added to the ScreenManager. GamePlayScreen Code Snippet private void FinishCurrentGame() { AudioManager.StopSounds(); this.UnloadContent(); if (Global.SaveDevice.IsReady) Stats.Save(); if (HighScoreScreen.IsInHighscores(timeLimit)) { foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); Global.TimeRemaining = timeLimit; ScreenManager.AddScreen(new BackgroundScreen(), null); ScreenManager.AddScreen(new MessageBoxScreen("Enter your Initials", true), null); } else { foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); ScreenManager.AddScreen(new BackgroundScreen(), null); ScreenManager.AddScreen(new MainMenuScreen(), null); } } The problem is that when isExiting is set to true by screen.ExitScreen() for the gamePlayScreen, the transition never completes and the screen is never removed from the ScreenManager. Every other screen that I add and remove with this same technique fully transitions On/Off and is removed at the appropriate time from the ScreenManager, but not my GamePlayScreen. Has anyone that has used the GameStateManagement example experienced this issue, or can someone see the mistake I am making? EDIT This is what I tracked down. When the game is done, I call foreach (GameScreen screen in ScreenManager.GetScreens()) screen.ExitScreen(); to start the transition off process for the gameplay screen. At this point there is only 1 screen on the ScreenManager stack. The gamePlay screen gets isExiting set to true and starts to transition off.
Right after the above call to ExitScreen() I add a background screen and menu screen to the screenManager: ScreenManager.AddScreen(new background(), null); ScreenManager.AddScreen(new Menu(), null); The count of the ScreenManager is now 3. What I noticed while stepping through the updates for GameScreen and ScreenManager, the gameplay screen never gets to the point where the transistion process finishes so the ScreenManager can remove it from the stack. This anomaly does not happen to any of my other screens when I switch between them. Screen Manager Code #region File Description //----------------------------------------------------------------------------- // ScreenManager.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #define DEMO #region Using Statements using System; using System.Diagnostics; using System.Collections.Generic; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Content; using Microsoft.Xna.Framework.Graphics; using PerformanceUtility.GameDebugTools; #endregion namespace GameStateManagement { /// <summary> /// The screen manager is a component which manages one or more GameScreen /// instances. It maintains a stack of screens, calls their Update and Draw /// methods at the appropriate times, and automatically routes input to the /// topmost active screen. /// </summary> public class ScreenManager : DrawableGameComponent { #region Fields List<GameScreen> screens = new List<GameScreen>(); List<GameScreen> screensToUpdate = new List<GameScreen>(); InputState input = new InputState(); SpriteBatch spriteBatch; SpriteFont font; Texture2D blankTexture; bool isInitialized; bool getOut; bool traceEnabled; #if DEBUG DebugSystem debugSystem; Stopwatch stopwatch = new Stopwatch(); bool debugTextEnabled; #endif #endregion #region Properties /// <summary> /// A default SpriteBatch shared by all the screens. This saves /// each screen having to bother creating their own local instance. /// </summary> public SpriteBatch SpriteBatch { get { return spriteBatch; } } /// <summary> /// A default font shared by all the screens. This saves /// each screen having to bother loading their own local copy. /// </summary> public SpriteFont Font { get { return font; } } public Rectangle ScreenRectangle { get { return new Rectangle(0, 0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height); } } /// <summary> /// If true, the manager prints out a list of all the screens /// each time it is updated. This can be useful for making sure /// everything is being added and removed at the right times. /// </summary> public bool TraceEnabled { get { return traceEnabled; } set { traceEnabled = value; } } #if DEBUG public bool DebugTextEnabled { get { return debugTextEnabled; } set { debugTextEnabled = value; } } public DebugSystem DebugSystem { get { return debugSystem; } } #endif #endregion #region Initialization /// <summary> /// Constructs a new screen manager component. /// </summary> public ScreenManager(Game game) : base(game) { // we must set EnabledGestures before we can query for them, but // we don't assume the game wants to read them. //TouchPanel.EnabledGestures = GestureType.None; } /// <summary> /// Initializes the screen manager component. 
/// </summary> public override void Initialize() { base.Initialize(); #if DEBUG debugSystem = DebugSystem.Initialize(Game, "Fonts/MenuFont"); #endif isInitialized = true; } /// <summary> /// Load your graphics content. /// </summary> protected override void LoadContent() { // Load content belonging to the screen manager. ContentManager content = Game.Content; spriteBatch = new SpriteBatch(GraphicsDevice); font = content.Load<SpriteFont>(@"Fonts\menufont"); blankTexture = content.Load<Texture2D>(@"Textures\Backgrounds\blank"); // Tell each of the screens to load their content. foreach (GameScreen screen in screens) { screen.LoadContent(); } } /// <summary> /// Unload your graphics content. /// </summary> protected override void UnloadContent() { // Tell each of the screens to unload their content. foreach (GameScreen screen in screens) { screen.UnloadContent(); } } #endregion #region Update and Draw /// <summary> /// Allows each screen to run logic. /// </summary> public override void Update(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Update", Color.Blue); if (debugTextEnabled && getOut == false) { debugSystem.FpsCounter.Visible = true; debugSystem.TimeRuler.Visible = true; debugSystem.TimeRuler.ShowLog = true; getOut = true; } else if (debugTextEnabled == false) { getOut = false; debugSystem.FpsCounter.Visible = false; debugSystem.TimeRuler.Visible = false; debugSystem.TimeRuler.ShowLog = false; } #endif // Read the keyboard and gamepad. input.Update(); // Make a copy of the master screen list, to avoid confusion if // the process of updating one screen adds or removes others. screensToUpdate.Clear(); foreach (GameScreen screen in screens) screensToUpdate.Add(screen); bool otherScreenHasFocus = !Game.IsActive; bool coveredByOtherScreen = false; // Loop as long as there are screens waiting to be updated. while (screensToUpdate.Count > 0) { // Pop the topmost screen off the waiting list. GameScreen screen = screensToUpdate[screensToUpdate.Count - 1]; screensToUpdate.RemoveAt(screensToUpdate.Count - 1); // Update the screen. screen.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen); if (screen.ScreenState == ScreenState.TransitionOn || screen.ScreenState == ScreenState.Active) { // If this is the first active screen we came across, // give it a chance to handle input. if (!otherScreenHasFocus) { screen.HandleInput(input); otherScreenHasFocus = true; } // If this is an active non-popup, inform any subsequent // screens that they are covered by it. if (!screen.IsPopup) coveredByOtherScreen = true; } } // Print debug trace? if (traceEnabled) TraceScreens(); #if DEBUG debugSystem.TimeRuler.EndMark("Update"); #endif } /// <summary> /// Prints a list of all the screens, for debugging. /// </summary> void TraceScreens() { List<string> screenNames = new List<string>(); foreach (GameScreen screen in screens) screenNames.Add(screen.GetType().Name); Debug.WriteLine(string.Join(", ", screenNames.ToArray())); } /// <summary> /// Tells each screen to draw itself. 
/// </summary> public override void Draw(GameTime gameTime) { #if DEBUG debugSystem.TimeRuler.StartFrame(); debugSystem.TimeRuler.BeginMark("Draw", Color.Yellow); #endif foreach (GameScreen screen in screens) { if (screen.ScreenState == ScreenState.Hidden) continue; screen.Draw(gameTime); } #if DEBUG debugSystem.TimeRuler.EndMark("Draw"); #endif #if DEMO SpriteBatch.Begin(); SpriteBatch.DrawString(font, "DEMO - NOT FOR RESALE", new Vector2(20, 80), Color.White); SpriteBatch.End(); #endif } #endregion #region Public Methods /// <summary> /// Adds a new screen to the screen manager. /// </summary> public void AddScreen(GameScreen screen, PlayerIndex? controllingPlayer) { screen.ControllingPlayer = controllingPlayer; screen.ScreenManager = this; screen.IsExiting = false; // If we have a graphics device, tell the screen to load content. if (isInitialized) { screen.LoadContent(); } screens.Add(screen); } /// <summary> /// Removes a screen from the screen manager. You should normally /// use GameScreen.ExitScreen instead of calling this directly, so /// the screen can gradually transition off rather than just being /// instantly removed. /// </summary> public void RemoveScreen(GameScreen screen) { // If we have a graphics device, tell the screen to unload content. if (isInitialized) { screen.UnloadContent(); } screens.Remove(screen); screensToUpdate.Remove(screen); } /// <summary> /// Expose an array holding all the screens. We return a copy rather /// than the real master list, because screens should only ever be added /// or removed using the AddScreen and RemoveScreen methods. /// </summary> public GameScreen[] GetScreens() { return screens.ToArray(); } /// <summary> /// Helper draws a translucent black fullscreen sprite, used for fading /// screens in and out, and for darkening the background behind popups. /// </summary> public void FadeBackBufferToBlack(float alpha) { Viewport viewport = GraphicsDevice.Viewport; spriteBatch.Begin(); spriteBatch.Draw(blankTexture, new Rectangle(0, 0, viewport.Width, viewport.Height), Color.Black * alpha); spriteBatch.End(); } #endregion } } Game Screen Parent of GamePlayScreen #region File Description //----------------------------------------------------------------------------- // GameScreen.cs // // Microsoft XNA Community Game Platform // Copyright (C) Microsoft Corporation. All rights reserved. //----------------------------------------------------------------------------- #endregion #region Using Statements using System; using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Input; //using Microsoft.Xna.Framework.Input.Touch; using System.IO; #endregion namespace GameStateManagement { /// <summary> /// Enum describes the screen transition state. /// </summary> public enum ScreenState { TransitionOn, Active, TransitionOff, Hidden, } /// <summary> /// A screen is a single layer that has update and draw logic, and which /// can be combined with other layers to build up a complex menu system. /// For instance the main menu, the options menu, the "are you sure you /// want to quit" message box, and the main game itself are all implemented /// as screens. /// </summary> public abstract class GameScreen { #region Properties /// <summary> /// Normally when one screen is brought up over the top of another, /// the first screen will transition off to make room for the new /// one. This property indicates whether the screen is only a small /// popup, in which case screens underneath it do not need to bother /// transitioning off. 
/// </summary> public bool IsPopup { get { return isPopup; } protected set { isPopup = value; } } bool isPopup = false; /// <summary> /// Indicates how long the screen takes to /// transition on when it is activated. /// </summary> public TimeSpan TransitionOnTime { get { return transitionOnTime; } protected set { transitionOnTime = value; } } TimeSpan transitionOnTime = TimeSpan.Zero; /// <summary> /// Indicates how long the screen takes to /// transition off when it is deactivated. /// </summary> public TimeSpan TransitionOffTime { get { return transitionOffTime; } protected set { transitionOffTime = value; } } TimeSpan transitionOffTime = TimeSpan.Zero; /// <summary> /// Gets the current position of the screen transition, ranging /// from zero (fully active, no transition) to one (transitioned /// fully off to nothing). /// </summary> public float TransitionPosition { get { return transitionPosition; } protected set { transitionPosition = value; } } float transitionPosition = 1; /// <summary> /// Gets the current alpha of the screen transition, ranging /// from 1 (fully active, no transition) to 0 (transitioned /// fully off to nothing). /// </summary> public float TransitionAlpha { get { return 1f - TransitionPosition; } } /// <summary> /// Gets the current screen transition state. /// </summary> public ScreenState ScreenState { get { return screenState; } protected set { screenState = value; } } ScreenState screenState = ScreenState.TransitionOn; /// <summary> /// There are two possible reasons why a screen might be transitioning /// off. It could be temporarily going away to make room for another /// screen that is on top of it, or it could be going away for good. /// This property indicates whether the screen is exiting for real: /// if set, the screen will automatically remove itself as soon as the /// transition finishes. /// </summary> public bool IsExiting { get { return isExiting; } protected internal set { isExiting = value; } } bool isExiting = false; /// <summary> /// Checks whether this screen is active and can respond to user input. /// </summary> public bool IsActive { get { return !otherScreenHasFocus && (screenState == ScreenState.TransitionOn || screenState == ScreenState.Active); } } bool otherScreenHasFocus; /// <summary> /// Gets the manager that this screen belongs to. /// </summary> public ScreenManager ScreenManager { get { return screenManager; } internal set { screenManager = value; } } ScreenManager screenManager; public KeyboardState KeyboardState { get {return Keyboard.GetState();} } /// <summary> /// Gets the index of the player who is currently controlling this screen, /// or null if it is accepting input from any player. This is used to lock /// the game to a specific player profile. The main menu responds to input /// from any connected gamepad, but whichever player makes a selection from /// this menu is given control over all subsequent screens, so other gamepads /// are inactive until the controlling player returns to the main menu. /// </summary> public PlayerIndex? ControllingPlayer { get { return controllingPlayer; } internal set { controllingPlayer = value; } } PlayerIndex? controllingPlayer; /// <summary> /// Gets whether or not this screen is serializable. If this is true, /// the screen will be recorded into the screen manager's state and /// its Serialize and Deserialize methods will be called as appropriate. /// If this is false, the screen will be ignored during serialization. /// By default, all screens are assumed to be serializable. 
/// </summary> public bool IsSerializable { get { return isSerializable; } protected set { isSerializable = value; } } bool isSerializable = true; #endregion #region Initialization /// <summary> /// Load graphics content for the screen. /// </summary> public virtual void LoadContent() { } /// <summary> /// Unload content for the screen. /// </summary> public virtual void UnloadContent() { } #endregion #region Update and Draw /// <summary> /// Allows the screen to run logic, such as updating the transition position. /// Unlike HandleInput, this method is called regardless of whether the screen /// is active, hidden, or in the middle of a transition. /// </summary> public virtual void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) { this.otherScreenHasFocus = otherScreenHasFocus; if (isExiting) { // If the screen is going away to die, it should transition off. screenState = ScreenState.TransitionOff; if (!UpdateTransition(gameTime, transitionOffTime, 1)) { // When the transition finishes, remove the screen. ScreenManager.RemoveScreen(this); } } else if (coveredByOtherScreen) { // If the screen is covered by another, it should transition off. if (UpdateTransition(gameTime, transitionOffTime, 1)) { // Still busy transitioning. screenState = ScreenState.TransitionOff; } else { // Transition finished! screenState = ScreenState.Hidden; } } else { // Otherwise the screen should transition on and become active. if (UpdateTransition(gameTime, transitionOnTime, -1)) { // Still busy transitioning. screenState = ScreenState.TransitionOn; } else { // Transition finished! screenState = ScreenState.Active; } } } /// <summary> /// Helper for updating the screen transition position. /// </summary> bool UpdateTransition(GameTime gameTime, TimeSpan time, int direction) { // How much should we move by? float transitionDelta; if (time == TimeSpan.Zero) transitionDelta = 1; else transitionDelta = (float)(gameTime.ElapsedGameTime.TotalMilliseconds / time.TotalMilliseconds); // Update the transition position. transitionPosition += transitionDelta * direction; // Did we reach the end of the transition? if (((direction < 0) && (transitionPosition <= 0)) || ((direction > 0) && (transitionPosition >= 1))) { transitionPosition = MathHelper.Clamp(transitionPosition, 0, 1); return false; } // Otherwise we are still busy transitioning. return true; } /// <summary> /// Allows the screen to handle user input. Unlike Update, this method /// is only called when the screen is active, and not when some other /// screen has taken the focus. /// </summary> public virtual void HandleInput(InputState input) { } public KeyboardState currentKeyState; public KeyboardState lastKeyState; public bool IsKeyHit(Keys key) { if (currentKeyState.IsKeyDown(key) && lastKeyState.IsKeyUp(key)) return true; return false; } /// <summary> /// This is called when the screen should draw itself. /// </summary> public virtual void Draw(GameTime gameTime) { } #endregion #region Public Methods /// <summary> /// Tells the screen to serialize its state into the given stream. /// </summary> public virtual void Serialize(Stream stream) { } /// <summary> /// Tells the screen to deserialize its state from the given stream. /// </summary> public virtual void Deserialize(Stream stream) { } /// <summary> /// Tells the screen to go away. Unlike ScreenManager.RemoveScreen, which /// instantly kills the screen, this method respects the transition timings /// and will give the screen a chance to gradually transition off. 
/// </summary> public void ExitScreen() { if (TransitionOffTime == TimeSpan.Zero) { // If the screen has a zero transition time, remove it immediately. ScreenManager.RemoveScreen(this); } else { // Otherwise flag that it should transition off and then exit. isExiting = true; } } #endregion #region Helper Methods /// <summary> /// A helper method which loads assets using the screen manager's /// associated game content loader. /// </summary> /// <typeparam name="T">Type of asset.</typeparam> /// <param name="assetName">Asset name, relative to the loader root /// directory, and not including the .xnb extension.</param> /// <returns></returns> public T Load<T>(string assetName) { return ScreenManager.Game.Content.Load<T>(assetName); } #endregion } }
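One detail worth ruling out, since the GamePlayScreen source itself is not shown above: in this sample all of the transition bookkeeping (advancing transitionPosition and eventually calling ScreenManager.RemoveScreen) happens inside GameScreen.Update, so any screen that overrides Update has to keep calling base.Update every frame, including after FinishCurrentGame has run. A minimal sketch of the pattern the framework expects; the class name and the game-specific placeholder are illustrative and not code from the question:

using System;
using Microsoft.Xna.Framework;
using GameStateManagement;

public class ExampleGamePlayScreen : GameScreen
{
    public ExampleGamePlayScreen()
    {
        TransitionOnTime  = TimeSpan.FromSeconds(0.5);
        TransitionOffTime = TimeSpan.FromSeconds(0.5);
    }

    public override void Update(GameTime gameTime, bool otherScreenHasFocus,
                                bool coveredByOtherScreen)
    {
        // Always let GameScreen advance the transition; returning early once the
        // game has finished means TransitionOff never completes and
        // ScreenManager.RemoveScreen is never reached.
        base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen);

        if (IsActive && !IsExiting)
        {
            // game-specific update logic goes here (placeholder)
        }
    }
}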

    Read the article

  • Ubuntu 14.04 Failed to load module udlfb

    - by jar276705
    DisplayLink doesn't load and run. The adapter is recognized and /dev/FB1 is created. USB bus info: Bus 001 Device 006: ID 17e9:0198 DisplayLink Xorg.0.log: X.Org X Server 1.15.1 Release Date: 2014-04-13 [ 44708.386] X Protocol Version 11, Revision 0 [ 44708.389] Build Operating System: Linux 3.2.0-37-generic i686 Ubuntu [ 44708.392] Current Operating System: Linux rrl 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:08:14 UTC 2014 i686 [ 44708.392] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-24-generic root=UUID=6b719a77-29e0-4668-8f16-57d0d3a73a3f ro quiet splash vt.handoff=7 [ 44708.399] Build Date: 16 April 2014 01:40:08PM [ 44708.402] xorg-server 2:1.15.1-0ubuntu2 (For technical support please see http://www.ubuntu.com/support) [ 44708.405] Current version of pixman: 0.30.2 [ 44708.412] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 44708.412] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 44708.427] (==) Log file: "/var/log/Xorg.0.log", Time: Thu May 1 09:38:27 2014 [ 44708.431] (==) Using config file: "/etc/X11/xorg.conf" [ 44708.434] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 44708.435] (==) ServerLayout "X.org Configured" [ 44708.435] (**) |-->Screen "DisplayLinkScreen" (0) [ 44708.435] (**) | |-->Monitor "DisplayLinkMonitor" [ 44708.435] (**) | |-->Device "DisplayLinkDevice" [ 44708.435] (**) |-->Screen "Screen0" (1) [ 44708.435] (**) | |-->Monitor "Monitor0" [ 44708.435] (**) | |-->Device "Card0" [ 44708.435] (**) |-->Input Device "Mouse0" [ 44708.435] (**) |-->Input Device "Keyboard0" [ 44708.435] (==) Automatically adding devices [ 44708.435] (==) Automatically enabling devices [ 44708.435] (==) Automatically adding GPU devices [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 44708.435] Entry deleted from font path. [ 44708.435] (**) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins, /usr/share/fonts/X11/misc, /usr/share/fonts/X11/100dpi/:unscaled, /usr/share/fonts/X11/Type1, /usr/share/fonts/X11/100dpi, built-ins [ 44708.435] (**) ModulePath set to "/usr/lib/xorg/modules" [ 44708.435] (WW) Hotplugging is on, devices using drivers 'kbd', 'mouse' or 'vmmouse' will be disabled. 
[ 44708.435] (WW) Disabling Mouse0 [ 44708.435] (WW) Disabling Keyboard0 [ 44708.435] (II) Loader magic: 0xb77106c0 [ 44708.435] (II) Module ABI versions: [ 44708.435] X.Org ANSI C Emulation: 0.4 [ 44708.435] X.Org Video Driver: 15.0 [ 44708.435] X.Org XInput driver : 20.0 [ 44708.435] X.Org Server Extension : 8.0 [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card0) [ 44708.436] (II) xfree86: Adding drm device (/dev/dri/card1) [ 44708.437] (--) PCI:*(0:1:5:0) 1002:9616:105b:0e26 rev 0, Mem @ 0xf0000000/134217728, 0xfeae0000/65536, 0xfe900000/1048576, I/O @ 0x0000b000/256 [ 44708.441] Initializing built-in extension Generic Event Extension [ 44708.444] Initializing built-in extension SHAPE [ 44708.448] Initializing built-in extension MIT-SHM [ 44708.452] Initializing built-in extension XInputExtension [ 44708.456] Initializing built-in extension XTEST [ 44708.460] Initializing built-in extension BIG-REQUESTS [ 44708.464] Initializing built-in extension SYNC [ 44708.468] Initializing built-in extension XKEYBOARD [ 44708.471] Initializing built-in extension XC-MISC [ 44708.475] Initializing built-in extension SECURITY [ 44708.479] Initializing built-in extension XINERAMA [ 44708.483] Initializing built-in extension XFIXES [ 44708.487] Initializing built-in extension RENDER [ 44708.491] Initializing built-in extension RANDR [ 44708.494] Initializing built-in extension COMPOSITE [ 44708.498] Initializing built-in extension DAMAGE [ 44708.502] Initializing built-in extension MIT-SCREEN-SAVER [ 44708.506] Initializing built-in extension DOUBLE-BUFFER [ 44708.510] Initializing built-in extension RECORD [ 44708.513] Initializing built-in extension DPMS [ 44708.517] Initializing built-in extension Present [ 44708.521] Initializing built-in extension DRI3 [ 44708.525] Initializing built-in extension X-Resource [ 44708.528] Initializing built-in extension XVideo [ 44708.532] Initializing built-in extension XVideo-MotionCompensation [ 44708.535] Initializing built-in extension SELinux [ 44708.539] Initializing built-in extension XFree86-VidModeExtension [ 44708.542] Initializing built-in extension XFree86-DGA [ 44708.546] Initializing built-in extension XFree86-DRI [ 44708.549] Initializing built-in extension DRI2 [ 44708.549] (II) "glx" will be loaded. This was enabled by default and also specified in the config file. [ 44708.549] (WW) "xmir" is not to be loaded by default. Skipping. 
[ 44708.549] (II) LoadModule: "glx" [ 44708.549] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so [ 44708.550] (II) Module glx: vendor="X.Org Foundation" [ 44708.550] compiled for 1.15.1, module version = 1.0.0 [ 44708.550] ABI class: X.Org Server Extension, version 8.0 [ 44708.550] (==) AIGLX enabled [ 44708.553] Loading extension GLX [ 44708.553] (II) LoadModule: "udlfb" [ 44708.554] (WW) Warning, couldn't open module udlfb [ 44708.554] (II) UnloadModule: "udlfb" [ 44708.554] (II) Unloading udlfb [ 44708.554] (EE) Failed to load module "udlfb" (module does not exist, 0) [ 44708.554] (II) LoadModule: "modesetting" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.554] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 0.8.1 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (==) Matched fglrx as autoconfigured driver 0 [ 44708.554] (==) Matched ati as autoconfigured driver 1 [ 44708.554] (==) Matched fglrx as autoconfigured driver 2 [ 44708.554] (==) Matched ati as autoconfigured driver 3 [ 44708.554] (==) Matched modesetting as autoconfigured driver 4 [ 44708.554] (==) Matched fbdev as autoconfigured driver 5 [ 44708.554] (==) Matched vesa as autoconfigured driver 6 [ 44708.554] (==) Assigned the driver to the xf86ConfigLayout [ 44708.554] (II) LoadModule: "fglrx" [ 44708.554] (WW) Warning, couldn't open module fglrx [ 44708.554] (II) UnloadModule: "fglrx" [ 44708.554] (II) Unloading fglrx [ 44708.554] (EE) Failed to load module "fglrx" (module does not exist, 0) [ 44708.554] (II) LoadModule: "ati" [ 44708.554] (II) Loading /usr/lib/xorg/modules/drivers/ati_drv.so [ 44708.554] (II) Module ati: vendor="X.Org Foundation" [ 44708.554] compiled for 1.15.0, module version = 7.3.0 [ 44708.554] Module class: X.Org Video Driver [ 44708.554] ABI class: X.Org Video Driver, version 15.0 [ 44708.554] (II) LoadModule: "radeon" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/radeon_drv.so [ 44708.555] (II) Module radeon: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 7.3.0 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "modesetting" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so [ 44708.555] (II) Module modesetting: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.8.1 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) UnloadModule: "modesetting" [ 44708.555] (II) Unloading modesetting [ 44708.555] (II) Failed to load module "modesetting" (already loaded, 0) [ 44708.555] (II) LoadModule: "fbdev" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 44708.555] (II) Module fbdev: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 0.4.4 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) LoadModule: "vesa" [ 44708.555] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 44708.555] (II) Module vesa: vendor="X.Org Foundation" [ 44708.555] compiled for 1.15.0, module version = 2.3.3 [ 44708.555] Module class: X.Org Video Driver [ 44708.555] ABI class: X.Org Video Driver, version 15.0 [ 44708.555] (II) modesetting: Driver for Modesetting Kernel Drivers: kms [ 44708.555] (II) RADEON: Driver for 
ATI Radeon chipsets: [ 44708.560] (II) FBDEV: driver for framebuffer: fbdev [ 44708.560] (II) VESA: driver for VESA chipsets: vesa [ 44708.560] (--) using VT number 7 [ 44708.578] (II) modesetting(0): using drv /dev/dri/card0 [ 44708.578] (II) modesetting(G0): using drv /dev/dri/card1 [ 44708.578] (WW) Falling back to old probe method for fbdev [ 44708.578] (II) Loading sub module "fbdevhw" [ 44708.578] (II) LoadModule: "fbdevhw" [ 44708.578] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 44708.578] (II) Module fbdevhw: vendor="X.Org Foundation" [ 44708.578] compiled for 1.15.1, module version = 0.0.2 [ 44708.578] ABI class: X.Org Video Driver, version 15.0 [ 44708.578] (WW) Falling back to old probe method for vesa [ 44708.578] (**) modesetting(0): Depth 16, (--) framebuffer bpp 16 [ 44708.578] (==) modesetting(0): RGB weight 565 [ 44708.578] (==) modesetting(0): Default visual is TrueColor [ 44708.578] (II) modesetting(0): ShadowFB: preferred YES, enabled YES [ 44708.608] (II) modesetting(0): Output VGA-0 using monitor section DisplayLinkMonitor [ 44708.610] (II) modesetting(0): Output DVI-0 has no monitor section [ 44708.640] (II) modesetting(0): EDID for output VGA-0 [ 44708.640] (II) modesetting(0): Manufacturer: ACR Model: 74 Serial#: 2483090993 [ 44708.640] (II) modesetting(0): Year: 2009 Week: 40 [ 44708.640] (II) modesetting(0): EDID Version: 1.3 [ 44708.640] (II) modesetting(0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.640] (II) modesetting(0): Sync: Separate [ 44708.640] (II) modesetting(0): Max Image Size [cm]: horiz.: 53 vert.: 29 [ 44708.640] (II) modesetting(0): Gamma: 2.20 [ 44708.640] (II) modesetting(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.641] (II) modesetting(0): First detailed timing is preferred mode [ 44708.641] (II) modesetting(0): redX: 0.649 redY: 0.338 greenX: 0.289 greenY: 0.609 [ 44708.641] (II) modesetting(0): blueX: 0.146 blueY: 0.070 whiteX: 0.313 whiteY: 0.329 [ 44708.641] (II) modesetting(0): Supported established timings: [ 44708.641] (II) modesetting(0): 720x400@70Hz [ 44708.641] (II) modesetting(0): 640x480@60Hz [ 44708.641] (II) modesetting(0): 640x480@72Hz [ 44708.641] (II) modesetting(0): 640x480@75Hz [ 44708.641] (II) modesetting(0): 800x600@56Hz [ 44708.641] (II) modesetting(0): 800x600@60Hz [ 44708.641] (II) modesetting(0): 800x600@72Hz [ 44708.641] (II) modesetting(0): 800x600@75Hz [ 44708.641] (II) modesetting(0): 1024x768@60Hz [ 44708.641] (II) modesetting(0): 1024x768@70Hz [ 44708.641] (II) modesetting(0): 1024x768@75Hz [ 44708.641] (II) modesetting(0): 1280x1024@75Hz [ 44708.641] (II) modesetting(0): Manufacturer's mask: 0 [ 44708.641] (II) modesetting(0): Supported standard timings: [ 44708.641] (II) modesetting(0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.641] (II) modesetting(0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.641] (II) modesetting(0): #2: hsize: 1440 vsize 900 refresh: 60 vid: 149 [ 44708.641] (II) modesetting(0): #3: hsize: 1440 vsize 900 refresh: 75 vid: 3989 [ 44708.641] (II) modesetting(0): #4: hsize: 1600 vsize 1200 refresh: 60 vid: 16553 [ 44708.641] (II) modesetting(0): #5: hsize: 1680 vsize 1050 refresh: 60 vid: 179 [ 44708.641] (II) modesetting(0): Supported detailed timing: [ 44708.641] (II) modesetting(0): clock: 138.5 MHz Image Size: 531 x 298 mm [ 44708.641] (II) modesetting(0): h_active: 1920 h_sync: 1968 h_sync_end 2000 h_blank_end 2080 h_border: 0 [ 44708.641] (II) modesetting(0): v_active: 1080 v_sync: 1083 v_sync_end 1088 
v_blanking: 1111 v_border: 0 [ 44708.641] (II) modesetting(0): Monitor name: H243H [ 44708.641] (II) modesetting(0): Ranges: V min: 56 V max: 76 Hz, H min: 31 H max: 83 kHz, PixClock max 185 MHz [ 44708.641] (II) modesetting(0): Serial No: LEW0C0044002 [ 44708.641] (II) modesetting(0): EDID (in hex): [ 44708.641] (II) modesetting(0): 00ffffffffffff000472740031f60094 [ 44708.641] (II) modesetting(0): 2813010368351d78ea6085a6564a9c25 [ 44708.641] (II) modesetting(0): 125054afcf008180714f9500950fa940 [ 44708.641] (II) modesetting(0): b300010101011a3680a070381f403020 [ 44708.641] (II) modesetting(0): 3500132a2100001a000000fc00483234 [ 44708.642] (II) modesetting(0): 33480a20202020202020000000fd0038 [ 44708.642] (II) modesetting(0): 4c1f5312000a202020202020000000ff [ 44708.642] (II) modesetting(0): 004c45573043303034343030320a003c [ 44708.642] (II) modesetting(0): Printing probed modes for output VGA-0 [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz UeP) [ 44708.642] (II) modesetting(0): Modeline "1920x1080"x59.9 138.50 1920 1968 2000 2080 1080 1083 1088 1111 +hsync -vsync (66.6 kHz eP) [ 44708.642] (II) modesetting(0): Modeline "1600x1200"x60.0 162.00 1600 1664 1856 2160 1200 1201 1204 1250 +hsync +vsync (75.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1680x1050"x60.0 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync (65.3 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x75.0 136.75 1440 1536 1688 1936 900 903 909 942 -hsync +vsync (70.6 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1440x900"x59.9 106.50 1440 1520 1672 1904 900 903 909 934 -hsync +vsync (55.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.642] (II) modesetting(0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.642] (II) modesetting(0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.645] (II) modesetting(0): EDID for output DVI-0 [ 44708.645] (II) modesetting(0): Output VGA-0 connected [ 44708.645] (II) modesetting(0): Output DVI-0 disconnected [ 44708.645] (II) modesetting(0): 
Using user preference for initial modes [ 44708.645] (II) modesetting(0): Output VGA-0 using initial mode 1280x1024 [ 44708.645] (II) modesetting(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.645] (==) modesetting(0): DPI set to (96, 96) [ 44708.645] (II) Loading sub module "fb" [ 44708.645] (II) LoadModule: "fb" [ 44708.645] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.645] (II) Module fb: vendor="X.Org Foundation" [ 44708.645] compiled for 1.15.1, module version = 1.0.0 [ 44708.645] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.645] (II) Loading sub module "shadow" [ 44708.645] (II) LoadModule: "shadow" [ 44708.646] (II) Loading /usr/lib/xorg/modules/libshadow.so [ 44708.646] (II) Module shadow: vendor="X.Org Foundation" [ 44708.646] compiled for 1.15.1, module version = 1.1.0 [ 44708.646] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.646] (**) modesetting(G0): Depth 16, (--) framebuffer bpp 16 [ 44708.646] (==) modesetting(G0): RGB weight 565 [ 44708.646] (==) modesetting(G0): Default visual is TrueColor [ 44708.646] (II) modesetting(G0): ShadowFB: preferred NO, enabled NO [ 44708.727] (II) modesetting(G0): Output DVI-1-0 using monitor section DisplayLinkMonitor [ 44708.808] (II) modesetting(G0): EDID for output DVI-1-0 [ 44708.808] (II) modesetting(G0): Manufacturer: WDE Model: 1702 Serial#: 0 [ 44708.808] (II) modesetting(G0): Year: 2005 Week: 14 [ 44708.808] (II) modesetting(G0): EDID Version: 1.3 [ 44708.808] (II) modesetting(G0): Analog Display Input, Input Voltage Level: 0.700/0.700 V [ 44708.808] (II) modesetting(G0): Sync: Separate [ 44708.808] (II) modesetting(G0): Max Image Size [cm]: horiz.: 34 vert.: 27 [ 44708.808] (II) modesetting(G0): Gamma: 2.20 [ 44708.808] (II) modesetting(G0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display [ 44708.808] (II) modesetting(G0): Default color space is primary color space [ 44708.808] (II) modesetting(G0): First detailed timing is preferred mode [ 44708.808] (II) modesetting(G0): GTF timings supported [ 44708.808] (II) modesetting(G0): redX: 0.643 redY: 0.352 greenX: 0.283 greenY: 0.608 [ 44708.808] (II) modesetting(G0): blueX: 0.147 blueY: 0.102 whiteX: 0.313 whiteY: 0.329 [ 44708.808] (II) modesetting(G0): Supported established timings: [ 44708.808] (II) modesetting(G0): 720x400@70Hz [ 44708.808] (II) modesetting(G0): 640x480@60Hz [ 44708.808] (II) modesetting(G0): 640x480@67Hz [ 44708.808] (II) modesetting(G0): 640x480@72Hz [ 44708.808] (II) modesetting(G0): 640x480@75Hz [ 44708.808] (II) modesetting(G0): 800x600@56Hz [ 44708.808] (II) modesetting(G0): 800x600@60Hz [ 44708.808] (II) modesetting(G0): 800x600@72Hz [ 44708.808] (II) modesetting(G0): 800x600@75Hz [ 44708.808] (II) modesetting(G0): 832x624@75Hz [ 44708.808] (II) modesetting(G0): 1024x768@60Hz [ 44708.808] (II) modesetting(G0): 1024x768@70Hz [ 44708.808] (II) modesetting(G0): 1024x768@75Hz [ 44708.809] (II) modesetting(G0): 1280x1024@75Hz [ 44708.809] (II) modesetting(G0): Manufacturer's mask: 0 [ 44708.809] (II) modesetting(G0): Supported standard timings: [ 44708.809] (II) modesetting(G0): #0: hsize: 1280 vsize 1024 refresh: 60 vid: 32897 [ 44708.809] (II) modesetting(G0): #1: hsize: 1152 vsize 864 refresh: 75 vid: 20337 [ 44708.809] (II) modesetting(G0): Supported detailed timing: [ 44708.809] (II) modesetting(G0): clock: 108.0 MHz Image Size: 338 x 270 mm [ 44708.809] (II) modesetting(G0): h_active: 1280 h_sync: 1328 h_sync_end 1440 h_blank_end 1688 h_border: 0 [ 44708.809] (II) modesetting(G0): v_active: 
1024 v_sync: 1025 v_sync_end 1028 v_blanking: 1066 v_border: 0 [ 44708.809] (II) modesetting(G0): Ranges: V min: 50 V max: 75 Hz, H min: 30 H max: 82 kHz, PixClock max 145 MHz [ 44708.809] (II) modesetting(G0): Monitor name: WDE LCM-17v2 [ 44708.809] (II) modesetting(G0): Serial No: 0 [ 44708.809] (II) modesetting(G0): EDID (in hex): [ 44708.809] (II) modesetting(G0): 00ffffffffffff005c85021700000000 [ 44708.809] (II) modesetting(G0): 0e0f010368221b78ef8bc5a45a489b25 [ 44708.809] (II) modesetting(G0): 1a5054bfef008180714f010101010101 [ 44708.809] (II) modesetting(G0): 010101010101302a009851002a403070 [ 44708.809] (II) modesetting(G0): 1300520e1100001e000000fd00324b1e [ 44708.809] (II) modesetting(G0): 520e000a202020202020000000fc0057 [ 44708.809] (II) modesetting(G0): 4445204c434d2d313776320a000000ff [ 44708.809] (II) modesetting(G0): 00300a202020202020202020202000e7 [ 44708.809] (II) modesetting(G0): Printing probed modes for output DVI-1-0 [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x60.0 108.00 1280 1328 1440 1688 1024 1025 1028 1066 +hsync +vsync (64.0 kHz UeP) [ 44708.809] (II) modesetting(G0): Modeline "1280x1024"x75.0 135.00 1280 1296 1440 1688 1024 1025 1028 1066 +hsync +vsync (80.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x960"x60.0 108.00 1280 1376 1488 1800 960 961 964 1000 +hsync +vsync (60.0 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x74.9 106.50 1280 1360 1488 1696 800 803 809 838 -hsync +vsync (62.8 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x800"x59.8 83.50 1280 1352 1480 1680 800 803 809 831 +hsync -vsync (49.7 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1152x864"x75.0 108.00 1152 1216 1344 1600 864 865 868 900 +hsync +vsync (67.5 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x74.9 102.25 1280 1360 1488 1696 768 771 778 805 +hsync -vsync (60.3 kHz e) [ 44708.809] (II) modesetting(G0): Modeline "1280x768"x59.9 79.50 1280 1344 1472 1664 768 771 778 798 -hsync +vsync (47.8 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x75.1 78.80 1024 1040 1136 1312 768 769 772 800 +hsync +vsync (60.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x70.1 75.00 1024 1048 1184 1328 768 771 777 806 -hsync -vsync (56.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "1024x576"x60.0 46.97 1024 1064 1168 1312 576 577 580 597 -hsync +vsync (35.8 kHz) [ 44708.810] (II) modesetting(G0): Modeline "832x624"x74.6 57.28 832 864 928 1152 624 625 628 667 -hsync -vsync (49.7 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x72.2 50.00 800 856 976 1040 600 637 643 666 +hsync +vsync (48.1 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x75.0 49.50 800 816 896 1056 600 601 604 625 +hsync +vsync (46.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "848x480"x60.0 33.75 848 864 976 1088 480 486 494 517 +hsync +vsync (31.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x75.0 31.50 640 656 720 840 480 481 484 500 -hsync -vsync (37.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x72.8 31.50 640 664 704 832 480 489 491 520 -hsync -vsync (37.9 kHz e) [ 44708.810] (II) modesetting(G0): 
Modeline "640x480"x66.7 30.24 640 704 768 864 480 483 486 525 -hsync -vsync (35.0 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz e) [ 44708.810] (II) modesetting(G0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated. [ 44708.810] (==) modesetting(G0): DPI set to (96, 96) [ 44708.810] (II) Loading sub module "fb" [ 44708.810] (II) LoadModule: "fb" [ 44708.810] (II) Loading /usr/lib/xorg/modules/libfb.so [ 44708.810] (II) Module fb: vendor="X.Org Foundation" [ 44708.810] compiled for 1.15.1, module version = 1.0.0 [ 44708.811] ABI class: X.Org ANSI C Emulation, version 0.4 [ 44708.811] (II) UnloadModule: "radeon" [ 44708.811] (II) Unloading radeon [ 44708.811] (II) UnloadModule: "fbdev" [ 44708.811] (II) Unloading fbdev [ 44708.811] (II) UnloadSubModule: "fbdevhw" [ 44708.811] (II) Unloading fbdevhw [ 44708.811] (II) UnloadModule: "vesa" [ 44708.811] (II) Unloading vesa [ 44708.811] (==) modesetting(G0): Backing store enabled [ 44708.811] (==) modesetting(G0): Silken mouse enabled [ 44708.812] (II) modesetting(G0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(G0): DPMS enabled [ 44708.812] (WW) modesetting(G0): Option "fbdev" is not used [ 44708.812] (==) modesetting(0): Backing store enabled [ 44708.812] (==) modesetting(0): Silken mouse enabled [ 44708.812] (II) modesetting(0): RandR 1.2 enabled, ignore the following RandR disabled message. [ 44708.812] (==) modesetting(0): DPMS enabled [ 44708.812] (WW) modesetting(0): Option "fbdev" is not used [ 44708.856] (--) RandR disabled [ 44708.867] (II) SELinux: Disabled on system [ 44708.868] (II) AIGLX: Screen 0 is not DRI2 capable [ 44708.868] (EE) AIGLX: reverting to software rendering [ 44708.878] (II) AIGLX: Loaded and initialized swrast [ 44708.878] (II) GLX: Initialized DRISWRAST GL provider for screen 0 [ 44708.879] (II) modesetting(G0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Damage tracking initialized [ 44708.879] (II) modesetting(0): Setting screen physical size to 338 x 270 [ 44708.900] (II) XKB: generating xkmfile /tmp/server-B20D7FC79C7F597315E3E501AEF10E0D866E8E92.xkm [ 44708.918] (II) config/udev: Adding input device Power Button (/dev/input/event1) [ 44708.918] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.918] (II) LoadModule: "evdev" [ 44708.918] (II) Loading /usr/lib/xorg/modules/input/evdev_drv.so [ 44708.918] (II) Module evdev: vendor="X.Org Foundation" [ 44708.918] compiled for 1.15.0, module version = 2.8.2 [ 44708.918] Module class: X.Org XInput Driver [ 44708.918] ABI class: X.Org XInput driver, version 20.0 [ 44708.918] (II) Using input driver 'evdev' for 'Power Button' [ 44708.918] (**) Power Button: always reports core events [ 44708.918] (**) evdev: Power Button: Device: "/dev/input/event1" [ 44708.918] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.918] (--) evdev: Power Button: Found keys [ 44708.918] (II) evdev: Power Button: Configuring as keyboard [ 44708.918] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input1/event1" [ 44708.918] (II) XINPUT: Adding extended input device "Power Button" (type: KEYBOARD, id 6) [ 44708.918] (**) Option "xkb_rules" "evdev" [ 44708.918] (**) Option "xkb_model" "pc105" [ 44708.918] (**) Option "xkb_layout" "us" [ 44708.919] (II) 
config/udev: Adding input device Power Button (/dev/input/event0) [ 44708.919] (**) Power Button: Applying InputClass "evdev keyboard catchall" [ 44708.919] (II) Using input driver 'evdev' for 'Power Button' [ 44708.919] (**) Power Button: always reports core events [ 44708.919] (**) evdev: Power Button: Device: "/dev/input/event0" [ 44708.919] (--) evdev: Power Button: Vendor 0 Product 0x1 [ 44708.919] (--) evdev: Power Button: Found keys [ 44708.919] (II) evdev: Power Button: Configuring as keyboard [ 44708.919] (**) Option "config_info" "udev:/sys/devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input0/event0" Is there anything I can do to fix this problem.
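For context on the failing LoadModule line: udlfb is the DisplayLink kernel framebuffer driver, not an Xorg video driver, so on a stock 14.04 install Xorg has no module of that name and the (EE) Failed to load module "udlfb" message is expected whenever xorg.conf names it as a Driver. On setups of this era the usual arrangement is instead to point the generic fbdev X driver at the framebuffer node the kernel created (here /dev/fb1). A minimal sketch of the relevant xorg.conf sections, reusing the identifiers already present in the log; whether this alone brings the DisplayLink screen up depends on the rest of the ServerLayout, so treat it as a starting point rather than a confirmed fix:

Section "Device"
    Identifier  "DisplayLinkDevice"
    Driver      "fbdev"              # generic framebuffer driver (already loaded in the log)
    Option      "fbdev" "/dev/fb1"   # node created by the kernel udlfb driver
EndSection

Section "Monitor"
    Identifier  "DisplayLinkMonitor"
EndSection

Section "Screen"
    Identifier  "DisplayLinkScreen"
    Device      "DisplayLinkDevice"
    Monitor     "DisplayLinkMonitor"
EndSection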

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5 element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32 or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
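To make that advice concrete, here is a minimal sketch of what a function-only interface to the idx5 data could look like. This variant is not from the original article; the file name idx5_hidden.c and the function names idx5_value() and idx5_array() are hypothetical, chosen only for illustration.

% cat idx5_hidden.c
/* Hypothetical variant of idx5 that exports no data symbols. */
/* The array is file-static, so callers can reach it only through */
/* the functions below. */
static int idx5_data[5] = { 0, 1, 2, 3, 4 };

/* Return the value at the given index, or -1 if it is out of range. */
int
idx5_value(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (idx5_data[index]);
}

/* Return the address of the hidden array for callers that need it. */
const int *
idx5_array(void)
{
        return (idx5_data);
}

Because an object like this exports only functions, its mapfile would need no data ASSERT directives at all, and the stub/real compatibility checking described above becomes trivial.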
However, if you do have to support global data for a stub, perhaps because you are working with an already existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!
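To round out that conclusion with a concrete illustration (a sketch, not part of the original post): for a hypothetical library that exports only two functions, here named foo_open and foo_close, the entire mapfile might look like the following, using the same version 2 syntax as the idx5 example above.

% cat mapfile
$mapfile_version 2
STUB_OBJECT;
SYMBOL_SCOPE {
        foo_open;
        foo_close;
        local:
                *;
};

With a mapfile like that in place, the existing link command should continue to produce the real object, and the same command with -z stub added should produce its stub.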

    Read the article

  • Windows Azure Learning Plan - Architecture

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with what an Architect needs to know about Windows Azure.   General Architectural Guidance Overview and general  information about Azure - what it is, how it works, and where you can learn more. Cloud Computing, A Crash Course for Architects (Video) http://www.msteched.com/2010/Europe/ARC202 Patterns and Practices for Cloud Development http://msdn.microsoft.com/en-us/library/ff898430.aspx Design Patterns, Anti-Patterns and Windows Azure http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/27/design-patterns-anti-patterns-and-windows-azure.aspx Application Patterns for the Cloud http://blogs.msdn.com/b/kashif/archive/2010/08/07/application-patterns-for-the-cloud.aspx Architecting Applications for High Scalability (Video) http://www.msteched.com/2010/Europe/ARC309 David Aiken on Azure Architecture Patterns (Video) http://blogs.msdn.com/b/architectsrule/archive/2010/09/09/arcast-tv-david-aiken-on-azure-architecture-patterns.aspx Cloud Application Architecture Patterns (Video) http://blogs.msdn.com/b/bobfamiliar/archive/2010/10/19/cloud-application-architecture-patterns-by-david-platt.aspx 10 Things Every Architect Needs to Know about Windows Azure http://geekswithblogs.net/iupdateable/archive/2010/10/20/slides-and-links-for-windows-azure-platform-session-at-software.aspx Key Differences Between Public and Private Clouds http://blogs.msdn.com/b/kadriu/archive/2010/10/24/key-differences-between-public-and-private-clouds.aspx Microsoft Application Platform at a Glance http://blogs.msdn.com/b/jmeier/archive/2010/10/30/microsoft-application-platform-at-a-glance.aspx Windows Azure is not just about Roles http://vikassahni.wordpress.com/2010/11/17/windows-azure-is-not-just-about-roles/ Example Application for Windows Azure http://msdn.microsoft.com/en-us/library/ff966482.aspx Implementation Guidance Practical applications for the architect to consider 5 Enterprise steps for adopting a Platform as a Service http://blogs.msdn.com/b/davidmcg/archive/2010/12/02/5-enterprise-steps-for-adopting-a-platform-as-a-service.aspx?wa=wsignin1.0 Performance-Based Scaling in Windows Azure http://msdn.microsoft.com/en-us/magazine/gg232759.aspx Windows Azure Guidance for the Development Process http://blogs.msdn.com/b/eugeniop/archive/2010/04/01/windows-azure-guidance-development-process.aspx Microsoft Developer Guidance Maps http://blogs.msdn.com/b/jmeier/archive/2010/10/04/developer-guidance-ia-at-a-glance.aspx How to Build a Hybrid On-Premise/In Cloud Application http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx A Common Scenario of Multi-instances in Windows Azure http://blogs.msdn.com/b/windows-azure-support/archive/2010/11/03/a-common-scenario-of-multi_2d00_instances-in-windows-azure-.aspx Slides and Links for Windows Azure Platform Best Practices http://geekswithblogs.net/iupdateable/archive/2010/09/29/slides-and-links-for-windows-azure-platform-best-practices-for.aspx AppFabric Architecture and Deployment Topologies guide http://blogs.msdn.com/b/appfabriccat/archive/2010/09/10/appfabric-architecture-and-deployment-topologies-guide-now-available-via-microsoft-download-center.aspx Windows Azure Platform Appliance http://www.microsoft.com/windowsazure/appliance/ Integrating Cloud Technologies into Your Organization Interoperability with Open Source and other applications; business and cost decisions Interoperability Labs at 
Microsoft http://www.interoperabilitybridges.com/ Windows Azure Service Level Agreements http://www.microsoft.com/windowsazure/sla/

    Read the article

  • What is hogging my connection?

    - by SF.
    At times it seems like dozens, if not hundreds of root-owned HTTP connections spring up. This is not much of a problem on LAN or WLAN as each of them seems to transfer very little, but if I use GPRS link, my ping times go into minutes (seriously, 80000ms is not infrequent!) and all connections grind to a halt waiting till these end. This usually lasts some 15 minutes and ends about when I start troubleshooting it for real. I've managed to capture a fragment of Nethogs output NetHogs version 0.8.0 PID USER PROGRAM DEV SENT RECEIVED ? root 37.209.147.180:59854-141.101.114.59:80 0.013 0.000 KB/sec ? root 37.209.147.180:59853-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52804-173.194.70.95:80 0.000 0.000 KB/sec 1954 bw /home/bw/.dropbox-dist/dropbox ppp0 0.000 0.000 KB/sec ? root 37.209.147.180:59851-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:59850-141.101.114.59:80 0.000 0.000 KB/sec ? root 37.209.147.180:52801-173.194.70.95:80 0.000 0.000 KB/sec 13301 bw /usr/lib/firefox/firefox ppp0 0.000 0.000 KB/sec ? root unknown TCP 0.000 0.000 KB/sec Unfortunately, it doesn't display the owning process of these. Does anyone recognize these addresses or is able to suggest how to troubleshoot it further or disable it? Is it some automatic update or something like that? EDIT: per request; netstat -n, for obvious reason that normal netstat won't ever launch as all DNS requests are hogged just the same. netstat -n Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 1 93.154.166.62:51314 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44098 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59855 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:38237 213.189.45.39:443 CLOSE_WAIT tcp 1 0 93.154.146.186:35167 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32939 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:55619 63.245.217.207:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60210 75.101.152.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32944 199.15.160.100:80 CLOSE_WAIT tcp 0 1 37.209.147.180:52804 173.194.70.95:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46606 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:52619 107.22.246.76:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36156 82.112.106.104:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50352 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:55000 213.189.45.44:443 CLOSE_WAIT tcp 0 1 37.209.147.180:59853 141.101.114.59:80 FIN_WAIT1 tcp 1 0 192.168.43.224:32937 199.15.160.100:80 CLOSE_WAIT tcp 1 0 192.168.43.224:56055 93.184.221.40:80 CLOSE_WAIT tcp 415 0 93.154.146.186:36155 82.112.106.104:80 CLOSE_WAIT tcp 0 1 37.209.147.180:44097 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35166 75.101.152.29:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32943 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46607 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36422 23.21.151.181:443 CLOSE_WAIT tcp 1 0 192.168.43.224:36081 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:44462 213.189.45.29:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32938 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:36419 23.21.151.181:443 CLOSE_WAIT tcp 0 497 93.154.166.62:51313 198.252.206.16:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59851 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44095 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46611 23.21.151.181:80 CLOSE_WAIT tcp 1 0 192.168.43.224:38236 213.189.45.39:443 CLOSE_WAIT tcp 0 171 37.209.147.180:45341 173.194.113.146:443 ESTABLISHED tcp 0 1 37.209.147.180:52801 
173.194.70.95:80 FIN_WAIT1 tcp 1 0 192.168.43.224:36080 93.184.220.148:80 CLOSE_WAIT tcp 0 1 37.209.147.180:59856 141.101.114.59:80 FIN_WAIT1 tcp 0 1 37.209.147.180:44096 198.252.206.16:80 FIN_WAIT1 tcp 0 1 93.154.166.62:57471 108.160.162.49:80 FIN_WAIT1 tcp 0 1 37.209.147.180:59854 141.101.114.59:80 FIN_WAIT1 tcp 0 171 37.209.147.180:45340 173.194.113.146:443 ESTABLISHED tcp 0 168 37.209.147.180:45334 173.194.113.146:443 FIN_WAIT1 tcp 1 0 93.154.146.186:46609 23.21.151.181:80 CLOSE_WAIT tcp 0 1248 93.154.166.62:58270 64.251.23.59:443 FIN_WAIT1 tcp 0 1 37.209.147.180:59850 141.101.114.59:80 FIN_WAIT1 tcp 1 0 93.154.146.186:35181 75.101.152.29:80 CLOSE_WAIT tcp 232 0 93.154.172.168:46384 198.252.206.25:80 ESTABLISHED tcp 1 0 93.154.146.186:52618 107.22.246.76:80 CLOSE_WAIT tcp 1 0 93.154.172.168:36298 173.194.69.95:443 CLOSE_WAIT tcp 1 0 93.154.146.186:60209 75.101.152.29:443 CLOSE_WAIT tcp 0 168 37.209.147.180:45335 173.194.113.146:443 FIN_WAIT1 tcp 415 0 93.154.146.186:36157 82.112.106.104:80 CLOSE_WAIT tcp 1 0 192.168.43.224:36082 93.184.220.148:80 CLOSE_WAIT tcp 1 0 192.168.43.224:32942 199.15.160.100:80 CLOSE_WAIT tcp 1 0 93.154.146.186:50350 107.22.246.76:443 CLOSE_WAIT tcp 1 0 192.168.43.224:32941 199.15.160.100:80 CLOSE_WAIT tcp 0 534 37.209.147.180:44089 198.252.206.16:80 FIN_WAIT1 tcp 1 0 93.154.146.186:46608 23.21.151.181:80 CLOSE_WAIT tcp 1 0 93.154.146.186:46612 23.21.151.181:80 CLOSE_WAIT udp 0 0 37.209.147.180:49057 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:51631 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:34827 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:35908 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:44106 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42184 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:54485 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:42216 193.41.112.18:53 ESTABLISHED udp 0 0 37.209.147.180:51961 193.41.112.14:53 ESTABLISHED udp 0 0 37.209.147.180:48412 193.41.112.14:53 ESTABLISHED The interesting lines from ping got lost, but the summary over past few hours is: --- 8.8.8.8 ping statistics --- 107459 packets transmitted, 104376 received, +22 duplicates, 2% packet loss, time 195427362ms rtt min/avg/max/mdev = 24.822/528.132/90538.257/2519.263 ms, pipe 90 EDIT: Per request: Happened again, reboot didn't help but cleaned up all "hanging" processes. 
Currently netstat shows: bw@pony:/var/log$ netstat -n -t Active Internet connections (w/o servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 93.154.188.68:42767 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:50270 173.194.69.189:443 ESTABLISHED tcp 0 0 93.154.188.68:45250 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53488 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:53490 173.194.32.198:80 ESTABLISHED tcp 0 159 93.154.188.68:42741 74.125.239.143:443 LAST_ACK tcp 0 0 93.154.188.68:45808 198.252.206.25:80 ESTABLISHED tcp 0 0 93.154.188.68:52449 173.194.32.199:443 ESTABLISHED tcp 0 0 93.154.188.68:52600 173.194.32.199:443 TIME_WAIT tcp 0 0 93.154.188.68:50300 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:45253 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:46252 173.194.32.204:443 ESTABLISHED tcp 0 0 93.154.188.68:45246 190.93.244.58:80 ESTABLISHED tcp 0 0 93.154.188.68:47064 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:34484 173.194.69.95:443 ESTABLISHED tcp 0 0 93.154.188.68:45252 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:54290 173.194.32.202:443 ESTABLISHED tcp 0 0 93.154.188.68:47063 173.194.113.143:443 ESTABLISHED tcp 0 0 93.154.188.68:53469 173.194.32.198:80 TIME_WAIT tcp 0 0 93.154.188.68:45242 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:53468 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:50299 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:42764 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45256 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:58047 108.160.162.105:80 ESTABLISHED tcp 0 0 93.154.188.68:45249 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:50297 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53470 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:34100 68.232.35.121:443 ESTABLISHED tcp 0 0 93.154.188.68:42758 74.125.239.143:443 ESTABLISHED tcp 0 0 93.154.188.68:42765 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:39000 173.194.69.95:80 TIME_WAIT tcp 0 0 93.154.188.68:50296 173.194.69.189:443 TIME_WAIT tcp 0 0 93.154.188.68:53467 173.194.32.198:80 ESTABLISHED tcp 0 0 93.154.188.68:42766 74.125.239.143:443 TIME_WAIT tcp 0 0 93.154.188.68:45251 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45248 190.93.244.58:80 TIME_WAIT tcp 0 0 93.154.188.68:45247 190.93.244.58:80 ESTABLISHED tcp 0 159 93.154.188.68:50254 173.194.69.189:443 LAST_ACK tcp 0 0 93.154.188.68:34483 173.194.69.95:443 ESTABLISHED Output of ps: USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.8 0.0 3628 2092 ? Ss 16:52 0:03 /sbin/init root 2 0.0 0.0 0 0 ? S 16:52 0:00 [kthreadd] root 3 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/0] root 4 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/0:0] root 6 0.0 0.0 0 0 ? S 16:52 0:00 [migration/0] root 7 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/0] root 8 0.0 0.0 0 0 ? S 16:52 0:00 [migration/1] root 10 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/1] root 11 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/1] root 12 0.0 0.0 0 0 ? S 16:52 0:00 [migration/2] root 14 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/2] root 15 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/2] root 16 0.0 0.0 0 0 ? S 16:52 0:00 [migration/3] root 17 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/3:0] root 18 0.1 0.0 0 0 ? S 16:52 0:00 [ksoftirqd/3] root 19 0.0 0.0 0 0 ? S 16:52 0:00 [watchdog/3] root 20 0.0 0.0 0 0 ? S< 16:52 0:00 [cpuset] root 21 0.0 0.0 0 0 ? S< 16:52 0:00 [khelper] root 22 0.0 0.0 0 0 ? S 16:52 0:00 [kdevtmpfs] root 23 0.0 0.0 0 0 ? S< 16:52 0:00 [netns] root 24 0.0 0.0 0 0 ? S 16:52 0:00 [sync_supers] root 25 0.0 0.0 0 0 ? 
S 16:52 0:00 [bdi-default] root 26 0.0 0.0 0 0 ? S< 16:52 0:00 [kintegrityd] root 27 0.0 0.0 0 0 ? S< 16:52 0:00 [kblockd] root 28 0.0 0.0 0 0 ? S< 16:52 0:00 [ata_sff] root 29 0.0 0.0 0 0 ? S 16:52 0:00 [khubd] root 30 0.0 0.0 0 0 ? S< 16:52 0:00 [md] root 42 0.0 0.0 0 0 ? S 16:52 0:00 [khungtaskd] root 43 0.0 0.0 0 0 ? S 16:52 0:00 [kswapd0] root 44 0.0 0.0 0 0 ? SN 16:52 0:00 [ksmd] root 45 0.0 0.0 0 0 ? SN 16:52 0:00 [khugepaged] root 46 0.0 0.0 0 0 ? S 16:52 0:00 [fsnotify_mark] root 47 0.0 0.0 0 0 ? S 16:52 0:00 [ecryptfs-kthrea] root 48 0.0 0.0 0 0 ? S< 16:52 0:00 [crypto] root 59 0.0 0.0 0 0 ? S< 16:52 0:00 [kthrotld] root 70 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:1] root 71 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_0] root 72 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_1] root 73 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_2] root 74 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_3] root 75 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:2] root 76 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/u:3] root 79 0.0 0.0 0 0 ? S 16:52 0:00 [kworker/1:1] root 99 0.0 0.0 0 0 ? S< 16:52 0:00 [deferwq] root 100 0.0 0.0 0 0 ? S< 16:52 0:00 [charger_manager] root 101 0.0 0.0 0 0 ? S< 16:52 0:00 [devfreq_wq] root 102 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/2:2] root 106 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_4] root 107 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 108 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_5] root 109 0.0 0.0 0 0 ? S 16:52 0:00 [usb-storage] root 271 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/1:2] root 316 0.0 0.0 0 0 ? S 16:52 0:00 [jbd2/sda1-8] root 317 0.0 0.0 0 0 ? S< 16:52 0:00 [ext4-dio-unwrit] root 440 0.1 0.0 2820 608 ? S 16:52 0:00 upstart-udev-bridge --daemon root 478 0.0 0.0 3460 1648 ? Ss 16:52 0:00 /sbin/udevd --daemon root 632 0.0 0.0 3348 1336 ? S 16:52 0:00 /sbin/udevd --daemon root 633 0.0 0.0 3348 1204 ? S 16:52 0:00 /sbin/udevd --daemon root 782 0.0 0.0 2816 596 ? S 16:52 0:00 upstart-socket-bridge --daemon root 822 0.0 0.0 6684 2400 ? Ss 16:52 0:00 /usr/sbin/sshd -D 102 834 0.2 0.0 4064 1864 ? Ss 16:52 0:01 dbus-daemon --system --fork root 857 0.0 0.1 7420 3380 ? Ss 16:52 0:00 /usr/sbin/modem-manager root 858 0.0 0.0 4784 1636 ? Ss 16:52 0:00 /usr/sbin/bluetoothd syslog 860 0.0 0.0 31068 1496 ? Sl 16:52 0:00 rsyslogd -c5 root 869 0.1 0.1 24280 5564 ? Ssl 16:52 0:00 NetworkManager avahi 883 0.0 0.0 3448 1488 ? S 16:52 0:00 avahi-daemon: running [pony.local] avahi 884 0.0 0.0 3448 436 ? S 16:52 0:00 avahi-daemon: chroot helper root 885 0.0 0.0 0 0 ? S< 16:52 0:00 [kpsmoused] root 892 0.0 0.1 25696 4140 ? Sl 16:52 0:00 /usr/lib/policykit-1/polkitd --no-debug root 923 0.0 0.0 0 0 ? S 16:52 0:00 [scsi_eh_6] root 959 0.0 0.0 0 0 ? S< 16:52 0:00 [krfcommd] root 970 0.0 0.1 7536 3120 ? Ss 16:52 0:00 /usr/sbin/cupsd -F colord 976 0.1 0.3 55080 10396 ? Sl 16:52 0:00 /usr/lib/i386-linux-gnu/colord/colord root 979 0.0 0.0 4632 872 tty4 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty4 root 987 0.0 0.0 4632 884 tty5 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty5 root 994 0.0 0.0 4632 884 tty2 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty2 root 995 0.0 0.0 4632 868 tty3 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty3 root 998 0.0 0.0 4632 876 tty6 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty6 root 1022 0.0 0.0 2176 680 ? Ss 16:52 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket root 1029 0.0 0.0 3632 664 ? Ss 16:52 0:00 /usr/sbin/irqbalance daemon 1030 0.0 0.0 2476 120 ? Ss 16:52 0:00 atd root 1031 0.0 0.0 2620 880 ? Ss 16:52 0:00 cron root 1061 0.1 0.0 0 0 ? S 16:52 0:00 [kworker/3:2] root 1064 0.0 1.0 34116 31072 ? 
SLsl 16:52 0:00 lightdm root 1076 13.4 1.2 118688 37920 tty7 Ssl+ 16:52 0:55 /usr/bin/X :0 -core -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswit root 1085 0.0 0.0 0 0 ? S 16:52 0:00 [rts_pstor] root 1087 0.0 0.0 0 0 ? S 16:52 0:00 [rtsx-polling] root 1095 0.0 0.0 0 0 ? S< 16:52 0:00 [cfg80211] root 1127 0.0 0.0 0 0 ? S 16:52 0:00 [flush-8:0] root 1130 0.0 0.0 6136 1824 ? Ss 16:52 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /va root 1137 0.0 0.1 24604 3164 ? Sl 16:52 0:00 /usr/lib/accountsservice/accounts-daemon root 1140 0.0 0.0 0 0 ? S< 16:52 0:00 [hd-audio0] root 1188 0.0 0.1 34308 3420 ? Sl 16:52 0:00 /usr/sbin/console-kit-daemon --no-daemon root 1425 0.0 0.0 4632 872 tty1 Ss+ 16:52 0:00 /sbin/getty -8 38400 tty1 root 1443 0.1 0.1 29460 4664 ? Sl 16:52 0:00 /usr/lib/upower/upowerd root 1579 0.0 0.1 16540 3272 ? Sl 16:53 0:00 lightdm --session-child 12 19 bw 1623 0.0 0.0 2232 644 ? Ss 16:53 0:00 /bin/sh /usr/bin/startkde bw 1672 0.0 0.0 4092 204 ? Ss 16:53 0:00 /usr/bin/ssh-agent /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/ bw 1673 0.0 0.0 5492 384 ? Ss 16:53 0:00 /usr/bin/gpg-agent --daemon --sh --write-env-file=/home/bw/.gnupg/gpg-agent-in bw 1676 0.0 0.0 3848 792 ? S 16:53 0:00 /usr/bin/dbus-launch --exit-with-session /usr/bin/startkde bw 1677 0.5 0.0 5384 2180 ? Ss 16:53 0:02 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session root 1704 0.3 0.1 25348 3600 ? Sl 16:53 0:01 /usr/lib/udisks/udisks-daemon root 1705 0.0 0.0 6620 728 ? S 16:53 0:00 udisks-daemon: not polling any devices bw 1736 0.0 0.0 2008 64 ? S 16:53 0:00 /usr/lib/kde4/libexec/start_kdeinit +kcminit_startup bw 1737 0.0 0.5 115200 15588 ? Ss 16:53 0:00 kdeinit4: kdeinit4 Running... bw 1738 0.1 0.2 116756 8728 ? S 16:53 0:00 kdeinit4: klauncher [kdeinit] --fd=9 bw 1740 0.6 1.0 340524 31264 ? Sl 16:53 0:02 kdeinit4: kded4 [kdeinit] bw 1742 0.0 0.0 8944 2144 ? S 16:53 0:00 /usr/lib/i386-linux-gnu/gconf/gconfd-2 bw 1746 0.2 0.4 92028 14688 ? S 16:53 0:00 /usr/bin/kglobalaccel bw 1748 0.0 0.4 90804 13500 ? S 16:53 0:00 /usr/bin/kwalletd bw 1752 0.1 0.5 103764 15152 ? S 16:53 0:00 /usr/bin/kactivitymanagerd bw 1758 0.0 0.0 2144 280 ? S 16:53 0:00 kwrapper4 ksmserver bw 1759 0.1 0.5 150016 16088 ? Sl 16:53 0:00 kdeinit4: ksmserver [kdeinit] bw 1763 2.2 1.0 178492 32100 ? Sl 16:53 0:08 kwin bw 1772 0.2 0.5 106292 16340 ? Sl 16:53 0:00 /usr/bin/knotify4 bw 1777 0.9 1.1 246120 32912 ? Sl 16:53 0:03 /usr/bin/krunner bw 1778 6.3 2.7 389884 80216 ? Sl 16:53 0:23 /usr/bin/plasma-desktop bw 1785 0.0 0.0 2844 1208 ? S 16:53 0:00 ksysguardd bw 1789 0.1 0.4 82036 14176 ? S 16:53 0:00 /usr/bin/kuiserver bw 1805 0.3 0.1 61560 5612 ? Sl 16:53 0:01 /usr/bin/akonadi_control root 1806 0.0 0.0 0 0 ? S 16:53 0:00 [kworker/0:2] bw 1808 0.1 0.2 211852 8460 ? Sl 16:53 0:00 akonadiserver bw 1810 0.4 0.8 244116 25360 ? Sl 16:53 0:01 /usr/sbin/mysqld --defaults-file=/home/bw/.local/share/akonadi/mysql.conf --da bw 1874 0.0 0.0 35284 2956 ? Sl 16:53 0:00 /usr/bin/xsettings-kde bw 1876 0.0 0.3 68776 9488 ? Sl 16:53 0:00 /usr/bin/nepomukserver bw 1884 0.4 0.9 173876 29240 ? SNl 16:53 0:01 /usr/bin/nepomukservicestub nepomukstorage bw 1902 6.1 2.1 451512 63924 ? Sl 16:53 0:21 /home/bw/.dropbox-dist/dropbox bw 1906 3.8 1.0 142368 32376 ? Rl 16:53 0:13 /usr/bin/yakuake bw 1933 0.0 0.1 54636 4680 ? Sl 16:53 0:00 /usr/bin/zeitgeist-datahub bw 1943 0.5 1.5 164836 46836 ? Sl 16:53 0:01 python /usr/bin/printer-applet bw 1945 0.1 0.1 99636 5048 ? 
S<l 16:53 0:00 /usr/bin/pulseaudio --start --log-target=syslog rtkit 1947 0.0 0.0 21336 1248 ? SNl 16:53 0:00 /usr/lib/rtkit/rtkit-daemon bw 1958 0.0 0.1 44204 3792 ? Sl 16:53 0:00 /usr/bin/zeitgeist-daemon bw 1972 0.0 0.0 27008 2684 ? Sl 16:53 0:00 /usr/lib/gvfs/gvfsd bw 1974 0.1 0.5 90480 16660 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1984 0.1 0.5 90472 16636 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_akonotes_resource akonadi_akonotes_res bw 1985 0.3 0.9 148800 28304 ? S 16:53 0:01 /usr/bin/akonadi_archivemail_agent --identifier akonadi_archivemail_agent bw 1992 0.1 0.5 90020 16148 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1993 0.1 0.5 90132 16452 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_contacts_resource akonadi_contacts_res bw 1994 0.1 0.5 90564 16332 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_0 bw 1995 0.1 0.5 90676 16732 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_ical_resource akonadi_ical_resource_1 bw 1996 0.1 0.5 90468 16800 ? Sl 16:53 0:00 /usr/bin/akonadi_agent_launcher akonadi_maildir_resource akonadi_maildir_resou bw 1999 0.2 0.6 99324 19276 ? S 16:53 0:00 /usr/bin/akonadi_maildispatcher_agent --identifier akonadi_maildispatcher_agen bw 2006 0.3 0.9 148808 28332 ? S 16:53 0:01 /usr/bin/akonadi_mailfilter_agent --identifier akonadi_mailfilter_agent bw 2017 0.0 0.1 50256 4716 ? Sl 16:53 0:00 /usr/lib/zeitgeist/zeitgeist-fts bw 2024 0.2 0.6 103632 18376 ? Sl 16:53 0:00 /usr/bin/akonadi_nepomuk_feeder --identifier akonadi_nepomuk_feeder bw 2043 0.0 0.0 4484 280 ? S 16:53 0:00 /bin/cat bw 2101 0.2 0.7 113600 22396 ? Sl 16:53 0:00 /usr/lib/kde4/libexec/polkit-kde-authentication-agent-1 bw 2105 0.2 0.7 114196 22072 ? Sl 16:53 0:00 /usr/bin/nepomukcontroller bw 2156 0.3 1.0 333188 31244 ? Sl 16:54 0:01 /usr/bin/kmix bw 2167 0.0 0.0 6548 2724 pts/2 Ss 16:54 0:00 /bin/bash bw 2177 0.2 0.7 113496 22960 ? Sl 16:54 0:00 /usr/bin/klipper bw 2394 3.5 1.2 52932 35596 ? SNl 16:54 0:11 /usr/bin/virtuoso-t +foreground +configfile /tmp/virtuoso_hX1884.ini +wait root 2460 0.0 0.0 6184 1876 pts/2 S 16:54 0:00 sudo -s root 2500 0.0 0.0 6528 2700 pts/2 S 16:54 0:00 /bin/bash root 2599 0.0 0.0 5444 1280 pts/2 S+ 16:54 0:00 /bin/bash bin/aero root 2606 0.1 0.0 9836 2500 pts/2 S+ 16:54 0:00 wvdial aero2 root 2619 0.0 0.0 3504 1280 pts/2 S 16:54 0:00 /usr/sbin/pppd 57600 modem crtscts defaultroute usehostname -detach user aero bw 2653 0.0 0.0 6600 2880 pts/3 Ss 16:54 0:00 /bin/bash bw 2676 0.4 0.8 130296 24016 ? SNl 16:54 0:01 /usr/bin/nepomukservicestub nepomukfilewatch bw 2679 0.1 0.7 101636 22252 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukqueryservice bw 2681 0.2 0.8 109836 24280 ? SNl 16:54 0:00 /usr/bin/nepomukservicestub nepomukbackupsync bw 3833 46.0 9.7 829272 288012 ? Rl 16:55 1:46 /usr/lib/firefox/firefox bw 3903 0.0 0.0 35128 2804 ? Sl 16:55 0:00 /usr/lib/at-spi2-core/at-spi-bus-launcher bw 4708 0.1 0.0 6564 2736 pts/4 Ss 16:56 0:00 /bin/bash root 5210 0.0 0.0 0 0 ? S 16:57 0:00 [kworker/u:0] root 6140 0.2 0.0 0 0 ? S 16:58 0:00 [kworker/0:1] root 6371 0.5 0.0 6184 1868 pts/4 S+ 16:59 0:00 sudo nethogs ppp0 root 6411 17.7 0.2 8616 6144 pts/4 S+ 16:59 0:05 nethogs ppp0 bw 6787 0.0 0.0 5464 1220 pts/3 R+ 16:59 0:00 ps auxw

    Read the article

  • CodePlex Daily Summary for Tuesday, November 22, 2011

    CodePlex Daily Summary for Tuesday, November 22, 2011Popular ReleasesDeveloper Team Article System Management: DTASM v1.3: ?? ??? ???? 3 ????? ???? ???? ????? ??? : - ????? ?????? ????? ???? ?? ??? ???? ????? ?? ??? ? ?? ???? ?????? ???? ?? ???? ????? ?? . - ??? ?? ???? ????? ???? ????? ???? ???? ?? ????? , ?????? ????? ????? ?? ??? . - ??? ??????? ??? ??? ???? ?? ????? ????? ????? .VideoLan DotNet for WinForm, WPF & Silverlight 5: VideoLan DotNet for WinForm, WPF, SL5 - 2011.11.22: The new version contains Silverlight 5 library: Vlc.DotNet.Silverlight. A sample could be tested here The new version add and correct many features : Correction : Reinitialize some variables Deprecate : Logging API, since VLC 1.2 (08/20/2011) Add subitem in LocationMedia (for Youtube videos, ...) Update Wpf sample to use Youtube videos Many others correctionsSharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.2.0: Web parts are now fully customizable via html templates (Issue #323) FBA Pack is now completely localizable using resource files. Thank you David Chen for submitting the code as well as Chinese translations of the FBA Pack! The membership request web part now gives the option of having the user enter the password and removing the captcha (Issue # 447) The FBA Pack will now work in a zone that does not have FBA enabled (Another zone must have FBA enabled, and the zone must contain the me...SharePoint 2010 Education Demo Project: Release SharePoint SP1 for Education Solutions: This release includes updates to the Content Packs for SharePoint SP1. All Content Packs have been updated to install successfully under SharePoint SP1SQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 6: 1. improved support for schema 2. added find reference when right click on object list 3. added object rename supportBugNET Issue Tracker: BugNET 0.9.126: First stable release of version 0.9. Upgrades from 0.8 are fully supported and upgrades to future releases will also be supported. This release is now compiled against the .NET 4.0 framework and is a requirement. Because of this the web.config has significantly changed. After upgrading, you will need to configure the authentication settings for user registration and anonymous access again. Please see our installation / upgrade instructions for more details: http://wiki.bugnetproject.c...Anno 2070 Assistant: v0.1.0 (STABLE): Version 0.1.0 Features Production Chains Eco Production Chains (Complete) Tycoon Production Chains (Disabled - Incomplete) Tech Production Chains (Disabled - Incomplete) Supply (Disabled - Incomplete) Calculator (Disabled - Incomplete) Building Layouts Eco Building Layouts (Complete) Tycoon Building Layouts (Disabled - Incomplete) Tech Building Layouts (Disabled - Incomplete) Credits (Complete)Free SharePoint 2010 Sites Templates: SharePoint Server 2010 Sites Templates: here is the list of sites templates to be downloadedVsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 30 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time when you shutdown Visual Studio. (#7940) Build 30 (beta)New: Support for TortoiseSVN 1.7 added. (the download contains both setups, for TortoiseSVN 1.6 and 1.7) New: OpenModifiedDocumentDialog displays conflicted files now. New: OpenModifiedDocument allows to group items by changelist now. Fix: OpenModifiedDocumentDialog caused Visual Studio 2010 to freeze sometimes. Fix: The installer didn...nopCommerce. 
Open source shopping cart (ASP.NET MVC): nopcommerce 2.30: Highlight features & improvements: • Performance optimization. • Back in stock notifications. • Product special price support. • Catalog mode (based on customer role) To see the full list of fixes and changes please visit the release notes page (http://www.nopCommerce.com/releasenotes.aspx).WPF Converters: WPF Converters V1.2.0.0: support for enumerations, value types, and reference types in the expression converter's equality operators the expression converter now handles DependencyProperty.UnsetValue as argument values correctly (#4062) StyleCop conformance (more or less)Json.NET: Json.NET 4.0 Release 4: Change - JsonTextReader.Culture is now CultureInfo.InvariantCulture by default Change - KeyValurPairConverter no longer cares about the order of the key and value properties Change - Time zone conversions now use new TimeZoneInfo instead of TimeZone Fix - Fixed boolean values sometimes being capitalized when converting to XML Fix - Fixed error when deserializing ConcurrentDictionary Fix - Fixed serializing some Uris returning the incorrect value Fix - Fixed occasional error when...Media Companion: MC 3.423b Weekly: Ensure .NET 4.0 Full Framework is installed. (Available from http://www.microsoft.com/download/en/details.aspx?id=17718) Ensure the NFO ID fix is applied when transitioning from versions prior to 3.416b. (Details here) Replaced 'Rebuild' with 'Refresh' throughout entire code. Rebuild will now be known as Refresh. mc_com.exe has been fully updated TV Show Resolutions... Resolved issue #206 - having to hit save twice when updating runtime manually Shrunk cache size and lowered loading times f...Delta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblyAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.36: Fix for issue #16908: string literals containing ASP.NET replacement syntax fail if the ASP.NET code contains the same character as the string literal delimiter. 
Also, we shouldn't be changing the delimiter for those literals or combining them with other literals; the developer may have specifically chosen the delimiter used because of possible content inserted by ASP.NET code. This logic is normally off; turn it on via the -aspnet command-line flag (or the Code.Settings.AllowEmbeddedAspNetBl...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsSilverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...New ProjectsAndrecorder: Andrecorder???Android???????,???????????????????,????????????????,????????!Android Tree Bulletin: Android bulletin reader in tree format.Bài t?p l?p môn HCI: Name: Ph?n m?m qu?n lý thu h?c phí tru?ng d?i h?c Công Nghi?p Hà N?i Basic Grid Collision sample in XNA: This project shows how to implement a basic grid collision in XNA. The project uses the XNA 4.0 framework and C#Club Manager: Club Manager is a web site for managing sport clubs / teams.Create email with encrypt text implement TEA encryption and Web Service: RahaTEA Mail is an application to send messages in secret. These applications implement TEA encryption and web serviceCRM 2011 Layers: Several .net layers to customize CRM 2011CTEF: China Tomorrow Education Foundation websitedns?????: ??c#???dns?????。????????,???????,??????。EAF: Extensibility Application FrameworkEnergy SBA: In order to compete with large companies for Federal contracts, small business need information. This application seeks to show standard methods of using remote APIs to integrate information into a Metro interface using services provided by the Small Business Administration (SBA)EPiOptimiser - Scan your EPiServer configuration to optimise start up times: EPiScanner scans your EPiServer configuration to optimise start ups by generating a recommended exclude list of assemblies to include in EPiServer framework config. It can be used on command line, as a custom build task or integrated into Visual Studio as an external tool.FreeIDS - Free Intrusion Detection System: Don't want someone to use your computer? Don't want to use a system password? Want to see when someone accessed your computer? Time/Date? 
FreeIDS is it!FtpServerAdministrator: FtpServerAdministrator makes it easier to administer some ftp server by code, although it can only be used for FileZilla server now. It's developed in C#.GreenPoint Online: Tools and components that help you customize an Office 365 / SharePoint Online Environment.HCC C# Workshop: This project contains the code for the exercises of the HCC C# WorkshopKsigDo - Real time view model syncing across user screens: KsigDo show real time view model syncing across user screens - using ASP.NET, Knockout and SignalR. Real time data syncing across user views *was* hard, especially in web applications. Most of the time, the second user needs to refresh the screen, to see the changes made by first user, or we need to implement some long polling that fetches the data and does the update manually. Now, with SignalR and Knockout, ASP.NET developers can take advantage of view model syncing across users, that...lineseven: ???????????????。Mail Size Labeler for GMail: A small utility that labels large e-mails on your gmail account. This utility scan you gmail account, and adds labels to large e-mail so you can clean your mailbox and free space. The labels this utility adds are: Size 1M-2M Size 2M-5M Size 5M-10M Size 10M-15M Size 15M plus Note: a single e-mail thread may get multiple labels if different e-mails of the thread fit different filters.MathService: Complex digits, standart class extentions etc.MyGameProject: gamesMySQL Connect 2 ASP.NET: Example project to show how to connect MySQL database to ASP.NET web project. IDE: Visual Studio 2010 Pro Programming language: C# Detailed information in the article here: http://epavlov.net/blog/2011/11/13/connect-to-mysql-in-visual-studio/ nl: Nutri Leaf Devomr.event.js: Simple js event injecterPastebin4DotNet: This project is an example of how to consume an API, in this case I consummed the Pastebin API.Pomelo: Pomelo is a website example.QuickDevFrameWork: ????????,??,??,????,ioc ?????postsharp?aopReadable Passphrase Generator: Generates passphrases which are (mostly) grammatically correct but nonsensical. These are easy to remember but difficult to guess (for humans or computers). Developed in C# with a KeePass plugin, console app and public API.Rosyama.ru for Windows Phone 7: ?????????? Windows Phone 7 ??? ???????? ???????? ?? ???? rosyama.ru. ?????????? ??????? ?????????? ? ???????? ????????? ???????. SimpleBatch: As the name suggests, this is a simple batch framework allowing you to define batch jobs in XML format. Thus far, contains a basic selection of processors such as the following; File Email SQL (SQL Server Client) SharePoint Document Library Custom ProcessorSite de Notícias: Projeto de faculdade que consiste na criação de um site de notícias.SPWikiProvisioning: Create update and delete SharePoint wiki pages using feature activation and deactivation handlers.SVN Automated Control With C#: I Created this libaray because I need to control Tortoise SVN automactically with out an interface for my own build server and could not find any resuilts on google to achive this task so I went about creating this libaray which dos most of the task's that I needed. I round that you could control SVN by command line so using that as my basic idear I went about coding the most common commands for SVN most of the commads are done but not all. 
if you like this libaray then please use it we...TremplinCMS: TremplinCMS is a CMS framework for ASP .NET 4.vlu0206sms: SMSMaker by team0206 developingWCF DataService RequestStream Access on webInvoke HTTP POST: This library provides access to the message body request stream of a WCF Data Service (formerly ADO.NET Data Service), which is not possible with the original WCF Data Service class. You are enabled passing data (e.g. Json, files) via HTTP POST to the request body. It uses the operation context (DbContext) provided by the DataService<T> class to get access to the resquest stream.WebOS: Welcome to join us to build our os projectWp7StarterDantas: Iniciando com Wp7WpfCollaborative3D: WpfCollaborative3DXNA Content Preprocessor: The XNA Content Preprocessor allows you to compile all of your XNA assets outside of your normal XNA project. This means more time building your game or app instead of your content.

    Read the article

  • Nagging As A Strategy For Better Linking: -z guidance

    - by user9154181
    The link-editor (ld) in Solaris 11 has a new feature that we call guidance that is intended to help you build better objects. The basic idea behind guidance is that if (and only if) you request it, the link-editor will issue messages suggesting better options and other changes you might make to your ld command to get better results. You can choose to take the advice, or you can disable specific types of guidance while acting on others. In some ways, this works like an experienced friend leaning over your shoulder and giving you advice — you're free to take it or leave it as you see fit, but you get nudged to do a better job than you might have otherwise. We use guidance to build the core Solaris OS, and it has proven to be useful, both in improving our objects, and in making sure that regressions don't creep back in later. In this article, I'm going to describe the evolution in thinking and design that led to the implementation of the -z guidance option, as well as give a brief description of how it works. The guidance feature issues non-fatal warnings. However, experience shows that once developers get used to ignoring warnings, it is inevitable that real problems will be lost in the noise and ignored or missed. This is why we have a zero tolerance policy against build noise in the core Solaris OS. In order to get maximum benefit from -z guidance while maintaining this policy, I added the -z fatal-warnings option at the same time. Much of the material presented here is adapted from the arc case: PSARC 2010/312 Link-editor guidance The History Of Unfortunate Link-Editor Defaults The Solaris link-editor is one of the oldest Unix commands. It stands to reason that this would be true — in order to write an operating system, you need the ability to compile and link code. The original link-editor (ld) had defaults that made sense at the time. As new features were needed, command line option switches were added to let the user use them, while maintaining backward compatibility for those who didn't. Backward compatibility is always a concern in system design, but is particularly important in the case of the tool chain (compilers, linker, and related tools), since it is a basic building block for the entire system. Over the years, applications have grown in size and complexity. Important concepts like dynamic linking that didn't exist in the original Unix system were invented. Object file formats changed. In the case of System V Release 4 Unix derivatives like Solaris, the ELF (Extensible Linking Format) was adopted. Since then, the ELF system has evolved to provide tools needed to manage today's larger and more complex environments. Features such as lazy loading, and direct bindings have been added. In an ideal world, many of these options would be defaults, with rarely used options that allow the user to turn them off. However, the reality is exactly the reverse: For backward compatibility, these features are all options that must be explicitly turned on by the user. This has led to a situation in which most applications do not take advantage of the many improvements that have been made in linking over the last 20 years. If their code seems to link and run without issue, what motivation does a developer have to read a complex manpage, absorb the information provided, choose the features that matter for their application, and apply them? Experience shows that only the most motivated and diligent programmers will make that effort. 
We know that most programs would be improved if we could just get you to use the various whizzy features that we provide, but the defaults conspire against us. We have long wanted to do something to make it easier for our users to use the linkers more effectively. There have been many conversations over the years regarding this issue, and how to address it. They always break down along the following lines: Change ld Defaults Since the world would be a better place the newer ld features were the defaults, why not change things to make it so? This idea is simple, elegant, and impossible. Doing so would break a large number of existing applications, including those of ISVs, big customers, and a plethora of existing open source packages. In each case, the owner of that code may choose to follow our lead and fix their code, or they may view it as an invitation to reconsider their commitment to our platform. Backward compatibility, and our installed base of working software, is one of our greatest assets, and not something to be lightly put at risk. Breaking backward compatibility at this level of the system is likely to do more harm than good. But, it sure is tempting. New Link-Editor One might create a new linker command, not called 'ld', leaving the old command as it is. The new one could use the same code as ld, but would offer only modern options, with the proper defaults for features such as direct binding. The resulting link-editor would be a pleasure to use. However, the approach is doomed to niche status. There is a vast pile of exiting code in the world built around the existing ld command, that reaches back to the 1970's. ld use is embedded in large and unknown numbers of makefiles, and is used by name by compilers that execute it. A Unix link-editor that is not named ld will not find a majority audience no matter how good it might be. Finally, a new linker command will eventually cease to be new, and will accumulate its own burden of backward compatibility issues. An Option To Make ld Do The Right Things Automatically This line of reasoning is best summarized by a CR filed in 2005, entitled 6239804 make it easier for ld(1) to do what's best The idea is to have a '-z best' option that unchains ld from its backward compatibility commitment, and allows it to turn on the "best" set of features, as determined by the authors of ld. The specific set of features enabled by -z best would be subject to change over time, as requirements change. This idea is more realistic than the other two, but was never implemented because it has some important issues that we could never answer to our satisfaction: The -z best proposal assumes that the user can turn it on, and trust it to select good options without the user needing to be aware of the options being applied. This is a fallacy. Features such as direct bindings require the user to do some analysis to ensure that the resulting program will still operate properly. A user who is willing to do the work to verify that what -z best does will be OK for their application is capable of turning on those features directly, and therefore gains little added benefit from -z best. The intent is that when a user opts into -z best, that they understand that z best is subject to sometimes incompatible evolution. Experience teaches us that this won't work. People will use this feature, the meaning of -z best will change, code that used to build will fail, and then there will be complaints and demands to retract the change. 
When (not if) this occurs, we will of course defend our actions, and point at the disclaimer. We'll win some of those debates, and lose others. Ultimately, we'll end up with -z best2 (-z better), or other compromises, and our goal of simplifying the world will have failed.

The -z best idea rolls up a set of features that may or may not be related to each other into a unit that must be taken wholesale, or not at all. It could be that only a subset of what it does is compatible with a given application, in which case the user is expected to abandon -z best and instead set the options that apply to their application directly. In doing so, they lose one of the benefits of -z best, that if you use it, future versions of ld may choose a different set of options, and automatically improve the object through the act of rebuilding it.

I drew two conclusions from the above history:

For a link-editor, backward compatibility is vital. If a given command line linked your application 10 years ago, you have every reason to expect that it will link today, assuming that the libraries you're linking against are still available and compatible with their previous interfaces.

For an application of any size or complexity, there is no substitute for the work involved in examining the code and determining which linker options apply and which do not. These options are largely orthogonal to each other, and it can be reasonable not to use any or all of them, depending on the situation, even in modern applications. It is a mistake to tie them together.

The idea for -z guidance came from consideration of these points. By decoupling the advice from the act of taking the advice, we can retain the good aspects of -z best while avoiding its pitfalls:

-z guidance gives advice, but the decision to take that advice remains with the user, who must evaluate its merit and make a decision to take it or not. As such, we are free to change the specific guidance given in future releases of ld, without breaking existing applications. The only fallout from this will be some new warnings in the build output, which can be ignored or dealt with at the user's convenience.

It does not couple the various features given into a single "take it or leave it" option, meaning that there will never be a need to offer "-zguidance2", or other such variants as things change over time. Guidance has the potential to be our final word on this subject.

The user is given the flexibility to disable specific categories of guidance without losing the benefit of others, including those that might be added to future versions of the system.

Although -z fatal-warnings stands on its own as a useful feature, it is of particular interest in combination with -z guidance. Used together, the guidance turns from advice to hard requirement: The user must either make the suggested change, or explicitly reject the advice by specifying a guidance exception token, in order to get a build. This is valuable in environments with high coding standards.

ld Command Line Options

The guidance effort resulted in new link-editor options for guidance and for turning warnings into fatal errors. Before I reproduce that text here, I'd like to highlight the strategic decisions embedded in the guidance feature:

In order to get guidance, you have to opt in. We hope you will opt in, and believe you'll get better objects if you do, but our default mode of operation will continue as it always has, with full backward compatibility, and without judgement.
Guidance suggestions always offer specific advice, and not vague generalizations.

You can disable some guidance without turning off the entire feature. When you get guidance warnings, you can choose to take the advice, or you can specify a keyword to disable guidance for just that category. This allows you to get guidance for things that are useful to you, without being bothered about things that you've already considered and dismissed.

As the world changes, we will add new guidance to steer you in the right direction. All such new guidance will come with a keyword that lets you turn it off. In order to facilitate building your code on different versions of Solaris, we quietly ignore any guidance keywords we don't recognize, assuming that they are intended for newer versions of the link-editor. If you want to see what guidance tokens ld does and does not recognize on your system, you can use the ld debugging feature as follows:

% ld -Dargs -z guidance=foo,nodefs
debug:
debug: Solaris Linkers: 5.11-1.2275
debug:
debug: arg[1] option=-D: option-argument: args
debug: arg[2] option=-z: option-argument: guidance=foo,nodefs
debug: warning: unrecognized -z guidance item: foo

The -z fatal-warnings option is straightforward, and generally useful in environments with strict coding standards. Note that the GNU ld already had this feature, and we accept their option names as synonyms:

-z fatal-warnings | nofatal-warnings
--fatal-warnings | --no-fatal-warnings

The -z fatal-warnings and the --fatal-warnings options cause the link-editor to treat warnings as fatal errors. The -z nofatal-warnings and the --no-fatal-warnings options cause the link-editor to treat warnings as non-fatal. This is the default behavior.

The -z guidance option is defined as follows:

-z guidance[=item1,item2,...]

Provide guidance messages to suggest ld options that can improve the quality of the resulting object, or which are otherwise considered to be beneficial. The specific guidance offered is subject to change over time as the system evolves. Obsolete guidance offered by older versions of ld may be dropped in new versions. Similarly, new guidance may be added to new versions of ld. Guidance therefore always represents current best practices.

It is possible to enable guidance, while preventing specific guidance messages, by providing a list of item tokens, representing the class of guidance to be suppressed. In this way, unwanted advice can be suppressed without losing the benefit of other guidance. Unrecognized item tokens are quietly ignored by ld, allowing a given ld command line to be executed on a variety of older or newer versions of Solaris. The guidance offered by the current version of ld, and the item tokens used to disable these messages, are as follows.

Specify Required Dependencies
Dynamic executables and shared objects should explicitly define all of the dependencies they require. Guidance recommends the use of the -z defs option, should any symbol references remain unsatisfied when building dynamic objects. This guidance can be disabled with -z guidance=nodefs.

Do Not Specify Non-Required Dependencies
Dynamic executables and shared objects should not define any dependencies that do not satisfy the symbol references made by the dynamic object. Guidance recommends that unused dependencies be removed. This guidance can be disabled with -z guidance=nounused.

Lazy Loading
Dependencies should be identified for lazy loading.
Guidance recommends the use of the -z lazyload option should any dependency be processed before either a -z lazyload or -z nolazyload option is encountered. This guidance can be disabled with -z guidance=nolazyload.

Direct Bindings
Dependencies should be referenced with direct bindings. Guidance recommends the use of the -B direct, or -z direct options should any dependency be processed before either of these options, or the -z nodirect option is encountered. This guidance can be disabled with -z guidance=nodirect.

Pure Text Segment
Dynamic objects should not contain relocations to non-writable, allocable sections. Guidance recommends compiling objects with Position Independent Code (PIC) should any relocations against the text segment remain, and neither the -z textwarn nor -z textoff options are encountered. This guidance can be disabled with -z guidance=notext.

Mapfile Syntax
All mapfiles should use the version 2 mapfile syntax. Guidance recommends the use of the version 2 syntax should any mapfiles be encountered that use the version 1 syntax. This guidance can be disabled with -z guidance=nomapfile.

Library Search Path
Inappropriate dependencies that are encountered by ld are quietly ignored. For example, a 32-bit dependency that is encountered when generating a 64-bit object is ignored. These dependencies can result from incorrect search path settings, such as supplying an incorrect -L option. Although benign, this dependency processing is wasteful, and might hide a build problem that should be solved. Guidance recommends the removal of any inappropriate dependencies. This guidance can be disabled with -z guidance=nolibpath.

In addition, -z guidance=noall can be used to entirely disable the guidance feature. See Chapter 7, Link-Editor Quick Reference, in the Linker and Libraries Guide for more information on guidance and advice for building better objects.

Example

The following example demonstrates how the guidance feature is intended to work. We will build a shared object that has a variety of shortcomings:

Does not specify all its dependencies
Specifies dependencies it does not use
Does not use direct bindings
Uses a version 1 mapfile
Contains relocations to the readonly allocable text (not PIC)

This scenario is sadly very common — many shared objects have one or more of these issues.

% cat hello.c
#include <stdio.h>
#include <unistd.h>

void
hello(void)
{
        printf("hello user %d\n", getpid());
}

% cat mapfile.v1
# This version 1 mapfile will trigger a guidance message

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf

As you can see, the operation completes without error, resulting in a usable object.
However, turning on guidance reveals a number of things that could be better:

% cc hello.c -o hello.so -G -M mapfile.v1 -lelf -zguidance
ld: guidance: version 2 mapfile syntax recommended: mapfile.v1
ld: guidance: -z lazyload option recommended before first dependency
ld: guidance: -B direct or -z direct option recommended before first dependency
Undefined                       first referenced
 symbol                             in file
getpid                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
printf                              hello.o  (symbol belongs to implicit dependency /lib/libc.so.1)
ld: warning: symbol referencing errors
ld: guidance: -z defs option recommended for shared objects
ld: guidance: removal of unused dependency recommended: libelf.so.1
warning: Text relocation remains                referenced
    against symbol                  offset      in file
.rodata1 (section)                  0xa         hello.o
getpid                              0x4         hello.o
printf                              0xf         hello.o
ld: guidance: position independent (PIC) code recommended for shared objects
ld: guidance: see ld(1) -z guidance for more information

Given the explicit advice in the above guidance messages, it is relatively easy to modify the example to do the right things:

% cat mapfile.v2
# This version 2 mapfile will not trigger a guidance message
$mapfile_version 2

% cc hello.c -o hello.so -Kpic -G -Bdirect -M mapfile.v2 -lc -zguidance

There are situations in which the guidance does not fit the object being built. For instance, suppose you want to build an object without direct bindings:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance
ld: guidance: -B direct or -z direct option recommended before first dependency
ld: guidance: see ld(1) -z guidance for more information

It is easy to disable that specific guidance warning without losing the overall benefit from allowing the remainder of the guidance feature to operate:

% cc -Kpic hello.c -o hello.so -G -M mapfile.v2 -lc -zguidance=nodirect

Conclusions

The linking guidelines enforced by the ld guidance feature correspond rather directly to our standards for building the core Solaris OS. I'm sure that comes as no surprise. It only makes sense that we would want to build our own product as well as we know how. Solaris is usually the first significant test for any new linker feature. We now enable guidance by default for all builds, and the effect has been very positive. Guidance helps us find suboptimal objects more quickly. Programmers get concrete advice for what to change instead of vague generalities. Even in the cases where we override the guidance, the makefile rules to do so serve as documentation of the fact.

Deciding to use guidance is likely to cause some up front work for most code, as it forces you to consider using new features such as direct bindings. Such investigation is worthwhile, but does not come for free. However, the guidance suggestions offer a structured and straightforward way to tackle modernizing your objects, and once that work is done, for keeping them that way. The investment is often worth it, and will repay you in terms of better performance and fewer problems. I hope that you find guidance to be as useful as we have.
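As a quick recap of how the pieces above combine in practice, here is what a build line might look like for a shop that wants guidance enforced rather than merely suggested. This is a sketch assembled from the options documented above: it opts into guidance, waives the lazy-loading advice as a considered exception, and turns any remaining warnings into hard errors.

% cc -Kpic hello.c -o hello.so -G -Bdirect -M mapfile.v2 -lc -zguidance=nolazyload -zfatal-warnings

With -z fatal-warnings in effect, any guidance category that has not been explicitly disabled must be satisfied before the link succeeds.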


  • Installing MOSS 2007 on Windows 2008 R2

    - by Manesh Karunakaran
When you try to install MOSS 2007 on Windows 2008 R2, if you are using installation media that is older than SP2, you will get the following error, saying that “This program is blocked due to compatibility issues”. All is not lost though; all you need to do is to slipstream the SP2 updates into the MOSS 2007 setup. Here's a nice how-to on how to do that: http://blogs.technet.com/seanearp/archive/2009/05/20/slipstreaming-sp2-into-sharepoint-server-2007.aspx Once you slipstream the SP2 updates, you will be able to continue with the installation without the above error. HTH.

You may have already read on blogs about the April Cumulative Update for separate components in SharePoint. Now, the server packages (also known as “Uber” packages) of the April Cumulative Update for Microsoft Office SharePoint Server 2007 and Windows SharePoint Services 3.0 are ready for download.

Download Information
Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968850
Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/hotfix/KBHotfix.aspx?kbnum=968851

Detail Description
Description of the Windows SharePoint Services 3.0 April cumulative update package: http://support.microsoft.com/kb/968850
Description of the Office SharePoint Server 2007 April cumulative update package: http://support.microsoft.com/kb/968851

Installation Recommendation for a fresh SharePoint Server
To keep all files in a SharePoint installation up-to-date, the following sequence is recommended:
Service Pack 2 for Windows SharePoint Services 3.0
Service Pack 2 for Office SharePoint Server 2007
April Cumulative Update package for Windows SharePoint Services 3.0
April Cumulative Update package for Office SharePoint Server 2007

Please note: starting from the April Cumulative Update, the packages will no longer install on a farm without a service pack installed. You must have installed either Service Pack 1 (SP1) or SP2 prior to the installation of the cumulative updates. After applying the preceding updates, run the SharePoint Products and Technologies Configuration Wizard, or "psconfig -cmd upgrade -inplace b2b -wait" on the command line. This needs to be done on every server in the farm with SharePoint installed. The version of the content databases should be 12.0.6504.5000 after successfully applying these updates.

For more in-depth guidance for the update process, we recommend that customers refer to the following articles. These articles provide a correct way to deploy updates, identify known issues (and resolutions), and provide information about creating slipstream builds.
Deploy software updates for Windows SharePoint Services 3.0: http://technet.microsoft.com/en-us/library/cc288269.aspx
Deploy software updates for Office SharePoint Server 2007: http://technet.microsoft.com/en-us/library/cc263467.aspx
Create an installation source that includes software updates (Windows SharePoint Services 3.0): http://technet.microsoft.com/en-us/library/cc287882.aspx
Create an installation source that includes software updates (Office SharePoint Server 2007): http://technet.microsoft.com/en-us/library/cc261890.aspx
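For readers who just need the gist of the slipstreaming step from the linked how-to: the idea is to extract the service pack packages into the Updates folder of your installation source before running setup. A rough sketch, assuming the SP2 packages have been downloaded and the installation media has been copied to D:\MOSS2007 (the file names below are examples; use the packages you actually downloaded):

wssv3sp2-kb953338-x86-fullfile-en-us.exe /extract:D:\MOSS2007\Updates
officeserver2007sp2-kb953334-x86-fullfile-en-us.exe /extract:D:\MOSS2007\Updates

Running setup.exe from D:\MOSS2007 afterwards installs with SP2 already applied, which avoids the compatibility block on Windows 2008 R2.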


  • Mouse Clicks, Reactive Extensions and StreamInsight Mashup

I had an hour spare this afternoon so I wanted to have another play with Reactive Extensions in .Net and StreamInsight.  I also didn't want to simply use a console window as a way of gathering events so I decided to use a windows form instead. The task I set myself was this. Whenever I click on my form I want to subscribe to the event and output its location to the console window and also the timestamp of the event.  In addition to this I want to know, for every mouse click I do, how many mouse clicks have happened in the last 5 seconds.

The second point here is really interesting.  I have often found this when working with people on problems.  It is how you ask the question that determines how you tackle the problem.  I will show 2 ways of possibly answering the second question depending on how the question was interpreted. As a side effect of this example I will show how time in StreamInsight can stand still.  This is an important concept and we can see it in the output later.

Now to the code.  I will break it all down in this blogpost but you can download the solution and see it all together. I created a Console application and then instantiated a windows form.

frm = new Form();
Thread g = new Thread(CallUI);
g.SetApartmentState(ApartmentState.STA);
g.Start();

CallUI looks like this:

static void CallUI()
{
    System.Windows.Forms.Application.Run(frm);
    frm.Activate();
    frm.BringToFront();
}

Now what we need to do is create an observable from the MouseClick event on the form.  For this we use the Reactive Extensions.

var lblevt = Observable.FromEvent<MouseEventArgs>(frm, "MouseClick").Timestamp();

As mentioned earlier I have two objectives in this example and to solve the first I am going to again use the Reactive Extensions.  Let's subscribe to the MouseClick event and output the location and timestamp to the console.

lblevt.Subscribe(evt =>
{
    Console.WriteLine("Clicked: {0}, {1} ", evt.Value.EventArgs.Location, evt.Timestamp);
});

That should take care of objective #1, but what about the second objective? For that we need some temporal windowing and this means StreamInsight.  First we need to turn our Observable collection of MouseClick events into a PointStream.

Server s = Server.Create("Default");
Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("MouseClicks");
var input = lblevt.ToPointStream(
    a,
    evt => PointEvent.CreateInsert(
        evt.Timestamp,
        new { loc = evt.Value.EventArgs.Location.ToString(), ts = evt.Timestamp.ToLocalTime().ToString() }),
    AdvanceTimeSettings.IncreasingStartTime);

Now that we have created our PointStream we need to do something with it and this is where we get to our second objective.  It is pretty clear that we want some kind of windowing, but what? Here is one way of doing it.  It might not be what you wanted but again it is how the second objective is interpreted.

var q = from i in input.TumblingWindow(TimeSpan.FromSeconds(5), HoppingWindowOutputPolicy.ClipToWindowEnd)
        select new { CountOfClicks = i.Count() };

The above code creates tumbling windows of 5 seconds and counts the number of events in the windows.  If there are no events in the window then no result is output.  Likewise until an event (MouseClick) is issued then we do not see anything in the output (that is not strictly true, because it is the CTI strapped to our MouseClick events that flushes the events through the StreamInsight engine, not the events themselves).  This approach is centred around the windows and not the events.
Until the windows complete and a CTI is issued, no events are pushed through. An alternate way of answering our second question is below.

var q = from i in input.AlterEventDuration(evt => TimeSpan.FromSeconds(5)).SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
        select new { CountOfClicks = i.Count() };

In this code we extend the duration of each MouseClick to five seconds.  We then create Snapshot Windows over those events.  Snapshot windows are discussed in detail here.  With this solution we are centred around the events.  It is the events that are driving the output.  Let's have a look at the output from this solution as it may be a little confusing. First though let me show how we get the output from StreamInsight into the Console window.

foreach (var x in q.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
{
    Console.WriteLine(x.Payload.CountOfClicks);
}

Ok so now to the output.  The table at the top shows the output from our routine and the table at the bottom helps to explain the output.  One of the things that will help as well is that, for our PointStream, we set the issuing of CTIs to be IncreasingStartTime.  What this means is that the CTI is placed right at the start of the event, so it will not flush the event with which it was issued but will flush those prior to it.  In the bottom table the blue fill is where we issued a click, the yellow fill is the duration and boundaries of our events, and the numbers at the bottom indicate the count of events.

Clicked: 22:40:16
Clicked: 22:40:18
1
Clicked: 22:40:20
2
Clicked: 22:40:22
3
2
Clicked: 22:40:24
3
2
Clicked: 22:40:32
3
2
1

[The original post shows a timeline table here: the seconds 16 through 32 run along the bottom, blue cells mark each click, yellow bars show each click's 5-second duration, and the count row reads 1, 2, 3, 2, 3, 2, 3, 2, 1 at the points where snapshot windows close.]

What we can see here in the output is that the counts include all the end edges that have occurred between the mouse clicks.  If we look specifically at the mouse click at 22:40:32, then we see that 3 events are returned to us. These include the following:

End Edge count at 22:40:25
End Edge count at 22:40:27
End Edge count at 22:40:29

Another thing we notice is that until we actually issue a CTI at 22:40:32, those last 3 snapshot window counts will never be reported. Hopefully this has helped to explain a few concepts around StreamInsight and the IObservable() pattern.  You can download this solution from here and play.  You will need the Reactive Framework from here and StreamInsight 1.1.
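If you want to assemble the snippets above into a compilable console project without downloading the solution, the surrounding scaffolding is roughly the following. This is a sketch: the field and method names mirror the post, and everything else (including project references to Rx, StreamInsight and Windows Forms) is assumed.

using System;
using System.Threading;
using System.Windows.Forms;

class Program
{
    // The form we click on; a static field so both the UI thread and Main can reach it.
    static Form frm;

    static void Main()
    {
        frm = new Form();
        Thread g = new Thread(CallUI);
        g.SetApartmentState(ApartmentState.STA);
        g.Start();

        // ...create the observable, the PointStream and the query as shown above,
        // then enumerate the query output so the console keeps printing counts.
    }

    static void CallUI()
    {
        Application.Run(frm);
    }
}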


  • Our own Daily WTF

    - by Dennis Vroegop
Originally posted on: http://geekswithblogs.net/dvroegop/archive/2014/08/20/our-own-daily-wtf.aspx

If you're a developer, you've probably heard of the website the DailyWTF. If you haven't, head on over to http://www.thedailywtf.com and read. And laugh. I'll wait. Read it? Good. If you're a bit like me, you've probably been wondering how on earth some people ever get hired as software engineers. Most of the stories there seem too weird to be true: no developer would write software like that, right? And then you run into a little nugget of code one of your co-workers wrote. And then you realize: "Hey, it happens everywhere!" Look at this piece of art I found in our codebase recently:

public static decimal ToDecimal(this string input)
{
    System.Globalization.CultureInfo cultureInfo = System.Globalization.CultureInfo.InstalledUICulture;
    var numberFormatInfo = (System.Globalization.NumberFormatInfo)cultureInfo.NumberFormat.Clone();

    int dotIndex = input.IndexOf(".");
    int commaIndex = input.IndexOf(",");

    if (dotIndex > commaIndex)
        numberFormatInfo.NumberDecimalSeparator = ".";
    else if (commaIndex > dotIndex)
        numberFormatInfo.NumberDecimalSeparator = ",";

    decimal result;
    if (decimal.TryParse(input, System.Globalization.NumberStyles.Float, numberFormatInfo, out result))
        return result;
    else
        throw new Exception(string.Format("Invalid input for decimal parsing: {0}. Decimal separator: {1}.", input, numberFormatInfo.NumberDecimalSeparator));
}

A colleague and I have been looking long and hard at this, and what we concluded was the following: apparently, we don't trust our users to be able to correctly set the culture in Windows. Users aren't able to determine if they should tell Windows to use a decimal point or a comma to display numbers. So what we've done here is make sure that whatever the user enters, we'll translate that into whatever the user WANTS to enter instead of what he actually did. So if you set your locale to US, since you're a US citizen, but you want to enter the number 12.34 in the Dutch style (because, you know, the Dutch are way cooler with numbers) so you enter 12,34, we will understand this and respect your wishes! Of course, if you change your mind and in the next input field you decide to use the decimal dot again, that's fine with us as well. We will do the hard work.

Now, I am all for smart software. Software that can handle all sorts of input the user can think of. But this feels a little uhm, I don't know.. wrong.. Or am I too old fashioned?
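For contrast, the behaviour this post is implicitly arguing for, simply trusting the culture the user has configured in Windows, is a one-liner. Here is a sketch of that alternative (the extension-method name ParseDecimal is made up for illustration):

using System.Globalization;

public static class StringConversions
{
    // Parse with the user's configured culture instead of guessing
    // the decimal separator from the characters in the input.
    public static decimal ParseDecimal(this string input)
    {
        return decimal.Parse(input, NumberStyles.Float, CultureInfo.CurrentCulture);
    }
}

A user who types 12,34 under a Dutch locale and 12.34 under a US locale gets the value they expect in both cases; anything else is an input error to report, not something to second-guess.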


  • Recovering PL/SQL source overwritten by create or replace

    - by Liu Maclean
    ????T.Askmaclean.com?????10gR2??????procedure,?????????create or replace ??????????????????,????Oracle???????????????????procedure? ??Maclean ??2?10gR2???????????PL/SQL?????: ??1: ??Flashback Query ????,?????????????flashback database,??????????create or replace???SQL??source$??????????undo data,????????????: SQL> select * from V$version; BANNER ---------------------------------------------------------------- Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bi PL/SQL Release 10.2.0.5.0 - Production CORE 10.2.0.5.0 Production TNS for Linux: Version 10.2.0.5.0 - Production NLSRTL Version 10.2.0.5.0 - Production SQL> select * from global_name; GLOBAL_NAME -------------------------------------------------------------------------------- www.oracledatabase12g.com SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 1 from dual';   4  end;   5  / Procedure created. SQL> select * from dba_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             4 end; SQL> select current_scn from v$database; CURRENT_SCN -----------     2660057 create or replace procedure maclean_proc as begin -- I am new procedure execute immediate 'select 2 from dual'; end; / Procedure created. SQL> select current_scn from v$database; CURRENT_SCN -----------     2660113 SQL> select * from dba_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 -- I am new procedure SYS        MACLEAN_PROC                   PROCEDURE             4 execute immediate 'select 2 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             5 end; SQL> create table old_source as select * from dba_source as of scn 2660057 where name='MACLEAN_PROC'; Table created. SQL> select * from old_source where name='MACLEAN_PROC'; OWNER      NAME                           TYPE               LINE TEXT ---------- ------------------------------ ------------ ---------- -------------------------------------------------- SYS        MACLEAN_PROC                   PROCEDURE             1 procedure maclean_proc as SYS        MACLEAN_PROC                   PROCEDURE             2 begin SYS        MACLEAN_PROC                   PROCEDURE             3 execute immediate 'select 1 from dual'; SYS        MACLEAN_PROC                   PROCEDURE             4 end; ?????????scn??flashback query????,????????as of timestamp??????????,????PL/SQL????????????????undo??????????,????????????replace/drop ??????PL/SQL??? ??2 ??logminer??replace/drop PL/SQL?????SQL???DELETE??,??logminer?UNDO SQL???PL/SQL?????? ????????????????archivelog????,??????????????? minimal supplemental logging,??????????Unsupported SQLREDO???: create or replace?? ?? 
procedure???????SQL??????, ??????procedure????????????????, source$??????????????: SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; Database altered. SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 1 from dual';   4  end;   5  / Procedure created. SQL> SQL> oradebug setmypid; Statement processed. SQL> SQL> oradebug event 10046 trace name context forever,level 12; Statement processed. SQL> SQL> create or replace procedure maclean_proc as   2  begin   3  execute immediate 'select 2 from dual';   4  end;   5  / Procedure created. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_4305.trc [oracle@vrh8 ~]$ egrep  "update|insert|delete|merge"  /s01/admin/G10R25/udump/g10r25_ora_4305.trc delete from procedureinfo$ where obj#=:1 delete from argument$ where obj#=:1 delete from procedurec$ where obj#=:1 delete from procedureplsql$ where obj#=:1 delete from procedurejava$ where obj#=:1 delete from vtable$ where obj#=:1 insert into procedureinfo$(obj#,procedure#,overload#,procedurename,properties,itypeobj#) values (:1,:2,:3,:4,:5,:6) insert into argument$( obj#,procedure$,procedure#,overload#,position#,sequence#,level#,argument,type#,default#,in_out,length,precision#,scale,radix,charsetid,charsetform,properties,type_owner,type_name,type_subname,type_linkname,pls_type) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23) insert into procedureplsql$(obj#,procedure#,entrypoint#) values (:1,:2,:3) update procedure$ set audit$=:2,options=:3 where obj#=:1 delete from source$ where obj#=:1 insert into source$(obj#,line,source) values (:1,:2,:3) delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_char$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub2$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_sb4$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub1$ where obj#=:1 and part=:2 and version<>:3 delete from idl_char$ where obj#=:1 and part=:2 and version<>:3 delete from idl_ub2$ where obj#=:1 and part=:2 and version<>:3 delete from idl_sb4$ where obj#=:1 and part=:2 and version<>:3 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 update idl_sb4$ set piece#=:1 ,length=:2 , piece=:3 where obj#=:4 and part=:5 and piece#=:6 and version=:7 update idl_ub1$ set piece#=:1 ,length=:2 , piece=:3 where 
obj#=:4 and part=:5 and piece#=:6 and version=:7 delete from idl_char$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from idl_ub2$ where obj#=:1 and part=:2 and (piece#<:3 or piece#>:4) and version=:5 delete from error$ where obj#=:1 delete from settings$ where obj# = :1 insert into settings$(obj#, param, value) values (:1, :2, :3) delete from warning_settings$ where obj# = :1 insert into warning_settings$(obj#, warning_num, global_mod, property) values (:1, :2, :3, :4) delete from dependency$ where d_obj#=:1 delete from access$ where d_obj#=:1 insert into dependency$(d_obj#,d_timestamp,order#,p_obj#,p_timestamp, property, d_attrs)values (:1,:2,:3,:4,:5,:6, :7) insert into access$(d_obj#,order#,columns,types) values (:1,:2,:3,:4) update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null) ?drop procedure??????source$???PL/SQL?????: SQL> oradebug setmypid; Statement processed. SQL> oradebug event 10046 trace name context forever,level 12; Statement processed. SQL> drop procedure maclean_proc; Procedure dropped. SQL> oradebug tracefile_name /s01/admin/G10R25/udump/g10r25_ora_4331.trc delete from context$ where obj#=:1 delete from dir$ where obj#=:1 delete from type_misc$ where obj#=:1 delete from library$ where obj#=:1 delete from procedure$ where obj#=:1 delete from javaobj$ where obj#=:1 delete from operator$ where obj#=:1 delete from opbinding$ where obj#=:1 delete from opancillary$ where obj#=:1 delete from oparg$ where obj# = :1 delete from com$ where obj#=:1 delete from source$ where obj#=:1 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from idl_ub1$ where obj#=:1 and part=:2 delete from idl_char$ where obj#=:1 and part=:2 delete from idl_ub2$ where obj#=:1 and part=:2 delete from idl_sb4$ where obj#=:1 and part=:2 delete from ncomp_dll$ where obj#=:1 returning dllname into :2 delete from error$ where obj#=:1 delete from settings$ where obj# = :1 delete from procedureinfo$ where obj#=:1 delete from argument$ where obj#=:1 delete from procedurec$ where obj#=:1 delete from procedureplsql$ where obj#=:1 delete from procedurejava$ where obj#=:1 delete from vtable$ where obj#=:1 delete from dependency$ where d_obj#=:1 delete from access$ where d_obj#=:1 delete from objauth$ where obj#=:1 update obj$ set obj#=:6,type#=:7,ctime=:8,mtime=:9,stime=:10,status=:11,dataobj#=:13,flags=:14,oid$=:15,spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null) ??????????source$???redo: SQL> alter system switch logfile; System altered. 
SQL> select sequence#,name from v$archived_log where sequence#=(select max(sequence#) from v$archived_log);  SEQUENCE# ---------- NAME --------------------------------------------------------------------------------        242 /s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc SQL> exec dbms_logmnr.add_logfile ('/s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc',options => dbms_logmnr.new); PL/SQL procedure successfully completed. SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog); PL/SQL procedure successfully completed. SQL> select sql_redo,sql_undo from v$logmnr_contents where seg_name = 'SOURCE$' and operation='DELETE'; delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAN'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAO'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 1 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAP'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 1 from dual''; '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAQ'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;'); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '1' and "SOURCE" = 'procedure maclean_proc as ' and ROWID = 'AAAABIAABAAALpyAAJ'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '2' and "SOURCE" = 'begin ' and ROWID = 'AAAABIAABAAALpyAAK'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '3' and "SOURCE" = 'execute immediate ''select 2 from dual''; ' and ROWID = 'AAAABIAABAAALpyAAL'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 2 from dual''; '); delete from "SYS"."SOURCE$" where "OBJ#" = '56059' and "LINE" = '4' and "SOURCE" = 'end;' and ROWID = 'AAAABIAABAAALpyAAM'; insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;'); ???? logminer???UNDO SQL???????source$????,?DELETE????????????,????SOURCE????????????PL/SQL???DDL???


  • Magento, NGINX, PHP-FPM, APC, MEMCACHED, 16gb Ram CentOS, Spiking PHP-FPM to 100% CPU

    - by Terry Dunford
    I have been trying to resolve my issue of spiking cpu caused by php-fpm processes. I've reduced the php-fpm config settings to: pm = ondemand pm.max_children = 12 pm.start_servers = 2 pm.min_spare_servers = 2 pm.max_spare_servers = 10 pm.max_requests = 500 php_admin_value[memory_limit] = 128M Problem still exists. I'm running a Joomla main site (which is having no problems) and a Magento store in a sub-directory. My server is a Linux CentOS, running NGINX, APC, Memcached, Full Page Cache and php-fpm. My server has 8 cores and 16gb dedicated ram. My host has shut down my server several times the past week because my php-fpm processes are consuming the entire network. A lot of the individual php-fpm processes are getting over 50% cpu. I've hired several "professionals" and none of them was able to help me, so now broke and stumped, I'm turning to you guys for help. So any suggestions would be greatly appreciated. I turned on slow php logs and here are some of the latest results: [01-Apr-2012 14:26:12] [pool magento] pid 21537 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a394f8] _renderStraightjoin() /home/flyfish/www/flyshop/lib/Varien/Db/Select.php:397 [0x0000000011a39158] _renderStraightjoin() /home/flyfish/www/flyshop/lib/Zend/Db/Select.php:705 [0x0000000011a38f30] assemble() /home/flyfish/www/flyshop/lib/Zend/Db/Select.php:1343 [0x00007fffbb6d6e50] __toString() unknown:0 [0x0000000011a38630] _prepareQuery() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:409 [0x0000000011a38270] _prepareQuery() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:388 [0x0000000011a38008] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000011a375c8] fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:196 [0x0000000011a370e0] _loadLabels() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:129 [0x0000000011a369a0] _afterLoad() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x0000000011a364a8] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x0000000011a35968] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x0000000011a35590] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x0000000011a35410] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x0000000011a35098] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x0000000011a34fa8] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x0000000011a33998] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x0000000011a331f0] +++ dump failed [01-Apr-2012 14:26:44] [pool magento] pid 21531 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a37768] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:251 [0x0000000011a37280] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:132 [0x0000000011a36b40] _afterLoad() 
/home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x0000000011a36648] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x0000000011a35b08] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x0000000011a35730] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x0000000011a355b0] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x0000000011a35238] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x0000000011a35148] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x0000000011a33b38] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x0000000011a33390] +++ dump failed [01-Apr-2012 14:27:01] [pool magento] pid 21528 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011ff67a8] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011ff6518] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011ff5e90] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011ff5a20] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011ff5438] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011ff5078] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011ff4e98] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:825 [0x0000000011ff4948] fetchOne() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:1161 [0x0000000011ff4678] getProductCount() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Category.php:801 [0x0000000011ff33e0] getProductCount() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:54 [0x0000000011ff2da0] _initItemsData() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:23 [0x0000000011ff2818] _getItemsData() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Model/Library/Plugin/Catalog/Layer/Filter/Category.php:119 [0x0000000011ff26b0] _initItems() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Layer/Filter/Abstract.php:120 [0x0000000011ff2598] getItems() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Layer/Filter/Abstract.php:109 [0x0000000011ff2480] getItemsCount() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Block/Layer/Filter/Abstract.php:126 [0x0000000011ff22b8] getItemsCount() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Layer/View/67dcc5dfa9c44bd3a205b75a08193105.php:218 [0x0000000011ff2088] canShowOptions() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Layer/View/67dcc5dfa9c44bd3a205b75a08193105.php:233 [0x0000000011ff14f8] canShowBlock() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/extendware/ewlayerednav/catalog/layer/view.phtml:6 [0x0000000011ff0d50] +++ dump failed [01-Apr-2012 14:27:04] [pool magento] pid 21529 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000012468ff8] execute() 
/home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000012468d68] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x00000000124686e0] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000012468270] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000012467c88] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x00000000124678c8] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000012467660] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000012467248] fetchAll() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:687 [0x00000000124668f0] _fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Collection/Abstract.php:1045 [0x0000000012466288] _loadEntities() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Collection/Abstract.php:869 [0x0000000012465fb0] load() /home/flyfish/www/flyshop/app/code/core/Mage/Review/Model/Observer.php:78 [0x0000000012465d10] catalogBlockProductCollectionBeforeToHtml() /home/flyfish/www/flyshop/app/code/core/Mage/Core/Model/App.php:1303 [0x0000000012464c28] _callObserverMethod() /home/flyfish/www/flyshop/app/code/core/Mage/Core/Model/App.php:1278 [0x00000000124649e0] dispatchEvent() /home/flyfish/www/flyshop/app/Mage.php:416 [0x0000000012464290] dispatchEvent() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Block/Product/List.php:163 [0x0000000012463760] _beforeToHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:864 [0x00000000124633b0] toHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:584 [0x0000000012462e30] _getChildHtml() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:528 [0x0000000012462d38] getChildHtml() /home/flyfish/www/flyshop/var/cache/extendware/ewcore/overrides/Mage/Catalog/Block/Category/View/6362e7526f5dcb27e7f8b0b414b59004.php:85 [0x00000000124629f0] getProductListHtml() /home/flyfish/www/flyshop/app/code/local/Extendware/EWLayeredNav/Block/Override/Mage/Catalog/Category/View.php:20 [01-Apr-2012 14:27:55] [pool magento] pid 21536 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a35010] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011a34d80] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011a346f8] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011a34288] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011a33ca0] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011a338e0] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011a33700] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:825 [0x0000000011a33368] fetchOne() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Type.php:71 [0x0000000011a33238] getAdditionalAttributeTable() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:483 [0x0000000011a32be8] getAdditionalAttributeTable() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:500 [0x0000000011a32860] _afterLoad() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Resource/Entity/Attribute.php:108 [0x0000000011a32330] loadByCode() 
/home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Entity/Attribute/Abstract.php:118 [0x0000000011a31350] loadByCode() /home/flyfish/www/flyshop/app/code/core/Mage/Eav/Model/Config.php:423 [0x0000000011a30ce8] getAttribute() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Helper/Output.php:156 [0x0000000011a30208] categoryAttribute() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/catalog/category/view.phtml:47 [0x0000000011a2fa60] +++ dump failed [01-Apr-2012 14:27:56] [pool magento] pid 21530 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a35b10] updateParamDefaults() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:276 [0x0000000011a35750] updateParamDefaults() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:326 [0x0000000011a351f0] getSkinBaseUrl() /home/flyfish/www/flyshop/var/ait_rewrite/78778b0d1ad4bf93e846365bd2fbf33f.php:482 [0x0000000011a350a8] getSkinUrl() /home/flyfish/www/flyshop/var/ait_rewrite/6bfe16ca572eea47db567910902c6209.php:981 [0x0000000011a32468] getSkinUrl() /home/flyfish/www/flyshop/app/code/local/Extendware/EWMinify/Block/Override/Mage/Page/Html/Head.php:126 [0x0000000011a30ca8] getCssJsHtml() /home/flyfish/www/flyshop/app/code/local/Extendware/EWCore/Block/Override/Mage/Page/Html/Head.php:55 [0x0000000011a30978] getCssJsHtml() /home/flyfish/www/flyshop/app/code/local/MageWorx/SeoSuite/Block/Page/Html/Head.php:41 [0x0000000011a2fd10] getCssJsHtml() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-modalheader/rokmage-head.phtml:26 [0x0000000011a2f568] +++ dump failed [01-Apr-2012 14:28:28] [pool magento] pid 21527 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000010c7bba0] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000010c7b910] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000010c7b288] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000010c7ae18] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000010c7a830] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000010c7a470] query() /home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000010c7a168] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:808 [0x0000000010c79558] fetchPairs() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Collection.php:840 [0x0000000010c79240] addCountToCategories() /home/flyfish/www/flyshop/app/code/community/Mage/Catalog/Block/Navigation.php:133 [0x0000000010c71d48] getCurrentChildCategories() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/rokmagemodules/rokmage-magemenus/rokmage-magemenu-left.phtml:139 [0x0000000010c715a0] +++ dump failed [01-Apr-2012 14:28:28] [pool magento] pid 21577 script_filename = /home/flyfish/www/flyshop/index.php [0x0000000011a3a8d8] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement/Pdo.php:228 [0x0000000011a3a648] _execute() /home/flyfish/www/flyshop/lib/Varien/Db/Statement/Pdo/Mysql.php:110 [0x0000000011a39fc0] _execute() /home/flyfish/www/flyshop/lib/Zend/Db/Statement.php:300 [0x0000000011a39b50] execute() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:479 [0x0000000011a39568] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Pdo/Abstract.php:238 [0x0000000011a391a8] query() 
/home/flyfish/www/flyshop/lib/Varien/Db/Adapter/Pdo/Mysql.php:389 [0x0000000011a38f40] query() /home/flyfish/www/flyshop/lib/Zend/Db/Adapter/Abstract.php:734 [0x0000000011a37cc0] fetchAll() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:276 [0x0000000011a37b20] _loadNodes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Category/Flat.php:1229 [0x0000000011a379a0] getChildrenCategories() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Category.php:841 [0x0000000011a37690] getChildrenCategories() /home/flyfish/www/flyshop/app/code/community/Mage/Catalog/Block/Navigation.php:130 [0x0000000011a30198] getCurrentChildCategories() /home/flyfish/www/flyshop/app/design/frontend/base/default/template/rokmagemodules/rokmage-magemenus/rokmage-magemenu-left.phtml:139 [0x0000000011a2f9f0] +++ dump failed [01-Apr-2012 14:28:48] [pool magento] pid 21629 script_filename = /home/flyfish/www/flyshop/index.php [0x00002ac987e2cb48] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:252 [0x00002ac987e2c660] _loadPrices() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Resource/Product/Type/Configurable/Attribute/Collection.php:132 [0x00002ac987e2bf20] _afterLoad() /home/flyfish/www/flyshop/lib/Varien/Data/Collection/Db.php:536 [0x00002ac987e2ba28] load() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:253 [0x00002ac987e2aee8] getConfigurableAttributes() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:330 [0x00002ac987e2ab10] getUsedProducts() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product/Type/Configurable.php:458 [0x00002ac987e2a990] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1264 [0x00002ac987e2a618] isAvailable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1244 [0x00002ac987e2a528] isSalable() /home/flyfish/www/flyshop/app/code/core/Mage/Catalog/Model/Product.php:1308 [0x00002ac987e28f18] isSaleable() /home/flyfish/www/flyshop/app/design/frontend/moxy/default/template/rokmagemodules/rokmage-categoryview/rokmage-categoryview.phtml:122 [0x00002ac987e28770] +++ dump failed ___________________________________________ A snippet of the Latest php-fpm error log: [01-Apr-2012 14:26:12] WARNING: [pool magento] child 21537, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.265105 sec), logging [01-Apr-2012 14:26:12] ERROR: failed to ptrace(PEEKDATA) pid 21537: Input/output error (5) [01-Apr-2012 14:26:44] WARNING: [pool magento] child 21531, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.268434 sec), logging [01-Apr-2012 14:26:44] ERROR: failed to ptrace(PEEKDATA) pid 21531: Input/output error (5) [01-Apr-2012 14:27:01] WARNING: [pool magento] child 21528, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (6.656633 sec), logging [01-Apr-2012 14:27:01] ERROR: failed to ptrace(PEEKDATA) pid 21528: Input/output error (5) [01-Apr-2012 14:27:04] WARNING: [pool magento] child 21529, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.211136 sec), logging [01-Apr-2012 14:27:55] WARNING: [pool magento] child 21536, script '/home/flyfish/www/flyshop/index.php' (request: "GET 
/flyshop/index.php") executing too slow (5.207001 sec), logging [01-Apr-2012 14:27:55] ERROR: failed to ptrace(PEEKDATA) pid 21536: Input/output error (5) [01-Apr-2012 14:27:56] WARNING: [pool magento] child 21530, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.503186 sec), logging [01-Apr-2012 14:27:56] ERROR: failed to ptrace(PEEKDATA) pid 21530: Input/output error (5) [01-Apr-2012 14:28:28] WARNING: [pool magento] child 21577, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.722625 sec), logging [01-Apr-2012 14:28:28] WARNING: [pool magento] child 21527, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.122326 sec), logging [01-Apr-2012 14:28:28] ERROR: failed to ptrace(PEEKDATA) pid 21527: Input/output error (5) [01-Apr-2012 14:28:28] ERROR: failed to ptrace(PEEKDATA) pid 21577: Input/output error (5) [01-Apr-2012 14:28:48] WARNING: [pool magento] child 21629, script '/home/flyfish/www/flyshop/index.php' (request: "GET /flyshop/index.php") executing too slow (5.446961 sec), logging [01-Apr-2012 14:28:48] ERROR: failed to ptrace(PEEKDATA) pid 21629: Input/output error (5) _____________________________________________ I also noticed that the server is not using much memory: Mem: 16777216k total, 1204040k used, 15573176k free My.conf settings: query_cache_size = 128M innodb_buffer_pool_size = 512M open-files-limit = 8192 table_cache=4096 I just noticed that someone changed my innodb_buffer_pool_size to 512M. Shouldn't this be set to 80% of available ram? So I have 16gb ram so it should be set at 12G; however, I set it at 10G. What do you think? I made that change and restart everything. Php-fpm is still spiking cpu. Here is just 1 php-fpm process: 23942 user 17 0 507m 99m 27m R 90.9%CPU 0.6 0:03.46 php-fpm I'm sure there may be more information you will need to help, so just let me know what you guys need to help me figure this out. Thank you.

