
  • An Introduction to ASP.NET Web API

    - by Rick Strahl
Microsoft recently released ASP.NET MVC 4.0 and .NET 4.5, and along with them the brand new ASP.NET Web API. Web API is an exciting addition to the ASP.NET stack that provides a well-designed HTTP framework for creating REST and AJAX APIs (API is Microsoft’s new jargon for a service, in case you’re wondering). Although Web API ships and installs with ASP.NET MVC 4, you can use Web API functionality in any ASP.NET project, including WebForms, WebPages and MVC, or build a Web API by itself. You can also self-host Web API in your own applications from Console, Desktop or Service applications.

If you're interested in a high-level overview of what ASP.NET Web API is and how it fits into the ASP.NET stack, check out my previous post: Where does ASP.NET Web API fit? In the following article, I'll focus on a practical, by-example introduction to ASP.NET Web API. All the code discussed in this article is available on GitHub: https://github.com/RickStrahl/AspNetWebApiArticle

[republished from my Code Magazine article and updated for the RTM release of ASP.NET Web API]

Getting Started

To start, I’ll create a new empty ASP.NET application to demonstrate that Web API can work with any kind of ASP.NET project. Although you can create a new project based on the ASP.NET MVC/Web API template to quickly get up and running, I’ll take you through the manual setup process, because one common use case is to add Web API functionality to an existing ASP.NET application. This process describes the steps needed to hook up Web API to any ASP.NET 4.0 application. Start by creating an ASP.NET Empty Project. Then create a new folder in the project called Controllers.

Add a Web API Controller Class

Once you have any kind of ASP.NET project open, you can add a Web API Controller class to it. Web API Controllers are very similar to MVC Controller classes, but they work in any kind of project. Add a new item to this folder by using the Add New Item option in Visual Studio and choose Web API Controller Class, as shown in Figure 1.

Figure 1: This is how you create a new Controller Class in Visual Studio

Make sure that the name of the controller class ends in Controller, which is required for Web API routing to find it. Here, the name for the class is AlbumApiController. For this example, I’ll use a Music Album model to demonstrate the basic behavior of Web API. The model consists of albums and related songs, where an album has properties like Name, Artist and YearReleased, and a list of songs with a SongName and SongLength as well as an AlbumId that links each song to its album. You can find the code for the model (and the rest of these samples) on GitHub. To add the file manually, create a new folder called Model, add a new class Album.cs and copy the code into it. There’s a static AlbumData class with a static CreateSampleAlbumData() method that creates a short list of albums on a static .Current member that I’ll use for the examples. Before we look at what goes into the controller class, though, let’s hook up routing so we can access this new controller.

Hooking up Routing in Global.asax

To start, I need to perform the one required configuration task for Web API to work: I need to configure routing to the controller. Like MVC, Web API uses routing to provide clean, extension-less URLs to controller methods.
Using an extension method on ASP.NET’s static RouteTable class, you can use the MapHttpRoute() method (in the System.Web.Http namespace) to hook up the routing during Application_Start in global.asax.cs, as shown in Listing 1.

using System;
using System.Web.Routing;
using System.Web.Http;

namespace AspNetWebApi
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            RouteTable.Routes.MapHttpRoute(
                name: "AlbumVerbs",
                routeTemplate: "albums/{title}",
                defaults: new
                {
                    title = RouteParameter.Optional,
                    controller = "AlbumApi"
                });
        }
    }
}

This route configures Web API to direct URLs that start with an albums folder to the AlbumApiController class. Routing in ASP.NET is used to create extensionless URLs and allows you to map segments of the URL to specific route value parameters. A route parameter, with a name inside curly brackets like {name}, is mapped to parameters on the controller methods. Route parameters can be optional, and there are two special route parameters – controller and action – that determine the controller to call and the method to activate, respectively.

HTTP Verb Routing

Routing in Web API can route requests by HTTP Verb in addition to standard {controller},{action} routing. For the first examples, I use HTTP Verb routing, as shown in Listing 1. Notice that the route I’ve defined does not include an {action} route value or an action value in the defaults. Instead, Web API can use the HTTP Verb in this route to determine the method to call on the controller: a GET request maps to any method that starts with Get, so methods called Get() or GetAlbums() are matched by a GET request, and a POST request maps to a Post() or PostAlbum(). Web API matches a method by name and parameter signature against the route, query string or POST values. In lieu of the method name, the [HttpGet], [HttpPost], [HttpPut], [HttpDelete] (etc.) attributes can also be used to designate the accepted verbs explicitly if you don’t want to follow the verb naming conventions. Although HTTP Verb routing is a good practice for REST-style resource APIs, it’s not required, and you can still use more traditional routes with an explicit {action} route parameter. When {action} is supplied, HTTP verb routing is ignored. I’ll talk more about alternate routes later.

When you’re finished with the initial creation of files, your project should look like Figure 2.

Figure 2: The initial project has the new API Controller and Album model

Creating a small Album Model

Now it’s time to create some controller methods to serve data. For these examples, I’ll use a very simple Album and Songs model to play with, as shown in Listing 2.
public class Song
{
    public string AlbumId { get; set; }
    [Required, StringLength(80)]
    public string SongName { get; set; }
    [StringLength(5)]
    public string SongLength { get; set; }
}

public class Album
{
    public string Id { get; set; }
    [Required, StringLength(80)]
    public string AlbumName { get; set; }
    [StringLength(80)]
    public string Artist { get; set; }
    public int YearReleased { get; set; }
    public DateTime Entered { get; set; }
    [StringLength(150)]
    public string AlbumImageUrl { get; set; }
    [StringLength(200)]
    public string AmazonUrl { get; set; }

    public virtual List<Song> Songs { get; set; }

    public Album()
    {
        Songs = new List<Song>();
        Entered = DateTime.Now;

        // Poor man's unique Id off GUID hash
        Id = Guid.NewGuid().GetHashCode().ToString("x");
    }

    public void AddSong(string songName, string songLength = null)
    {
        this.Songs.Add(new Song()
        {
            AlbumId = this.Id,
            SongName = songName,
            SongLength = songLength
        });
    }
}

Once the model has been created, I also added an AlbumData class that generates some static data in memory that is loaded onto a static .Current member. The signature of this class looks like this, and that's what I'll access to retrieve the base data:

public static class AlbumData
{
    // sample data - static list
    public static List<Album> Current = CreateSampleAlbumData();

    /// <summary>
    /// Create some sample data
    /// </summary>
    public static List<Album> CreateSampleAlbumData()
    {
        …
    }
}

You can check out the full code for the data generation online.

Creating an AlbumApiController

Web API shares many concepts with ASP.NET MVC, and you implement your API logic by subclassing the System.Web.Http.ApiController class. Each public method in the implemented controller is a potential endpoint for the HTTP API, as long as a matching route can be found to invoke it. The class name you create should end in Controller, which is how Web API matches the controller route value to figure out which class to invoke. Inside the controller you can implement methods that take standard .NET input parameters and return .NET values as results. Web API’s binding tries to match POST data, route values, form values or query string values to your parameters. Because the controller is configured for HTTP Verb based routing (no {action} parameter in the route), any methods that start with Getxxxx() are called by an HTTP GET operation. You can have multiple methods that match each HTTP Verb as long as the parameter signatures are different and can be matched by Web API. In Listing 3, I create an AlbumApiController with two methods to retrieve a list of albums and a single album by its title.

public class AlbumApiController : ApiController
{
    public IEnumerable<Album> GetAlbums()
    {
        var albums = AlbumData.Current.OrderBy(alb => alb.Artist);
        return albums;
    }

    public Album GetAlbum(string title)
    {
        var album = AlbumData.Current
            .SingleOrDefault(alb => alb.AlbumName.Contains(title));
        return album;
    }
}

To access these two methods, you can use the following URLs in your browser:

http://localhost/aspnetWebApi/albums
http://localhost/aspnetWebApi/albums/Dirty%20Deeds

Note that you’re not specifying the actions GetAlbum or GetAlbums in these URLs. Instead, Web API’s routing uses the HTTP GET verb to route to these methods that start with Get, with the first URL mapping to the parameterless GetAlbums() method and the latter to the GetAlbum(title) method that receives the title parameter mapped as optional in the route.
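If you'd rather not follow the Get/Post naming convention, the verb attributes mentioned earlier can mark any method instead. Here's a minimal sketch of that option (the controller and method names AllAlbums and RemoveAlbum are my own illustration, not part of the sample project):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class AlbumVerbAttributeController : ApiController
{
    // Matched by HTTP GET even though the name doesn't start with "Get"
    [HttpGet]
    public IEnumerable<Album> AllAlbums()
    {
        return AlbumData.Current.OrderBy(alb => alb.Artist);
    }

    // Matched by HTTP DELETE; title binds from the route or query string
    [HttpDelete]
    public HttpResponseMessage RemoveAlbum(string title)
    {
        var album = AlbumData.Current
            .SingleOrDefault(alb => alb.AlbumName == title);
        if (album == null)
            return new HttpResponseMessage(HttpStatusCode.NotFound);

        AlbumData.Current.Remove(album);
        return new HttpResponseMessage(HttpStatusCode.NoContent);
    }
}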
Content Negotiation

When you access any of the URLs above from a browser, you get either an XML or a JSON result returned back. The album list result for Chrome 17 and Internet Explorer 9 is shown in Figure 3.

Figure 3: Web API responses can vary depending on the browser used, demonstrating Content Negotiation in action as these two browsers send different HTTP Accept headers.

Notice that the results are not the same: Chrome returns an XML response and IE9 returns a JSON response. Whoa, what’s going on here? Shouldn’t we see the same result in both browsers? Actually, no. Web API determines what type of content to return based on Accept headers. HTTP clients, like browsers, use Accept headers to specify what kind of content they’d like to see returned. Browsers generally ask for HTML first, followed by a few additional content types. Chrome (and most other major browsers) asks for:

Accept: text/html, application/xhtml+xml, application/xml; q=0.9, */*; q=0.8

IE9 asks for:

Accept: text/html, application/xhtml+xml, */*

Note that Chrome’s Accept header includes application/xml, which Web API finds in its list of supported media types, so it returns an XML response. IE9 does not include an Accept header type that Web API supports by default, so it returns the default format, which is JSON. This is an important and very useful feature that was missing from previous Microsoft REST tools: Web API automatically switches output formats based on HTTP Accept headers. Nowhere in the server code above do you have to explicitly specify the output format. Rather, Web API determines what format the client is requesting based on the Accept headers and automatically returns the result using the available formatters. This means that a single method can handle both XML and JSON results. Using this simple approach makes it very easy to create a single controller method that can return JSON, XML, ATOM or even OData feeds by providing the appropriate Accept header from the client. By default, you don’t have to worry about the output format in your code. Note that you can still specify an explicit output format if you choose, either globally by overriding the installed formatters, or individually by returning a lower-level HttpResponseMessage instance and setting the formatter explicitly. More on that in a minute. Along the same lines, any content sent to the server via POST/PUT is parsed by Web API based on the HTTP Content-Type of the data sent. The same formats allowed for output are also allowed on input. Again, you don’t have to do anything in your code – Web API automatically performs the deserialization from the content.
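To watch content negotiation from a non-browser client, you can set the Accept header yourself. Here's a minimal console sketch using HttpClient (shipped with .NET 4.5); the base URL is the sample project address from the examples above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;

class AcceptHeaderDemo
{
    static void Main()
    {
        var client = new HttpClient();

        // Ask explicitly for JSON - Web API picks the JSON formatter
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/json"));
        var json = client.GetStringAsync(
            "http://localhost/aspnetWebApi/albums").Result;
        Console.WriteLine(json);

        // Switch the Accept header and the same endpoint returns XML
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(
            new MediaTypeWithQualityHeaderValue("application/xml"));
        var xml = client.GetStringAsync(
            "http://localhost/aspnetWebApi/albums").Result;
        Console.WriteLine(xml);
    }
}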
Accessing Web API JSON Data with jQuery

A very common scenario for Web API endpoints is to retrieve data for AJAX calls from the Web browser. Because JSON is the default format for Web API, it’s easy to access data from the server using jQuery and its getJSON() method. This example receives the albums array from GetAlbums() and databinds it into the page using knockout.js.

$.getJSON("albums/", function (albums) {
    // make knockout template visible
    $(".album").show();

    // create view object and attach array
    var view = { albums: albums };
    ko.applyBindings(view);
});

Figure 4 shows this and the next example’s HTML output. You can check out the complete HTML and script code at http://goo.gl/Ix33C (.html) and http://goo.gl/tETlg (.js).

Figure 4: The Album Display sample uses JSON data loaded from Web API.

The result from the getJSON() call is a JavaScript object of the server result, which comes back as a JavaScript array. In the code, I use knockout.js to bind this array into the UI, which, as you can see, requires very little code, instead using knockout’s data-bind attributes to bind server data to the UI. Of course, this is just one way to use the data – it’s entirely up to you to decide what to do with the data in your client code. Along the same lines, I can retrieve a single album to display when the user clicks on an album. The response returns the album information and a child array with all the songs. The code to do this is very similar to the last example where we pulled the albums array:

$(".albumlink").live("click", function () {
    var id = $(this).data("id"); // title
    $.getJSON("albums/" + id, function (album) {
        ko.applyBindings(album, $("#divAlbumDialog")[0]);
        $("#divAlbumDialog").show();
    });
});

Here the URL looks like this: /albums/Dirty%20Deeds, where the title is the ID captured from the clicked element’s data-id attribute.

Explicitly Overriding Output Format

When Web API automatically converts output using content negotiation, it does so by matching Accept header media types against GlobalConfiguration.Configuration.Formatters and the SupportedMediaTypes of each individual formatter. You can add and remove formatters to globally affect what formats are available, and it’s easy to create and plug in custom formatters. The example project includes a JSONP formatter that can be plugged in to provide JSONP support for requests that have a callback= querystring parameter. Adding, removing or replacing formatters is a global option you can use to manipulate content. It’s beyond the scope of this introduction to show how it works, but you can review the sample code or check out my blog entry on the subject (http://goo.gl/UAzaR).
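To give a flavor of that global formatter configuration, here's a minimal sketch that removes the XML formatter at startup so every response negotiates to JSON – a common tweak for JSON-only AJAX backends. The FormatterConfig wrapper class is my own naming; call it once from Application_Start alongside the route setup:

using System.Web.Http;

public static class FormatterConfig
{
    public static void Configure()
    {
        var config = GlobalConfiguration.Configuration;

        // Drop the XML formatter - JSON becomes the only built-in format
        config.Formatters.Remove(config.Formatters.XmlFormatter);

        // Optional: tweak the JSON formatter's Json.NET settings
        config.Formatters.JsonFormatter.SerializerSettings.Formatting =
            Newtonsoft.Json.Formatting.Indented;
    }
}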
If automatic processing is not desirable in a particular Controller method, you can override the response output explicitly by returning an HttpResponseMessage instance. HttpResponseMessage is similar to ActionResult in ASP.NET MVC in that it’s a common way to return an abstract result message that contains content. An HttpResponseMessage is parsed by the Web API framework using standard interfaces to retrieve the response data, status code, headers and so on. Web API turns every response – including those Controller methods that return plain values – into HttpResponseMessage instances. Explicitly returning an HttpResponseMessage instance gives you full control over the output and lets you mostly bypass Web API’s post-processing of the HTTP response on your behalf. HttpResponseMessage allows you to customize the response in great detail. Web API’s attention to detail in the HTTP spec really shows; many HTTP options are exposed as properties and enumerations with detailed IntelliSense comments. Even if you’re new to building REST-based interfaces, the API guides you in the right direction for returning valid responses and response codes. For example, assume that I always want to return JSON from the GetAlbums() controller method and ignore the default media type content negotiation. To do this, I can adjust the output format and headers as shown in Listing 4.

public HttpResponseMessage GetAlbums()
{
    var albums = AlbumData.Current.OrderBy(alb => alb.Artist);

    // Create a new HttpResponse with the Json Formatter explicitly
    var resp = new HttpResponseMessage(HttpStatusCode.OK);
    resp.Content = new ObjectContent<IEnumerable<Album>>(
        albums, new JsonMediaTypeFormatter());

    // Get default formatter based on Content Negotiation
    //var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

    resp.Headers.ConnectionClose = true;
    resp.Headers.CacheControl = new CacheControlHeaderValue();
    resp.Headers.CacheControl.Public = true;

    return resp;
}

This example returns the same IEnumerable<Album> value, but it wraps the response into an HttpResponseMessage so you can control the entire HTTP message result, including the headers, formatter and status code. In Listing 4, I explicitly specify the formatter using the JsonMediaTypeFormatter to always force the content to JSON. If you prefer to use the default content negotiation with HttpResponseMessage results, you can create the response instance using the Request.CreateResponse() method:

var resp = Request.CreateResponse<IEnumerable<Album>>(HttpStatusCode.OK, albums);

This provides you an HttpResponseMessage that's pre-configured with the default formatter based on content negotiation. Once you have an HttpResponseMessage you can easily control most HTTP aspects on this object. What's sweet here is that there are many more detailed properties on HttpResponseMessage than on the core ASP.NET Response object, with most options being explicitly configurable via enumerations that make it easy to pick the right headers and response codes from a list of valid codes. It makes HTTP features much more discoverable, even for non-hardcore REST/HTTP geeks.

Non-Serialized Results

The output returned doesn’t have to be a serialized value but can also be raw data, like strings, binary data or streams. You can use the HttpResponseMessage.Content object to set a number of common Content classes. Listing 5 shows how to return a binary image using the ByteArrayContent class from a Controller method.

[HttpGet]
public HttpResponseMessage AlbumArt(string title)
{
    var album = AlbumData.Current.FirstOrDefault(abl => abl.AlbumName.StartsWith(title));
    if (album == null)
    {
        var resp = Request.CreateResponse<ApiMessageError>(
            HttpStatusCode.NotFound,
            new ApiMessageError("Album not found"));
        return resp;
    }

    // kinda silly - we would normally serve this directly
    // but hey - it's a demo.
    var http = new WebClient();
    var imageData = http.DownloadData(album.AlbumImageUrl);

    // create response and return
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(imageData);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/jpeg");

    return result;
}

The image retrieval from Amazon is contrived, but it shows how to return binary data using ByteArrayContent. It also demonstrates that you can easily return multiple types of content from a single controller method, which is actually quite common. If an error occurs – such as a resource that can’t be found or a validation error – you can return an error response to the client that’s very specific to the error. In AlbumArt(), if the album can’t be found, we want to return a 404 Not Found status (and realistically no error object, as it’s an image).
Note that if you are not using HTTP Verb-based routing or are not accessing a method that starts with Get/Post etc., you have to specify one or more HTTP Verb attributes on the method explicitly. Here, I used the [HttpGet] attribute to serve the image. Another option to handle the error could be to return a fixed placeholder image if no album could be matched or the album doesn’t have an image.

When returning an error code, you can also return a strongly typed response to the client. For example, you can set the 404 status code and also return a custom error object (ApiMessageError is a class I defined) like this:

return Request.CreateResponse<ApiMessageError>(
    HttpStatusCode.NotFound,
    new ApiMessageError("Album not found")
);

If the album can be found, the image is returned. The image is downloaded into a byte[] array and then assigned to the result’s Content property. I created a new ByteArrayContent instance, assigned the image’s bytes to it, and set the content type so that it displays properly in the browser. There are other content classes available: StringContent, StreamContent, ByteArrayContent, MultipartContent and ObjectContent are at your disposal to return just about any kind of content. You can create your own Content classes if you frequently return custom types and need to handle the default formatter assignments that should be used to send the data out.
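For instance, returning plain text or streaming a file works the same way as the ByteArrayContent example above. Here's a brief sketch of the StringContent and StreamContent variants from that list (the controller, method names and file path are illustrative only):

using System.IO;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class ContentDemoController : ApiController
{
    // Returns a raw plain-text response
    [HttpGet]
    public HttpResponseMessage Ping()
    {
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StringContent("pong",
            System.Text.Encoding.UTF8, "text/plain");
        return resp;
    }

    // Streams a file back without buffering it into a byte[] first
    [HttpGet]
    public HttpResponseMessage Download()
    {
        var stream = File.OpenRead(@"C:\temp\sample.pdf"); // hypothetical path
        var resp = new HttpResponseMessage(HttpStatusCode.OK);
        resp.Content = new StreamContent(stream);
        resp.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
        return resp;
    }
}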
Although HttpResponseMessage results require more code than returning a plain .NET value from a method, this approach allows much more control over the actual HTTP processing than automatic processing does. It also makes it much easier to test your controller methods, as you get a response object that you can check for specific status codes and output messages rather than just a result value.

Routing Again

OK, let’s get back to the image example. Using the original HTTP Verb routing we set up, there's no good way to serve the image. In order to return my album art image, I’d like to use a URL like this:

http://localhost/aspnetWebApi/albums/Dirty%20Deeds/image

In order to create a URL like this, I have to create a new Controller, because my earlier routes pointed to the AlbumApiController using HTTP Verb routing. HTTP Verb based routing is great for representing a single set of resources such as albums. You can map operations like add, delete, update and read easily using HTTP Verbs. But you cannot mix action based routing into an HTTP Verb routed controller – you can only map HTTP Verbs, and each method has to be unique based on parameter signature. You can't have multiple GET operations on methods with the same signature. So GetImage(string id) and GetAlbum(string title) are in conflict in an HTTP GET routing scenario. In fact, I was unable to make the above image URL work with any combination of HTTP Verb plus custom routing using the single Albums controller. There are a number of ways around this, but all involve additional controllers. Personally, I think it’s easier to use explicit action routing and then add custom routes if you need to simplify your URLs further. So, in order to accommodate some of the other examples, I created another controller – AlbumRpcApiController – to handle all requests that are explicitly routed via actions (/albums/rpc/AlbumArt) or are custom routed with explicit routes defined in the HttpConfiguration. I added the AlbumArt() method to this new AlbumRpcApiController class.

For the image URL to work with the new AlbumRpcApiController, you need a custom route placed before the default route from Listing 1.

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    });

Now I can use either of the following URLs to access the image:

Custom route (albums/{title}/image): http://localhost/aspnetWebApi/albums/PowerAge/image
Action route (albums/rpc/{action}/{title}): http://localhost/aspnetWebAPI/albums/rpc/albumart/PowerAge

Sending Data to the Server

To send data to the server and add a new album, you can use an HTTP POST operation. Since I’m using HTTP Verb-based routing in the original AlbumApiController, I can implement a method called PostAlbum() to accept a new album from the client. Listing 6 shows the Web API code to add a new album.

public HttpResponseMessage PostAlbum(Album album)
{
    if (!this.ModelState.IsValid)
    {
        // my custom error class
        var error = new ApiMessageError() { message = "Model is invalid" };

        // add errors into our client error model for the client
        foreach (var prop in ModelState.Values)
        {
            var modelError = prop.Errors.FirstOrDefault();
            if (!string.IsNullOrEmpty(modelError.ErrorMessage))
                error.errors.Add(modelError.ErrorMessage);
            else
                error.errors.Add(modelError.Exception.Message);
        }

        return Request.CreateResponse<ApiMessageError>(HttpStatusCode.Conflict, error);
    }

    // update song id which isn't provided
    foreach (var song in album.Songs)
        song.AlbumId = album.Id;

    // see if album exists already
    var matchedAlbum = AlbumData.Current
        .SingleOrDefault(alb => alb.Id == album.Id ||
                                alb.AlbumName == album.AlbumName);
    if (matchedAlbum == null)
        AlbumData.Current.Add(album);
    else
    {
        // replace the existing entry
        AlbumData.Current.Remove(matchedAlbum);
        AlbumData.Current.Add(album);
    }

    // return a string to show that the value got here
    var resp = Request.CreateResponse(HttpStatusCode.OK, string.Empty);
    resp.Content = new StringContent(album.AlbumName + " " + album.Entered.ToString(),
        Encoding.UTF8, "text/plain");
    return resp;
}

The PostAlbum() method receives an album parameter, which is automatically deserialized from the POST buffer the client sent. The data passed from the client can be either XML or JSON. Web API automatically figures out what format it needs to deserialize based on the content type and binds the content to the album object. Web API uses model binding to bind the request content to the parameter(s) of controller methods. Like MVC, you can check the model by looking at ModelState.IsValid. If it’s not valid, you can run through the ModelState.Values collection and check each binding for errors. Here I collect the error messages into a string array that gets passed back to the client via the ApiMessageError result object. When a binding error occurs, you’ll want to return an HTTP error response, and it’s best to do that with an HttpResponseMessage result. In Listing 6, I used a custom error class that holds a message and an array of detailed error messages for each binding error. I used this object as the content to return to the client, along with a Conflict HTTP status code response. If binding succeeds, the example returns a string with the name and date entered to demonstrate that you captured the data. Normally, a method like this should return a Boolean or no response at all (HttpStatusCode.NoContent). The sample uses a simple static list to hold albums, so once you’ve added the album using the POST operation, you can hit the /albums/ URL to see that the new album was added.
The client jQuery code to call the POST operation is shown in Listing 7.

var id = new Date().getTime().toString();
var album = {
    "Id": id,
    "AlbumName": "Power Age",
    "Artist": "AC/DC",
    "YearReleased": 1977,
    "Entered": "2002-03-11T18:24:43.5580794-10:00",
    "AlbumImageUrl": "http://ecx.images-amazon.com/images/…",
    "AmazonUrl": "http://www.amazon.com/…",
    "Songs": [
        { "SongName": "Rock 'n Roll Damnation", "SongLength": 3.12 },
        { "SongName": "Downpayment Blues", "SongLength": 4.22 },
        { "SongName": "Riff Raff", "SongLength": 2.42 }
    ]
}

$.ajax({
    url: "albums/",
    type: "POST",
    contentType: "application/json",
    data: JSON.stringify(album),
    processData: false,
    beforeSend: function (xhr) {
        // not required since JSON is default output
        xhr.setRequestHeader("Accept", "application/json");
    },
    success: function (result) {
        // reload list of albums
        page.loadAlbums();
    },
    error: function (xhr, status, p3, p4) {
        var err = "Error";
        if (xhr.responseText && xhr.responseText[0] == "{")
            err = JSON.parse(xhr.responseText).message;
        alert(err);
    }
});

The code in Listing 7 creates an album object in JavaScript to match the structure of the .NET Album class. This object is passed to the $.ajax() function to send to the server via POST. The data is turned into JSON and the content type is set to application/json so that the server knows what to convert when deserializing into the Album instance. The jQuery code hooks up success and failure events. Success returns the result data, which is a string that’s echoed back with an alert box. If an error occurs, jQuery returns the XHR instance and status code. You can check the XHR to see if a JSON object is embedded, and if it is, you can extract it by deserializing it and accessing the .message property.

REST standards suggest that updates to existing resources should use PUT operations. REST standards aside, I’m not a big fan of separating out inserts and updates, so I tend to have a single method that handles both. But if you want to follow REST suggestions, you can create a PUT method that handles updates by forwarding the PUT operation to the POST method:

public HttpResponseMessage PutAlbum(Album album)
{
    return PostAlbum(album);
}

To make the corresponding $.ajax() call, all you have to change in Listing 7 is the type: from POST to PUT.

Model Binding with UrlEncoded POST Variables

In the example in Listing 7, I used JSON to post a serialized object to a server method that accepts a strongly typed object with the same structure, which is a common way to send data to the server. However, Web API supports a number of different ways that data can be received by server methods. For example, another common way is to send plain UrlEncoded POST values to the server. Web API supports model binding that works similarly (but not identically) to MVC's model binding, where POST variables are mapped to properties of object parameters of the target method. This is actually quite common for AJAX calls that want to avoid serialization and the potential requirement of a JSON parser on older browsers. For example, using jQuery you might use the $.post() method to send a new album to the server (albeit one without songs) using code like the following:

$.post("albums/", { AlbumName: "Dirty Deeds", YearReleased: 1976 … }, albumPostCallback);

Although the code looks very similar to the client code we used before when passing JSON, here the data passed consists of URL-encoded values (AlbumName=Dirty+Deeds&YearReleased=1976 etc.).
Web API then takes this POST data and maps each of the POST values to the properties of the Album object in the method's parameter. Although the client code is different, the server can handle both the JSON object and the UrlEncoded POST values.

Dynamic Access to POST Data

There are also a few options available to dynamically access POST data, if you know what type of data you're dealing with. If you have UrlEncoded POST values, you can access them dynamically using a FormDataCollection:

[HttpPost]
public string PostAlbum(FormDataCollection form)
{
    return string.Format("{0} - released {1}",
        form.Get("AlbumName"), form.Get("YearReleased"));
}

The FormDataCollection is a very simple object that essentially provides the same functionality as Request.Form[] in ASP.NET. Request.Form[] still works if you're running hosted in an ASP.NET application. As a general rule, though, while ASP.NET's functionality is always available when Web API runs hosted inside of an ASP.NET application, using the built-in classes specific to Web API makes it possible to run Web API applications in a self-hosted environment outside of ASP.NET. If your client is sending JSON to your server and you don't want to map the JSON to a strongly typed object because you only want to retrieve a few simple values, you can also accept a JObject parameter in your API methods:

[HttpPost]
public string PostAlbum(JObject jsonData)
{
    dynamic json = jsonData;
    JObject jalbum = json.Album;
    JObject juser = json.User;
    string token = json.UserToken;

    var album = jalbum.ToObject<Album>();
    var user = juser.ToObject<User>();

    return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
}

There are quite a few options available to receive data with Web API, which gives you more choices for the right tool for the job. Unfortunately, one shortcoming of Web API is that POST data is always mapped to a single parameter. This means you can't pass multiple POST parameters to methods that receive POST data. It's possible to accept multiple parameters, but only one can map to the POST content – the others have to come from the query string or route values, as illustrated in the sketch after these links. I have a couple of blog posts that explain what works and what doesn't here:

Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
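To illustrate the single-body-parameter rule described above, here's a small sketch of the pattern that does work: one parameter binds from the POST content, and the rest must come from the query string or route values (the controller name and userToken parameter are my own illustration):

using System.Web.Http;

public class AlbumPostDemoController : ApiController
{
    // The Album binds from the POST body; userToken must come from
    // the query string or a route value, e.g.:
    //   POST /albums/?userToken=abc123   (JSON album in the body)
    [HttpPost]
    public string PostAlbum(Album album, string userToken)
    {
        return string.Format("{0} posted by token {1}",
            album.AlbumName, userToken);
    }
}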
Handling Delete Operations

Finally, to round out the server API code of the album example we've been discussing, here’s the DELETE verb controller method that allows removal of an album by its title:

public HttpResponseMessage DeleteAlbum(string title)
{
    var matchedAlbum = AlbumData.Current
        .Where(alb => alb.AlbumName == title)
        .SingleOrDefault();

    if (matchedAlbum == null)
        return new HttpResponseMessage(HttpStatusCode.NotFound);

    AlbumData.Current.Remove(matchedAlbum);

    return new HttpResponseMessage(HttpStatusCode.NoContent);
}

To call this action method using jQuery, you can use:

$(".removeimage").live("click", function () {
    var $el = $(this).parent(".album");
    var txt = $el.find("a").text();
    $.ajax({
        url: "albums/" + encodeURIComponent(txt),
        type: "DELETE",
        success: function (result) {
            $el.fadeOut().remove();
        },
        error: jqError
    });
});

Note the use of the DELETE verb in the $.ajax() call, which routes to DeleteAlbum on the server. DELETE is a non-content operation, so you supply a resource ID (the title) via route value or the query string.

Routing Conflicts

In all the requests shown so far, with the exception of the AlbumArt image example, I used the HTTP Verb routing that I set up in Listing 1. HTTP Verb routing is a recommendation that is in line with typical REST access to HTTP resources. However, it takes quite a bit of effort to create REST-compliant API implementations based on HTTP Verb routing alone. You saw one example that didn’t really fit – the return of an image, where I created a custom route albums/{title}/image that required creation of a second controller and a custom route to work. HTTP Verb routing to a controller does not mix with custom or action routing to the same controller because of the limited mapping of HTTP verbs imposed by HTTP Verb routing. To understand some of the problems with verb routing, let’s look at another example. Let’s say you create a GetSortableAlbums() method like this and add it to the original AlbumApiController accessed via HTTP Verb routing:

public IQueryable<Album> GetSortableAlbums()
{
    var albums = AlbumData.Current;

    // generally should be done only on actual queryable results (EF etc.)
    // Done here because we're running with a static list, but otherwise this might be slow
    return albums.AsQueryable();
}

If you compile this code and try to access the /albums/ link, you get an error: Multiple Actions were found that match the request. HTTP Verb routing only allows access to one GET operation per parameter/route value match. If more than one method exists with the same parameter signature, it doesn’t work. As I mentioned earlier for the image display, the only solution to get this method to work is to move it into another controller. Because I already set up the AlbumRpcApiController, I can add the method there. First, I rename the method to SortableAlbums() and mark it with an [HttpGet] attribute, so I’m not using a Get prefix for the method. This also makes the action parameter look cleaner in the URL – it looks less like a method and more like a noun. I can then create a new route that handles direct-action mapping:

RouteTable.Routes.MapHttpRoute(
    name: "AlbumRpcApiAction",
    routeTemplate: "albums/rpc/{action}/{title}",
    defaults: new
    {
        title = RouteParameter.Optional,
        controller = "AlbumRpcApi",
        action = "GetAlbums"
    });

Because I explicitly added a route segment – rpc – into the route template, I can now reference explicit methods in the Web API controller using URLs like this:

http://localhost/AspNetWebApi/albums/rpc/SortableAlbums

Error Handling

I’ve already done some minimal error handling in the examples. For example, in Listing 6 I detected some known error scenarios, like model validation failing or a resource not being found, and returned an appropriate HttpResponseMessage result. But what happens if your code just blows up or causes an exception? Say you have a controller method like this:

[HttpGet]
public void ThrowException()
{
    throw new UnauthorizedAccessException("Unauthorized Access Sucka");
}

You can call it with this:

http://localhost/AspNetWebApi/albums/rpc/ThrowException

The default exception handling displays a 500-status response with the serialized exception, on the local computer only. When you connect from a remote computer, Web API throws back a 500 HTTP error with no data returned (IIS then adds its HTML error page). The behavior is configurable in the GlobalConfiguration:

GlobalConfiguration
    .Configuration
    .IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Never;

If you want more control over the error responses sent from your code, you can throw explicit error responses yourself using HttpResponseException. When you throw an HttpResponseException, the response parameter is used to generate the output for the Controller action.
[HttpGet]
public void ThrowError()
{
    var resp = Request.CreateResponse<ApiMessageError>(
        HttpStatusCode.BadRequest,
        new ApiMessageError("Your code stinks!"));
    throw new HttpResponseException(resp);
}

Throwing an HttpResponseException stops the processing of the controller method and immediately returns the response you passed to the exception. Unlike other exceptions fired inside of Web API, HttpResponseException bypasses the installed exception filters and instead just outputs the response you provide. In this case, the serialized ApiMessageError result is returned in the default serialization format – XML or JSON. You can pass any content to the HttpResponseMessage, which includes creating your own exception objects and consistently returning error messages to the client. Here’s a small helper method on the controller that you might use to send exception info back to the client consistently:

private void ThrowSafeException(string message,
    HttpStatusCode statusCode = HttpStatusCode.BadRequest)
{
    var errResponse = Request.CreateResponse<ApiMessageError>(
        statusCode,
        new ApiMessageError() { message = message });
    throw new HttpResponseException(errResponse);
}

You can then use it to output any captured errors from code:

[HttpGet]
public void ThrowErrorSafe()
{
    try
    {
        List<string> list = null;
        list.Add("Rick");
    }
    catch (Exception ex)
    {
        ThrowSafeException(ex.Message);
    }
}

Exception Filters

Another, more global solution is to create an Exception Filter. Filters in Web API provide the ability to pre- and post-process controller method operations. An exception filter looks at all exceptions fired and then optionally creates an HttpResponseMessage result. Listing 8 shows an example of a basic exception filter implementation.

public class UnhandledExceptionFilter : ExceptionFilterAttribute
{
    public override void OnException(HttpActionExecutedContext context)
    {
        HttpStatusCode status = HttpStatusCode.InternalServerError;

        var exType = context.Exception.GetType();
        if (exType == typeof(UnauthorizedAccessException))
            status = HttpStatusCode.Unauthorized;
        else if (exType == typeof(ArgumentException))
            status = HttpStatusCode.NotFound;

        var apiError = new ApiMessageError() { message = context.Exception.Message };

        // create a new response and attach our ApiError object,
        // which now gets returned on ANY exception result
        var errorResponse = context.Request.CreateResponse<ApiMessageError>(status, apiError);
        context.Response = errorResponse;

        base.OnException(context);
    }
}

Exception filter attributes can be assigned to an ApiController class like this:

[UnhandledExceptionFilter]
public class AlbumRpcApiController : ApiController

or you can globally assign them to all controllers by adding them to the HTTP configuration's Filters collection:

GlobalConfiguration.Configuration.Filters.Add(new UnhandledExceptionFilter());

The latter is a great way to get global error trapping, so that all errors (short of hard IIS errors and explicit HttpResponseException errors) return a valid error response that includes error information in the form of a known error object. Using a filter like this allows you to throw an exception as you normally would and have your filter create a response in the appropriate output format that the client expects. For example, an AJAX application can expect to see, on failure, a JSON error result that corresponds to the real error that occurred, rather than a 500 error along with the HTML error page that IIS throws up.
You can even create some custom exceptions so you can differentiate your own exceptions from unhandled system exceptions – you often don't want to display error information from 'unknown' exceptions, as they may contain sensitive system information or info that's not generally useful to users of your application/site. This is just one example of how ASP.NET Web API is configurable and extensible. Exception filters are just one way you can plug into the Web API request flow to modify output. Many more hooks exist, and I’ll take a closer look at extensibility in Part 2 of this article in the future.

Summary

Web API is a big improvement over previous Microsoft REST and AJAX toolkits. The key features contributing to its usefulness are its ease of use with simple controller-based logic, familiar MVC-style routing, low configuration impact, extensibility at all levels, and tight attention to exposing HTTP semantics and making them easily discoverable and easy to use. Although none of the concepts used in Web API are new or radical, Web API combines the best of previous platforms into a single framework that’s highly functional, easy to work with, and extensible to boot. I think Microsoft has hit a home run with Web API.

Related Resources

Where does ASP.NET Web API fit?
Sample Source Code on GitHub
Passing multiple POST parameters to Web API Controller Methods
Mapping UrlEncoded POST Values in ASP.NET Web API
Creating a JSONP Formatter for ASP.NET Web API
Removing the XML Formatter from ASP.NET Web API Applications

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in Web Api


  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
During one of my Handlers and Modules sessions at DevConnections this week, one of the attendees asked a question that I didn’t have an immediate answer for. Basically, he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment, and it’s not really generic – you have to create a custom page class in order to handle the output capture.

[updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

Enter Response.Filter

However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically, Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

A common Example: Dynamic GZip Encoding

A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: you apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used to detect the GZip capability of the client and compress response output with a single line of code:

WebUtils.GZipEncodePage();

which is handled with a few lines of reusable code and a couple of static helper methods:
/// <summary>
/// Sets up the current page or handler to use GZip through a Response.Filter.
/// IMPORTANT: You have to call this method before any output is generated!
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;

    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

        if (AcceptEncoding.Contains("deflate"))
        {
            Response.Filter = new System.IO.Compression.DeflateStream(
                Response.Filter, System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "deflate");
        }
        else
        {
            Response.Filter = new System.IO.Compression.GZipStream(
                Response.Filter, System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "gzip");
        }
    }

    // Allow proxy servers to cache encoded and unencoded versions separately
    Response.AppendHeader("Vary", "Accept-Encoding");
}

/// <summary>
/// Determines if GZip is supported
/// </summary>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
        (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

GZipStream and DeflateStream are streams that are assigned to Response.Filter and thereby apply the appropriate compression to the active Response.

Response.Filter content is chunked

So, implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write(), transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But there's a catch: output is written in small buffer chunks (a little less than 16k, it appears) rather than in a single Write() statement into the stream. This makes perfect sense, as ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately, it also makes it more difficult to implement filtering routines, since you don’t directly get access to all of the response content. That's problematic especially if your filtering routines need to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So, in order to address this, a slightly different approach is required: one that captures all the Write() buffers passed in into a cached stream, and then makes the stream available only when it’s complete and ready to be flushed.

As I was thinking about the implementation, I also thought back to the few instances when I’ve built Response.Filter implementations. Each time, I had to create a new Stream subclass and write my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I figured there should be an easier way to do this: a re-usable Stream class that can handle the stream transformations that are common to Response.Filter implementations.

Creating a semi-generic Response Filter Stream Class

What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and overrides Write() and Flush() to handle the capturing and transformation operations. By exposing events, it’s easy to hook up capture or transformation operations via single, focused methods. ResponseFilterStream exposes the following events:

CaptureStream, CaptureString
Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

TransformStream, TransformString
Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

TransformWrite, TransformWriteString
Allow you to transform the Response data as it is written, in its original chunk size, in the Stream’s Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient, as there’s no caching involved, but it can cause problems when searched content splits over multiple chunks.

Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version of the event:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformStream += filter_TransformStream;
Response.Filter = filter;

and the event handler to do the transformation:

MemoryStream filter_TransformStream(MemoryStream ms)
{
    Encoding encoding = HttpContext.Current.Response.ContentEncoding;

    string output = encoding.GetString(ms.ToArray());
    output = FixPaths(output);

    ms = new MemoryStream(output.Length);
    byte[] buffer = encoding.GetBytes(output);
    ms.Write(buffer, 0, buffer.Length);

    return ms;
}

private string FixPaths(string output)
{
    string path = HttpContext.Current.Request.ApplicationPath;

    // override root path wonkiness
    if (path == "/")
        path = "";

    output = output.Replace("\"~/", "\"" + path + "/")
                   .Replace("'~/", "'" + path + "/");

    return output;
}

The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one, modified, or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

and the event handler to do the transformation, calling the same FixPaths method shown above:

string filter_TransformString(string output)
{
    return FixPaths(output);
}

The events for capturing output, and for capturing and transforming chunks, work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component: we don’t have to create a new stream class or subclass an existing Stream-based class. By the way, the example used here is kind of a cool trick: it transforms “~/” expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – and turns them into the appropriate application-relative path, in the same way that ResolveUrl would do.
So you can write plain old HTML like this:

<a href="~/default.aspx">Home</a>

and have it turned into:

<a href="/myVirtual/default.aspx">Home</a>

without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

<img src="<%= ResolveUrl("~/images/home.gif") %>" />

in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea: while discussing the Response.Filter issues on Twitter, Dylan Beattie pointed me at one of his examples that does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET, next time I do the Modules and Handlers talk (which was great fun, BTW). How practical this is is debatable, however, since there’s definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup, and make sure it doesn’t end up killing perf on you. You’ve been warned :-}.

How does ResponseFilterStream work?

The big win of this implementation, IMHO, is that it’s a reusable component – for implementation there’s no new class and no subclassing; you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream, as is required in order to satisfy the Response.Filter requirements. What’s different from other implementations I’ve seen in various places is that it supports capturing output as a whole, allowing you to retrieve the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass the Write on immediately to the filter’s internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream:
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is private and not meant to be overridden; /// the TransformString event has to be hooked up in order /// for this handler to even fire, to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) responseText = TransformString(responseText); return responseText; } /// <summary> /// Wrapper method for OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overridden to capture output written by ASP.NET into a cached /// stream that is written out later when Flush() is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, offset, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, 0, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-Jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few things to consider Performance Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit, as it basically duplicates the response output into the cached memory stream – which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining didn't work at all due to a bug in the stream implementation code, but it's actually quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However, the following does not work, resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words, multiple Response filters can work together, but whether they can be chained – and in which order – depends entirely on the implementation. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified afterwards. But if the compression is attached first, it works fine, as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET  
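For reference, here is a minimal sketch of what the module-based hookup discussed above might look like, using the TransformString event to expand ~/ virtual paths in the final HTML – the scenario described at the top of this post. This is an illustrative assumption, not part of the original sample: the module name, the handler name and the naive string Replace are all made up for the demo, and the Replace calls ignore single-quoted and unquoted attributes, so a real implementation would want something more robust:

using System;
using System.Web;

// Hypothetical module: expands href="~/..." and src="~/..." references
// in HTML output by hooking ResponseFilterStream.TransformString.
public class VirtualPathRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // Attach the filter after the handler has run so ContentType is known;
        // this assumes default response buffering, so the filter still sees
        // the full buffered output when it is flushed.
        app.PostRequestHandlerExecute += (sender, e) =>
        {
            HttpResponse response = HttpContext.Current.Response;

            // IIS 7 integrated mode pushes static files through the
            // pipeline too, so filter on content type.
            if (response.ContentType != "text/html")
                return;

            var filter = new ResponseFilterStream(response.Filter);
            filter.TransformString += RewriteVirtualPaths;
            response.Filter = filter;
        };
    }

    private string RewriteVirtualPaths(string html)
    {
        // "/" for a root site, "/myVirtual/" for a virtual directory
        string root = VirtualPathUtility.ToAbsolute("~/");
        return html.Replace("href=\"~/", "href=\"" + root)
                   .Replace("src=\"~/", "src=\"" + root);
    }

    public void Dispose() { }
}

The module would still need to be registered in web.config like any other HttpModule, and – per the performance notes above – you'd want to measure before leaving something like this on for every request.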

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very well suited to uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into the blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. This code has been well covered in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the request will fail and the upload will be terminated. In ASP.NET there is a limit on the request length; the maximum request length is defined in the web.config file (see the configuration sketch at the end of this section), and it is a number that must be less than about 4GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break through the limitations mentioned above, we will try to upload the large file in chunks. This gives us benefits such as: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks from the browser until we got HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from byte 512 up to (but not including) byte 768, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
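Side note on the limits referenced above: the request-size ceiling lives in web.config. Below is a hedged sketch of the two relevant settings – the values are illustrative only (maxRequestLength is in KB for ASP.NET, maxAllowedContentLength is in bytes for IIS request filtering), and even raised to their maximums they cannot get past the ~4GB ceiling or the load balancer timeout, which is why we upload in chunks instead:

<configuration>
  <system.web>
    <!-- ASP.NET limit, in KB: 102400 KB = 100 MB -->
    <httpRuntime maxRequestLength="102400" executionTimeout="360" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- IIS limit, in bytes: 104857600 bytes = 100 MB -->
        <requestLimits maxAllowedContentLength="104857600" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

With the chunked upload below, the default limits are usually fine, since each request only carries a single chunk.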
We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, we could read it in 512KB blobs by using File.slice in a loop.   Assume we have a web page as below: the user can select a file, specify the block size in KB in an input box, and click a button to start the upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function upload the file in chunks when the user clicks the button. 1: <script type="text/javascript"> 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: }); 6: </script> First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use. It can generate a temporary form for us; we will use this interface to create a form with the chunk and associated metadata when invoking the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your browser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before you try this cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces with its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into the blob in Azure Storage once all chunks have been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; when the server receives a chunk, it will upload it as a block into Blob Storage, and finally commit them all with the index list through BlockBlob.PutBlockList. But all these invocations are ajax calls, which are asynchronous, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we will define an array and put the functions for the chunk uploads into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see below, I use the File.slice method to read each chunk based on the start and end byte indexes we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary form 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we invoke these functions one by one using async.js, and once all functions have executed successfully I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That's all on the client side. The outline of our logic is: - Calculate the start and end byte index for each chunk based on the block size. - Define the functions that read the chunks from the file and upload the content to the backend service through ajax. - Execute the functions defined in the previous step with "async.js". - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.   Save Chunks as Blocks into Blob Storage Above, we finished the client side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and then finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Home controller that only accepts HTTP POST requests. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieve the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then I use the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (same as the file name). Then I upload the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all the blocks together as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieve the blob name from the Request.Form. I also retrieve the chunk ID list, which is the block ID list, from the Request.Form in string format, split it into a list, and then invoke the BlockBlob.PutBlockList method. After that, our blob will show up in the container, ready to be downloaded. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we have finished all the code we need. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your browser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before you try this cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary form 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we select a file in the browser, we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In the previous example we just uploaded our file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout. But it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance we can modify our client side code a bit to invoke the upload operations in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember the code where we invoke the service to upload chunks, it utilized "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions in parallel 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions in parallel with a concurrency limit 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.   Regarding the chunk size and the parallel limit value, there is no "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below are the results of one of my performance tests. The client machine was Windows 8, IE 10, with 4 cores. I was on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU, 1.75GB memory, 100Mbps bandwidth). 
The test cases were: - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Some thoughts from the results – but not guidance or best practice: - Parallel gets better performance than sequential. - There is no significant performance difference between parallel with 4 threads and 8 threads. - Transferring binaries provides better performance than base64. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Configuring Fed Authentication Methods in OIF / IdP

    - by Damien Carru
In this article, I will provide examples of how to configure OIF/IdP to map OAM Authentication Schemes to Federation Authentication Methods, based on the concepts introduced in my previous entry. I will show examples for the three protocols supported by OIF: SAML 2.0 SSO, SAML 1.1 SSO and OpenID 2.0. Enjoy the reading! Configuration As I mentioned in my previous article, mapping Federation Authentication Methods to OAM Authentication Schemes is protocol dependent, since the methods are defined in the various protocols (SAML 2.0, SAML 1.1, OpenID 2.0). As such, the WLST commands to set those mappings will involve: either the SP Partner Profile, affecting all Partners referencing that profile which do not override the Federation Authentication Method to OAM Authentication Scheme mappings; or the SP Partner entry, which will only affect that SP Partner. It is important to note that if an SP Partner is configured to define one or more Federation Authentication Method to OAM Authentication Scheme mappings, then all the mappings defined in the SP Partner Profile will be ignored. WLST Commands The two OIF WLST commands that can be used to map Federation Authentication Methods to OAM Authentication Schemes are: addSPPartnerProfileAuthnMethod(), which defines a mapping on an SP Partner Profile and takes as parameters the name of the SP Partner Profile, the Federation Authentication Method and the OAM Authentication Scheme name; and addSPPartnerAuthnMethod(), which defines a mapping on an SP Partner and takes as parameters the name of the SP Partner, the Federation Authentication Method and the OAM Authentication Scheme name. Note: I will discuss the other parameters of those commands in a subsequent article. In the next sections, I will show examples of how to use those methods: for SAML 2.0, I will configure the SP Partner Profile, which will apply the mappings to all SP Partners referencing this profile, unless they override the mapping definitions; for SAML 1.1, I will configure the SP Partner; for OpenID 2.0, I will configure the SP/RP Partner. SAML 2.0 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 2.0 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: use LDAPScheme as the Authentication Scheme; use BasicScheme as the Authentication Scheme, and map BasicScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:Password Federation Authentication Method; then use OAMLDAPPluginAuthnScheme as the Authentication Scheme, and map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport Federation Authentication Method. LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also, the default Federation Authentication Method mappings configuration maps the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport method only, to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>bob@oracle.com</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> BasicScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to BasicScheme instead of LDAPScheme. I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp")): Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "BasicScheme") Exit the WLST environment: exit() The user will now be challenged via the HTTP Basic Authentication defined in the BasicScheme for AcmeSP. Also, as noted earlier, the default Federation Authentication Method mappings configuration maps the urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport method only, to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via HTTP Basic Authentication, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>bob@oracle.com</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping BasicScheme To change the Federation Authentication Method mapping for the BasicScheme to urn:oasis:names:tc:SAML:2.0:ac:classes:Password instead of urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport for the saml20-sp-partner-profile SAML 2.0 SP Partner Profile (the profile to which my AcmeSP Partner is bound), I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:Password", "BasicScheme") Exit the WLST environment: exit() After authentication via HTTP Basic Authentication, OIF/IdP would now issue an Assertion similar to (see that the AuthnContextClassRef was changed from PasswordProtectedTransport to Password): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>bob@oracle.com</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:Password                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner Profile to OAMLDAPPluginAuthnScheme instead of BasicScheme. 
I will use the OIF WLST setSPPartnerProfileDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner Profile referenced by AcmeSP (which is saml20-sp-partner-profile in this case: getFedPartnerProfile("AcmeSP", "sp")): Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the setSPPartnerProfileDefaultScheme() command: setSPPartnerProfileDefaultScheme("saml20-sp-partner-profile", "OAMLDAPPluginAuthnScheme") Exit the WLST environment: exit() The user will now be challenged via the FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrary to LDAPScheme and BasicScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method. As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthnContextClassRef set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        </dsig:Signature>        <saml:Subject>            <saml:NameID ...>bob@oracle.com</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef> OAMLDAPPluginAuthnScheme                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To add the OAMLDAPPluginAuthnScheme to the Federation Authentication Method urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport mapping, I will execute the addSPPartnerProfileAuthnMethod() method: Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the addSPPartnerProfileAuthnMethod() command: addSPPartnerProfileAuthnMethod("saml20-sp-partner-profile", "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport", "OAMLDAPPluginAuthnScheme") Exit the WLST environment: exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to PasswordProtectedTransport): <samlp:Response ...>    <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>    <samlp:Status>        <samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Success"/>    </samlp:Status>    <saml:Assertion ...>        <saml:Issuer ...>https://idp.com/oam/fed</saml:Issuer>        <dsig:Signature>            ...        
</dsig:Signature>        <saml:Subject>            <saml:NameID ...>bob@oracle.com</saml:NameID>            <saml:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer">                <saml:SubjectConfirmationData .../>            </saml:SubjectConfirmation>        </saml:Subject>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthnInstant="2014-03-21T20:53:55Z" SessionIndex="id-6i-Dm0yB-HekG6cejktwcKIFMzYE8Yrmqwfd0azz" SessionNotOnOrAfter="2014-03-21T21:53:55Z">            <saml:AuthnContext>                <saml:AuthnContextClassRef>                   urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport                </saml:AuthnContextClassRef>            </saml:AuthnContext>        </saml:AuthnStatement>    </saml:Assertion></samlp:Response> SAML 1.1 Test Setup In this setup, OIF is acting as an IdP and is integrated with a remote SAML 1.1 SP partner identified by AcmeSP. In this test, I will perform Federation SSO with OIF/IdP configured to: use LDAPScheme as the Authentication Scheme; use OAMLDAPPluginAuthnScheme as the Authentication Scheme, and map OAMLDAPPluginAuthnScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method; then use LDAPScheme as the Authentication Scheme again, and map LDAPScheme to the urn:oasis:names:tc:SAML:1.0:am:password Federation Authentication Method. LDAPScheme as Authentication Scheme Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme. Also, the default Federation Authentication Method mappings configuration maps the urn:oasis:names:tc:SAML:1.0:am:password method only, to LDAPScheme (also marked as the default scheme used for authentication), FAAuthScheme, BasicScheme and BasicFAScheme. After authentication via FORM, OIF/IdP would issue an Assertion similar to: <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>bob@oracle.com</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> OAMLDAPPluginAuthnScheme as Authentication Scheme For this test, I will switch the default Authentication Scheme for the SP Partner to OAMLDAPPluginAuthnScheme instead of LDAPScheme. 
I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme is to be used as the default for the SP Partner: Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "OAMLDAPPluginAuthnScheme") Exit the WLST environment: exit() The user will be challenged via the FORM defined in the OAMLDAPPluginAuthnScheme for AcmeSP. Contrary to LDAPScheme, the OAMLDAPPluginAuthnScheme is not mapped by default to any Federation Authentication Method (in the SP Partner Profile). As such, OIF/IdP will not be able to find a Federation Authentication Method and will set the method in the SAML Assertion to the OAM Authentication Scheme name. After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to OAMLDAPPluginAuthnScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="OAMLDAPPluginAuthnScheme">            <saml:Subject>                <saml:NameIdentifier ...>bob@oracle.com</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        
</dsig:Signature>    </saml:Assertion></samlp:Response> Mapping OAMLDAPPluginAuthnScheme To map the OAMLDAPPluginAuthnScheme to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password for this SP Partner only, I will execute the addSPPartnerAuthnMethod() method: Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "OAMLDAPPluginAuthnScheme") Exit the WLST environment: exit() After authentication via FORM, OIF/IdP would now issue an Assertion similar to (see that the method was changed from OAMLDAPPluginAuthnScheme to password): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">            <saml:Subject>                <saml:NameIdentifier ...>bob@oracle.com</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> LDAPScheme as Authentication Scheme I will now show that, by defining a Federation Authentication Method mapping at the Partner level, all mappings defined at the SP Partner Profile level are ignored. 
For this test, I will switch the default Authentication Scheme for this SP Partner back to LDAPScheme, and the Assertion issued by OIF/IdP will not be able to map this LDAPScheme to a Federation Authentication Method anymore, since: a Federation Authentication Method mapping is defined at the SP Partner level, and thus the mappings defined at the SP Partner Profile are ignored; and the LDAPScheme is not listed in the mappings at the Partner level. I will use the OIF WLST setSPPartnerDefaultScheme() command and specify which scheme is to be used as the default for this SP Partner: Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh Connect to the WLS Admin server: connect() Navigate to the Domain Runtime branch: domainRuntime() Execute the setSPPartnerDefaultScheme() command: setSPPartnerDefaultScheme("AcmeSP", "LDAPScheme") Exit the WLST environment: exit() After authentication via FORM, OIF/IdP would issue an Assertion similar to (see the AuthenticationMethod set to LDAPScheme): <samlp:Response ...>    <samlp:Status>        <samlp:StatusCode Value="samlp:Success"/>    </samlp:Status>    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>        <saml:Conditions ...>            <saml:AudienceRestriction>                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>            </saml:AudienceRestriction>        </saml:Conditions>        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="LDAPScheme">            <saml:Subject>                <saml:NameIdentifier ...>bob@oracle.com</saml:NameIdentifier>                <saml:SubjectConfirmation>                   <saml:ConfirmationMethod>                       urn:oasis:names:tc:SAML:1.0:cm:bearer                   </saml:ConfirmationMethod>                </saml:SubjectConfirmation>            </saml:Subject>        </saml:AuthnStatement>        <dsig:Signature>            ...        </dsig:Signature>    </saml:Assertion></samlp:Response> Mapping LDAPScheme at Partner Level To fix this issue, we will need to add the LDAPScheme to the Federation Authentication Method urn:oasis:names:tc:SAML:1.0:am:password mapping for this SP Partner only. 
I will execute the addSPPartnerAuthnMethod() method:

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeSP", "urn:oasis:names:tc:SAML:1.0:am:password", "LDAPScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an Assertion similar to the following (note that the method changed from LDAPScheme to password):

<samlp:Response ...>
    <samlp:Status>
        <samlp:StatusCode Value="samlp:Success"/>
    </samlp:Status>
    <saml:Assertion Issuer="https://idp.com/oam/fed" ...>
        <saml:Conditions ...>
            <saml:AudienceRestriction>
                <saml:Audience>https://acme.com/sp/ssov11</saml:Audience>
            </saml:AudienceRestriction>
        </saml:Conditions>
        <saml:AuthnStatement AuthenticationInstant="2014-03-21T20:53:55Z" AuthenticationMethod="urn:oasis:names:tc:SAML:1.0:am:password">
            <saml:Subject>
                <saml:NameIdentifier ...>[email protected]</saml:NameIdentifier>
                <saml:SubjectConfirmation>
                    <saml:ConfirmationMethod>urn:oasis:names:tc:SAML:1.0:cm:bearer</saml:ConfirmationMethod>
                </saml:SubjectConfirmation>
            </saml:Subject>
        </saml:AuthnStatement>
        <dsig:Signature>
            ...
        </dsig:Signature>
    </saml:Assertion>
</samlp:Response>

OpenID 2.0

In the OpenID 2.0 flows, the RP must request use of PAPE in order for OIF/IdP/OP to include PAPE information. For OpenID 2.0, the configuration involves mapping a list of OpenID 2.0 policies to a list of Authentication Schemes. Unlike SAML 2.0 or SAML 1.1, where a single Federation Authentication Method had to be specified, the WLST command takes a list of policies delimited by the ',' character.

Test Setup

In this setup, OIF is acting as an IdP/OP and is integrated with a remote OpenID 2.0 SP/RP partner identified by AcmeRP. In this test, I will perform Federation SSO with OIF/IdP configured to:

- Use LDAPScheme as the Authentication Scheme
- Map LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies as Federation Authentication Methods (the second one is custom, created for this use case)

LDAPScheme as Authentication Scheme

Using the OOTB settings regarding user authentication in OAM, the user will be challenged via a FORM based login page based on the LDAPScheme.
No Federation Authentication Method is defined OOTB for OpenID 2.0, so if the IdP/OP issues an SSO response with a PAPE Response element, it will specify the scheme name instead of a Federation Authentication Method. After authentication via FORM, OIF/IdP would issue an SSO Response similar to:

https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=LDAPScheme&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D

Mapping LDAPScheme

To map the LDAPScheme to the http://schemas.openid.net/pape/policies/2007/06/phishing-resistant and http://openid-policies/password-protected policies Federation Authentication Methods, I will execute the addSPPartnerAuthnMethod() method (the policies are comma separated):

1. Enter the WLST environment by executing: $IAM_ORACLE_HOME/common/bin/wlst.sh
2. Connect to the WLS Admin server: connect()
3. Navigate to the Domain Runtime branch: domainRuntime()
4. Execute the addSPPartnerAuthnMethod() command: addSPPartnerAuthnMethod("AcmeRP", "http://schemas.openid.net/pape/policies/2007/06/phishing-resistant,http://openid-policies/password-protected", "LDAPScheme")
5. Exit the WLST environment: exit()

After authentication via FORM, OIF/IdP would now issue an SSO Response similar to the following (note that the method changed from LDAPScheme to the two policies):
https://acme.com/openid?refid=id-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.ns=http%3A%2F%2Fspecs.openid.net%2Fauth%2F2.0&openid.mode=id_res&openid.op_endpoint=https%3A%2F%2Fidp.com%2Fopenid&openid.claimed_id=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.identity=https%3A%2F%2Fidp.com%2Fopenid%3Fid%3Did-38iCmmlAVEXPsFjnFVKArfn5RIiF75D5doorhEgqqPM%3D&openid.return_to=https%3A%2F%2Facme.com%2Fopenid%3Frefid%3Did-9PKVXZmRxAeDYcgLqPm36ClzOMA-&openid.response_nonce=2014-03-24T19%3A20%3A06Zid-YPa2kTNNFftZkgBb460jxJGblk2g--iNwPpDI7M1&openid.assoc_handle=id-6a5S6zhAKaRwQNUnjTKROREdAGSjWodG1el4xyz3&openid.ns.ax=http%3A%2F%2Fopenid.net%2Fsrv%2Fax%2F1.0&openid.ax.mode=fetch_response&openid.ax.type.attr0=http%3A%2F%2Fsession%2Fcount&openid.ax.value.attr0=1&openid.ax.type.attr1=http%3A%2F%2Fopenid.net%2Fschema%2FnamePerson%2Ffriendly&openid.ax.value.attr1=My+name+is+Bobby+Smith&openid.ax.type.attr2=http%3A%2F%2Fschemas.openid.net%2Fax%2Fapi%2Fuser_id&openid.ax.value.attr2=bob&openid.ax.type.attr3=http%3A%2F%2Faxschema.org%2Fcontact%2Femail&openid.ax.value.attr3=bob%40oracle.com&openid.ax.type.attr4=http%3A%2F%2Fsession%2Fipaddress&openid.ax.value.attr4=10.145.120.253&openid.ns.pape=http%3A%2F%2Fspecs.openid.net%2Fextensions%2Fpape%2F1.0&openid.pape.auth_time=2014-03-24T19%3A20%3A05Z&openid.pape.auth_policies=http%3A%2F%2Fschemas.openid.net%2Fpape%2Fpolicies%2F2007%2F06%2Fphishing-resistant+http%3A%2F%2Fopenid-policies%2Fpassword-protected&openid.signed=op_endpoint%2Cclaimed_id%2Cidentity%2Creturn_to%2Cresponse_nonce%2Cassoc_handle%2Cns.ax%2Cax.mode%2Cax.type.attr0%2Cax.value.attr0%2Cax.type.attr1%2Cax.value.attr1%2Cax.type.attr2%2Cax.value.attr2%2Cax.type.attr3%2Cax.value.attr3%2Cax.type.attr4%2Cax.value.attr4%2Cns.pape%2Cpape.auth_time%2Cpape.auth_policies&openid.sig=mYMgbGYSs22l8e%2FDom9NRPw15u8%3D

In the next article, I will cover how OIF/IdP can be configured so that an SP can request a specific Federation Authentication Method to challenge the user during Federation SSO.

Cheers,
Damien Carru


  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
I use the Internet Explorer Web Browser Control in a lot of my applications to display document-style layout. HTML happens to be one of the most common document formats, and displaying data in this format – even in desktop applications – is often way easier than using normal desktop technologies. One issue the Web Browser Control has is that it's perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support – is a big improvement, and even though the IE control uses some of IE's internal rendering technology, it's still stuck in the old IE 7 rendering by default. This applies whether you're using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces, and so you're stuck by those same rules.

Rendering Challenged

To see what I'm talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows – from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control.

IE 9 Browser:

Web Browser control in a WPF form:

The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering using the Web Browser control in a WPF application is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer's quirks mode, so all the margins, padding etc. behave differently by default, even though there's a CSS reset applied on this page. If you're building an application that intends to use the Web Browser control for a live preview of some HTML, this is clearly undesirable.

Feature Delegation via Registry Hacks

Fortunately, starting with Internet Explorer 8 there's a fix for this problem via a registry setting. You can specify a registry key to indicate which rendering mode and version of IE should be used by that application. These are not global, mind you – they have to be enabled for each application individually. There are two different locations for the key, depending on the bitness of the application (Wow6432Node is the 32 bit registry view on 64 bit Windows):

64 bit applications (and 32 bit applications on 32 bit Windows):
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: yourapplication.exe

32 bit applications on 64 bit Windows:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: yourapplication.exe

The value to set this key to is (taken from MSDN here) as decimal values:

9999 (0x270F): Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.
9000 (0x2328): Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.
8888 (0x22B8): Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.
8000 (0x1F40): Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.
7000 (0x1B58): Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
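Since the key has to be set per application anyway, one convenient option is to let the application write its own key at startup. Here's a minimal C# sketch of that idea (my code, not from the original post). It uses the HKEY_CURRENT_USER variant of the key, which the MSDN documentation also lists for this feature and which doesn't require admin rights; as far as I can tell the HKCU branch is also not split by bitness, so one write covers both cases:

using System.Diagnostics;
using System.IO;
using Microsoft.Win32;

public static class BrowserEmulation
{
    // Writes the FEATURE_BROWSER_EMULATION value for the current process EXE.
    // 9000 = IE9 standards mode for pages with a standards !DOCTYPE.
    public static void EnsureBrowserEmulation(int ieMode = 9000)
    {
        // key off the actual process EXE name - this also picks up
        // yourapp.vshost.exe automatically when run under the VS debugger
        string exeName = Path.GetFileName(Process.GetCurrentProcess().MainModule.FileName);

        const string keyPath =
            @"HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION";
        Registry.SetValue(keyPath, exeName, ieMode, RegistryValueKind.DWord);
    }
}

Call this early in application startup, before the first Web Browser control is created, since the setting is presumably read when the control is instantiated.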
The added key looks something like this in the Registry Editor:

With this in place my Html Help Builder application, which has wwhelp.exe as its main executable, now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally, I accidentally added an 'empty' DWORD value of 0 to my EXE name and that worked as well, giving me IE 9 rendering. Although not documented, I suspect 0 (or an invalid value) will default to the installed browser. I don't have a good way to test this, but if somebody could try this with IE 8 installed that would be great:

- What happens when setting 9000 with IE 8 installed?
- What happens when setting 0 with IE 8 installed?

Don't forget to add Keys for Host Environments

If you're developing your application in Visual Studio and you run the debugger, you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That's because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you're developing in Visual FoxPro, make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment.

Cleaner HTML - no more HTML mangling!

There are a number of additional benefits to setting up rendering of the Web Browser control to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default format, which mangled the original HTML content. If you do the following in the WPF application:

private void button2_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    MessageBox.Show(doc.body.outerHtml);
}

you get different output depending on the rendering mode active. With the default IE 7 rendering you get:

<BODY><DIV>
<H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1>
<DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV>
<DIV class=containercontent>
<FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow -->
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
<FIELDSET><LEGEND>Box with Header</LEGEND>
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV class="gridheaderleft roundbox-top">Box with a Header</DIV>
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
<FIELDSET><LEGEND>Dialog Style Window</LEGEND>
<DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2">
<DIV style="POSITION: relative" class=dialog-header>
<DIV class=closebox></DIV>User Sign-in
<DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV>
<DIV class=descriptionheader>This dialog is draggable and closable</DIV>
<DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" ">
<HR>
<INPUT id=btnLogin value=Login type=button> </DIV>
<DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV>
<SCRIPT type=text/javascript>
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</SCRIPT>
</DIV></BODY>

Now, lest you think I'm out of my mind and wrote completely whacky HTML rooted in the last century, here's the IE 9 rendering mode output, which looks a heck of a lot cleaner and a lot closer to the original HTML of the page I'm accessing:

<body>
<div>
    <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>
    <div class="toolbarcontainer">
        <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>
        <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>
    </div>
    <div class="containercontent">
    <fieldset>
        <legend>Plain Box</legend>
            <!-- Simple Box with rounded corners and shadow -->
            <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">
                <div style="background: khaki;" class="boxcontenttext roundbox">
                    Simple Rounded Corner Box.
                </div>
            </div>
    </fieldset>
    <fieldset>
        <legend>Box with Header</legend>
        <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">
            <div class="gridheaderleft roundbox-top">Box with a Header</div>
            <div style="background: khaki;" class="boxcontenttext roundbox-bottom">
                Simple Rounded Corner Box.
            </div>
        </div>
    </fieldset>
    <fieldset>
        <legend>Dialog Style Window</legend>
        <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">
            <div style="position: relative;" class="dialog-header">
                <div class="closebox"></div>
                User Sign-in
            <div class="closebox"></div></div>
            <div class="descriptionheader">This dialog is draggable and closable</div>
            <div class="dialog-content">
                <label>Username:</label>
                <input name="txtUsername" value=" " type="text">
                <label>Password</label>
                <input name="txtPassword" value=" " type="text">
                <hr/>
                <input id="btnLogin" value="Login" type="button">
            </div>
            <div class="dialog-statusbar">Ready</div>
        </div>
    </fieldset>
    </div>
<script type="text/javascript">
    $(document).ready(function () {
        $("#divDialog")
            .draggable({ handle: ".dialog-header" })
            .closable({ handle: ".dialog-header",
                closeHandler: function () {
                    alert("Window about to be closed.");
                    return true;  // true closes - false leaves open
                }
            });
    });
</script>
</div>
</body>

IOW, in IE9 rendering mode the output is much closer (but not identical) to the original HTML from the page on the Web that we're reading from. As a side note: unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) engine in Windows, which uses the Web Browser control (or COM interfaces anyway) to render Html Help content. I tried setting up hh.exe, which is the help viewer, to use IE 9 rendering, but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer - this would have been a nice quick fix to allow help content served from CHM files to look better.

HTML Editing leaves HTML formatting intact

In the same vein, if you do any inline HTML editing in the control by setting content to be editable, IE 9's control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text you are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IE's format. So if I do:

private void button3_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    doc.body.contentEditable = true;
}

and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> and <hr/>). This is very different from IE 7 mode, which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time, but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags.
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.

Caveats, Caveats, Caveats

It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in using these various browser version interactions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which with IE 7 compatibility returns an expected text range object. When running in IE 8 or IE 9 mode, however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:

loRange = loEdit.document.selection.CreateRange()

The loRange object returned (here in FoxPro) had a length property of 0, but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object:

function getselectionrange() {
    var range = document.selection.createRange();
    return range;
}

and call that JavaScript function from my host application's code:

*** Use a function in the document to get around HTML Editing issues
loRange = loEdit.document.parentWindow.getselectionrange(.f.)

and that does work correctly. This wasn't a big deal, as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else breaking in my code so far though, and that's encouraging, since it uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace - any PROFESSIONAL alternatives anybody?)

Registry Key Installation for your Application

It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate settings in the registry, so if you're creating your installer you most likely will want to set both keys in the registry preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both and set a flag to only install the latter key group in the 64 bit version:

Because this setting is application specific you have to do this for every application you install, unfortunately, but this also means that you can safely configure this setting in the registry, because it is after all only applied to your application. Another problem with install based installation is version detection: if IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code, but in the installer this is much more difficult.
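For the in-code route, a detection sketch along these lines should work (again my code, not the post's; it assumes the installed IE version is readable from the Version value under HKLM\Software\Microsoft\Internet Explorer, or from svcVersion on later IE releases):

using Microsoft.Win32;

public static class BrowserVersion
{
    // Picks a FEATURE_BROWSER_EMULATION value based on the installed IE version
    public static int GetEmulationValue()
    {
        string version = null;
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(@"Software\Microsoft\Internet Explorer"))
        {
            if (key != null)
                version = key.GetValue("svcVersion") as string ??
                          key.GetValue("Version") as string;
        }

        int major = 7;
        if (!string.IsNullOrEmpty(version))
        {
            int parsed;
            if (int.TryParse(version.Split('.')[0], out parsed))
                major = parsed;
        }

        if (major >= 9) return 9000;   // IE9 standards mode for standards !DOCTYPEs
        if (major == 8) return 8000;   // IE8 standards mode for standards !DOCTYPEs
        return 7000;                   // stick with the IE7 default
    }
}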
For the installer itself I don't have a good solution at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus for the moment. If IE 9 is not installed and 9000 is used, the default rendering will remain in use.

It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know which version to load up before it loads, and once loaded can only load a single version of IE. This would account for this annoying application level configuration…

Summary

The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting around with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML Editing support somehow <snicker>… ah, can't have everything.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  FoxPro  Windows


  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
I don't know about you, but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this, although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T>, but it uses the actual values as keys. In most key value pair situations for me, a string is the key to work off of. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization, it produces some pretty atrocious XML.

What doesn't work?

First off, Dictionary serialization with the XmlSerializer doesn't work, so the following fails:

[TestMethod]
public void DictionaryXmlSerializerTest()
{
    var bag = new Dictionary<string, object>();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });

    TestContext.WriteLine(this.ToXml(bag));
}

public string ToXml(object obj)
{
    if (obj == null)
        return null;

    StringWriter sw = new StringWriter();
    XmlSerializer ser = new XmlSerializer(obj.GetType());
    ser.Serialize(sw, obj);
    return sw.ToString();
}

The error you get with this is:

System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary.

Got it! BTW, the same is true with binary serialization.
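As an aside - and this is my own sketch, not part of the original post - the classic workaround at this level is to serialize a List of simple key/value entry objects instead of the dictionary itself. It produces legible XML for XSD-mappable primitives (strings, numbers, dates, byte[]), though any custom value type would still have to be registered via [XmlInclude] or the XmlSerializer extraTypes constructor argument:

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

public class PropertyEntry
{
    public string Key;
    public object Value;   // primitive values get an xsi:type attribute automatically
}

public static class DictionarySerialization
{
    public static string DictionaryToXml(Dictionary<string, object> dict)
    {
        // copy the dictionary into a plain, serializable list of pairs -
        // XmlSerializer refuses IDictionary but is happy with List<T>
        var entries = new List<PropertyEntry>();
        foreach (KeyValuePair<string, object> kv in dict)
            entries.Add(new PropertyEntry { Key = kv.Key, Value = kv.Value });

        StringWriter sw = new StringWriter();
        XmlSerializer ser = new XmlSerializer(typeof(List<PropertyEntry>));
        ser.Serialize(sw, entries);
        return sw.ToString();
    }
}

That covers simple values, but it still leaves type registration and round-tripping of arbitrary objects unsolved - which is the gap the PropertyBag class described below fills.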
Running the same code above against the DataContractSerializer does work:

[TestMethod]
public void DictionaryDataContextSerializerTest()
{
    var bag = new Dictionary<string, object>();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });

    TestContext.WriteLine(this.ToXmlDcs(bag));
}

public string ToXmlDcs(object value, bool throwExceptions = false)
{
    var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null);
    MemoryStream ms = new MemoryStream();
    ser.WriteObject(ms, value);
    return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
}

This DOES work, but produces some pretty heinous XML (formatted with line breaks and indentation here):

<ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
    <KeyValueOfstringanyType>
        <Key>key</Key>
        <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
    </KeyValueOfstringanyType>
    <KeyValueOfstringanyType>
        <Key>Key2</Key>
        <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
    </KeyValueOfstringanyType>
    <KeyValueOfstringanyType>
        <Key>Key3</Key>
        <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
    </KeyValueOfstringanyType>
    <KeyValueOfstringanyType>
        <Key>Key4</Key>
        <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
    </KeyValueOfstringanyType>
    <KeyValueOfstringanyType>
        <Key>Key5</Key>
        <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
    </KeyValueOfstringanyType>
    <KeyValueOfstringanyType>
        <Key>Key7</Key>
        <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
    </KeyValueOfstringanyType>
</ArrayOfKeyValueOfstringanyType>

Ouch! That seriously hurts the eye! :-) Worse, though, it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution, or something lightweight to store in a database, it's not quite ideal.

Why should I care?

As a little background: in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: when loading, write the data to the collection; when the data is saved, serialize the data into an XML string and store it into the database. When reading the data, pick up the XML, and if the collection on the entity is accessed, automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post.) While the DataContractSerializer would work, its verbosity is problematic, both for the size of the generated XML strings and because users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly.

Custom XML Serialization with a PropertyBag Class

So… after a bunch of experimentation with different serialization formats, I decided to create a custom PropertyBag class that provides for a serializable Dictionary.
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The results are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TValue> and PropertyBag classes provide these features:

- Subclassed from Dictionary<T,V>
- Implements IXmlSerializable with a cleanish XML format
- ToXml() and FromXml() methods to export and import to and from XML strings
- Static CreateFromXml() method to create an instance

It's simple enough, as it's merely a Dictionary<string,object> subclass, but one that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use:

[TestMethod]
public void PropertyBagTwoWayObjectSerializationTest()
{
    var bag = new PropertyBag();
    bag.Add("key", "Value");
    bag.Add("Key2", 100.10M);
    bag.Add("Key3", Guid.NewGuid());
    bag.Add("Key4", DateTime.Now);
    bag.Add("Key5", true);
    bag.Add("Key7", new byte[3] { 42, 45, 66 });
    bag.Add("Key8", null);
    bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 });

    string xml = bag.ToXml();
    TestContext.WriteLine(bag.ToXml());

    bag.Clear();
    bag.FromXml(xml);

    Assert.IsTrue(bag["key"] as string == "Value");
    Assert.IsInstanceOfType(bag["Key3"], typeof(Guid));
    Assert.IsNull(bag["Key8"]);
    //Assert.IsNull(bag["Key10"]);
    Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject));
}

This uses the PropertyBag class, which subclasses PropertyBag<object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario, as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this:

[TestMethod]
public void PropertyBagTwoWayValueTypeSerializationTest()
{
    var bag = new PropertyBag<decimal>();
    bag.Add("key", 10M);
    bag.Add("Key1", 100.10M);
    bag.Add("Key2", 200.10M);
    bag.Add("Key3", 300.10M);

    string xml = bag.ToXml();
    TestContext.WriteLine(bag.ToXml());

    bag.Clear();
    bag.FromXml(xml);

    Assert.IsTrue(bag.Get("Key1") == 100.10M);
    Assert.IsTrue(bag.Get("Key3") == 300.10M);
}

and produces typed results of type decimal. The types can be either value or reference types, the combination of which actually proved to be a little trickier than anticipated, due to the null and specific string value checks required - getting the generic typing right required the use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'être for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from a string.
The XML produced for the first example looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
    <item>
        <key>key</key>
        <value>Value</value>
    </item>
    <item>
        <key>Key2</key>
        <value type="decimal">100.10</value>
    </item>
    <item>
        <key>Key3</key>
        <value type="___System.Guid">
            <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
        </value>
    </item>
    <item>
        <key>Key4</key>
        <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
    </item>
    <item>
        <key>Key5</key>
        <value type="boolean">true</value>
    </item>
    <item>
        <key>Key7</key>
        <value type="base64Binary">Ki1C</value>
    </item>
    <item>
        <key>Key8</key>
        <value type="nil" />
    </item>
    <item>
        <key>Key9</key>
        <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
            <ComplexObject>
                <Name>Rick</Name>
                <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
                <Count>10</Count>
            </ComplexObject>
        </value>
    </item>
</properties>

The format is a bit cleaner than the DataContractSerializer's. Each item is serialized into <key> <value> pairs. If the value is a string, no type information is written. Since string tends to be the most common type, this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types, so things like decimal, datetime, boolean and base64Binary are encoded using their XML type values. All other types are embedded with a hokey format that describes the .NET type, preceded by three underscores, and are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS output, and it works - as long as you're serializing back and forth between .NET clients at least. The XML generated from the second example that uses PropertyBag<decimal> looks like this:

<?xml version="1.0" encoding="utf-8"?>
<properties>
    <item>
        <key>key</key>
        <value type="decimal">10</value>
    </item>
    <item>
        <key>Key1</key>
        <value type="decimal">100.10</value>
    </item>
    <item>
        <key>Key2</key>
        <value type="decimal">200.10</value>
    </item>
    <item>
        <key>Key3</key>
        <value type="decimal">300.10</value>
    </item>
</properties>

How does it work?

As I mentioned, there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom XML serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code:
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } } The code uses a couple of small helper classes SerializationUtils and XmlUtils for mapping Xml types to and from .NET, both of which are from the WestWind,Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually to persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the  short time since I created it. Resources You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you. PropertyBag Source Code SerializationUtils and XmlUtils Westwind.Utilities Assembly on NuGet (add from Visual Studio) © Rick Strahl, West Wind Technologies, 2005-2011Posted in .NET  CSharp   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();


  • Using HTML 5 SessionState to save rendered Page Content

    - by Rick Strahl
HTML 5 SessionState and LocalStorage are very useful and super easy to use to manage client side state. For building rich client side or SPA style applications it's a vital feature to be able to cache user data as well as HTML content in order to swap pages in and out of the browser's DOM. What might not be so obvious is that you can also use the sessionStorage and localStorage objects even in classic server rendered HTML applications to provide caching features between pages. These APIs have been around for a long time and are supported by most relatively modern browsers, even all the way back to IE8, so you can use them safely in your Web applications.

SessionState and LocalStorage are easy

The APIs that make up sessionStorage and localStorage are very simple. Both objects feature the same API interface, which is a simple, string based key value store that has getItem, setItem, removeItem, clear and key methods. The objects are also pseudo array objects, so they can be iterated like an array with a length property, and you have array indexers to set and get values with. Basic usage for storing and retrieval looks like this (using sessionStorage, but the syntax is the same for localStorage - just switch the objects):

// set
var lastAccess = new Date().getTime();
if (sessionStorage)
    sessionStorage.setItem("myapp_time", lastAccess.toString());

// retrieve in another page or on a refresh
var time = null;
if (sessionStorage)
    time = sessionStorage.getItem("myapp_time");

if (time)
    time = new Date(time * 1);
else
    time = new Date();

sessionStorage stores data that is browser session specific and that has a lifetime of the active browser session or window. Shut down the browser or tab and the storage goes away. localStorage uses the same API interface, but the data is stored permanently in the browser's storage area until deleted via code or by clearing out browser cookies (not the cache). Both sessionStorage and localStorage space is limited. The spec is ambiguous about this - supposedly sessionStorage should allow for unlimited size, but it appears that most WebKit browsers support only 2.5mb for either object. This means you have to be careful what you store, especially since other applications might be running on the same domain and also use the storage mechanisms. That said, 2.5mb worth of character data is quite a bit and would go a long way. The easiest way to get a feel for how sessionStorage and localStorage work is to look at a simple example. You can go check out the following example online in Plunker: http://plnkr.co/edit/0ICotzkoPjHaWa70GlRZ?p=preview which looks like this:

Plunker is an online HTML/JavaScript editor that lets you write and run JavaScript code; it's similar to JsFiddle, but a bit cleaner to work in IMHO (thanks to John Papa for turning me on to it). The sample has two text boxes with counts that update session/local storage every time you click the related button. The counts are 'cached' in Session and Local storage. The point of these examples is that both counters survive full page reloads, and the LocalStorage counter survives a complete browser shutdown and restart. Go ahead and try it out by clicking the Reload button after updating both counters, and then shutting down the browser completely and going back to the same URL (with the same browser). What you should see is that reloads leave both counters intact at the counted values, while a browser restart will leave only the local storage counter intact.
The code to deal with the SessionStorage (and LocalStorage, not shown here) in the example is isolated into a couple of wrapper methods to simplify the code:

function getSessionCount() {
    var count = 0;
    if (sessionStorage) {
        var count = sessionStorage.getItem("ss_count");
        count = !count ? 0 : count * 1;
    }
    $("#txtSession").val(count);
    return count;
}

function setSessionCount(count) {
    if (sessionStorage)
        sessionStorage.setItem("ss_count", count.toString());
}

These two functions essentially load and store a session counter value. The two key methods used here are:

sessionStorage.getItem(key);
sessionStorage.setItem(key, stringVal);

Note that the value given to setItem and returned by getItem has to be a string. If you pass another type you get an error. Don't let that limit you though - you can easily enough store JSON data in a value, so it's quite possible to pass complex objects and store them into a single sessionStorage value:

var user = { name: "Rick", id: "ricks", level: 8 };
sessionStorage.setItem("app_user", JSON.stringify(user));

to retrieve it:

var user = sessionStorage.getItem("app_user");
if (user)
    user = JSON.parse(user);

Simple! If you're using the Chrome Developer Tools (F12) you can also check out the session and local storage state on the Resource tab:

You can also use this tool to refresh or remove entries from storage. What we just looked at is a purely client side implementation where a couple of counters are stored. For rich client centric AJAX applications, sessionStorage and localStorage provide a very nice and simple API to store application state while the application is running. But you can also use these storage mechanisms to manage server centric HTML applications, when you combine server rendering with some JavaScript to perform client side data caching. You can either store some state information and data on the client (ie. store a JSON object and carry it forth between server rendered HTML requests), or you can use it for good old HTTP based caching, where some rendered HTML is saved and then restored later. Let's look at the latter with a real life example.

Why do I need Client-side Page Caching for Server Rendered HTML?

I don't know about you, but in a lot of my existing server driven applications I have lists that display a fair amount of data. Typically these lists contain links to then drill down into more specific data, either for viewing or editing. You can then click on a link and go off to a detail page that provides more concise content. So far so good. But now you're done with the detail page and need to get back to the list, so you click on a 'bread crumbs trail' or an application level 'back to list' button and… …you end up back at the top of the list - the scroll position, the current selection, in some cases even filter conditions - all gone with the wind. You've left behind the state of the list and are starting from scratch in your browsing of the list from the top. Not cool! Sound familiar? This is a pretty common scenario with server rendered HTML content, where it's so common to display lists to drill into, only to lose state in the process of returning back to the original list. Look at just about any traditional forums application, or even StackOverFlow, to see what I mean here. Scroll down a bit to look at a post or entry, drill in, then use the bread crumbs or tab to go back… In some cases returning to the top of a list is not a big deal.
On StackOverFlow that sort of works, because content is turning around so quickly you probably want to actually look at the top posts. Not always though - if you're browsing through a list of search topics you're interested in and drill in, there's no way back to that position. Essentially, anytime you're actively browsing the items in the list, that's when state becomes important, and if it's not handled the user experience can be really disruptive. 

Content Caching

If you're building client centric SPA style applications this is a fairly easy problem to solve - you tend to render the list once and then update the page content to overlay the detail content, only hiding the list temporarily until it's used again later. It's relatively easy to accomplish this simply by hiding content on the page and later making it visible again. But if you use server rendered content, hanging on to all the detail like filters, selections and scroll position is not quite as easy. Or is it??? This is where sessionStorage comes in handy. What if we just save the rendered content of a previous page, and then restore it when we return to this page, based on a special flag that tells us to use the cached version? Let's see how we can do this.

A Real World Use Case

Recently my local ISP asked me to help out with updating an ancient classifieds application. They had a very busy, local classifieds app that was originally an ASP classic application. The old app was - wait for it: frames based - and even though I lobbied against it, the decision was made to keep the frames based layout to allow rapid browsing of the hundreds of posts that are made on a daily basis. The primary reason they wanted this was precisely for the ability to quickly browse content item by item. While I personally hate working with Frames, I have to admit that the UI actually works well with the frames layout, as long as you're running on a large desktop screen. You can check out the frames based desktop site here: http://classifieds.gorge.net/ However, when I rebuilt the app I also added a secondary view that doesn't use frames. The main reason for this of course was mobile displays, which work horribly with frames. So there's a somewhat mobile friendly version of the interface, which ditches the frames and uses some responsive design tweaking for mobile capable operation: http://classifeds.gorge.net/mobile  (or browse the base url with your browser width under 800px)

Here's what the mobile, non-frames view looks like:

As you can see, this means that the list of classifieds posts now is a list, and there's a separate page for drilling down into the item. And of course… originally we ran into that usability issue I mentioned earlier, where the browse, view detail, go back to the list cycle resulted in lost list state. Originally in mobile mode you scrolled through the list, found an item to look at, and drilled in to display the item detail. Then you clicked back to the list and BAM - you've lost your place. Because there are so many items added on a daily basis, the full list is never fully loaded; rather, there's a "Load Additional Listings" entry at the bottom. Not only did we originally lose our place when coming back to the list, but any 'additionally loaded' items were no longer there, because the list was now rendering as if it were the first page hit. The additional listings, any filters, and the selection of an item were all lost. Major Suckage!
Using Client SessionStorage to cache Server Rendered Content

To work around this problem, I decided to cache the rendered page content from the list in SessionStorage. Anytime the list renders or is updated with Load Additional Listings, the page HTML is cached and stored in Session Storage. Any back links from the detail page or the login or write entry forms then point back to the list page with a back=true query string parameter. If the server side sees this parameter, it doesn't render the part of the page that is cached. Instead, the client side code retrieves the data from the sessionStorage cache and simply inserts it into the page. It sounds pretty simple, and the overall process is really easy, but there are a few gotchas that I'll discuss in a minute. But first let's look at the implementation. Let's start with the server side, because that'll give a quick idea of the doc structure. As I mentioned, the server renders data from an ASP.NET MVC view. The URL for the list page, when returning to it from the display page (or a host of other pages), looks like this:

https://classifieds.gorge.net/list?back=True

The query string value is a flag that indicates whether the server should render the HTML. Here's what the top level MVC Razor view for the list page looks like:

@model MessageListViewModel
@{
    ViewBag.Title = "Classified Listing";
    bool isBack = !string.IsNullOrEmpty(Request.QueryString["back"]);
}
<form method="post" action="@Url.Action("list")">
    <div id="SizingContainer">
        @if (!isBack)
        {
            @Html.Partial("List_CommandBar_Partial", Model)
            <div id="PostItemContainer" class="scrollbox" xstyle="-webkit-overflow-scrolling: touch;">
                @Html.Partial("List_Items_Partial", Model)
                @if (Model.RequireLoadEntry)
                {
                    <div class="postitem loadpostitems" style="padding: 15px;">
                        <div id="LoadProgress" class="smallprogressright"></div>
                        <div class="control-progress">
                            Load additional listings...
                        </div>
                    </div>
                }
            </div>
        }
    </div>
</form>

As you can see, the query string triggers a conditional block that, if set, is simply not rendered. The content inside of #SizingContainer basically holds the entire page's HTML, sans the headers and scripts, but including the filter options and menu at the top. In this case this makes good sense - in other situations, the fact that the menu or filter options might be dynamically updated might make you cache only the list rather than essentially the entire page. In this particular instance all of the content works and produces the proper result, as both the list and any filter conditions in the form inputs are restored. Ok, let's move on to the client. On the client there are two page level functions that deal with saving and restoring state. Like the counter example I showed earlier, I like to wrap the logic to save and restore values from sessionStorage into separate functions, because they are almost always used in several places.

page.saveData = function(id) {
    if (!sessionStorage)
        return;

    var data = {
        id: id,
        scroll: $("#PostItemContainer").scrollTop(),
        html: $("#SizingContainer").html()
    };
    sessionStorage.setItem("list_html", JSON.stringify(data));
};

page.restoreData = function() {
    if (!sessionStorage)
        return;

    var data = sessionStorage.getItem("list_html");
    if (!data)
        return null;

    return JSON.parse(data);
};

The data that is saved is an object which contains an id - the selected element when the user clicks - and a scroll position. These two values are used to reset the scroll position when the data is used from the cache.
Finally, the HTML from the #SizingContainer element is stored, which makes up the bulk of the document's HTML. In this application the HTML captured could be a substantial bit of data. If you recall, I mentioned that the server side code renders a small chunk of data initially and then gets more data if the user reads through the first 50 or so items. The rest of the items retrieved can be rather sizable. Other than the JSON deserialization overhead, that's OK; since I'm using SessionStorage, the storage space has no immediate limits. Next is the core logic to handle saving and restoring the page state. At first this would seem pretty simple, and in some cases it might be, but as the following code demonstrates there are a few gotchas to watch out for. Here's the relevant code I use to save and restore:

$( function() {
    …
    var isBack = getUrlEncodedKey("back", location.href);
    if (isBack) {
        // remove the back key from URL
        setUrlEncodedKey("back", "", location.href);

        var data = page.restoreData(); // restore from sessionState
        if (!data) {
            // no data - force redisplay of the server side default list
            window.location = "list";
            return;
        }

        $("#SizingContainer").html(data.html);

        var el = $(".postitem[data-id=" + data.id + "]");
        $(".postitem").removeClass("highlight");
        el.addClass("highlight");

        $("#PostItemContainer").scrollTop(data.scroll);

        setTimeout(function() {
            el.removeClass("highlight");
        }, 2500);
    }
    else if (window.noFrames)
        page.saveData(null); // save when page loads

    $("#SizingContainer").on("click", ".postitem", function() {
        var id = $(this).attr("data-id");
        if (!id)
            return true;

        if (window.noFrames)
            page.saveData(id);

        var contentFrame = window.parent.frames["Content"];
        if (contentFrame)
            contentFrame.location.href = "show/" + id;
        else
            window.location.href = "show/" + id;
        return false;
    });
    …

The code starts out by checking for the back query string flag, which triggers restoring from the client cache. If it's present, the cached data structure is read from sessionStorage. It's important here to check whether data was returned. If the user had back=true on the query string but there is no cached data, he likely bookmarked this page or otherwise shut down the browser and came back to this URL. In that case the server didn't render any detail and we have no cached data, so all we can do is redirect to the original default list view using window.location. If we continued, the page would render no data - so make sure to always check the cache retrieval result. Always! If there is data, it's loaded, and the data.html value is restored back into the document by simply injecting the HTML back into the document's #SizingContainer element:

$("#SizingContainer").html(data.html);

It's that simple, and it's quite quick, even with a fully loaded list of additional items, even on a phone. The actual HTML data is stored to the cache on every page load initially, and then again when the user clicks on an element to navigate to a particular listing. The former ensures that the client cache always has something in it, and the latter updates it with additional information for the selected element. For the click handling I use a data-id attribute on the list item (.postitem) in the list and retrieve the id from that. That id is then used to navigate to the actual entry, and is also stored in the saved cache data. The id is used to reset the selection by searching for the data-id value in the restored elements.
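To round out the picture, here's roughly what the server side counterpart could look like as a controller action. This is my own sketch, not code from the app - the Entries property and the LoadListings() helper are hypothetical names - and recall that the actual skip-rendering decision happens in the Razor view shown earlier via the back flag:

using System.Web.Mvc;

public class ClassifiedsController : Controller
{
    // MVC binds ?back=True from the query string to the bool parameter
    public ActionResult List(bool back = false)
    {
        var model = new MessageListViewModel();

        // only load the list data when the server actually renders it;
        // on back=true the client restores the cached HTML instead
        if (!back)
            model.Entries = LoadListings(50); // hypothetical data access helper

        return View(model);
    }
}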
Overall this save/restore process is pretty straightforward and it doesn't require a bunch of code, yet it yields a huge improvement in the usability of the site on mobile devices (or for anybody who uses the non-frames view). Some things to watch out for As easy as it conceptually seems to simply store and retrieve cached content, you have to be quite aware of what type of content you are caching. The code above is all that's specific to the cache/restore cycle and it works, but it took a few tweaks to the rest of the script code and server code to make it all work. There were a few gotchas that weren't immediately obvious. Here are a few things to pay attention to: event handling logic, timing of DOM manipulation, inline script code, and bookmarking of the cache URL when no cache exists. Do you have inline script code in your HTML? That script code isn't going to run if you restore from cache and simply assign the HTML, or it may not run at the time you think it would in the normal DOM rendering cycle. JavaScript Event Hookups The biggest issue I ran into with this approach almost immediately is that originally I had various static event handlers hooked up to various UI elements that are now cached. If you have an event handler like:$("#btnSearch").click( function() {…}); that works fine when the page loads with server rendered HTML, but that code breaks when you load the HTML from cache. Why? Because the elements you're trying to hook those events to may not actually be there - yet. Luckily there's an easy workaround for this by using deferred events. With jQuery you can use the .on() event handler instead:$("#SelectionContainer").on("click","#btnSearch", function() {…}); which monitors a parent element for the events and checks the inner selector to decide which elements to handle events on. This effectively defers event binding to runtime, so bindings still work as more items are added to the document. For any cached content use deferred events. Timing of manipulating DOM Elements Along the same lines, make sure that your DOM manipulation code runs after the code that loads the cached content into the page, so that you don't manipulate DOM elements that don't exist just yet. Ideally you'll want to check for the condition to restore cached content towards the top of your script code, but that can be tricky if you have components or other logic that might not all run in a straight line. Inline Script Code Here's another small problem I ran into: I use a DateTime Picker widget I built a while back that relies on the jQuery date time picker, plus a helper function that adds keyboard date navigation using JavaScript logic. Because of MVC's limited 'object model', the only way to embed widget content into the page is through inline script. This code broke when I inserted the cached HTML into the page, because the script code was not available when the component actually got injected into the page. As with the last bullet - it's a matter of timing. There's no good workaround for this - in my case I pulled out the jQuery date picker and relied on native <input type="date" /> logic instead - a better choice these days anyway, especially since this view is meant primarily to serve mobile devices, which actually support date input through the browser (unlike desktop browsers, of which only WebKit seems to support it). Bookmarking Cached Urls When you cache HTML content you have to make a decision about whether you cache on the client and skip rendering that same content on the server.
In the Classifieds app I didn't render server side content, so if the user comes to the page with back=True and there is no cached content I have to have a Plan B. Typically this happens when somebody ends up bookmarking the back URL. The easiest and safest solution for this scenario is to ALWAYS check the cache result to make sure it exists and, if not, have a safe URL to fall back to - in this case the plain uncached list URL, which effectively amounts to a redirect. This seems really obvious in hindsight, but it's easy to overlook and not see a problem until much later, when it's not obvious at all why the page is not rendering anything. Don't use <body> to replace Content Since we're practically replacing all the HTML in the page it may seem tempting to simply replace the HTML content of the <body> tag. Don't. The body tag usually contains key things that should stay in the page and be there when it loads - specifically script tags and possibly other embedded content. It's best to create a top level DOM element specifically as a placeholder container for your cached content and wrap it just around the actual content you want to replace. In the app above the #SizingContainer is that container. Other Approaches The approach I've used for this application is fairly specific to the existing server rendered application we're running, so it's just one approach you can take with caching. However, for server rendered content this is a pattern I've used in a few apps to retrofit some client caching into list displays. In this application I took the path of least resistance to the existing server rendering logic. Here are a few other ways that come to mind: Using Partial HTML Rendering via AJAX Instead of rendering the page initially on the server, the page would load empty and the client would render the UI by retrieving the respective HTML from a partial view and embedding it into the page. This effectively makes the initial rendering and the cached rendering logic identical and saves the server from having to decide whether a given request needs to be rendered or not (i.e. no checking for a back=true switch). All the logic related to caching is handled on the client in this case. Using JSON Data and Client Rendering The hardcore client option is to do the whole UI SPA style: pull JSON data from the server and render it using templates or client side databinding with Knockout/Angular et al. As with the partial rendering approach, the advantage is that there's no difference in the logic between pulling the data from cache or rendering from scratch, other than the initial check for the cache request. Of course, if the app is a full-on SPA, caching may not even be required - the list could just stay in memory and be hidden and reactivated. I'm sure there are a number of other ways this can be handled as well, especially using AJAX. AJAX rendering might simplify the logic, but it also complicates search engine optimization since there's no content loaded initially. So there are always tradeoffs, and it's important to look at all angles before deciding on any sort of caching solution. State of the Session sessionStorage and localStorage are easy to use in client code and can be integrated even with server centric applications to provide nice caching features for content and data.
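One caveat on "easy to use": merely touching window.sessionStorage can itself throw in some environments (older Safari in private browsing, or browsers with storage disabled), so a plain if (!sessionStorage) check isn't entirely bulletproof. A slightly more defensive availability test - just a sketch, and the storageAvailable() helper name is mine rather than anything from the app above - looks like this:
function storageAvailable(type) {
    // type is "sessionStorage" or "localStorage"; the write/remove
    // round trip catches both access errors and zero-quota modes
    try {
        var storage = window[type];
        var probe = "__storage_probe__";
        storage.setItem(probe, probe);
        storage.removeItem(probe);
        return true;
    } catch (e) {
        return false;
    }
}
With that in place, page.saveData() and page.restoreData() could bail out early via if (!storageAvailable("sessionStorage")) return; and the page simply degrades to plain server rendering.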
In this post I've shown a very specific scenario of storing HTML content for the purpose of remembering list view data and state, and making the browsing experience for lists a bit more friendly, especially if there's dynamically loaded content involved. If you haven't played with sessionStorage or localStorage I encourage you to give it a try. There's a lot of cool stuff that you can do with this beyond the specific scenario I've covered here… Resources Overview of localStorage (also applies to sessionStorage) Web Storage Compatibility Modernizr Test Suite © Rick Strahl, West Wind Technologies, 2005-2013. Posted in JavaScript  HTML5  ASP.NET  MVC

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
I'm trying to set up a root certification authority and a subordinate certification authority, and to generate client certificates signed by either of these CAs that nginx 0.7.67 on Debian Squeeze will accept. My problem is that a root-CA-signed client certificate works fine, while a subordinate-CA-signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes were made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank.
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #1 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf Step 8: Converting the Client #1 certificate to PKCS12 format: # openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt Step 9: Client #2 certificate generation and signing by subordinate CA: # cd ../subCA/ # openssl genrsa -out ./private/client2.key -des3 2048 # openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf Enter pass phrase for ./private/client2.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #2 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf Step 10: Converting the Client #2 certificate to PKCS12 format: # openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt Step 11: Passing the server certificate and private key to nginx (performed with OS superuser privileges): # cd ../rootCA/ # cp -v certs/server.crt /etc/nginx/ssl/ # cp -v private/server.key /etc/nginx/ssl/ Step 12: Passing the root and subordinate CA certificates to nginx (performed with OS superuser privileges): # cat certs/rootca.crt > /etc/nginx/ssl/client.pem # cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem The client.pem file looks like this: # cat /etc/nginx/ssl/client.pem -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ... -----BEGIN CERTIFICATE----- MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0 ... -----END CERTIFICATE----- It looks like everything is working fine: # service nginx reload # Reloading nginx configuration: Enter PEM pass phrase: # nginx. # Step 13: Installing the *.p12 certificates in the browser (Firefox in my case) gives the problem I've mentioned above. Client #1 = 200 OK, Client #2 = 400 Bad request/The SSL certificate error. Any ideas what I should do?
Update 1: Results of SSL connection test attempts: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) --- Everything seems fine with Client #2 and root CA certificate but request returns 400 Bad Request error: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... 
Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed Verification fails with the Client #2 certificate and the subordinate CA certificate: # openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Still getting the 400 Bad Request error with the concatenated CA certificates and Client #2 (but still everything OK with Client #1): # cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Update 2: I've managed to recompile nginx with debugging enabled.
Here is the relevant part of the trace for a successful connection by Client #1: 2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1" And here is the corresponding part of the trace for an unsuccessful connection by Client #2: 2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug]
38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1" So I'm getting OpenSSL error #20 and then #27. According to the verify documentation: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate - the issuer certificate could not be found; this occurs if the issuer certificate of an untrusted certificate cannot be found. 27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted - the root CA is not marked as trusted for the specified purpose.

    Read the article

  • Java Cloud Service Integration to REST Service

    - by Jani Rautiainen
Java Cloud Service (JCS) provides a platform to develop and deploy business applications in the cloud. In Fusion Applications Cloud deployments customers do not have the option to deploy custom applications developed with JDeveloper, to ensure the integrity and supportability of the hosted application service. Instead the custom applications can be deployed to JCS and integrated with the Fusion Applications Cloud instance. This series of articles will go through the features of JCS, provide end-to-end examples of how to develop and deploy applications on JCS, and show how to integrate them with the Fusion Applications instance. In this article a custom application integrating with a REST service will be implemented. We will use REST services provided by Taleo as an example; however, the same approach will work with any REST service. In this example the data from the REST service is used to populate a dynamic table. Pre-requisites Access to Cloud instance In order to deploy the application, access to a JCS instance is needed; a free trial JCS instance can be obtained from the Oracle Cloud site. To register you will need a credit card, even though the credit card will not be charged. To register simply click "Try it" and choose the "Java" option. The confirmation email will contain the connection details. See this video for an example of the registration. Once the request is processed you will be assigned 2 service instances: Java and Database. Applications deployed to JCS must use Oracle Database Cloud Service as their underlying database, so when a JCS instance is created a database instance is associated with it using a JDBC data source. The cloud services can be monitored and managed through the web UI. For details refer to Getting Started with Oracle Cloud. JDeveloper JDeveloper contains Cloud specific features related to e.g. connection and deployment. To use these features download JDeveloper from the JDeveloper download site by clicking the "Download JDeveloper 11.1.1.7.1 for ADF deployment on Oracle Cloud" link; this version of JDeveloper has the JCS integration features that will be used in this article. For versions that do not include the Cloud integration features the Oracle Java Cloud Service SDK or the JCS Java Console can be used for deployment. For details on installing and configuring JDeveloper refer to the installation guide. For details on the SDK refer to Using the Command-Line Interface to Monitor Oracle Java Cloud Service and Using the Command-Line Interface to Manage Oracle Java Cloud Service. Access to a local database The database associated with the JCS instance cannot be connected to with JDBC. Since creating ADFbc business components requires a JDBC connection, we will need access to a local database. 3rd party libraries This example will use some 3rd party libraries for implementing the REST service call and processing the input / output content. Other libraries may also be used; however, these are tested to work. Jersey 1.x The Jersey library will be used as the client to make the call to the REST service. The JCS documentation for supported specifications states: Java API for RESTful Web Services (JAX-RS) 1.1 So Jersey 1.x will be used. Download the single-JAR Jersey bundle; in this example the Jersey 1.18 JAR bundle is used. Json-simple The json-simple library will be used to process the JSON objects. Download the JAR file; in this example json-simple-1.1.1.jar is used. Accessing data in Taleo Before implementing the application it is beneficial to familiarize oneself with the data in Taleo.
The easiest way to do this is to use a REST client in your browser. Once added to the browser you can access the UI: The client can be used to call the REST services to test the URLs and data before adding them into the application. First derive the base URL for the service; this can be done with: Method: GET URL: https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/<company name> The response will contain the base URL to be used for the service calls for the company. Next obtain an authentication token with: Method: POST URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/login?orgCode=<company>&userName=<user name>&password=<password> The response includes an authentication token that can be used for a few hours to authenticate with the service: {   "response": {     "authToken": "webapi26419680747505890557"   },   "status": {     "detail": {},     "success": true   } } To authenticate the service calls navigate to "Headers -> Custom Header": And add a new request header with: Name: Cookie Value: authToken=webapi26419680747505890557 Once the authentication token is defined the tool can be used to invoke REST services; for example: Method: GET URL: https://ch.tbe.taleo.net/CH07/ats/api/v1/object/candidate/search.xml?status=16 This data will be used in the application to be created. For details on the Taleo REST services refer to the Taleo Business Edition REST API Guide. Create Application First a Fusion Web Application is created and configured. Start JDeveloper and click "New Application": Application Name: JcsRestDemo Application Package Prefix: oracle.apps.jcs.test Application Template: Fusion Web Application (ADF) Configure Local Cloud Connection Follow the steps documented in the "Java Cloud Service ADF Web Application" article to configure the local database connection needed to create the ADFbc objects. Configure Libraries Add the 3rd party libraries to the class path. Create the following directory and copy the jar files into it: <JDEV_USER_HOME>/JcsRestDemo/lib Select the "Model" project, navigate "Application -> Project Properties -> Libraries and Classpath -> Add JAR / Directory" and add the two 3rd party libraries: Accessing Data from Taleo To access data from Taleo using the REST service the 3rd party libraries will be used. Two Java classes are implemented: one representing the Candidate object and another for accessing the Taleo repository. Candidate The Candidate object is a POJO used to represent the candidate data obtained from the Taleo repository. The data obtained will be used to populate the ADFbc object used to display the data on the UI. The Candidate object simply contains the variables we obtain using the REST services and the getters / setters for them: Navigate "New -> General -> Java -> Java Class", enter "Candidate" as the name and create it in the package "oracle.apps.jcs.test.model".
Copy / paste the following as the content: import oracle.jbo.domain.Number; public class Candidate { private Number candId; private String firstName; private String lastName; public Candidate() { super(); } public Candidate(Number candId, String firstName, String lastName) { super(); this.candId = candId; this.firstName = firstName; this.lastName = lastName; } public void setCandId(Number candId) { this.candId = candId; } public Number getCandId() { return candId; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getFirstName() { return firstName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getLastName() { return lastName; } } Taleo Repository Taleo repository class will interact with the Taleo REST services. The logic will query data from Taleo and populate Candidate objects with the data. The Candidate object will then be used to populate the ADFbc object used to display data on the UI. Navigate "New -> General -> Java -> Java Class", enter "TaleoRepository" as the name and create it in the package "oracle.apps.jcs.test.model".  Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import com.sun.jersey.api.client.Client; import com.sun.jersey.api.client.ClientResponse; import com.sun.jersey.api.client.WebResource; import com.sun.jersey.core.util.MultivaluedMapImpl; import java.io.StringReader; import java.util.ArrayList; import java.util.Iterator; import java.util.List; import java.util.Map; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.MultivaluedMap; import oracle.jbo.domain.Number; import org.json.simple.JSONArray; import org.json.simple.JSONObject; import org.json.simple.parser.JSONParser; /** * This class interacts with the Taleo REST services */ public class TaleoRepository { /** * Connection information needed to access the Taleo services */ String _company = null; String _userName = null; String _password = null; /** * Jersey client used to access the REST services */ Client _client = null; /** * Parser for processing the JSON objects used as * input / output for the services */ JSONParser _parser = null; /** * The base url for constructing the REST URLs. This is obtained * from Taleo with a service call */ String _baseUrl = null; /** * Authentication token obtained from Taleo using a service call. * The token can be used to authenticate on subsequent * service calls. The token will expire in 4 hours */ String _authToken = null; /** * Static url that can be used to obtain the url used to construct * service calls for a given company */ private static String _taleoUrl = "https://tbe.taleo.net/MANAGER/dispatcher/api/v1/serviceUrl/"; /** * Default constructor for the repository * Authentication details are passed as parameters and used to generate * authentication token. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * * @param company the company for which the service calls are made * @param userName the user name to authenticate with * @param password the password to authenticate with. 
*/ public TaleoRepository(String company, String userName, String password) { super(); _company = company; _userName = userName; _password = password; _client = Client.create(); _parser = new JSONParser(); _baseUrl = getBaseUrl(); } /** * This obtains the base url for a company to be used * to construct the urls for service calls * @return base url for the service calls */ private String getBaseUrl() { String result = null; if (null != _baseUrl) { result = _baseUrl; } else { try { String company = _company; WebResource resource = _client.resource(_taleoUrl + company); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("URL"); } catch (Exception ex) { ex.printStackTrace(); } } return result; } /** * Generates authentication token, that can be used to authenticate on * subsequent service calls. Note that each service call will * generate its own token. This is done to avoid dealing with the expiry * of the token. Also only 20 tokens are allowed per user simultaneously. * So instead for each call there is login / logout. * @return authentication token that can be used to authenticate on * subsequent service calls */ private String login() { String result = null; try { MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("orgCode", _company); formData.add("userName", _userName); formData.add("password", _password); WebResource resource = _client.resource(_baseUrl + "login"); ClientResponse response = resource.type(MediaType.APPLICATION_FORM_URLENCODED_TYPE).post(ClientResponse.class, formData); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); JSONObject jsonResponse = (JSONObject)jsonObject.get("response"); result = (String)jsonResponse.get("authToken"); } catch (Exception ex) { throw new RuntimeException("Unable to login ", ex); } if (null == result) throw new RuntimeException("Unable to login "); return result; } /** * Releases a authentication token. Each call to login must be followed * by call to logout after the processing is done. This is required as * the tokens are limited to 20 per user and if not released the tokens * will only expire after 4 hours. * @param authToken */ private void logout(String authToken) { WebResource resource = _client.resource(_baseUrl + "logout"); resource.header("cookie", "authToken=" + authToken).post(ClientResponse.class); } /** * This method is used to obtain a list of candidates using a REST * service call. At this example the query is hard coded to query * based on status. 
The url constructed to access the service is: * <_baseUrl>/object/candidate/search.xml?status=16 * @return List of candidates obtained with the service call */ public List<Candidate> getCandidates() { List<Candidate> result = new ArrayList<Candidate>(); try { // First login, note that in finally block we must have logout _authToken = "authToken=" + login(); /** * Construct the URL, the resulting url will be: * <_baseUrl>/object/candidate/search.xml?status=16 */ MultivaluedMap<String, String> formData = new MultivaluedMapImpl(); formData.add("status", "16"); JSONArray searchResults = (JSONArray)getTaleoResource("object/candidate/search", "searchResults", formData); /** * Process the results, the resulting JSON object is something like * this (simplified for readability): * * { * "response": * { * "searchResults": * [ * { * "candidate": * { * "candId": 211, * "firstName": "Mary", * "lastName": "Stochi", * logic here will find the candidate object(s), obtain the desired * data from them, construct a Candidate object based on the data * and add it to the results. */ for (Object object : searchResults) { JSONObject temp = (JSONObject)object; JSONObject candidate = (JSONObject)findObject(temp, "candidate"); Long candIdTemp = (Long)candidate.get("candId"); Number candId = (null == candIdTemp ? null : new Number(candIdTemp)); String firstName = (String)candidate.get("firstName"); String lastName = (String)candidate.get("lastName"); result.add(new Candidate(candId, firstName, lastName)); } } catch (Exception ex) { ex.printStackTrace(); } finally { if (null != _authToken) logout(_authToken); } return result; } /** * Convenience method to construct url for the service call, invoke the * service and obtain a resource from the response * @param path the path for the service to be invoked. This is combined * with the base url to construct a url for the service * @param resource the key for the object in the response that will be * obtained * @param parameters any parameters used for the service call. The call * is slightly different depending whether parameters exist or not. * @return the resource from the response for the service call */ private Object getTaleoResource(String path, String resource, MultivaluedMap<String, String> parameters) { Object result = null; try { WebResource webResource = _client.resource(_baseUrl + path); ClientResponse response = null; if (null == parameters) response = webResource.header("cookie", _authToken).get(ClientResponse.class); else response = webResource.queryParams(parameters).header("cookie", _authToken).get(ClientResponse.class); String entity = response.getEntity(String.class); JSONObject jsonObject = (JSONObject)_parser.parse(new StringReader(entity)); result = findObject(jsonObject, resource); } catch (Exception ex) { ex.printStackTrace(); } return result; } /** * Convenience method to recursively find a object with an key * traversing down from a given root object. This will traverse a * JSONObject / JSONArray recursively to find a matching key, if found * the object with the key is returned. 
* @param root root object which contains the key searched for * @param key the key for the object to search for * @return the object matching the key */ private Object findObject(Object root, String key) { Object result = null; if (root instanceof JSONObject) { JSONObject rootJSON = (JSONObject)root; if (rootJSON.containsKey(key)) { result = rootJSON.get(key); } else { Iterator children = rootJSON.entrySet().iterator(); while (children.hasNext()) { Map.Entry entry = (Map.Entry)children.next(); Object child = entry.getValue(); if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } } else if (root instanceof JSONArray) { JSONArray rootJSON = (JSONArray)root; for (Object child : rootJSON) { if (child instanceof JSONObject || child instanceof JSONArray) { result = findObject(child, key); if (null != result) break; } } } return result; } }   Creating Business Objects While JCS application can be created without a local database, the local database is required when using ADFbc objects even if database objects are not referred. For this example we will create a "Transient" view object that will be programmatically populated based the data obtained from Taleo REST services. Creating ADFbc objects Choose the "Model" project and navigate "New -> Business Tier : ADF Business Components : View Object". On the "Initialize Business Components Project" choose the local database connection created in previous step. On Step 1 enter "JcsRestDemoVO" on the "Name" and choose "Rows populated programmatically, not based on query": On step 2 create the following attributes: CandId Type: Number Updatable: Always Key Attribute: checked Name Type: String Updatable: Always On steps 3 and 4 accept defaults and click "Next".  On step 5 check the "Application Module" checkbox and enter "JcsRestDemoAM" as the name: Click "Finish" to generate the objects. Populating the VO To display the data on the UI the "transient VO" is populated programmatically based on the data obtained from the Taleo REST services. Open the "JcsRestDemoVOImpl.java". Copy / paste the following as the content (for details of the implementation refer to the documentation in the code): import java.sql.ResultSet; import java.util.List; import java.util.ListIterator; import oracle.jbo.server.ViewObjectImpl; import oracle.jbo.server.ViewRowImpl; import oracle.jbo.server.ViewRowSetImpl; // --------------------------------------------------------------------- // --- File generated by Oracle ADF Business Components Design Time. // --- Tue Feb 18 09:40:25 PST 2014 // --- Custom code may be added to this class. // --- Warning: Do not modify method signatures of generated methods. // --------------------------------------------------------------------- public class JcsRestDemoVOImpl extends ViewObjectImpl { /** * This is the default constructor (do not remove). */ public JcsRestDemoVOImpl() { } @Override public void executeQuery() { /** * For some reason we need to reset everything, otherwise * 2nd entry to the UI screen may fail with * "java.util.NoSuchElementException" in createRowFromResultSet * call to "candidates.next()". I am not sure why this is happening * as the Iterator is new and "hasNext" is true at the point * of the execution. My theory is that since the iterator object is * exactly the same the VO cache somehow reuses the iterator including * the pointer that has already exhausted the iterable elements on the * previous run. 
Working around the issue * here by cleaning out everything on the VO every time before query * is executed on the VO. */ getViewDef().setQuery(null); getViewDef().setSelectClause(null); setQuery(null); this.reset(); this.clearCache(); super.executeQuery(); } /** * executeQueryForCollection - overridden for custom java data source support. */ protected void executeQueryForCollection(Object qc, Object[] params, int noUserParams) { /** * Integrate with the Taleo REST services using TaleoRepository class. * A list of candidates matching a hard coded query is obtained. */ TaleoRepository repository = new TaleoRepository(<company>, <username>, <password>); List<Candidate> candidates = repository.getCandidates(); /** * Store iterator for the candidates as user data on the collection. * This will be used in createRowFromResultSet to create rows based on * the custom iterator. */ ListIterator<Candidate> candidatescIterator = candidates.listIterator(); setUserDataForCollection(qc, candidatescIterator); super.executeQueryForCollection(qc, params, noUserParams); } /** * hasNextForCollection - overridden for custom java data source support. */ protected boolean hasNextForCollection(Object qc) { boolean result = false; /** * Determines whether there are candidates for which to create a row */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); result = candidates.hasNext(); /** * If all candidates to be created indicate that processing is done */ if (!result) { setFetchCompleteForCollection(qc, true); } return result; } /** * createRowFromResultSet - overridden for custom java data source support. */ protected ViewRowImpl createRowFromResultSet(Object qc, ResultSet resultSet) { /** * Obtain the next candidate from the collection and create a row * for it. */ ListIterator<Candidate> candidates = (ListIterator<Candidate>)getUserDataForCollection(qc); ViewRowImpl row = createNewRowForCollection(qc); try { Candidate candidate = candidates.next(); row.setAttribute("CandId", candidate.getCandId()); row.setAttribute("Name", candidate.getFirstName() + " " + candidate.getLastName()); } catch (Exception e) { e.printStackTrace(); } return row; } /** * getQueryHitCount - overridden for custom java data source support. */ public long getQueryHitCount(ViewRowSetImpl viewRowSet) { /** * For this example this is not implemented rather we always return 0. */ return 0; } } Creating UI Choose the "ViewController" project and navigate "New -> Web Tier : JSF : JSF Page". In the "Create JSF Page" dialog enter "JcsRestDemo" as the name and ensure that "Create as XML document (*.jspx)" is checked. Open "JcsRestDemo.jspx", navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1" and drag & drop the VO onto the "<af:form>" as an "ADF Read-only Table": Accept the defaults in "Edit Table Columns". To execute the query navigate to "Data Controls -> JcsRestDemoAMDataControl -> JcsRestDemoVO1 -> Operations -> Execute" and drag & drop the operation onto the "<af:form>" as a "Button": Deploying to JCS Follow the same steps as documented in the previous article "Java Cloud Service ADF Web Application". Once deployed the application can be accessed at the URL: https://java-[identity domain].java.[data center].oraclecloudapps.com/JcsRestDemo-ViewController-context-root/faces/JcsRestDemo.jspx The UI displays a list of candidates obtained from the Taleo REST services: Summary In this article we learned how to integrate with REST services using the Jersey library in JCS.
In future articles various other integration techniques will be covered.

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
I just built a new proxy server and compiled the latest versions of Squid and DansGuardian. We use basic authentication to select which users are allowed outside of our network. It seems Squid is working just fine: it accepts my username and password and lets me out. But if I connect through DansGuardian, it prompts for a username and password and then displays a message saying my username is not allowed to access the internet. It's pulling my username for the error message, so I know it knows who I am. The part I'm confused about is that I thought authentication was handled entirely by Squid, and Squid is working flawlessly. Can someone please double-check my config files and tell me if I'm missing something, or if there is some new option I must set to get this to work? dansguardian.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block - Stealth mode # 0 = just say 'Access Denied' # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) - recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = '/etc/dansguardian/languages' # language to use from languagedir. language = 'ukenglish' # Logging Settings # # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 3 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = '/var/log/dansguardian/access.log' # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 8080 # the ip of the proxy (default is the loopback - i.e. this server) proxyip = 127.0.0.1 # the port DansGuardian connects to proxy on proxyport = 3128 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl' # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 custombannedimagefile = '/etc/dansguardian/transparent1x1.gif' # Filter groups options # filtergroups sets the number of filter groups.
A filter group is a set of content # filtering options you can apply to a group of users. The value must be 1 or more. # DansGuardian will automatically look for dansguardianfN.conf where N is the filter # group. To assign users to groups use the filtergroupslist option. All users default # to filter group 1. You must have some sort of authentication to be able to map users # to a group. The more filter groups the more copies of the lists will be in RAM so # use as few as possible. filtergroups = 1 filtergroupslist = '/etc/dansguardian/filtergroupslist' # Authentication files location bannediplist = '/etc/dansguardian/bannediplist' exceptioniplist = '/etc/dansguardian/exceptioniplist' banneduserlist = '/etc/dansguardian/banneduserlist' exceptionuserlist = '/etc/dansguardian/exceptionuserlist' # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. # weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don't need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. 
# If set to on, DansGuardian will look up the forward DNS for the IP # of the connecting computer. This means you can put in hostnames in # the exceptioniplist and bannediplist. # It will reduce searching speed somewhat so unless you have a local DNS server, # leave it off. reverseclientiplookups = off # Build bannedsitelist and bannedurllist cache files. # This will compare the date stamp of the list file with the date stamp of # the cache file and will recreate as needed. # If a bsl or bul .processed file exists, then that will be used instead. # It will increase process start speed by 300%. On slow computers this will # be significant. Fast computers do not need this option. on | off createlistcachefiles = on # POST protection (web upload and forms) # does not block forms without any file upload, i.e. this is just for # blocking or limiting uploads # measured in kibibytes after MIME encoding and header bumph # use 0 for a complete block # use higher (e.g. 512 = 512Kbytes) for limiting # use -1 for no blocking #maxuploadsize = 512 #maxuploadsize = 0 maxuploadsize = -1 # Max content filter page size # Sometimes web servers label binary files as text which can be very # large which causes a huge drain on memory and cpu resources. # To counter this, you can limit the size of the document to be # filtered and get it to just pass it straight through. # This setting also applies to content regular expression modification. # The size is in Kibibytes - eg 2048 = 2Mb # use 0 for no limit maxcontentfiltersize = # Username identification methods (used in logging) # You can have as many methods as you want and not just one. The first one # will be used then if no username is found, the next will be used. # * proxyauth is for when basic proxy authentication is used (no good for # transparent proxying). # * ntlm is for when the proxy supports the MS NTLM authentication # protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED** # * ident is for when the others don't work. It will contact the computer # that the connection came from and try to connect to an identd server # and query it for the user owner of the connection. usernameidmethodproxyauth = on usernameidmethodntlm = off # **NOT IMPLEMENTED** usernameidmethodident = off # Preemptive banning - this means that if you have proxy auth enabled and a user accesses # a site banned by URL for example they will be denied straight away without a request # for their user and pass. This has the effect of requiring the user to visit a clean # site first before it knows who they are and thus maybe an admin user. # This is how DansGuardian has always worked but in some situations it is less than # ideal. So you can optionally disable it. Default is on. # As a side effect disabling this makes AD image replacement work better as the mime # type is know. preemptivebanning = on # Misc settings # if on it adds an X-Forwarded-For: <clientip> to the HTTP request # header. This may help solve some problem sites that need to know the # source ip. on | off forwardedfor = on # if on it uses the X-Forwarded-For: <clientip> to determine the client # IP. This is for when you have squid between the clients and DansGuardian. # Warning - headers are easily spoofed. on | off usexforwardedfor = off # if on it logs some debug info regarding fork()ing and accept()ing which # can usually be ignored. These are logged by syslog. 
It is safe to leave # it on or off logconnectionhandlingerrors = on # Fork pool options # sets the maximum number of processes to sporn to handle the incomming # connections. Max value usually 250 depending on OS. # On large sites you might want to try 180. maxchildren = 180 # sets the minimum number of processes to sporn to handle the incomming connections. # On large sites you might want to try 32. minchildren = 32 # sets the minimum number of processes to be kept ready to handle connections. # On large sites you might want to try 8. minsparechildren = 8 # sets the minimum number of processes to sporn when it runs out # On large sites you might want to try 10. preforkchildren = 10 # sets the maximum number of processes to have doing nothing. # When this many are spare it will cull some of them. # On large sites you might want to try 64. maxsparechildren = 64 # sets the maximum age of a child process before it croaks it. # This is the number of connections they handle before exiting. # On large sites you might want to try 10000. maxagechildren = 5000 # Process options # (Change these only if you really know what you are doing). # These options allow you to run multiple instances of DansGuardian on a single machine. # Remember to edit the log file path above also if that is your intention. # IPC filename # # Defines IPC server directory and filename used to communicate with the log process. ipcfilename = '/tmp/.dguardianipc' # URL list IPC filename # # Defines URL list IPC server directory and filename used to communicate with the URL # cache process. urlipcfilename = '/tmp/.dguardianurlipc' # PID filename # # Defines process id directory and filename. #pidfilename = '/var/run/dansguardian.pid' # Disable daemoning # If enabled the process will not fork into the background. # It is not usually advantageous to do this. # on|off ( defaults to off ) nodaemon = off # Disable logging process # on|off ( defaults to off ) nologger = off # Daemon runas user and group # This is the user that DansGuardian runs as. Normally the user/group nobody. # Uncomment to use. Defaults to the user set at compile time. # daemonuser = 'nobody' # daemongroup = 'nobody' # Soft restart # When on this disables the forced killing off all processes in the process group. # This is not to be confused with the -g run time option - they are not related. # on|off ( defaults to off ) softrestart = off maxcontentramcachescansize = 2000 maxcontentfilecachescansize = 20000 downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf' authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf' Squid.conf http_port 3128 hierarchy_stoplist cgi-bin ? acl QUERY urlpath_regex cgi-bin \? cache deny QUERY acl apache rep_header Server ^Apache #broken_vary_encoding allow apache access_log /squid/var/logs/access.log squid hosts_file /etc/hosts auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth auth_param basic children 5 auth_param basic realm proxy auth_param basic credentialsttl 2 hours auth_param basic casesensitive off refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern . 
0 20% 4320 acl NoAuthNec src <HIDDEN FOR SECURITY> acl BrkRm src <HIDDEN FOR SECURITY> acl Dials src <HIDDEN FOR SECURITY> acl Comps src <HIDDEN FOR SECURITY> acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu. acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com acl password proxy_auth REQUIRED acl all src all acl manager proto cache_object acl localhost src 127.0.0.1/255.255.255.255 acl to_localhost dst 127.0.0.0/8 acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port # https, snews 443 563 acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port # unregistered ports 1936-65535 acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 10000 acl Safe_ports port 631 acl Safe_ports port 901 # SWAT acl purge method PURGE acl CONNECT method CONNECT acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list" acl RestrictUTube dstdom_regex -i youtube.com acl RestrictFacebook dstdom_regex -i facebook.com acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list" acl BuemerKEC src 10.10.128.0/24 acl MBSsortnet src 10.10.128.0/26 acl MSNExplorer browser -i MSN acl Printers src <HIDDEN FOR SECURITY> acl SpecialFolks src <HIDDEN FOR SECURITY> # streaming download acl fails rep_mime_type ^.*mms.* acl fails rep_mime_type ^.*ms-hdr.* acl fails rep_mime_type ^.*x-fcs.* acl fails rep_mime_type ^.*x-ms-asf.* acl fails2 urlpath_regex dvrplayer mediastream mms:// acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$ acl deny_rep_mime_flashvideo rep_mime_type -i video/flv acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$ acl x-type req_mime_type -i ^application/octet-stream$ acl x-type req_mime_type -i application/octet-stream acl x-type req_mime_type -i ^application/x-mplayer2$ acl x-type req_mime_type -i application/x-mplayer2 acl x-type req_mime_type -i ^application/x-oleobject$ acl x-type req_mime_type -i 
application/x-oleobject acl x-type req_mime_type -i application/x-pncmd acl x-type req_mime_type -i ^video/x-ms-asf$ acl x-type2 rep_mime_type -i ^application/octet-stream$ acl x-type2 rep_mime_type -i application/octet-stream acl x-type2 rep_mime_type -i ^application/x-mplayer2$ acl x-type2 rep_mime_type -i application/x-mplayer2 acl x-type2 rep_mime_type -i ^application/x-oleobject$ acl x-type2 rep_mime_type -i application/x-oleobject acl x-type2 rep_mime_type -i application/x-pncmd acl x-type2 rep_mime_type -i ^video/x-ms-asf$ acl RestrictHulu dstdom_regex -i hulu.com acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com acl RestrictVimeo dstdom_regex -i vimeo.com acl http_port port 80 #http_reply_access deny deny_rep_mime_flashvideo #http_reply_access deny deny_rep_mime_shockwave #streaming files #http_access deny fails #http_reply_access deny fails #http_access deny fails2 #http_reply_access deny fails2 #http_access deny x-type #http_reply_access deny x-type #http_access deny x-type2 #http_reply_access deny x-type2 follow_x_forwarded_for allow localhost acl_uses_indirect_client on log_uses_indirect_client on http_access allow manager localhost http_access deny manager http_access allow purge localhost http_access deny purge http_access allow SpecialFolks http_access deny CONNECT !SSL_ports http_access allow whsws http_access allow opensites http_access deny BuemerKEC !MBSsortnet http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo http_access allow RestrictUTube UTubeUsers http_access deny RestrictUTube http_access allow RestrictFacebook FacebookUsers http_access deny RestrictFacebook http_access deny RestrictHulu http_access allow NoAuthNec http_access allow BrkRm http_access allow FacebookUsers RestrictVimeo http_access deny RestrictVimeo http_access allow Comps http_access allow Dials http_access allow Printers http_access allow password http_access deny !Safe_ports http_access deny SSL_ports !CONNECT http_access allow http_port http_access deny all http_reply_access allow all icp_access allow all access_log /squid/var/logs/access.log squid visible_hostname proxy.site.com forwarded_for off coredump_dir /squid/cache/ #header_access Accept-Encoding deny broken #acl snmppublic snmp_community mysecretcommunity #snmp_port 3401 #snmp_access allow snmppublic all cache_mem 3 GB #acl snmppublic snmp_community mbssquid #snmp_port 3401 #snmp_access allow snmppublic all
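
For reference, a minimal sketch of the user-mapping files DansGuardian consults on top of Squid's authentication; the usernames below are placeholders, not values from this setup. Because the proxy-basic auth plugin makes DansGuardian identify the user itself, an entry in banneduserlist (or a filter group whose lists block the request) will produce the "username is not allowed to access the internet" page even though Squid has already accepted the same credentials:

# /etc/dansguardian/filtergroupslist -- maps users to filter groups (hypothetical users)
alice=filter1
bob=filter2

# /etc/dansguardian/banneduserlist -- one username per line; a listed user gets
# the "username is not allowed to access the internet" denial page
charlie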

    Read the article

  • A first look at ConfORM - Part 1

    - by thangchung
    All source code for this post can be found here. Have you ever heard of ConfORM? I first read about it three months ago when I wrote a post about NHibernate and Autofac. At that time the project had only just started and was still in beta, so I did not pay it much attention. But recently, while reading Jason Dentler's book NHibernate 3.0 Cookbook, I started to look at it again. The author mentions quite a lot of OSS projects in his book, and that prompted me to review ConfORM once more. I have followed the ConfORM development group on Google and read some articles about it. Fabio Maulo has put a lot of work into this OSS project, and I hope it will adapt well to NHibernate (he contributes to NHibernate, after all). So what is ConfORM? It stands for Configuration ORM, and it uses a number of heuristics to identify entities from C# code. Development today is mostly model-first driven, so the first thing is to build the entity model. This is really important: it is the heart of the business software. Then we tell the database about the entities of this model. We use reverse engineering here: the database schema is created from the entity model. From then on we are not interested in the database at all; we focus only on the entity model. Fluent NHibernate is really good; I like that OSS project. Sharp Architecture has done a great job of integrating Fluent NHibernate into applications, and its multiple-database technique is truly awesome: it takes a configuration, a connection string and a DLL containing the entity model, creates a SessionFactory from them, and caches it in memory. Because the number of SessionFactories can grow large and fill up memory, it also provides a way to cache a SessionFactory in a file. I do not claim this post completely explains building a multiple-database model; I have simply tried to collect a number of posts from the community and apply some of my own knowledge to build a session management model for ConfORM. Like Fluent NHibernate, ConfORM supports mapping via interfaces; see this to understand it. So the first thing is to build the entity model, and here is the model I will use for this article: a simple model for managing news and polls. It may be too simple for some people, but I did not want to bring extra complexity to this post. I will then have some code to build the super type for the entity model.
public interface IEntity<TId>
{
    TId Id { get; set; }
}

public abstract class EntityBase<TId> : IEntity<TId>
{
    public virtual TId Id { get; set; }

    public override bool Equals(object obj)
    {
        return Equals(obj as EntityBase<TId>);
    }

    private static bool IsTransient(EntityBase<TId> obj)
    {
        return obj != null && Equals(obj.Id, default(TId));
    }

    private Type GetUnproxiedType()
    {
        return GetType();
    }

    public virtual bool Equals(EntityBase<TId> other)
    {
        if (other == null)
            return false;
        if (ReferenceEquals(this, other))
            return true;
        if (!IsTransient(this) && !IsTransient(other) && Equals(Id, other.Id))
        {
            var otherType = other.GetUnproxiedType();
            var thisType = GetUnproxiedType();
            return thisType.IsAssignableFrom(otherType) ||
                   otherType.IsAssignableFrom(thisType);
        }
        return false;
    }

    public override int GetHashCode()
    {
        if (Equals(Id, default(TId)))
            return base.GetHashCode();
        return Id.GetHashCode();
    }
}

The database schema will be created from this model. The next step is to build the ConfORM builder that creates an NHibernate Configuration. Patrick has an excellent article about it here. Its contract is below:

public interface IConfigBuilder
{
    Configuration BuildConfiguration(string connectionString, string sessionFactoryName);
}

The idea is that I pass in a connection string and a set of DLLs containing the entity model, and it hands me back an NHibernate Configuration (I admit I stole this idea from Sharp Architecture). And here is its code:

public abstract class ConfORMConfigBuilder : RootObject, IConfigBuilder
{
    private static IConfigurator _configurator;

    protected IEnumerable<Type> DomainTypes;

    private readonly IEnumerable<string> _assemblies;

    protected ConfORMConfigBuilder(IEnumerable<string> assemblies)
        : this(new Configurator(), assemblies)
    {
        _assemblies = assemblies;
    }

    protected ConfORMConfigBuilder(IConfigurator configurator, IEnumerable<string> assemblies)
    {
        _configurator = configurator;
        _assemblies = assemblies;
    }

    public abstract void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString);

    protected abstract HbmMapping GetMapping();

    public Configuration BuildConfiguration(string connectionString, string sessionFactoryName)
    {
        Contract.Requires(!string.IsNullOrEmpty(connectionString), "ConnectionString is null or empty");
        Contract.Requires(!string.IsNullOrEmpty(sessionFactoryName), "SessionFactory name is null or empty");
        Contract.Requires(_configurator != null, "Configurator is null");

        return CatchExceptionHelper.TryCatchFunction(
            () =>
            {
                DomainTypes = GetTypeOfEntities(_assemblies);

                if (DomainTypes == null)
                    throw new Exception("Type of domains is null");

                var configure = new Configuration();
                configure.SessionFactoryName(sessionFactoryName);

                configure.Proxy(p => p.ProxyFactoryFactory<ProxyFactoryFactory>());
                configure.DataBaseIntegration(db => GetDatabaseIntegration(db, connectionString));

                if (_configurator.GetAppSettingString("IsCreateNewDatabase").ConvertToBoolean())
                {
                    configure.SetProperty("hbm2ddl.auto", "create-drop");
                }

                configure.Properties.Add("default_schema", _configurator.GetAppSettingString("DefaultSchema"));
                configure.AddDeserializedMapping(GetMapping(),
                                                 _configurator.GetAppSettingString("DocumentFileName"));

                SchemaMetadataUpdater.QuoteTableAndColumns(configure);

                return configure;
            }, Logger);
    }

    protected IEnumerable<Type> GetTypeOfEntities(IEnumerable<string> assemblies)
    {
        var type = typeof(EntityBase<Guid>);
        var domainTypes = new List<Type>();

        foreach (var assembly in assemblies)
        {
            var realAssembly = Assembly.LoadFrom(assembly);

            if (realAssembly == null)
                throw new NullReferenceException();

            domainTypes.AddRange(realAssembly.GetTypes().Where(
                t =>
                {
                    if (t.BaseType != null)
                        return string.Compare(t.BaseType.FullName, type.FullName) == 0;
                    return false;
                }));
        }

        return domainTypes;
    }
}

I do not want to depend on any particular RDBMS, so I made the builder an abstract class; here is a concrete implementation for SQL Server 2008:

public class SqlServerConfORMConfigBuilder : ConfORMConfigBuilder
{
    public SqlServerConfORMConfigBuilder(IEnumerable<string> assemblies)
        : base(assemblies)
    {
    }

    public override void GetDatabaseIntegration(IDbIntegrationConfigurationProperties dBIntegration, string connectionString)
    {
        dBIntegration.Dialect<MsSql2008Dialect>();
        dBIntegration.Driver<SqlClientDriver>();
        dBIntegration.KeywordsAutoImport = Hbm2DDLKeyWords.AutoQuote;
        dBIntegration.IsolationLevel = IsolationLevel.ReadCommitted;
        dBIntegration.ConnectionString = connectionString;
        dBIntegration.LogSqlInConsole = true;
        dBIntegration.Timeout = 10;
        dBIntegration.LogFormatedSql = true;
        dBIntegration.HqlToSqlSubstitutions = "true 1, false 0, yes 'Y', no 'N'";
    }

    protected override HbmMapping GetMapping()
    {
        var orm = new ObjectRelationalMapper();

        orm.Patterns.PoidStrategies.Add(new GuidPoidPattern());

        var patternsAppliers = new CoolPatternsAppliersHolder(orm);
        //patternsAppliers.Merge(new DatePropertyByNameApplier()).Merge(new MsSQL2008DateTimeApplier());
        patternsAppliers.Merge(new ManyToOneColumnNamingApplier());
        patternsAppliers.Merge(new OneToManyKeyColumnNamingApplier(orm));

        var mapper = new Mapper(orm, patternsAppliers);

        var entities = new List<Type>();

        DomainDefinition(orm);
        Customize(mapper);

        entities.AddRange(DomainTypes);

        return mapper.CompileMappingFor(entities);
    }

    private void DomainDefinition(IObjectRelationalMapper orm)
    {
        orm.TablePerClassHierarchy(new[] { typeof(EntityBase<Guid>) });
        orm.TablePerClass(DomainTypes);

        orm.OneToOne<News, Poll>();
        orm.ManyToOne<Category, News>();

        orm.Cascade<Category, News>(Cascade.All);
        orm.Cascade<News, Poll>(Cascade.All);
        orm.Cascade<User, Poll>(Cascade.All);
    }

    private static void Customize(Mapper mapper)
    {
        CustomizeRelations(mapper);
        CustomizeTables(mapper);
        CustomizeColumns(mapper);
    }

    private static void CustomizeRelations(Mapper mapper)
    {
    }

    private static void CustomizeTables(Mapper mapper)
    {
    }

    private static void CustomizeColumns(Mapper mapper)
    {
        mapper.Class<Category>(
            cm =>
            {
                cm.Property(x => x.Name, m => m.NotNullable(true));
                cm.Property(x => x.CreatedDate, m => m.NotNullable(true));
            });

        mapper.Class<News>(
            cm =>
            {
                cm.Property(x => x.Title, m => m.NotNullable(true));
                cm.Property(x => x.ShortDescription, m => m.NotNullable(true));
                cm.Property(x => x.Content, m => m.NotNullable(true));
            });

        mapper.Class<Poll>(
            cm =>
            {
                cm.Property(x => x.Value, m => m.NotNullable(true));
                cm.Property(x => x.VoteDate, m => m.NotNullable(true));
                cm.Property(x => x.WhoVote, m => m.NotNullable(true));
            });

        mapper.Class<User>(
            cm =>
            {
                cm.Property(x => x.UserName, m => m.NotNullable(true));
                cm.Property(x => x.Password, m => m.NotNullable(true));
            });
    }
}

As you can see, we can do many things in this class, such as customising entity relationships, column bindings, table names, and so on. Here I made two appliers, for the OneToMany and ManyToOne relationships; you can refer to them here:

public class ManyToOneColumnNamingApplier : IPatternApplier<PropertyPath, IManyToOneMapper>
{
    #region IPatternApplier<PropertyPath,IManyToOneMapper> Members

    public void Apply(PropertyPath subject, IManyToOneMapper applyTo)
    {
        applyTo.Column(subject.ToColumnName() + "Id");
    }

    #endregion

    #region IPattern<PropertyPath> Members

    public bool Match(PropertyPath subject)
    {
        return subject != null;
    }

    #endregion
}

public class OneToManyKeyColumnNamingApplier : OneToManyPattern, IPatternApplier<PropertyPath, ICollectionPropertiesMapper>
{
    public OneToManyKeyColumnNamingApplier(IDomainInspector domainInspector) : base(domainInspector) { }

    #region Implementation of IPattern<PropertyPath>

    public bool Match(PropertyPath subject)
    {
        return Match(subject.LocalMember);
    }

    #endregion

    #region Implementation of IPatternApplier<PropertyPath,ICollectionPropertiesMapper>

    public void Apply(PropertyPath subject, ICollectionPropertiesMapper applyTo)
    {
        applyTo.Key(km => km.Column(GetKeyColumnName(subject)));
    }

    #endregion

    protected virtual string GetKeyColumnName(PropertyPath subject)
    {
        Type propertyType = subject.LocalMember.GetPropertyOrFieldType();
        Type childType = propertyType.DetermineCollectionElementType();
        var entity = subject.GetContainerEntity(DomainInspector);
        var parentPropertyInChild = childType.GetFirstPropertyOfType(entity);
        var baseName = parentPropertyInChild == null
            ? subject.PreviousPath == null ? entity.Name : entity.Name + subject.PreviousPath
            : parentPropertyInChild.Name;
        return GetKeyColumnName(baseName);
    }

    protected virtual string GetKeyColumnName(string baseName)
    {
        return string.Format("{0}Id", baseName);
    }
}

You can also download the ConfORM source at Google Code and look at the examples inside it. In the next part I will write about the multiple database factory. I hope you enjoy it. Happy coding, and see you in the next part.
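
To round this off, here is a minimal usage sketch of the builder above. It is my illustration rather than code from the post, and the assembly path and connection string are placeholders; BuildSessionFactory and OpenSession are the standard NHibernate calls on the Configuration that BuildConfiguration returns:

// Hypothetical wiring-up code: the path and connection string are placeholders.
var assemblies = new[] { @"bin\MyCompany.Domain.dll" };
IConfigBuilder builder = new SqlServerConfORMConfigBuilder(assemblies);

// Builds the NHibernate Configuration from the entity model in the DLL.
Configuration configuration = builder.BuildConfiguration(
    "Data Source=.;Initial Catalog=NewsDb;Integrated Security=True",
    "NewsSessionFactory");

// Standard NHibernate calls from here on.
ISessionFactory sessionFactory = configuration.BuildSessionFactory();
using (ISession session = sessionFactory.OpenSession())
{
    // query and save entities here
}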

    Read the article

  • Obj-msg-send error in numberOfSectionsInTableView

    - by mukeshpawar
    import "AddBillerCategoryViewController.h" import "Globals.h" import "AddBillerViewController.h" import "AddBillerListViewController.h" import"KlinnkAppDelegate.h" @implementation AddBillerCategoryViewController @synthesize REASON, RESPVAR, currentAttribute,tbldata,strOptions; // This recipe adds a title for each section //Initialize the table view controller with the grouped style (AddBillerCategoryViewController *) init { if (self = [super initWithStyle:UITableViewStyleGrouped]);// self.title = @"Crayon Colors"; return self; } -(void)showBack { [[self navigationController] pushViewController:[[AddBillerViewController alloc] init] animated:YES]; } (void)navigationController:(UINavigationController *)navigationController willShowViewController:(UIViewController *)viewController animated:(BOOL)animated { if ([viewController isKindOfClass:[AddBillerCategoryViewController class]]) { AddBillerCategoryViewController *controller = (AddBillerCategoryViewController *)viewController; [controller.tbldata reloadData]; } } (void)viewDidLoad { appDelegate = (KlinnkAppDelegate *)[[UIApplication sharedApplication] delegate]; appDelegate.catListArray.count; // Uncomment the following line to display an Edit button in the navigation bar for this view controller. // self.navigationItem.rightBarButtonItem = self.editButtonItem; //self.navigationItem.leftBarButtonItem = [[[UIBarButtonItem alloc] // initWithTitle:@"Back" // style:UIBarButtonItemStylePlain // target:self // action:@selector(showBack)] autorelease]; if(gotOK == 0) { //self.navigationItem.leftBarButtonItem.enabled = FALSE; dt = [[DateTime alloc] init]; strChannelID = @"IGLOO|MOBILE"; strDateTime = [dt findDateTime]; strTemp = [dt findSessionTime]; strSessionID = [appDelegate.KMobile stringByAppendingString:strTemp]; strResponseURL = @"http://115.113.110.139/Test/CbbpServerRequestHandler"; strResponseVar = @"serverResponseXML"; strRequestType = @"GETCATEGORY"; NSLog(@"Current Session id - %@", strSessionID); //conn = [[NSURLConnection alloc] init]; receivedData = [[NSMutableData data] retain]; //.................... currentAttribute = [[NSMutableString alloc] init]; // create XMl xmlData = [[NSData alloc] init]; xmlData = [self createXML]; // XMl has been created now convert it in to string xmlString = [[NSString alloc] initWithData:xmlData encoding:NSASCIIStringEncoding]; // Ataching other infromatin to he xml parameterString = [[NSString alloc] initWithString:@"mobileRequestXML="]; requestString = [[NSString alloc] init]; requestString = [parameterString stringByAppendingString:xmlString]; // give space betn two element. 
requestString = [requestString stringByReplacingOccurrencesOfString:@"<" withString:@" <"]; // Initalizing other parameters postData = [requestString dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; postLength = [NSString stringWithFormat:@"%d",[postData length]]; firstRequest = [[[NSMutableURLRequest alloc] init] autorelease]; REASON = [[NSMutableString alloc] init]; RESPVAR = [[NSMutableString alloc] init]; NSLog(@"\n \n Sending for 1st time........\n"); [self sendRequest]; NSLog(@"\n \n Sending for 2nd time........\n"); [self sendRequest]; NSLog(@"\n \n both request send........\n \n "); } //[tbldata reloadData]; [self retain]; [super viewDidLoad]; } -(void)sendRequest { finished = FALSE; NSLog(@"\n Sending Request \n\n %@", requestString); conn = [[NSURLConnection alloc] init]; if(gotOK == 0) [firstRequest setURL:[NSURL URLWithString:@"http://115.113.110.139/Test/CbbpMobileRequestHandler"]]; if(gotOK == 1) { [firstRequest setURL:[NSURL URLWithString:@"http://115.113.110.139//secure"]]; gotOK = 2; } [firstRequest setHTTPMethod:@"POST"]; [firstRequest setValue:postLength forHTTPHeaderField:@"Content-Length"]; [firstRequest setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"]; [firstRequest setHTTPBody:postData]; conn = [conn initWithRequest:firstRequest delegate:self startImmediately:YES]; [conn start]; while(!finished) { [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]]; } if (conn) { //receivedData = [[NSMutableData data] retain]; NSLog(@"\n\n Received %d bytes of data",[receivedData length]); } else { NSLog(@"\n Not responding"); } } (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { NSLog(@" \n Send didReciveResponse"); [receivedData setLength:0]; } (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { NSLog(@" \n Send didReciveData"); [receivedData appendData:data]; } (void)connectionDidFinishLoading:(NSURLConnection *)connection { finished = TRUE; NSLog(@" \n Send didFinishLaunching"); // do something with the data // receivedData is declared as a method instance elsewhere NSLog(@"\n\n Succeeded! 
DIDFINISH Received %d bytes of data\n\n ",[receivedData length]); NSString *aStr = [[NSString alloc] initWithData:receivedData encoding:NSASCIIStringEncoding]; NSLog(aStr); //[conn release]; if([aStr isEqualToString:@"OK"]) gotOK = 1; NSLog(@" Value of gotOK - %d", gotOK); if(gotOK == 2) { responseData = [aStr dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES]; parser = [[NSXMLParser alloc] initWithData:responseData]; [parser setDelegate:self]; NSLog(@"\n start parsing"); [parser parse]; NSLog(@"\n PArsing over"); NSLog(@"\n check U / S and the RESVAR is - %@",RESPVAR); NSRange textRange; textRange =[aStr rangeOfString:@"<"]; if(textRange.location != NSNotFound) { if([RESPVAR isEqualToString:@"U"]) { self.navigationItem.rightBarButtonItem.enabled = TRUE; self.navigationItem.leftBarButtonItem.enabled = TRUE; NSLog(@" \n U......."); UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Error" message:REASON delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert show]; [alert release]; } if([RESPVAR isEqualToString:@"S"]) { NSLog(@"\n S........"); [[self navigationController] pushViewController:[[AddBillerCategoryViewController alloc] init] animated:YES]; //[self viewDidLoad]; //[tbldata reloadData]; } } else { UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Connection Problem" message:@"Enable to process your request at this time. Please try again." delegate:self cancelButtonTitle:nil otherButtonTitles:@"OK", nil]; [alert show]; [alert release]; //self.navigationItem.leftBarButtonItem.enabled = TRUE; } } NSLog(@"\n Last line of connectionDidFinish "); //[tableView reloadData]; } -(NSData *)createXML { NSString *strXmlNode = @" channel alliaceid session reqtype responseurl responsevar "; NSString *tempchannel = [strXmlNode stringByReplacingOccurrencesOfString:@"channel" withString:strChannelID options:NSBackwardsSearch range:NSMakeRange(0, [strXmlNode length])]; NSString *tempalliance = [tempchannel stringByReplacingOccurrencesOfString:@"alliaceid" withString:@"WALLET365" options:NSBackwardsSearch range:NSMakeRange(0, [tempchannel length])]; NSString *tempsession = [tempalliance stringByReplacingOccurrencesOfString:@"session" withString:strSessionID options:NSBackwardsSearch range:NSMakeRange(0, [tempalliance length])]; NSString *tempreqtype = [tempsession stringByReplacingOccurrencesOfString:@"reqtype" withString:strRequestType options:NSBackwardsSearch range:NSMakeRange(0,[tempsession length])]; NSString *tempresponseurl = [tempreqtype stringByReplacingOccurrencesOfString:@"responseurl" withString:strResponseURL options:NSBackwardsSearch range:NSMakeRange(0, [tempreqtype length])]; NSString *tempresponsevar = [tempresponseurl stringByReplacingOccurrencesOfString:@"responsevar" withString:strResponseVar options:NSBackwardsSearch range:NSMakeRange(0,[tempresponseurl length])]; NSData *data= [[NSString stringWithString:tempresponsevar] dataUsingEncoding:NSUTF8StringEncoding]; return data; } (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName attributes:(NSDictionary *)attributeDict { if([elementName isEqualToString:@"RESPVAL"]) currentAttribute = [NSMutableString string]; if([elementName isEqualToString:@"REASON"]) currentAttribute = [NSMutableString string]; if([elementName isEqualToString:@"COUNT"]) currentAttribute = [NSMutableString string]; if([elementName isEqualToString:@"CATNAME"]) currentAttribute = [NSMutableString string]; } 
(void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if([elementName isEqualToString:@"RESPVAL"]) { [RESPVAR setString:currentAttribute]; //NSLog(@"\n Response VAR - %@", RESPVAR); } if([elementName isEqualToString:@"REASON"]) { [REASON setString:currentAttribute]; //NSLog(@"\n Reason - %@", REASON); } if([elementName isEqualToString:@"COUNT"]) { NSString *temp1 = [[NSString alloc] init]; temp1 = [temp1 stringByAppendingString:currentAttribute]; catCount = [temp1 intValue]; [temp1 release]; //NSLog(@"\n Cat Count - %d", catCount); } if([elementName isEqualToString:@"CATNAME"]) { [appDelegate.catListArray addObject:[NSString stringWithFormat:currentAttribute]]; //NSLog(@"%@", appDelegate.catListArray); } } (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if(self.currentAttribute) [self.currentAttribute setString:string]; } /* (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ /* // Override to allow orientations other than the default portrait orientation. (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations return (interfaceOrientation == UIInterfaceOrientationPortrait); } */ (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Releases the view if it doesn't have a superview // Release anything that's not essential, such as cached data } pragma mark Table view methods -(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } // Customize the number of rows in the table view. -(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { KlinnkAppDelegate *appDelegated = (KlinnkAppDelegate *)[[UIApplication sharedApplication] delegate]; return appDelegated.catListArray.count; } // Customize the appearance of table view cells. (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... AddBillerCategoryViewController *mbvc = (AddBillerCategoryViewController *)[appDelegate.catListArray objectAtIndex:indexPath.row]; [cell setText:mbvc.strOptions]; return cell; } (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Navigation logic may go here. Create and push another view controller. // AnotherViewController *anotherViewController = [[AnotherViewController alloc] initWithNibName:@"AnotherView" bundle:nil]; // [self.navigationController pushViewController:anotherViewController]; // [anotherViewController release]; gotOK = 0; int j = indexPath.row; appDelegate.catName = [[NSString alloc] init]; appDelegate.catName = [appDelegate.catName stringByAppendingString:[appDelegate.catListArray objectAtIndex:j]]; [[self navigationController] pushViewController:[[AddBillerListViewController alloc] init] animated:YES]; } /* // Override to support conditional editing of the table view. 
(BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:YES]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view } } */ /* // Override to support rearranging the table view. (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ (void)dealloc { [REASON release]; [RESPVAR release]; [currentAttribute release]; [tbldata release]; [super dealloc]; } @end In the Above code .. numberOfSectionsInTableView ,i get error of obj-msg-send i have intialize the array catlist and even not released it anywhere still why i am getting this error please help me i am badly stuck' thanks in advacnce
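
As an illustrative aside, not part of the original question: the parser callback above fills catListArray with NSString objects (one per CATNAME element), while cellForRowAtIndexPath casts each entry to AddBillerCategoryViewController and reads strOptions from it. A minimal sketch of a data source that treats the array contents as the strings they appear to be (the names match the post, but this is a hedged rewrite, not a confirmed fix):

// Sketch only: assumes catListArray holds NSString objects, as the CATNAME
// parser callback in the posted code suggests.
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
    }
    // Read the entry as the NSString it actually is instead of casting it to a
    // view controller and messaging a property it does not have.
    NSString *categoryName = [appDelegate.catListArray objectAtIndex:indexPath.row];
    [cell setText:categoryName];
    return cell;
}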

    Read the article

  • Receive MMS images and make an album from them using J2ME

    - by Abdul Basit
    I am trying to make an application which receives MMS images and builds an album from them; the user can view the pictures while the application is running. I am facing a problem when running the application on a mobile phone, while the application works fully in the Wireless Toolkit emulator. Please guide me to fix this problem.

//package hello;
import javax.microedition.midlet.*;
import javax.microedition.lcdui.*;
import javax.wireless.messaging.*;
import java.io.IOException;
import java.util.Vector;
import javax.microedition.io.Connector;
import javax.microedition.lcdui.Display;

//, ItemStateListener
public class MMSS extends MIDlet implements CommandListener, Runnable, MessageListener {

    // ----------------------------------- Receive MMS ---------------------------
    private Thread mReceiver = null;
    private boolean mEndNow = false;
    private Message msg = null;
    String msgReceived = null;
    private Image[] receivedImage = new Image[5];
    private Command mExitCommand = new Command("Exit", Command.EXIT, 2);
    private Command mRedCommand = new Command("Back", Command.SCREEN, 1);
    private Command mBlueCommand = new Command("Next", Command.SCREEN, 1);
    private Command mPlay = new Command("Play", Command.SCREEN, 1);
    protected static final String DEFAULT_IMAGE = "/MMSS_logo.jpg";
    //protected static final String DEFAULT_IMAGE = "/wait.png";
    private Display mDisplay = null;
    //protected ImageItem mColorSquare = null;
    protected Image mInitialImage = null;
    private String mAppID = "MMSMIDlet";
    private TextField imageName = null;
    //private Form mForm = null;
    private int count = 0;
    private int next = 0;
    private Integer mMonitor = new Integer(0);
    // ----------------------------------- End Receive MMS -----------------------

    private boolean midletPaused = false;
    private Command exitCommand;
    private Command exitCommand1;
    private Command backCommand;
    private Form form;
    private StringItem stringItem;
    private ImageItem imageItem;
    private Image image1;
    private Alert alert;
    private List locationList;
    private Alert cannotAddLocationAlert;

    public MMSS() {
    }

    /** Initializes the application. Called only once, before startMIDlet. */
    private void initialize() {
    }

    /** Performs an action assigned to the MIDlet Started point. */
    public void startMIDlet() {
        switchDisplayable(null, getForm());
    }

    /** Performs an action assigned to the MIDlet Resumed point. */
    public void resumeMIDlet() {
    }

    /**
     * Switches the current displayable in the display; if alert is null,
     * nextDisplayable is set immediately.
     */
    public void switchDisplayable(Alert alert, Displayable nextDisplayable) {
        Display display = getDisplay();
        if (alert == null) {
            display.setCurrent(nextDisplayable);
        } else {
            display.setCurrent(alert, nextDisplayable);
        }
    }

    /** Called by the system when a command is invoked on a displayable. */
    public void commandAction(Command command, Displayable displayable) {
        if (displayable == form) {
            if (command == exitCommand) {
                exitMIDlet();
            }
        }
    }

    /** Returns an initialized instance of the exitCommand component. */
    public Command getExitCommand() {
        if (exitCommand == null) {
            exitCommand = new Command("Exit", Command.EXIT, 0);
        }
        return exitCommand;
    }

    /** Returns an initialized instance of the form component. */
    public Form getForm() {
        if (form == null) {
            form = new Form("Welcome to MMSS", new Item[] { getStringItem(), getImageItem() });
            form.addCommand(getExitCommand());
            form.setCommandListener(this);
        }
        return form;
    }

    /** Returns an initialized instance of the stringItem component. */
    public StringItem getStringItem() {
        if (stringItem == null) {
            stringItem = new StringItem("Hello", "Hello, World!");
        }
        return stringItem;
    }

    /** Returns an initialized instance of the exitCommand1 component. */
    public Command getExitCommand1() {
        if (exitCommand1 == null) {
            exitCommand1 = new Command("Exit", Command.EXIT, 0);
        }
        return exitCommand1;
    }

    /** Returns an initialized instance of the imageItem component. */
    public ImageItem getImageItem() {
        if (imageItem == null) {
            imageItem = new ImageItem("imageItem", getImage1(),
                ImageItem.LAYOUT_CENTER | Item.LAYOUT_TOP | Item.LAYOUT_BOTTOM
                | Item.LAYOUT_VCENTER | Item.LAYOUT_EXPAND | Item.LAYOUT_VEXPAND, "");
        }
        return imageItem;
    }

    /** Returns an initialized instance of the image1 component. */
    public Image getImage1() {
        if (image1 == null) {
            try {
                image1 = Image.createImage("/B.jpg");
            } catch (java.io.IOException e) {
                e.printStackTrace();
            }
        }
        return image1;
    }

    /** Returns a display instance. */
    public Display getDisplay() {
        return Display.getDisplay(this);
    }

    /** Exits the MIDlet. */
    public void exitMIDlet() {
        switchDisplayable(null, null);
        destroyApp(true);
        notifyDestroyed();
    }

    /**
     * Called when the MIDlet is started. Checks whether the MIDlet has already
     * been started and initializes/starts or resumes it.
     */
    public void startApp() {
        if (midletPaused) {
            resumeMIDlet();
        } else {
            initialize();
            startMIDlet();
        }
        midletPaused = false;

        try {
            conn = (MessageConnection) Connector.open("mms://:" + mAppID);
            conn.setMessageListener(this);
        } catch (Exception e) {
            System.out.println("startApp caught: ");
            e.printStackTrace();
        }
        if (conn != null) {
            startReceive();
        }
    }

    /** Called when the MIDlet is paused. */
    public void pauseApp() {
        midletPaused = true;
        mEndNow = true;
        try {
            conn.setMessageListener(null);
            conn.close();
        } catch (IOException ex) {
            System.out.println("pauseApp caught: ");
            ex.printStackTrace();
        }
    }

    /**
     * Called to signal the MIDlet to terminate.
     * @param unconditional if true, terminate unconditionally and release all resources.
     */
    public void destroyApp(boolean unconditional) {
        mEndNow = true;
        try {
            conn.close();
        } catch (IOException ex) {
            System.out.println("destroyApp caught: ");
            ex.printStackTrace();
        }
    }

    private void startReceive() {
        mEndNow = false;
        // ---- Start receive thread
        mReceiver = new Thread(this);
        mReceiver.start();
    }

    protected MessageConnection conn = null;
    protected int mMsgAvail = 0;

    // -------------------- Get Next Images ------------------------------------
    private void getMessage() {
        synchronized (mMonitor) {
            mMsgAvail++;
            mMonitor.notify();
        }
    }

    // -------------------- Display Images Thread ------------------------------
    public void notifyIncomingMessage(MessageConnection msgConn) {
        if (msgConn == conn)
            getMessage();
    }

    public void itemStateChanged(Item item) {
        throw new UnsupportedOperationException("Not supported yet.");
    }

    class SetImage implements Runnable {
        private Image img = null;

        public SetImage(Image inImg) {
            img = inImg;
        }

        public void run() {
            imageItem.setImage(img);
            imageName.setString(Integer.toString(count));
        }
    }

    public void run() {
        mMsgAvail = 0;
        while (!mEndNow) {
            synchronized (mMonitor) { // Enter monitor
                if (mMsgAvail <= 0)
                    try {
                        mMonitor.wait();
                    } catch (InterruptedException ex) {
                    }
                mMsgAvail--;
            }
            try {
                msg = conn.receive();
                if (msg instanceof MultipartMessage) {
                    MultipartMessage mpm = (MultipartMessage) msg;
                    MessagePart[] parts = mpm.getMessageParts();
                    if (parts != null) {
                        for (int i = 0; i < parts.length; i++) {
                            MessagePart mp = parts[i];
                            byte[] ba = mp.getContent();
                            receivedImage[count] = Image.createImage(ba, 0, ba.length);
                        }
                        Display.getDisplay(this).callSerially(new SetImage(receivedImage[count]));
                    }
                }
            } catch (IOException e) {
                System.out.println("Receive thread caught: ");
                e.printStackTrace();
            }
            count++;
        } // of while
    }
}
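
One difference between the Wireless Toolkit emulator and a real handset that commonly affects MMS MIDlets like this one is delivery while the MIDlet is not running: on a device, the application management software only routes the message to the MIDlet if a push registration exists for the application ID. As a hedged sketch (standard JAD/manifest push syntax, though whether it resolves the problem on this particular handset is an assumption), a static push entry for the mAppID used above would look like:

MIDlet-Push-1: mms://:MMSMIDlet, MMSS, *

The runtime equivalent is PushRegistry.registerConnection("mms://:MMSMIDlet", "MMSS", "*") from javax.microedition.io.PushRegistry.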

    Read the article

  • compass-rails 1.03 - TypeError: can't convert nil into String

    - by Romiko
    I am running:

ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
compass-rails 1.0.3

I used the Windows RailsInstaller to install Ruby on Rails.

Gemfile

group :assets do
  gem 'sass-rails', '~> 3.2.3'
  gem 'coffee-rails', '~> 3.2.1'
  gem 'compass-rails', '~> 1.0.2'
  # See https://github.com/sstephenson/execjs#readme for more supported runtimes
  # gem 'therubyracer', :platforms => :ruby
  gem 'uglifier', '>= 1.0.3'
end

I am currently experiencing issues importing sprites. My sprites are in assets/images/source. In my _shared.scss file I have:

//Sprites
@import "./source/*.png";
$source-sprite-dimensions: true;

In my application.scss I have:

/*
 * This is a manifest file that'll be compiled into application.css, which will include all the files
 * listed below.
 *
 * Any CSS and SCSS file within this directory, lib/assets/stylesheets, vendor/assets/stylesheets,
 * or vendor/assets/stylesheets of plugins, if any, can be referenced here using a relative path.
 *
 * You're free to add application-wide styles to this file and they'll appear at the top of the
 * compiled file, but it's generally better to create a new file per style scope.
 *
 *= require_self
 */
@import "_shared.scss";
@import "baseline.scss";
@import "global.scss";
@import "normalize.scss";
@import "print.scss";
@import "desktop.scss";
@import "tablet.scss";
@import "home.css.scss";

I am also using rails server and not the compass watcher. However, when I browse to the page at localhost:3000/assets/application.css, I get the following error:

body:before { font-weight: bold; content: "\000a TypeError: can't convert nil into String\000a (in c:\002f RangerRomOnRails\002f RangerRom\002f app\002f assets\002f stylesheets\002f desktop.scss)"; }
body:after { content: "\000a C:\002f RailsInstaller\002f Ruby1.9.3\002f lib\002f ruby\002f gems\002f 1.9.1\002f gems\002f compass-0.12.2\002f lib\002f compass\002f sass_extensions\002f functions\002f image_size.rb:17:in `extname'"; }

Here is the full stack trace:

compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:17:in `extname'
compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:17:in `initialize'
compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:50:in `new'
compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:50:in `image_dimensions'
compass (0.12.2) lib/compass/sass_extensions/functions/image_size.rb:4:in `image_width'
sass (3.2.9) lib/sass/script/funcall.rb:112:in `_perform'
sass (3.2.9) lib/sass/script/node.rb:40:in `perform'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:298:in `visit_prop'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:320:in `visit_rule'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:320:in `visit_rule'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:362:in `visit_media'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `map'
sass (3.2.9) lib/sass/tree/visitors/base.rb:53:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:109:in `block in visit_children'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:121:in `with_environment'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:108:in `visit_children'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `block in visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:128:in `visit_root'
sass (3.2.9) lib/sass/tree/visitors/base.rb:37:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:100:in `visit'
sass (3.2.9) lib/sass/tree/visitors/perform.rb:7:in `visit'
sass (3.2.9) lib/sass/tree/root_node.rb:20:in `render'
sass (3.2.9) lib/sass/engine.rb:315:in `_render'
sass (3.2.9) lib/sass/engine.rb:262:in `render'
sass-rails (3.2.6) lib/sass/rails/template_handlers.rb:106:in `evaluate'
tilt (1.4.1) lib/tilt/template.rb:103:in `render'
sprockets (2.2.2) lib/sprockets/context.rb:193:in `block in evaluate'
sprockets (2.2.2) lib/sprockets/context.rb:190:in `each'
sprockets (2.2.2) lib/sprockets/context.rb:190:in `evaluate'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:12:in `initialize'
sprockets (2.2.2) lib/sprockets/base.rb:249:in `new'
sprockets (2.2.2) lib/sprockets/base.rb:249:in `block in build_asset'
sprockets (2.2.2) lib/sprockets/base.rb:270:in `circular_call_protection'
sprockets (2.2.2) lib/sprockets/base.rb:248:in `build_asset'
sprockets (2.2.2) lib/sprockets/index.rb:93:in `block in build_asset'
sprockets (2.2.2) lib/sprockets/caching.rb:19:in `cache_asset'
sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset'
sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset'
sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:111:in `block in resolve_dependencies'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:105:in `each'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:105:in `resolve_dependencies'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:97:in `build_required_assets'
sprockets (2.2.2) lib/sprockets/processed_asset.rb:16:in `initialize'
sprockets (2.2.2) lib/sprockets/base.rb:249:in `new'
sprockets (2.2.2) lib/sprockets/base.rb:249:in `block in build_asset'
sprockets (2.2.2) lib/sprockets/base.rb:270:in `circular_call_protection'
sprockets (2.2.2) lib/sprockets/base.rb:248:in `build_asset'
sprockets (2.2.2) lib/sprockets/index.rb:93:in `block in build_asset'
sprockets (2.2.2) lib/sprockets/caching.rb:19:in `cache_asset'
sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset'
sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset'
sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset'
sprockets (2.2.2) lib/sprockets/bundled_asset.rb:38:in `init_with'
sprockets (2.2.2) lib/sprockets/asset.rb:24:in `from_hash'
sprockets (2.2.2) lib/sprockets/caching.rb:15:in `cache_asset'
sprockets (2.2.2) lib/sprockets/index.rb:92:in `build_asset'
sprockets (2.2.2) lib/sprockets/base.rb:169:in `find_asset'
sprockets (2.2.2) lib/sprockets/index.rb:60:in `find_asset'
sprockets (2.2.2) lib/sprockets/environment.rb:78:in `find_asset'
sprockets (2.2.2) lib/sprockets/base.rb:177:in `[]'
actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:126:in `asset_for'
actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:44:in `block in stylesheet_link_tag'
actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:43:in `collect'
actionpack (3.2.13) lib/sprockets/helpers/rails_helper.rb:43:in `stylesheet_link_tag'
app/views/layouts/application.html.erb:16:in `_app_views_layouts_application_html_erb___824639613_33845076'
actionpack (3.2.13) lib/action_view/template.rb:145:in `block in render'
activesupport (3.2.13) lib/active_support/notifications.rb:125:in `instrument'
actionpack (3.2.13) lib/action_view/template.rb:143:in `render'
actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:59:in `render_with_layout'
actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:45:in `render_template'
actionpack (3.2.13) lib/action_view/renderer/template_renderer.rb:18:in `render'
actionpack (3.2.13) lib/action_view/renderer/renderer.rb:36:in `render_template'
actionpack (3.2.13) lib/action_view/renderer/renderer.rb:17:in `render'
actionpack (3.2.13) lib/abstract_controller/rendering.rb:110:in `_render_template'
actionpack (3.2.13) lib/action_controller/metal/streaming.rb:225:in `_render_template'
actionpack (3.2.13) lib/abstract_controller/rendering.rb:103:in `render_to_body'
actionpack (3.2.13) lib/action_controller/metal/renderers.rb:28:in `render_to_body'
actionpack (3.2.13) lib/action_controller/metal/compatibility.rb:50:in `render_to_body'
actionpack (3.2.13) lib/abstract_controller/rendering.rb:88:in `render'
actionpack (3.2.13) lib/action_controller/metal/rendering.rb:16:in `render'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:40:in `block (2 levels) in render'
activesupport (3.2.13) lib/active_support/core_ext/benchmark.rb:5:in `block in ms'
C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/benchmark.rb:295:in `realtime'
activesupport (3.2.13) lib/active_support/core_ext/benchmark.rb:5:in `ms'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:40:in `block in render'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:83:in `cleanup_view_runtime'
activerecord (3.2.13) lib/active_record/railties/controller_runtime.rb:24:in `cleanup_view_runtime'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:39:in `render'
actionpack (3.2.13) lib/action_controller/metal/implicit_render.rb:10:in `default_render'
actionpack (3.2.13) lib/action_controller/metal/implicit_render.rb:5:in `send_action'
actionpack (3.2.13) lib/abstract_controller/base.rb:167:in `process_action'
actionpack (3.2.13) lib/action_controller/metal/rendering.rb:10:in `process_action'
actionpack (3.2.13) lib/abstract_controller/callbacks.rb:18:in `block in process_action'
activesupport (3.2.13) lib/active_support/callbacks.rb:414:in `_run__956028316__process_action__416811168__callbacks'
activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.13) lib/active_support/callbacks.rb:385:in `_run_process_action_callbacks'
activesupport (3.2.13) lib/active_support/callbacks.rb:81:in `run_callbacks'
actionpack (3.2.13) lib/abstract_controller/callbacks.rb:17:in `process_action'
actionpack (3.2.13) lib/action_controller/metal/rescue.rb:29:in `process_action'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:30:in `block in process_action'
activesupport (3.2.13) lib/active_support/notifications.rb:123:in `block in instrument'
activesupport (3.2.13) lib/active_support/notifications/instrumenter.rb:20:in `instrument'
activesupport (3.2.13) lib/active_support/notifications.rb:123:in `instrument'
actionpack (3.2.13) lib/action_controller/metal/instrumentation.rb:29:in `process_action'
actionpack (3.2.13) lib/action_controller/metal/params_wrapper.rb:207:in `process_action'
activerecord (3.2.13) lib/active_record/railties/controller_runtime.rb:18:in `process_action'
actionpack (3.2.13) lib/abstract_controller/base.rb:121:in `process'
actionpack (3.2.13) lib/abstract_controller/rendering.rb:45:in `process'
actionpack (3.2.13) lib/action_controller/metal.rb:203:in `dispatch'
actionpack (3.2.13) lib/action_controller/metal/rack_delegation.rb:14:in `dispatch'
actionpack (3.2.13) lib/action_controller/metal.rb:246:in `block in action'
actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:73:in `call'
actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:73:in `dispatch'
actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:36:in `call'
journey (1.0.4) lib/journey/router.rb:68:in `block in call'
journey (1.0.4) lib/journey/router.rb:56:in `each'
journey (1.0.4) lib/journey/router.rb:56:in `call'
actionpack (3.2.13) lib/action_dispatch/routing/route_set.rb:612:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/best_standards_support.rb:17:in `call'
rack (1.4.5) lib/rack/etag.rb:23:in `call'
rack (1.4.5) lib/rack/conditionalget.rb:25:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/head.rb:14:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/params_parser.rb:21:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/flash.rb:242:in `call'
rack (1.4.5) lib/rack/session/abstract/id.rb:210:in `context'
rack (1.4.5) lib/rack/session/abstract/id.rb:205:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/cookies.rb:341:in `call'
activerecord (3.2.13) lib/active_record/query_cache.rb:64:in `call'
activerecord (3.2.13) lib/active_record/connection_adapters/abstract/connection_pool.rb:479:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/callbacks.rb:28:in `block in call'
activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `_run__360878605__call__248365880__callbacks'
activesupport (3.2.13) lib/active_support/callbacks.rb:405:in `__run_callback'
activesupport (3.2.13) lib/active_support/callbacks.rb:385:in `_run_call_callbacks'
activesupport (3.2.13) lib/active_support/callbacks.rb:81:in `run_callbacks'
actionpack (3.2.13) lib/action_dispatch/middleware/callbacks.rb:27:in `call'
actionpack (3.2.13) lib/action_dispatch/middleware/reloader.rb:65:in `call'
actionpack (3.2.13)
lib/action_dispatch/middleware/remote_ip.rb:31:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/debug_exceptions.rb:16:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/show_exceptions.rb:56:in `call' railties (3.2.13) lib/rails/rack/logger.rb:32:in `call_app' railties (3.2.13) lib/rails/rack/logger.rb:16:in `block in call' activesupport (3.2.13) lib/active_support/tagged_logging.rb:22:in `tagged' railties (3.2.13) lib/rails/rack/logger.rb:16:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/request_id.rb:22:in `call' rack (1.4.5) lib/rack/methodoverride.rb:21:in `call' rack (1.4.5) lib/rack/runtime.rb:17:in `call' activesupport (3.2.13) lib/active_support/cache/strategy/local_cache.rb:72:in `call' rack (1.4.5) lib/rack/lock.rb:15:in `call' actionpack (3.2.13) lib/action_dispatch/middleware/static.rb:63:in `call' railties (3.2.13) lib/rails/engine.rb:479:in `call' railties (3.2.13) lib/rails/application.rb:223:in `call' rack (1.4.5) lib/rack/content_length.rb:14:in `call' railties (3.2.13) lib/rails/rack/log_tailer.rb:17:in `call' rack (1.4.5) lib/rack/handler/webrick.rb:59:in `service' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/httpserver.rb:138:in `service' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/httpserver.rb:94:in `run' C:/RailsInstaller/Ruby1.9.3/lib/ruby/1.9.1/webrick/server.rb:191:in `block in start_thread'

    Read the article

  • MS Word Macro - Numeric field insertion with automatic calculation at end of page

    - by Will
    Hi, I am trying to duplicate a feature that exists in the MultiMate (Ashton-Tate) word processor. Yes, the one that hasn't been supported for 20 years! If I can duplicate this one feature I can get all the users off MM and onto Word. The documents they create are billing documents: a descriptive paragraph of any length on the left side of the page, with a billing amount at the end of the paragraph over on the right-hand side, like this (excuse the imperfect formatting):

        +------------------ whole page ------------------+
        |                                                |
        |   pppp- para 1 -pppppppppp                     |
        |   p                      p                     |
        |   p                      p                     |
        |   p                      p                     |
        |   pppppppppppppppppppppp            $$$$$      |
        |                                                |
        |   pppp- para 2 -pppppppppp                     |
        |   p                      p                     |
        |   p                      p                     |
        |   pppppppppppppppppppppp            $$$$$      |
        |                                                |
        |                      etc.                      |
        +------------------------------------------------+

    Some of these bills run to a few hundred pages, with a dozen or so paragraphs on each page, which is why none of the users will leave MM until this efficient little feature can be duplicated. The thing MM does really easily is provide a function key they can press at any time that will:

    - jump the cursor from the paragraph they are writing over to the right-hand side
    - create a numeric field
    - allow them to enter a number into the numeric field
    - return them to the left-hand side to start a new paragraph

    MM also automatically totals the numeric fields on each page and puts a subtotal in the page footer, and it puts the total for the entire document in the footer of the last page. I would like to duplicate this feature in Word with a macro, but have no idea where to start. Any suggestions or code would be great. Thanks, Will.

    Read the article

  • Closing a hook that captures global input events

    - by Margus
    Intro Here is an example to illustrate the problem. Consider I am tracking and displaying mouse global current position and last click button and position to the user. Here is an image: To archive capturing click events on windows box, that would and will be sent to the other programs event messaging queue, I create a hook using winapi namely user32.dll library. This is outside JDK sandbox, so I use JNA to call the native library. This all works perfectly, but it does not close as I expect it to. My question is - How do I properly close following example program? Example source Code below is not fully written by Me, but taken from this question in Oracle forum and partly fixed. import java.awt.AWTException; import java.awt.Dimension; import java.awt.EventQueue; import java.awt.GridLayout; import java.awt.MouseInfo; import java.awt.Point; import java.awt.event.WindowAdapter; import java.awt.event.WindowEvent; import javax.swing.JFrame; import javax.swing.JLabel; import com.sun.jna.Native; import com.sun.jna.NativeLong; import com.sun.jna.Platform; import com.sun.jna.Structure; import com.sun.jna.platform.win32.BaseTSD.ULONG_PTR; import com.sun.jna.platform.win32.Kernel32; import com.sun.jna.platform.win32.User32; import com.sun.jna.platform.win32.WinDef.HWND; import com.sun.jna.platform.win32.WinDef.LRESULT; import com.sun.jna.platform.win32.WinDef.WPARAM; import com.sun.jna.platform.win32.WinUser.HHOOK; import com.sun.jna.platform.win32.WinUser.HOOKPROC; import com.sun.jna.platform.win32.WinUser.MSG; import com.sun.jna.platform.win32.WinUser.POINT; public class MouseExample { final JFrame jf; final JLabel jl1, jl2; final CWMouseHook mh; final Ticker jt; public class Ticker extends Thread { public boolean update = true; public void done() { update = false; } public void run() { try { Point p, l = MouseInfo.getPointerInfo().getLocation(); int i = 0; while (update == true) { try { p = MouseInfo.getPointerInfo().getLocation(); if (!p.equals(l)) { l = p; jl1.setText(new GlobalMouseClick(p.x, p.y) .toString()); } Thread.sleep(35); } catch (InterruptedException e) { e.printStackTrace(); return; } } } catch (Exception e) { update = false; } } } public MouseExample() throws AWTException, UnsupportedOperationException { this.jl1 = new JLabel("{}"); this.jl2 = new JLabel("{}"); this.jf = new JFrame(); this.jt = new Ticker(); this.jt.start(); this.mh = new CWMouseHook() { @Override public void globalClickEvent(GlobalMouseClick m) { jl2.setText(m.toString()); } }; mh.setMouseHook(); jf.setLayout(new GridLayout(2, 2)); jf.add(new JLabel("Position")); jf.add(jl1); jf.add(new JLabel("Last click")); jf.add(jl2); jf.addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent we) { mh.dispose(); jt.done(); jf.dispose(); } }); jf.setLocation(new Point(0, 0)); jf.setPreferredSize(new Dimension(200, 90)); jf.pack(); jf.setVisible(true); } public static class GlobalMouseClick { private char c; private int x, y; public GlobalMouseClick(char c, int x, int y) { super(); this.c = c; this.x = x; this.y = y; } public GlobalMouseClick(int x, int y) { super(); this.x = x; this.y = y; } public char getC() { return c; } public void setC(char c) { this.c = c; } public int getX() { return x; } public void setX(int x) { this.x = x; } public int getY() { return y; } public void setY(int y) { this.y = y; } @Override public String toString() { return (c != 0 ? 
c : "") + " [" + x + "," + y + "]"; } } public static class CWMouseHook { public User32 USER32INST; public CWMouseHook() throws UnsupportedOperationException { if (!Platform.isWindows()) { throw new UnsupportedOperationException( "Not supported on this platform."); } USER32INST = User32.INSTANCE; mouseHook = hookTheMouse(); Native.setProtected(true); } private static LowLevelMouseProc mouseHook; private HHOOK hhk; private boolean isHooked = false; public static final int WM_LBUTTONDOWN = 513; public static final int WM_LBUTTONUP = 514; public static final int WM_RBUTTONDOWN = 516; public static final int WM_RBUTTONUP = 517; public static final int WM_MBUTTONDOWN = 519; public static final int WM_MBUTTONUP = 520; public void dispose() { unsetMouseHook(); mousehook_thread = null; mouseHook = null; hhk = null; USER32INST = null; } public void unsetMouseHook() { isHooked = false; USER32INST.UnhookWindowsHookEx(hhk); System.out.println("Mouse hook is unset."); } public boolean isIsHooked() { return isHooked; } public void globalClickEvent(GlobalMouseClick m) { System.out.println(m); } private Thread mousehook_thread; public void setMouseHook() { mousehook_thread = new Thread(new Runnable() { @Override public void run() { try { if (!isHooked) { hhk = USER32INST.SetWindowsHookEx(14, mouseHook, Kernel32.INSTANCE.GetModuleHandle(null), 0); isHooked = true; System.out .println("Mouse hook is set. Click anywhere."); // message dispatch loop (message pump) MSG msg = new MSG(); while ((USER32INST.GetMessage(msg, null, 0, 0)) != 0) { USER32INST.TranslateMessage(msg); USER32INST.DispatchMessage(msg); if (!isHooked) break; } } else System.out .println("The Hook is already installed."); } catch (Exception e) { System.err.println("Caught exception in MouseHook!"); } } }); mousehook_thread.start(); } private interface LowLevelMouseProc extends HOOKPROC { LRESULT callback(int nCode, WPARAM wParam, MOUSEHOOKSTRUCT lParam); } private LowLevelMouseProc hookTheMouse() { return new LowLevelMouseProc() { @Override public LRESULT callback(int nCode, WPARAM wParam, MOUSEHOOKSTRUCT info) { if (nCode >= 0) { switch (wParam.intValue()) { case CWMouseHook.WM_LBUTTONDOWN: globalClickEvent(new GlobalMouseClick('L', info.pt.x, info.pt.y)); break; case CWMouseHook.WM_RBUTTONDOWN: globalClickEvent(new GlobalMouseClick('R', info.pt.x, info.pt.y)); break; case CWMouseHook.WM_MBUTTONDOWN: globalClickEvent(new GlobalMouseClick('M', info.pt.x, info.pt.y)); break; default: break; } } return USER32INST.CallNextHookEx(hhk, nCode, wParam, info.getPointer()); } }; } public class Point extends Structure { public class ByReference extends Point implements Structure.ByReference { }; public NativeLong x; public NativeLong y; } public static class MOUSEHOOKSTRUCT extends Structure { public static class ByReference extends MOUSEHOOKSTRUCT implements Structure.ByReference { }; public POINT pt; public HWND hwnd; public int wHitTestCode; public ULONG_PTR dwExtraInfo; } } public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { @Override public void run() { try { new MouseExample(); } catch (AWTException e) { e.printStackTrace(); } } }); } }

    Read the article

  • getaddrinfo appears to return different results between Windows and Ubuntu?

    - by MrDuk
    I have the following two sets of code: Windows #undef UNICODE #include <winsock2.h> #include <ws2tcpip.h> #include <stdio.h> // link with Ws2_32.lib #pragma comment (lib, "Ws2_32.lib") int __cdecl main(int argc, char **argv) { //----------------------------------------- // Declare and initialize variables WSADATA wsaData; int iResult; INT iRetval; DWORD dwRetval; argv[1] = "www.google.com"; argv[2] = "80"; int i = 1; struct addrinfo *result = NULL; struct addrinfo *ptr = NULL; struct addrinfo hints; struct sockaddr_in *sockaddr_ipv4; // struct sockaddr_in6 *sockaddr_ipv6; LPSOCKADDR sockaddr_ip; char ipstringbuffer[46]; DWORD ipbufferlength = 46; /* // Validate the parameters if (argc != 3) { printf("usage: %s <hostname> <servicename>\n", argv[0]); printf("getaddrinfo provides protocol-independent translation\n"); printf(" from an ANSI host name to an IP address\n"); printf("%s example usage\n", argv[0]); printf(" %s www.contoso.com 0\n", argv[0]); return 1; } */ // Initialize Winsock iResult = WSAStartup(MAKEWORD(2, 2), &wsaData); if (iResult != 0) { printf("WSAStartup failed: %d\n", iResult); return 1; } //-------------------------------- // Setup the hints address info structure // which is passed to the getaddrinfo() function ZeroMemory( &hints, sizeof(hints) ); hints.ai_family = AF_UNSPEC; hints.ai_socktype = SOCK_STREAM; // hints.ai_protocol = IPPROTO_TCP; printf("Calling getaddrinfo with following parameters:\n"); printf("\tnodename = %s\n", argv[1]); printf("\tservname (or port) = %s\n\n", argv[2]); //-------------------------------- // Call getaddrinfo(). If the call succeeds, // the result variable will hold a linked list // of addrinfo structures containing response // information dwRetval = getaddrinfo(argv[1], argv[2], &hints, &result); if ( dwRetval != 0 ) { printf("getaddrinfo failed with error: %d\n", dwRetval); WSACleanup(); return 1; } printf("getaddrinfo returned success\n"); // Retrieve each address and print out the hex bytes for(ptr=result; ptr != NULL ;ptr=ptr->ai_next) { printf("getaddrinfo response %d\n", i++); printf("\tFlags: 0x%x\n", ptr->ai_flags); printf("\tFamily: "); switch (ptr->ai_family) { case AF_UNSPEC: printf("Unspecified\n"); break; case AF_INET: printf("AF_INET (IPv4)\n"); sockaddr_ipv4 = (struct sockaddr_in *) ptr->ai_addr; printf("\tIPv4 address %s\n", inet_ntoa(sockaddr_ipv4->sin_addr) ); break; case AF_INET6: printf("AF_INET6 (IPv6)\n"); // the InetNtop function is available on Windows Vista and later // sockaddr_ipv6 = (struct sockaddr_in6 *) ptr->ai_addr; // printf("\tIPv6 address %s\n", // InetNtop(AF_INET6, &sockaddr_ipv6->sin6_addr, ipstringbuffer, 46) ); // We use WSAAddressToString since it is supported on Windows XP and later sockaddr_ip = (LPSOCKADDR) ptr->ai_addr; // The buffer length is changed by each call to WSAAddresstoString // So we need to set it for each iteration through the loop for safety ipbufferlength = 46; iRetval = WSAAddressToString(sockaddr_ip, (DWORD) ptr->ai_addrlen, NULL, ipstringbuffer, &ipbufferlength ); if (iRetval) printf("WSAAddressToString failed with %u\n", WSAGetLastError() ); else printf("\tIPv6 address %s\n", ipstringbuffer); break; case AF_NETBIOS: printf("AF_NETBIOS (NetBIOS)\n"); break; default: printf("Other %ld\n", ptr->ai_family); break; } printf("\tSocket type: "); switch (ptr->ai_socktype) { case 0: printf("Unspecified\n"); break; case SOCK_STREAM: printf("SOCK_STREAM (stream)\n"); break; case SOCK_DGRAM: printf("SOCK_DGRAM (datagram) \n"); break; case SOCK_RAW: printf("SOCK_RAW (raw) \n"); 
break; case SOCK_RDM: printf("SOCK_RDM (reliable message datagram)\n"); break; case SOCK_SEQPACKET: printf("SOCK_SEQPACKET (pseudo-stream packet)\n"); break; default: printf("Other %ld\n", ptr->ai_socktype); break; } printf("\tProtocol: "); switch (ptr->ai_protocol) { case 0: printf("Unspecified\n"); break; case IPPROTO_TCP: printf("IPPROTO_TCP (TCP)\n"); break; case IPPROTO_UDP: printf("IPPROTO_UDP (UDP) \n"); break; default: printf("Other %ld\n", ptr->ai_protocol); break; } printf("\tLength of this sockaddr: %d\n", ptr->ai_addrlen); printf("\tCanonical name: %s\n", ptr->ai_canonname); } freeaddrinfo(result); WSACleanup(); return 0; } Ubuntu /* ** listener.c -- a datagram sockets "server" demo */ #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <string.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> #include <arpa/inet.h> #include <netdb.h> #define MYPORT "4950" // the port users will be connecting to #define MAXBUFLEN 100 // get sockaddr, IPv4 or IPv6: void *get_in_addr(struct sockaddr *sa) { if (sa->sa_family == AF_INET) { return &(((struct sockaddr_in*)sa)->sin_addr); } return &(((struct sockaddr_in6*)sa)->sin6_addr); } int main(void) { int sockfd; struct addrinfo hints, *servinfo, *p; int rv; int numbytes; struct sockaddr_storage their_addr; char buf[MAXBUFLEN]; socklen_t addr_len; char s[INET6_ADDRSTRLEN]; memset(&hints, 0, sizeof hints); hints.ai_family = AF_UNSPEC; // set to AF_INET to force IPv4 hints.ai_socktype = SOCK_DGRAM; hints.ai_flags = AI_PASSIVE; // use my IP if ((rv = getaddrinfo(NULL, MYPORT, &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); return 1; } // loop through all the results and bind to the first we can for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("listener: socket"); continue; } if (bind(sockfd, p->ai_addr, p->ai_addrlen) == -1) { close(sockfd); perror("listener: bind"); continue; } break; } if (p == NULL) { fprintf(stderr, "listener: failed to bind socket\n"); return 2; } freeaddrinfo(servinfo); printf("listener: waiting to recvfrom...\n"); addr_len = sizeof their_addr; if ((numbytes = recvfrom(sockfd, buf, MAXBUFLEN-1 , 0, (struct sockaddr *)&their_addr, &addr_len)) == -1) { perror("recvfrom"); exit(1); } printf("listener: got packet from %s\n", inet_ntop(their_addr.ss_family, get_in_addr((struct sockaddr *)&their_addr), s, sizeof s)); printf("listener: packet is %d bytes long\n", numbytes); buf[numbytes] = '\0'; printf("listener: packet contains \"%s\"\n", buf); close(sockfd); return 0; } When I attempt www.google.com, I don't get the ipv6 socket returned on Windows - why is this? 
    Outputs (Ubuntu):

        caleb@ub1:~/Documents/dev/cs438/mp0/MP0$ ./a.out www.google.com
        IP addresses for www.google.com:
        IPv4: 74.125.228.115
        IPv4: 74.125.228.116
        IPv4: 74.125.228.112
        IPv4: 74.125.228.113
        IPv4: 74.125.228.114
        IPv6: 2607:f8b0:4004:803::1010

    Outputs (Windows):

        Calling getaddrinfo with following parameters:
            nodename = www.google.com
            servname (or port) = 80

        getaddrinfo returned success
        getaddrinfo response 1
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.114
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 2
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.115
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 3
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.116
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 4
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.112
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
        getaddrinfo response 5
            Flags: 0x0
            Family: AF_INET (IPv4)
            IPv4 address 74.125.228.113
            Socket type: SOCK_STREAM (stream)
            Protocol: Unspecified
            Length of this sockaddr: 16
            Canonical name: (null)
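    The usual explanation for the missing IPv6 result: Windows' getaddrinfo only returns AAAA records when the machine itself has a usable (non-link-local) IPv6 address configured, behaviour equivalent to passing AI_ADDRCONFIG in the hints. The Ubuntu box evidently has IPv6 connectivity; the Windows box apparently does not, so the resolver filters the AAAA answer out before the C code ever sees it. A quick way to compare resolver behaviour on both machines with identical code is a tiny probe that rides on the operating system's resolver; a sketch in Java (hypothetical class name, nothing here comes from the question):

        import java.net.Inet4Address;
        import java.net.Inet6Address;
        import java.net.InetAddress;

        public class LookupProbe {
            public static void main(String[] args) throws Exception {
                String host = (args.length > 0) ? args[0] : "www.google.com";
                // getAllByName() delegates to the platform resolver, so it is
                // subject to the same address-configuration filtering as the C code.
                for (InetAddress addr : InetAddress.getAllByName(host)) {
                    String family = (addr instanceof Inet6Address) ? "IPv6"
                                  : (addr instanceof Inet4Address) ? "IPv4" : "other";
                    System.out.println(family + ": " + addr.getHostAddress());
                }
            }
        }

    If the Windows box prints no IPv6 line here either, the filtering is happening in the platform resolver rather than in your program; forcing hints.ai_family = AF_INET6 in the C code should then make getaddrinfo fail outright on that host, which confirms the diagnosis.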

    Read the article

  • Multiple errors while adding search to app

    - by Thijs
    Hi, I'm fairly new at iOS programming, but I managed to make a (in my opinion quite nice) app for the app store. The main function for my next update will be a search option for the app. I followed a tutorial I found on the internet and adapted it to fit my app. I got back quite some errors, most of which I managed to fix. But now I'm completely stuck and don't know what to do next. I know it's a lot to ask, but if anyone could take a look at the code underneath, it would be greatly appreciated. Thanks! // // RootViewController.m // GGZ info // // Created by Thijs Beckers on 29-12-10. // Copyright 2010 __MyCompanyName__. All rights reserved. // #import "RootViewController.h" // Always import the next level view controller header(s) #import "CourseCodes.h" @implementation RootViewController @synthesize dataForCurrentLevel, tableViewData; #pragma mark - #pragma mark View lifecycle // OVERRIDE METHOD - (void)viewDidLoad { [super viewDidLoad]; // Go get the data for the app... // Create a custom string that points to the right location in the app bundle NSString *pathToPlist = [[NSBundle mainBundle] pathForResource:@"SCSCurriculum" ofType:@"plist"]; // Now, place the result into the dictionary property // Note that we must retain it to keep it around dataForCurrentLevel = [[NSDictionary dictionaryWithContentsOfFile:pathToPlist] retain]; // Place the top level keys (the program codes) in an array for the table view // Note that we must retain it to keep it around // NSDictionary has a really useful instance method - allKeys // The allKeys method returns an array with all of the keys found in (this level of) a dictionary tableViewData = [[[dataForCurrentLevel allKeys] sortedArrayUsingSelector:@selector(caseInsensitiveCompare:)] retain]; //Initialize the copy array. copyListOfItems = [[NSMutableArray alloc] init]; // Set the nav bar title self.title = @"GGZ info"; //Add the search bar self.tableView.tableHeaderView = searchBar; searchBar.autocorrectionType = UITextAutocorrectionTypeNo; searching = NO; letUserSelectRow = YES; } /* - (void)viewWillAppear:(BOOL)animated { [super viewWillAppear:animated]; } */ /* - (void)viewDidAppear:(BOOL)animated { [super viewDidAppear:animated]; } */ /* - (void)viewWillDisappear:(BOOL)animated { [super viewWillDisappear:animated]; } */ /* - (void)viewDidDisappear:(BOOL)animated { [super viewDidDisappear:animated]; } */ //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } - (NSIndexPath *)tableView :(UITableView *)theTableView willSelectRowAtIndexPath:(NSIndexPath *)indexPath { if(letUserSelectRow) return indexPath; else return nil; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] &gt; 0) { searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) searchBarSearchButtonClicked:(UISearchBar *)theSearchBar { [self searchTableView]; } - (void) searchTableView { NSString *searchText = searchBar.text; NSMutableArray *searchArray = [[NSMutableArray alloc] init]; for (NSDictionary *dictionary in listOfItems) { NSArray *array = [dictionary objectForKey:@"Countries"]; [searchArray addObjectsFromArray:array]; } for (NSString *sTemp in searchArray) { NSRange titleResultsRange = [sTemp rangeOfString:searchText options:NSCaseInsensitiveSearch]; if (titleResultsRange.length &gt; 0) [copyListOfItems addObject:sTemp]; } [searchArray release]; searchArray = nil; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [self.tableView reloadData]; } //RootViewController.m - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { if (searching) return 1; else return [listOfItems count]; } // Customize the number of rows in the table view. - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { if (searching) return [copyListOfItems count]; else { //Number of rows it should expect should be based on the section NSDictionary *dictionary = [listOfItems objectAtIndex:section]; NSArray *array = [dictionary objectForKey:@"Countries"]; return [array count]; } } - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section { if(searching) return @""; if(section == 0) return @"Countries to visit"; else return @"Countries visited"; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease]; } // Set up the cell... if(searching) cell.text = [copyListOfItems objectAtIndex:indexPath.row]; else { //First get the dictionary object NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; NSString *cellValue = [array objectAtIndex:indexPath.row]; cell.text = cellValue; } return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { //Get the selected country NSString *selectedCountry = nil; if(searching) selectedCountry = [copyListOfItems objectAtIndex:indexPath.row]; else { NSDictionary *dictionary = [listOfItems objectAtIndex:indexPath.section]; NSArray *array = [dictionary objectForKey:@"Countries"]; selectedCountry = [array objectAtIndex:indexPath.row]; } //Initialize the detail view controller and display it. 
DetailViewController *dvController = [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:[NSBundle mainBundle]]; dvController.selectedCountry = selectedCountry; [self.navigationController pushViewController:dvController animated:YES]; [dvController release]; dvController = nil; } //RootViewController.m - (void) searchBarTextDidBeginEditing:(UISearchBar *)theSearchBar { //Add the overlay view. if(ovController == nil) ovController = [[OverlayViewController alloc] initWithNibName:@"OverlayView" bundle:[NSBundle mainBundle]]; CGFloat yaxis = self.navigationController.navigationBar.frame.size.height; CGFloat width = self.view.frame.size.width; CGFloat height = self.view.frame.size.height; //Parameters x = origion on x-axis, y = origon on y-axis. CGRect frame = CGRectMake(0, yaxis, width, height); ovController.view.frame = frame; ovController.view.backgroundColor = [UIColor grayColor]; ovController.view.alpha = 0.5; ovController.rvController = self; [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = YES; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; //Add the done button. self.navigationItem.rightBarButtonItem = [[[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemDone target:self action:@selector(doneSearching_Clicked:)] autorelease]; } // Override to allow orientations other than the default portrait orientation. - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation { // Return YES for supported orientations. return YES; } - (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Relinquish ownership any cached data, images, etc that aren't in use. } - (void)viewDidUnload { // Relinquish ownership of anything that can be recreated in viewDidLoad or on demand. // For example: self.myOutlet = nil; } - (void)dealloc { [dataForCurrentLevel release]; [tableViewData release]; [super dealloc]; } #pragma mark - #pragma mark Table view methods // DATA SOURCE METHOD - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; } // DATA SOURCE METHOD - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // How many rows should be displayed? return [tableViewData count]; } // DELEGATE METHOD - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { // Cell reuse block static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell's contents - we want the program code, and a disclosure indicator cell.textLabel.text = [tableViewData objectAtIndex:indexPath.row]; cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator; return cell; } //RootViewController.m - (void)searchBar:(UISearchBar *)theSearchBar textDidChange:(NSString *)searchText { //Remove all objects first. 
[copyListOfItems removeAllObjects]; if([searchText length] &gt; 0) { [ovController.view removeFromSuperview]; searching = YES; letUserSelectRow = YES; self.tableView.scrollEnabled = YES; [self searchTableView]; } else { [self.tableView insertSubview:ovController.view aboveSubview:self.parentViewController.view]; searching = NO; letUserSelectRow = NO; self.tableView.scrollEnabled = NO; } [self.tableView reloadData]; } //RootViewController.m - (void) doneSearching_Clicked:(id)sender { searchBar.text = @""; [searchBar resignFirstResponder]; letUserSelectRow = YES; searching = NO; self.navigationItem.rightBarButtonItem = nil; self.tableView.scrollEnabled = YES; [ovController.view removeFromSuperview]; [ovController release]; ovController = nil; [self.tableView reloadData]; } // DELEGATE METHOD - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // In any navigation-based application, you follow the same pattern: // 1. Create an instance of the next-level view controller // 2. Configure that instance, with settings and data if necessary // 3. Push it on to the navigation stack // In this situation, the next level view controller is another table view // Therefore, we really don't need a nib file (do you see a CourseCodes.xib? no, there isn't one) // So, a UITableViewController offers an initializer that programmatically creates a view // 1. Create the next level view controller // ======================================== CourseCodes *nextVC = [[CourseCodes alloc] initWithStyle:UITableViewStylePlain]; // 2. Configure it... // ================== // It needs data from the dictionary - the "value" for the current "key" (that was tapped) NSDictionary *nextLevelDictionary = [dataForCurrentLevel objectForKey:[tableViewData objectAtIndex:indexPath.row]]; nextVC.dataForCurrentLevel = nextLevelDictionary; // Set the view title nextVC.title = [tableViewData objectAtIndex:indexPath.row]; // 3. Push it on to the navigation stack // ===================================== [self.navigationController pushViewController:nextVC animated:YES]; // Memory manage it [nextVC release]; } /* // Override to support conditional editing of the table view. - (BOOL)tableView:(UITableView *)tableView canEditRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the specified item to be editable. return YES; } */ /* // Override to support editing the table view. - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath { if (editingStyle == UITableViewCellEditingStyleDelete) { // Delete the row from the data source. [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; } else if (editingStyle == UITableViewCellEditingStyleInsert) { // Create a new instance of the appropriate class, insert it into the array, and add a new row to the table view. } } */ /* // Override to support rearranging the table view. - (void)tableView:(UITableView *)tableView moveRowAtIndexPath:(NSIndexPath *)fromIndexPath toIndexPath:(NSIndexPath *)toIndexPath { } */ /* // Override to support conditional rearranging of the table view. - (BOOL)tableView:(UITableView *)tableView canMoveRowAtIndexPath:(NSIndexPath *)indexPath { // Return NO if you do not want the item to be re-orderable. return YES; } */ @end

    Read the article

  • Silverlight MVVM Confusion: Updating Image Based on State

    - by senfo
    I'm developing a Silverlight application and I'm trying to stick to the MVVM principals, but I'm running into some problems changing the source of an image based on the state of a property in the ViewModel. For all intents and purposes, you can think of the functionality I'm implementing as a play/pause button for an audio app. When in the "Play" mode, IsActive is true in the ViewModel and the "Pause.png" image on the button should be displayed. When paused, IsActive is false in the ViewModel and "Play.png" is displayed on the button. Naturally, there are two additional images to handle when the mouse hovers over the button. I thought I could use a Style Trigger, but apparently they're not supported in Silverlight. I've been reviewing a forum post with a question similar to mine where it's suggested to use the VisualStateManager. While this might help with changing the image for hover/normal states, the part missing (or I'm not understanding) is how this would work with a state set via the view model. The post seems to apply only to events rather than properties of the view model. Having said that, I also haven't successfully completed the normal/hover affects, either. Below is my Silverlight 4 XAML. It should also probably be noted I'm working with MVVM Light. <UserControl x:Class="Foo.Bar.MyUserControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" mc:Ignorable="d" d:DesignHeight="100" d:DesignWidth="200"> <UserControl.Resources> <Style x:Key="MyButtonStyle" TargetType="Button"> <Setter Property="IsEnabled" Value="true"/> <Setter Property="IsTabStop" Value="true"/> <Setter Property="Background" Value="#FFA9A9A9"/> <Setter Property="Foreground" Value="#FF000000"/> <Setter Property="MinWidth" Value="5"/> <Setter Property="MinHeight" Value="5"/> <Setter Property="Margin" Value="0"/> <Setter Property="HorizontalAlignment" Value="Left" /> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="VerticalAlignment" Value="Top" /> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="Cursor" Value="Hand"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Button"> <Grid> <Image Source="/Foo.Bar;component/Resources/Icons/Bar/Play.png"> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="Active"> <VisualState x:Name="MouseOver"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="Source" Storyboard.TargetName="/Foo.Bar;component/Resources/Icons/Bar/Play_Hover.png" /> </Storyboard> </VisualState> </VisualStateGroup> </VisualStateManager.VisualStateGroups> </Image> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="White"> <Button Style="{StaticResource MyButtonStyle}" Command="{Binding ChangeStatus}" Height="30" Width="30" /> </Grid> </UserControl> What is the proper way to update images on buttons with the state determined by the view model? Update I changed my Button to a ToggleButton hoping I could use the Checked state to differentiate between play/pause. I practically have it, but I ran into one additional problem. I need to account for two states at the same time. 
For example, Checked Normal/Hover and Unchecked Normal/Hover. Following is my updated XAML: <UserControl x:Class="Foo.Bar.MyUserControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity" mc:Ignorable="d" d:DesignHeight="100" d:DesignWidth="200"> <UserControl.Resources> <Style x:Key="MyButtonStyle" TargetType="ToggleButton"> <Setter Property="IsEnabled" Value="true"/> <Setter Property="IsTabStop" Value="true"/> <Setter Property="Background" Value="#FFA9A9A9"/> <Setter Property="Foreground" Value="#FF000000"/> <Setter Property="MinWidth" Value="5"/> <Setter Property="MinHeight" Value="5"/> <Setter Property="Margin" Value="0"/> <Setter Property="HorizontalAlignment" Value="Left" /> <Setter Property="HorizontalContentAlignment" Value="Center"/> <Setter Property="VerticalAlignment" Value="Top" /> <Setter Property="VerticalContentAlignment" Value="Center"/> <Setter Property="Cursor" Value="Hand"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="ToggleButton"> <Grid> <VisualStateManager.VisualStateGroups> <VisualStateGroup x:Name="CommonStates"> <VisualState x:Name="Normal"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="Pause"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Collapsed</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="MouseOver"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="PlayHover"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Visible</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="Play"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Collapsed</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> </VisualStateGroup> <VisualStateGroup x:Name="CheckStates"> <VisualState x:Name="Checked"> <Storyboard> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="Pause"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Visible</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="Play"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Collapsed</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> <ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="PlayHover"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Collapsed</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> </Storyboard> </VisualState> <VisualState x:Name="Unchecked" /> </VisualStateGroup> 
</VisualStateManager.VisualStateGroups> <Image x:Name="Play" Source="/Foo.Bar;component/Resources/Icons/Bar/Play.png" /> <Image x:Name="Pause" Source="/Foo.Bar;component/Resources/Icons/Bar/Pause.png" Visibility="Collapsed" /> <Image x:Name="PlayHover" Source="/Foo.Bar;component/Resources/Icons/Bar/Play_Hover.png" Visibility="Collapsed" /> <Image x:Name="PauseHover" Source="/Foo.Bar;component/Resources/Icons/Bar/Pause_Hover.png" Visibility="Collapsed" /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </UserControl.Resources> <Grid x:Name="LayoutRoot" Background="White"> <ToggleButton Style="{StaticResource MyButtonStyle}" IsChecked="{Binding IsPlaying}" Command="{Binding ChangeStatus}" Height="30" Width="30" /> </Grid> </UserControl>

    Read the article

  • Java MVC and Hibernate

    - by GigaPr
    Hi i am trying to learn Java, Hibernate and the MVC pattern. Following various tutorial online i managed to map my database, i have created few Main methods to test it and it works. Furthermore i have created few pages using the MVC patter and i am able to display some mock data as well in a view. the problem is i can not connect the two. this is what i have My view Looks like this <%@ include file="/WEB-INF/jsp/include.jsp" %> <html> <head> <title>Users</title> <%@ include file="/WEB-INF/jsp/head.jsp" %> </head> <body> <%@ include file="/WEB-INF/jsp/header.jsp" %> <img src="images/rss.png" alt="Rss Feed"/> <%@ include file="/WEB-INF/jsp/menu.jsp" %> <div class="ContainerIntroText"> <img src="images/usersList.png" class="marginL150px" alt="Add New User"/> <br/> <br/> <div class="usersList"> <div class="listHeaders"> <div class="headerBox"> <strong>FirstName</strong> </div> <div class="headerBox"> <strong>LastName</strong> </div> <div class="headerBox"> <strong>Username</strong> </div> <div class="headerAction"> <strong>Edit</strong> </div> <div class="headerAction"> <strong>Delete</strong> </div> </div> <br><br> <c:forEach items="${users}" var="user"> <div class="listElement"> <c:out value="${user.firstName}"/> </div> <div class="listElement"> <c:out value="${user.lastName}"/> </div> <div class="listElement"> <c:out value="${user.username}"/> </div> <div class="listElementAction"> <input type="button" name="Edit" title="Edit" value="Edit"/> </div> <div class="listElementAction"> <input type="image" src="images/delete.png" name="image" alt="Delete" > </div> <br /> </c:forEach> </div> </div> <a id="addUser" href="addUser.htm" title="Click to add a new user">&nbsp;</a> </body> </html> My controller public class UsersController implements Controller { private UserServiceImplementation userServiceImplementation; public ModelAndView handleRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { ModelAndView modelAndView = new ModelAndView("users"); List<User> users = this.userServiceImplementation.get(); modelAndView.addObject("users", users); return modelAndView; } public UserServiceImplementation getUserServiceImplementation() { return userServiceImplementation; } public void setUserServiceImplementation(UserServiceImplementation userServiceImplementation) { this.userServiceImplementation = userServiceImplementation; } } My servelet definitions <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.5.xsd"> <!-- the application context definition for the springapp DispatcherServlet --> <bean name="/home.htm" class="com.rssFeed.mvc.HomeController"/> <bean name="/rssFeeds.htm" class="com.rssFeed.mvc.RssFeedsController"/> <bean name="/addUser.htm" class="com.rssFeed.mvc.AddUserController"/> <bean name="/users.htm" class="com.rssFeed.mvc.UsersController"> <property name="userServiceImplementation" ref="userServiceImplementation"/> </bean> <bean id="userServiceImplementation" class="com.rssFeed.ServiceImplementation.UserServiceImplementation"> <property name="users"> <list> <ref bean="user1"/> <ref bean="user2"/> </list> </property> </bean> <bean id="user1" class="com.rssFeed.domain.User"> <property name="firstName" value="firstName1"/> <property name="lastName" value="lastName1"/> <property name="username" value="username1"/> 
<property name="password" value="password1"/> </bean> <bean id="user2" class="com.rssFeed.domain.User"> <property name="firstName" value="firstName2"/> <property name="lastName" value="lastName2"/> <property name="username" value="username2"/> <property name="password" value="password2"/> </bean> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"></property> <property name="prefix" value="/WEB-INF/jsp/"></property> <property name="suffix" value=".jsp"></property> </bean> </beans> and finally this class to access the database public class HibernateUserDao extends HibernateDaoSupport implements UserDao { public void addUser(User user) { getHibernateTemplate().saveOrUpdate(user); } public List<User> get() { User user1 = new User(); user1.setFirstName("FirstName"); user1.setLastName("LastName"); user1.setUsername("Username"); user1.setPassword("Password"); List<User> users = new LinkedList<User>(); users.add(user1); return users; } public User get(int id) { throw new UnsupportedOperationException("Not supported yet."); } public User get(String username) { return null; } } the database connection occurs in this file <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="org.hsqldb.jdbcDriver"/> <property name="url" value="jdbc:hsqldb:hsql://localhost/rss"/> <property name="username" value="sa"/> <property name="password" value=""/> </bean> <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean" > <property name="dataSource" ref="dataSource" /> <property name="mappingResources"> <list> <value>com/rssFeed/domain/User.hbm.xml</value> </list> </property> <property name="hibernateProperties" > <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop> </props> </property> </bean> <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager"> <property name="sessionFactory" ref="sessionFactory"/> </bean> <bean id="userDao" class="com.rssFeed.dao.hibernate.HibernateUserDao"> <property name="sessionFactory" ref="sessionFactory"/> </bean> </beans> Could you help me to solve this problem i spent the last 4 days and nights on this issue without any success Thanks

    Read the article

  • External USB drive is failing

    - by dma_k
    I have an external USB 2.0 drive WD My Book Mirror Edition, running in RAID 1 (mirroring) mode. A while ago the hard drive started to fail: it stops responding (directories are not listed returning an error after a big timeout). Sometimes it works for weeks before a failure, sometimes – few hours. Small write operations (like removing few files or editing a small file) do not harm, but when copying large files to the drive over the network, or creating the archive locally, the kernel dumps. Also interesting to note that once kernel has failed, Linux does not want to reboot normally (reboot hangs); when Linux box is shutdown with power button, WD drive does not go to sleep mode (as it usually does): leds continue to run, pressing and holding the "shutdown" button on drive's back panel does not do anything; only unplugging the power cord helps. Here goes the boot log: Aug 16 00:32:21 kernel: [ 1.514106] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver Aug 16 00:32:21 kernel: [ 1.657738] ehci_hcd 0000:00:1d.7: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 1.673747] ehci_hcd 0000:00:1d.7: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 1.673751] ehci_hcd 0000:00:1d.7: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.725224] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 Aug 16 00:32:21 kernel: [ 1.741647] ehci_hcd 0000:00:1d.7: using broken periodic workaround Aug 16 00:32:21 kernel: [ 1.761790] ehci_hcd 0000:00:1d.7: cache line size of 32 is not supported Aug 16 00:32:21 kernel: [ 1.761873] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xfdfff000 Aug 16 00:32:21 kernel: [ 1.796043] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 Aug 16 00:32:21 kernel: [ 1.879069] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002 Aug 16 00:32:21 kernel: [ 1.895446] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 1.911796] usb usb1: Product: EHCI Host Controller Aug 16 00:32:21 kernel: [ 1.928015] usb usb1: Manufacturer: Linux 2.6.32-5-686 ehci_hcd Aug 16 00:32:21 kernel: [ 1.944331] usb usb1: SerialNumber: 0000:00:1d.7 Aug 16 00:32:21 kernel: [ 1.961285] usb usb1: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 1.994412] hub 1-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.010864] hub 1-0:1.0: 8 ports detected Aug 16 00:32:21 kernel: [ 2.085939] uhci_hcd: USB Universal Host Controller Interface driver Aug 16 00:32:21 kernel: [ 2.191945] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 23 (level, low) -> IRQ 23 Aug 16 00:32:21 kernel: [ 2.226029] uhci_hcd 0000:00:1d.0: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.226034] uhci_hcd 0000:00:1d.0: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.243237] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 Aug 16 00:32:21 kernel: [ 2.260390] uhci_hcd 0000:00:1d.0: irq 23, io base 0x0000fe00 Aug 16 00:32:21 kernel: [ 2.277517] usb usb2: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.294815] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.312173] usb usb2: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.329534] usb usb2: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.346828] usb usb2: SerialNumber: 0000:00:1d.0 Aug 16 00:32:21 kernel: [ 2.412989] usb usb2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.430651] usb 1-2: new high speed USB device using ehci_hcd and address 2 Aug 16 00:32:21 
kernel: [ 2.449046] hub 2-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.466514] hub 2-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.484639] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 19 (level, low) -> IRQ 19 Aug 16 00:32:21 kernel: [ 2.537750] uhci_hcd 0000:00:1d.1: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.537756] uhci_hcd 0000:00:1d.1: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.555085] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 Aug 16 00:32:21 kernel: [ 2.572231] uhci_hcd 0000:00:1d.1: irq 19, io base 0x0000fd00 Aug 16 00:32:21 kernel: [ 2.589593] usb usb3: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.606869] usb usb3: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.624134] usb usb3: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.641329] usb usb3: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 2.658505] usb usb3: SerialNumber: 0000:00:1d.1 Aug 16 00:32:21 kernel: [ 2.675843] usb usb3: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 2.692864] hub 3-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 2.709651] hub 3-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 2.727378] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 18 (level, low) -> IRQ 18 Aug 16 00:32:21 kernel: [ 2.768252] uhci_hcd 0000:00:1d.2: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 2.768258] uhci_hcd 0000:00:1d.2: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.806679] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 Aug 16 00:32:21 kernel: [ 2.824117] uhci_hcd 0000:00:1d.2: irq 18, io base 0x0000fc00 Aug 16 00:32:21 kernel: [ 2.841405] usb 1-2: New USB device found, idVendor=1058, idProduct=1104 Aug 16 00:32:21 kernel: [ 2.858448] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3 Aug 16 00:32:21 kernel: [ 2.875347] usb 1-2: Product: My Book Aug 16 00:32:21 kernel: [ 2.892113] usb 1-2: Manufacturer: Western Digital Aug 16 00:32:21 kernel: [ 2.908915] usb 1-2: SerialNumber: 575532553130303530353538 Aug 16 00:32:21 kernel: [ 2.943242] usb usb4: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 00:32:21 kernel: [ 2.960405] usb usb4: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 2.977615] usb usb4: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 2.994687] usb usb4: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.011711] usb usb4: SerialNumber: 0000:00:1d.2 Aug 16 00:32:21 kernel: [ 3.029589] usb usb4: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.082027] sd 2:0:0:0: [sda] Attached SCSI disk Aug 16 00:32:21 kernel: [ 3.103953] usb 1-2: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.122625] hub 4-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.140484] hub 4-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.161680] uhci_hcd 0000:00:1d.3: PCI INT D -> GSI 16 (level, low) -> IRQ 16 Aug 16 00:32:21 kernel: [ 3.181257] uhci_hcd 0000:00:1d.3: setting latency timer to 64 Aug 16 00:32:21 kernel: [ 3.181263] uhci_hcd 0000:00:1d.3: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.198614] uhci_hcd 0000:00:1d.3: new USB bus registered, assigned bus number 5 Aug 16 00:32:21 kernel: [ 3.216012] uhci_hcd 0000:00:1d.3: irq 16, io base 0x0000fb00 Aug 16 00:32:21 kernel: [ 3.249877] Uniform CD-ROM driver Revision: 3.20 Aug 16 00:32:21 kernel: [ 3.267765] usb usb5: New USB device found, idVendor=1d6b, idProduct=0001 Aug 16 
00:32:21 kernel: [ 3.284947] usb usb5: New USB device strings: Mfr=3, Product=2, SerialNumber=1 Aug 16 00:32:21 kernel: [ 3.302023] usb usb5: Product: UHCI Host Controller Aug 16 00:32:21 kernel: [ 3.319215] usb usb5: Manufacturer: Linux 2.6.32-5-686 uhci_hcd Aug 16 00:32:21 kernel: [ 3.336298] usb usb5: SerialNumber: 0000:00:1d.3 Aug 16 00:32:21 kernel: [ 3.368377] Initializing USB Mass Storage driver... Aug 16 00:32:21 kernel: [ 3.390652] usbcore: registered new interface driver hiddev Aug 16 00:32:21 kernel: [ 3.408109] scsi4 : SCSI emulation for USB Mass Storage devices Aug 16 00:32:21 kernel: [ 3.425281] sr 0:0:1:0: Attached scsi CD-ROM sr0 Aug 16 00:32:21 kernel: [ 3.438978] sr 0:0:1:0: Attached scsi generic sg0 type 5 Aug 16 00:32:21 kernel: [ 3.456328] usbcore: registered new interface driver usb-storage Aug 16 00:32:21 kernel: [ 3.474564] usb-storage: device found at 2 Aug 16 00:32:21 kernel: [ 3.474567] usb-storage: waiting for device to settle before scanning Aug 16 00:32:21 kernel: [ 3.475320] sd 2:0:0:0: Attached scsi generic sg1 type 0 Aug 16 00:32:21 kernel: [ 3.492587] USB Mass Storage support registered. Aug 16 00:32:21 kernel: [ 3.510930] usb usb5: configuration #1 chosen from 1 choice Aug 16 00:32:21 kernel: [ 3.531076] hub 5-0:1.0: USB hub found Aug 16 00:32:21 kernel: [ 3.548399] hub 5-0:1.0: 2 ports detected Aug 16 00:32:21 kernel: [ 3.591743] input: Western Digital My Book as /devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2:1.1/input/input2 Aug 16 00:32:21 kernel: [ 3.609515] generic-usb 0003:1058:1104.0001: input,hidraw0: USB HID v1.11 Device [Western Digital My Book] on usb-0000:00:1d.7-2/input1 Aug 16 00:32:21 kernel: [ 3.627466] usbcore: registered new interface driver usbhid Aug 16 00:32:21 kernel: [ 8.581664] usb-storage: device scan complete Aug 16 00:32:21 kernel: [ 8.624270] scsi 4:0:0:0: Direct-Access WD My Book 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.655135] scsi 4:0:0:1: Enclosure WD My Book Device 1008 PQ: 0 ANSI: 4 Aug 16 00:32:21 kernel: [ 8.675393] sd 4:0:0:0: Attached scsi generic sg2 type 0 Aug 16 00:32:21 kernel: [ 8.698669] scsi 4:0:0:1: Attached scsi generic sg3 type 13 Aug 16 00:32:21 kernel: [ 8.723370] sd 4:0:0:0: [sdb] 1953513472 512-byte logical blocks: (1.00 TB/931 GiB) Aug 16 00:32:21 kernel: [ 8.750477] sd 4:0:0:0: [sdb] Write Protect is off Aug 16 00:32:21 kernel: [ 8.769411] sd 4:0:0:0: [sdb] Mode Sense: 10 00 00 00 Aug 16 00:32:21 kernel: [ 8.769414] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.822971] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.841978] sdb: sdb1 Aug 16 00:32:21 kernel: [ 8.905580] sd 4:0:0:0: [sdb] Assuming drive cache: write through Aug 16 00:32:21 kernel: [ 8.924173] sd 4:0:0:0: [sdb] Attached SCSI disk Aug 16 00:32:21 kernel: [ 11.600492] XFS mounting filesystem sdb1 Aug 16 00:32:21 kernel: [ 12.222948] Ending clean XFS mount for filesystem: sdb1 After a while the following appears in a log: Aug 16 09:30:56 kernel: [32359.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:31:59 kernel: [32422.112035] usb 1-2: reset high speed USB device using ehci_hcd and address 2 Aug 16 09:33:00 kernel: [32483.112029] usb 1-2: reset high speed USB device using ehci_hcd and address 2 And then it is followed by few kernel dumps, which I think, are not good: Aug 16 09:33:40 kernel: [32520.428027] INFO: task xfssyncd:1002 blocked for more than 120 seconds. 
Aug 16 09:33:40 kernel: [32520.462689] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32520.497422] xfssyncd D c3d84a60 0 1002 2 0x00000000 Aug 16 09:33:40 kernel: [32520.532117] f6c9aa80 00000046 c1132742 c3d84a60 00000286 c1418100 c1418100 00000000 Aug 16 09:33:40 kernel: [32520.566867] f6c9ac3c c2808100 00000000 f653b18b 00001d76 00000001 f6c9aa80 c3c3f0e0 Aug 16 09:33:40 kernel: [32520.601343] 08e59242 f6c9ac3c 2e41392b 00000000 08e59242 00000000 c3f7fb48 0067385a Aug 16 09:33:40 kernel: [32520.635533] Call Trace: Aug 16 09:33:40 kernel: [32520.668991] [<c1132742>] ? cfq_set_request+0x0/0x290 Aug 16 09:33:40 kernel: [32520.702804] [<c126b532>] ? io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32520.736555] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32520.770360] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32520.804110] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32520.837713] [<c1128230>] ? blk_peek_request+0x135/0x143 Aug 16 09:33:40 kernel: [32520.871265] [<f8582987>] ? scsi_dispatch_cmd+0x185/0x1e5 [scsi_mod] Aug 16 09:33:40 kernel: [32520.904407] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32520.937007] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32520.969033] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.000485] [<c10cffd1>] ? bio_add_page+0x28/0x2e Aug 16 09:33:40 kernel: [32521.031403] [<f8918d38>] ? _xfs_buf_ioapply+0x206/0x22b [xfs] Aug 16 09:33:40 kernel: [32521.061888] [<f89197bd>] ? xfs_buf_iorequest+0x38/0x60 [xfs] Aug 16 09:33:40 kernel: [32521.091845] [<f8907230>] ? xlog_bdstrat_cb+0x16/0x3d [xfs] Aug 16 09:33:40 kernel: [32521.121222] [<f8905781>] ? XFS_bwrite+0x32/0x64 [xfs] Aug 16 09:33:40 kernel: [32521.150007] [<f89059be>] ? xlog_sync+0x20b/0x311 [xfs] Aug 16 09:33:40 kernel: [32521.178214] [<f89112fc>] ? xfs_trans_ail_tail+0x12/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.205914] [<f8906261>] ? xlog_state_sync_all+0xa2/0x141 [xfs] Aug 16 09:33:40 kernel: [32521.233074] [<f8906611>] ? _xfs_log_force+0x51/0x68 [xfs] Aug 16 09:33:40 kernel: [32521.259664] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32521.285662] [<f8906636>] ? xfs_log_force+0xe/0x27 [xfs] Aug 16 09:33:40 kernel: [32521.311171] [<f89202df>] ? xfs_sync_worker+0x17/0x5c [xfs] Aug 16 09:33:40 kernel: [32521.336117] [<f891fbb7>] ? xfssyncd+0x134/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.360498] [<f891fa83>] ? xfssyncd+0x0/0x17d [xfs] Aug 16 09:33:40 kernel: [32521.384211] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32521.407890] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32521.430876] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32521.453394] INFO: task flush-8:16:12945 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32521.476116] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32521.498579] flush-8:16 D 00000000 0 12945 2 0x00000000 Aug 16 09:33:40 kernel: [32521.520649] f4e4d540 00000046 e412e940 00000000 00000002 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32521.542426] f4e4d6fc c2808100 00000000 00000000 000008b4 00000001 f4e4d540 c3c3f0e0 Aug 16 09:33:40 kernel: [32521.563745] 02e905a8 f4e4d6fc 007a5399 00000000 02e905a8 00000000 f4e2db48 00670b98 Aug 16 09:33:40 kernel: [32521.585077] Call Trace: Aug 16 09:33:40 kernel: [32521.605790] [<c126b532>] ? 
io_schedule+0x5f/0x98 Aug 16 09:33:40 kernel: [32521.626184] [<c1128be0>] ? get_request_wait+0xcb/0x146 Aug 16 09:33:40 kernel: [32521.646133] [<c10437ba>] ? autoremove_wake_function+0x0/0x2d Aug 16 09:33:40 kernel: [32521.665659] [<c112907c>] ? __make_request+0x2cc/0x3d9 Aug 16 09:33:40 kernel: [32521.684716] [<f891796e>] ? xfs_convert_page+0x30a/0x331 [xfs] Aug 16 09:33:40 kernel: [32521.703366] [<c1127cf1>] ? generic_make_request+0x266/0x2b4 Aug 16 09:33:40 kernel: [32521.721644] [<c10cf821>] ? bvec_alloc_bs+0x95/0xaf Aug 16 09:33:40 kernel: [32521.739465] [<c1127dfb>] ? submit_bio+0xbc/0xd6 Aug 16 09:33:40 kernel: [32521.756896] [<c10cfa45>] ? bio_alloc_bioset+0x7b/0xba Aug 16 09:33:40 kernel: [32521.774046] [<f8917af0>] ? xfs_submit_ioend_bio+0x3b/0x44 [xfs] Aug 16 09:33:40 kernel: [32521.790694] [<f8917ba3>] ? xfs_submit_ioend+0xaa/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.806736] [<f891817d>] ? xfs_page_state_convert+0x5c0/0x61c [xfs] Aug 16 09:33:40 kernel: [32521.822859] [<c113705b>] ? __lookup_tag+0x8e/0xee Aug 16 09:33:40 kernel: [32521.838958] [<f891840d>] ? xfs_vm_writepage+0x91/0xc4 [xfs] Aug 16 09:33:40 kernel: [32521.855039] [<c108bbcc>] ? __writepage+0x8/0x22 Aug 16 09:33:40 kernel: [32521.871067] [<c108c17b>] ? write_cache_pages+0x1af/0x29f Aug 16 09:33:40 kernel: [32521.886616] [<c108bbc4>] ? __writepage+0x0/0x22 Aug 16 09:33:40 kernel: [32521.901593] [<c108c285>] ? generic_writepages+0x1a/0x21 Aug 16 09:33:40 kernel: [32521.916455] [<f8918338>] ? xfs_vm_writepages+0x0/0x38 [xfs] Aug 16 09:33:40 kernel: [32521.931484] [<c108c2a5>] ? do_writepages+0x19/0x25 Aug 16 09:33:40 kernel: [32521.946648] [<c10c80d9>] ? writeback_single_inode+0xc7/0x273 Aug 16 09:33:40 kernel: [32521.961675] [<c10c8c44>] ? writeback_inodes_wb+0x3dd/0x49c Aug 16 09:33:40 kernel: [32521.976831] [<c10c8e18>] ? wb_writeback+0x115/0x178 Aug 16 09:33:40 kernel: [32521.991778] [<c10c901f>] ? wb_do_writeback+0x121/0x131 Aug 16 09:33:40 kernel: [32522.006538] [<c103abaf>] ? process_timeout+0x0/0x5 Aug 16 09:33:40 kernel: [32522.021091] [<c10c9050>] ? bdi_writeback_task+0x21/0x89 Aug 16 09:33:40 kernel: [32522.035493] [<c10979e5>] ? bdi_start_fn+0x59/0xa4 Aug 16 09:33:40 kernel: [32522.049765] [<c109798c>] ? bdi_start_fn+0x0/0xa4 Aug 16 09:33:40 kernel: [32522.063792] [<c1043588>] ? kthread+0x61/0x66 Aug 16 09:33:40 kernel: [32522.077612] [<c1043527>] ? kthread+0x0/0x66 Aug 16 09:33:40 kernel: [32522.091260] [<c1003d47>] ? kernel_thread_helper+0x7/0x10 Aug 16 09:33:40 kernel: [32522.104966] INFO: task smartctl:13098 blocked for more than 120 seconds. Aug 16 09:33:40 kernel: [32522.118883] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Aug 16 09:33:40 kernel: [32522.133012] smartctl D 00000020 0 13098 13097 0x00000000 Aug 16 09:33:40 kernel: [32522.147221] e50b9540 00000086 c11d28a8 00000020 00000770 c1418100 c1418100 c14136ac Aug 16 09:33:40 kernel: [32522.161720] e50b96fc c2808100 00000000 e53e8800 00000000 00000020 c3cec000 c13886c0 Aug 16 09:33:40 kernel: [32522.176217] f99dab68 e50b96fc 007a4f1e 00000001 c4082f24 c4082ed8 00000001 c3c3f0e0 Aug 16 09:33:40 kernel: [32522.190737] Call Trace: Aug 16 09:33:40 kernel: [32522.205038] [<c11d28a8>] ? __netdev_alloc_skb+0x14/0x2d Aug 16 09:33:40 kernel: [32522.219605] [<c126b799>] ? schedule_timeout+0x20/0xb0 Aug 16 09:33:40 kernel: [32522.234144] [<c112820d>] ? blk_peek_request+0x112/0x143 Aug 16 09:33:40 kernel: [32522.248649] [<f85873b6>] ? scsi_request_fn+0x3c1/0x47a [scsi_mod] Aug 16 09:33:40 kernel: [32522.263233] [<c103aba8>] ? 
del_timer+0x55/0x5c Aug 16 09:33:40 kernel: [32522.277773] [<c126b6a2>] ? wait_for_common+0xa4/0x100 Aug 16 09:33:40 kernel: [32522.292342] [<c102cd8d>] ? default_wake_function+0x0/0x8 Aug 16 09:33:40 kernel: [32522.306958] [<c112b3d1>] ? blk_execute_rq+0x8b/0xb2 Aug 16 09:33:40 kernel: [32522.321569] [<c112b2ac>] ? blk_end_sync_rq+0x0/0x23 Aug 16 09:33:40 kernel: [32522.336070] [<c112b58b>] ? blk_recount_segments+0x13/0x20 Aug 16 09:33:40 kernel: [32522.350583] [<c1127307>] ? blk_rq_bio_prep+0x44/0x74 Aug 16 09:33:40 kernel: [32522.365059] [<c112b0b2>] ? blk_rq_map_kern+0xc5/0xee Aug 16 09:33:40 kernel: [32522.379439] [<c112e2a5>] ? sg_scsi_ioctl+0x221/0x2aa Aug 16 09:33:40 kernel: [32522.393801] [<c112e672>] ? scsi_cmd_ioctl+0x344/0x39a Aug 16 09:33:40 kernel: [32522.408140] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.422566] [<c1024c87>] ? update_curr+0x106/0x1b3 Aug 16 09:33:40 kernel: [32522.436832] [<f87676aa>] ? sd_ioctl+0x90/0xb5 [sd_mod] Aug 16 09:33:40 kernel: [32522.451228] [<c112c35f>] ? __blkdev_driver_ioctl+0x53/0x63 Aug 16 09:33:40 kernel: [32522.465689] [<c112cbbf>] ? blkdev_ioctl+0x850/0x891 Aug 16 09:33:40 kernel: [32522.479982] [<c1020474>] ? __wake_up_common+0x34/0x59 Aug 16 09:33:40 kernel: [32522.494138] [<c10244cd>] ? complete+0x28/0x36 Aug 16 09:33:40 kernel: [32522.507986] [<c1086c64>] ? find_get_page+0x1f/0x81 Aug 16 09:33:40 kernel: [32522.521671] [<c10abed5>] ? add_partial+0xe/0x40 Aug 16 09:33:40 kernel: [32522.535285] [<c1086e68>] ? lock_page+0x8/0x1d Aug 16 09:33:40 kernel: [32522.548797] [<c1087432>] ? filemap_fault+0xb5/0x2e6 Aug 16 09:33:40 kernel: [32522.562141] [<c109941c>] ? __do_fault+0x381/0x3b1 Aug 16 09:33:40 kernel: [32522.575441] [<c10d0c30>] ? block_ioctl+0x27/0x2c Aug 16 09:33:40 kernel: [32522.588708] [<c10d0c09>] ? block_ioctl+0x0/0x2c Aug 16 09:33:40 kernel: [32522.601858] [<c10bcd78>] ? vfs_ioctl+0x1c/0x5f Aug 16 09:33:40 kernel: [32522.614917] [<c10bd30c>] ? do_vfs_ioctl+0x4aa/0x4e5 Aug 16 09:33:40 kernel: [32522.627961] [<c10350db>] ? __do_softirq+0x115/0x151 Aug 16 09:33:40 kernel: [32522.640901] [<c126e270>] ? do_page_fault+0x2f1/0x307 Aug 16 09:33:40 kernel: [32522.653803] [<c10bd388>] ? sys_ioctl+0x41/0x58 Aug 16 09:33:40 kernel: [32522.666674] [<c10030fb>] ? sysenter_do_call+0x12/0x28

Then again, a few more "reset high speed USB device using ehci_hcd and address 2" messages follow. I have browsed and read similar error reports here and there, and I have tried the following:

- I upgraded the kernel from 2.6.26-2 to 2.6.32-5, which has not solved the problem.
- Some say this might be a cable problem. I replaced the USB-to-mini-USB cable (the one connecting the external drive to the computer) with another one. No change.
- Somebody suggested trying another USB port. I have only 4 external USB ports; I tried another one with no success.
- Some say to try uhci_hcd. I unmounted the device, unloaded ehci_hcd, and mounted again. The difference is that the log now reads "reset full speed USB device using uhci_hcd and address 2", with similar kernel dumps after a while.
- Some say to run echo 128 > /sys/block/sdb/device/max_sectors. I tried it with ehci_hcd with no success (note: I issued this command after the drive was mounted, but before using it actively).
- I launched smartmond and am periodically checking the output of smartctl: the drive temperature is OK, and the counts of bad sectors and uncorrectable errors are 0. Nothing suspicious is reported by S.M.A.R.T., except maybe the following:

Aug 16 12:40:12 kernel: [43715.314566] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
Aug 16 12:40:13 kernel: [43715.705622] program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

Of course, I have not tried all combinations of the above, but unfortunately I have run out of ideas. If anybody can advise something specific about this problem, you are very welcome.
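
If I get another chance to retest the max_sectors workaround, I plan to apply it before the filesystem is mounted instead of after; below is a sketch of the sequence I have in mind (the device name sdb, partition sdb1, and mount point /mnt/mybook are assumptions from my setup and need adjusting):

    # Apply the transfer-size cap while the drive is unmounted and idle;
    # setting it on an already-mounted, busy drive may simply come too late.
    umount /mnt/mybook                             # stop all I/O to the disk
    echo 128 > /sys/block/sdb/device/max_sectors   # cap each request at 128 sectors (64 KiB)
    cat /sys/block/sdb/device/max_sectors          # verify the new limit took effect
    mount /dev/sdb1 /mnt/mybook                    # remount the XFS partition
    tail -f /var/log/kern.log                      # watch for further USB resets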

    Read the article

  • Pulseaudio is no longer working in Debian Squeeze: 'Failed to open module "module-combine-sink": file not found'

    - by mattalexx
    I'm having a problem with pulseaudio. My machine crashed, and when I rebooted and ran pavucontrol, I got a "Connection Failed: Connection refused" dialog. When I run pulseaudio --log-level=info --log-target=stderr from the command line, I get the following output: [...] I: alsa-util.c: Error opening PCM device front:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device hw:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device iec958:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device iec958:1: No such file or directory I: alsa-util.c: Failed to set hardware parameters on plug:iec958:1: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:1: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:1: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:1: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:1: Invalid argument I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:1 I: alsa-util.c: Error opening PCM device a52:1: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:1 I: alsa-util.c: Error opening PCM device hdmi:1: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:1 I: alsa-util.c: Error opening PCM device hdmi:1: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:1 I: alsa-util.c: Error opening PCM device hdmi:1: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 
'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:1 I: alsa-util.c: Error opening PCM device hdmi:1: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=1,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:1 I: alsa-util.c: Error opening PCM device hdmi:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device hw:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device front:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device hw:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device iec958:1: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC1D0c' failed (-2) I: alsa-util.c: Error opening PCM device iec958:1: No such file or directory I: card.c: Created 0 "alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio" I: alsa-sink.c: Successfully opened device front:1. I: alsa-sink.c: Selected mapping 'Analog Stereo' (analog-stereo). I: alsa-sink.c: Successfully enabled mmap() mode. I: alsa-sink.c: Successfully enabled timer-based scheduling mode. I: (alsa-lib)control.c: Invalid CTL front:1 I: alsa-mixer.c: Unable to attach to mixer front:1: No such file or directory I: alsa-mixer.c: Successfully attached to mixer 'hw:1' W: alsa-mixer.c: Your kernel driver is broken: it reports a volume range from 0.00 dB to 0.00 dB which makes no sense. I: module-device-restore.c: Restoring volume for sink alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo. 
I: sink.c: Created sink 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right I: sink.c: alsa.resolution_bits = "16" I: sink.c: device.api = "alsa" I: sink.c: device.class = "sound" I: sink.c: alsa.class = "generic" I: sink.c: alsa.subclass = "generic-mix" I: sink.c: alsa.name = "USB Audio" I: sink.c: alsa.id = "USB Audio" I: sink.c: alsa.subdevice = "0" I: sink.c: alsa.subdevice_name = "subdevice #0" I: sink.c: alsa.device = "0" I: sink.c: alsa.card = "1" I: sink.c: alsa.card_name = "DigiHug USB Audio" I: sink.c: alsa.long_card_name = "FiiO DigiHug USB Audio at usb-0000:00:1a.0-1.2, full speed" I: sink.c: alsa.driver_name = "snd_usb_audio" I: sink.c: device.bus_path = "pci-0000:00:1a.0-usb-0:1.2:1.1" I: sink.c: sysfs.path = "/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/sound/card1" I: sink.c: udev.id = "usb-FiiO_DigiHug_USB_Audio-01-Audio" I: sink.c: device.bus = "usb" I: sink.c: device.vendor.id = "1852" I: sink.c: device.vendor.name = "GYROCOM C&C Co., LTD" I: sink.c: device.product.id = "7022" I: sink.c: device.product.name = "DigiHug_USB_Audio" I: sink.c: device.serial = "FiiO_DigiHug_USB_Audio" I: sink.c: device.string = "front:1" I: sink.c: device.buffering.buffer_size = "352800" I: sink.c: device.buffering.fragment_size = "176400" I: sink.c: device.access_mode = "mmap+timer" I: sink.c: device.profile.name = "analog-stereo" I: sink.c: device.profile.description = "Analog Stereo" I: sink.c: device.description = "DigiHug_USB_Audio Analog Stereo" I: sink.c: alsa.mixer_name = "USB Mixer" I: sink.c: alsa.components = "USB1852:7022" I: sink.c: module-udev-detect.discovered = "1" I: sink.c: device.icon_name = "audio-card-usb" I: source.c: Created source 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo.monitor" with sample spec s16le 2ch 44100Hz and channel map front-left,front-right I: source.c: device.description = "Monitor of DigiHug_USB_Audio Analog Stereo" I: source.c: device.class = "monitor" I: source.c: alsa.card = "1" I: source.c: alsa.card_name = "DigiHug USB Audio" I: source.c: alsa.long_card_name = "FiiO DigiHug USB Audio at usb-0000:00:1a.0-1.2, full speed" I: source.c: alsa.driver_name = "snd_usb_audio" I: source.c: device.bus_path = "pci-0000:00:1a.0-usb-0:1.2:1.1" I: source.c: sysfs.path = "/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/sound/card1" I: source.c: udev.id = "usb-FiiO_DigiHug_USB_Audio-01-Audio" I: source.c: device.bus = "usb" I: source.c: device.vendor.id = "1852" I: source.c: device.vendor.name = "GYROCOM C&C Co., LTD" I: source.c: device.product.id = "7022" I: source.c: device.product.name = "DigiHug_USB_Audio" I: source.c: device.serial = "FiiO_DigiHug_USB_Audio" I: source.c: device.string = "1" I: source.c: module-udev-detect.discovered = "1" I: source.c: device.icon_name = "audio-card-usb" I: alsa-sink.c: Using 2.0 fragments of size 176400 bytes (1000.00ms), buffer size is 352800 bytes (2000.00ms) I: alsa-sink.c: Time scheduling watermark is 20.00ms I: alsa-sink.c: Hardware volume ranges from 0 to 110. I: alsa-sink.c: Using hardware volume control. Hardware dB scale not supported. I: alsa-sink.c: Using hardware mute control. I: core-util.c: Successfully enabled SCHED_RR scheduling for thread, with priority 5. I: alsa-sink.c: Starting playback. 
I: module.c: Loaded "module-alsa-card" (index: #4; argument: "device_id="1" name="usb-FiiO_DigiHug_USB_Audio-01-Audio" card_name="alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio" tsched=yes ignore_dB=no card_properties="module-udev-detect.discovered=1""). I: module-udev-detect.c: Card /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.1/sound/card1 (alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio) module loaded. I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device front:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device front:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device front:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device front:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device front:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device hw:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround40:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround40:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround40:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround40:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround40:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround41:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround41:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround41:2: No such file or directory 
I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround41:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround41:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround50:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround50:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround50:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround50:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround50:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround51:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround51:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround51:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround51:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround51:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround71:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround71:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround71:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround71:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device surround71:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) 
I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm_hw.c: open '/dev/snd/pcmC2D0p' failed (-2) I: alsa-util.c: Error opening PCM device iec958:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM a52:2 I: alsa-util.c: Error opening PCM device a52:2: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=2,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:2 I: alsa-util.c: Error opening PCM device hdmi:2: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=2,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:2 I: alsa-util.c: Error opening PCM device hdmi:2: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=2,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:2 I: alsa-util.c: Error opening PCM device hdmi:2: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=2,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:2 I: alsa-util.c: Error opening PCM device hdmi:2: No such file or directory I: (alsa-lib)confmisc.c: Unable to find definition 'cards.USB-Audio.pcm.hdmi.0:CARD=2,AES0=4,AES1=130,AES2=0,AES3=2' I: (alsa-lib)conf.c: function snd_func_refer returned error: No such file or directory I: (alsa-lib)conf.c: Evaluate error: No such file or directory I: (alsa-lib)pcm.c: Unknown PCM hdmi:2 I: alsa-util.c: Error opening PCM device hdmi:2: No such file or directory I: alsa-util.c: Device hw:2 doesn't support 44100 Hz, changed to 8000 Hz. 
I: alsa-util.c: Failed to set hardware parameters on plug:front:2: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:hw:2: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:2: Invalid argument I: alsa-util.c: Failed to set hardware parameters on plug:iec958:2: Invalid argument I: module-card-restore.c: Restoring profile for card alsa_card.usb-046d_08d7-01-U0x46d0x8d7. I: card.c: Created 1 "alsa_card.usb-046d_08d7-01-U0x46d0x8d7" I: module.c: Loaded "module-alsa-card" (index: #5; argument: "device_id="2" name="usb-046d_08d7-01-U0x46d0x8d7" card_name="alsa_card.usb-046d_08d7-01-U0x46d0x8d7" tsched=yes ignore_dB=no card_properties="module-udev-detect.discovered=1""). I: module-udev-detect.c: Card /devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.6/1-1.6:1.1/sound/card2 (alsa_card.usb-046d_08d7-01-U0x46d0x8d7) module loaded. I: module-udev-detect.c: Found 3 cards. I: module.c: Loaded "module-udev-detect" (index: #6; argument: ""). I: module.c: Loaded "module-esound-protocol-unix" (index: #7; argument: ""). I: module.c: Loaded "module-native-protocol-unix" (index: #8; argument: ""). I: module-default-device-restore.c: Saved default sink 'alsa_output.pci-0000_00_1b.0.analog-surround-41' not existant, not restoring default sink setting. I: module-default-device-restore.c: Saved default source 'alsa_output.pci-0000_00_1b.0.analog-surround-41.monitor' not existant, not restoring default source setting. I: module.c: Loaded "module-default-device-restore" (index: #9; argument: ""). I: module.c: Loaded "module-rescue-streams" (index: #10; argument: ""). I: module.c: Loaded "module-always-sink" (index: #11; argument: ""). I: module.c: Loaded "module-intended-roles" (index: #12; argument: ""). I: module.c: Loaded "module-suspend-on-idle" (index: #13; argument: ""). I: client.c: Created 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session2" I: module.c: Loaded "module-console-kit" (index: #14; argument: ""). I: module.c: Loaded "module-position-event-sounds" (index: #15; argument: ""). I: module.c: Loaded "module-cork-music-on-phone" (index: #16; argument: ""). E: module.c: Failed to open module "module-combine-sink": file not found E: main.c: Module load failed. E: main.c: Failed to initialize daemon. I: module.c: Unloading "module-device-restore" (index: #0). I: module.c: Unloaded "module-device-restore" (index: #0). I: module.c: Unloading "module-stream-restore" (index: #1). I: module.c: Unloaded "module-stream-restore" (index: #1). I: module.c: Unloading "module-card-restore" (index: #2). I: module.c: Unloaded "module-card-restore" (index: #2). I: module.c: Unloading "module-augment-properties" (index: #3). I: module.c: Unloaded "module-augment-properties" (index: #3). I: module.c: Unloading "module-alsa-card" (index: #4). I: sink.c: Freeing sink 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo" I: source.c: Freeing source 0 "alsa_output.usb-FiiO_DigiHug_USB_Audio-01-Audio.analog-stereo.monitor" I: card.c: Freed 0 "alsa_card.usb-FiiO_DigiHug_USB_Audio-01-Audio" I: module.c: Unloaded "module-alsa-card" (index: #4). I: module.c: Unloading "module-alsa-card" (index: #5). I: card.c: Freed 1 "alsa_card.usb-046d_08d7-01-U0x46d0x8d7" I: module.c: Unloaded "module-alsa-card" (index: #5). I: module.c: Unloading "module-udev-detect" (index: #6). I: module.c: Unloaded "module-udev-detect" (index: #6). I: module.c: Unloading "module-esound-protocol-unix" (index: #7). I: module.c: Unloaded "module-esound-protocol-unix" (index: #7). 
I: module.c: Unloading "module-native-protocol-unix" (index: #8). I: module.c: Unloaded "module-native-protocol-unix" (index: #8). I: module.c: Unloading "module-default-device-restore" (index: #9). I: module.c: Unloaded "module-default-device-restore" (index: #9). I: module.c: Unloading "module-rescue-streams" (index: #10). I: module.c: Unloaded "module-rescue-streams" (index: #10). I: module.c: Unloading "module-always-sink" (index: #11). I: module.c: Unloaded "module-always-sink" (index: #11). I: module.c: Unloading "module-intended-roles" (index: #12). I: module.c: Unloaded "module-intended-roles" (index: #12). I: module.c: Unloading "module-suspend-on-idle" (index: #13). I: module.c: Unloaded "module-suspend-on-idle" (index: #13). I: module.c: Unloading "module-console-kit" (index: #14). I: client.c: Freed 0 "ConsoleKit Session /org/freedesktop/ConsoleKit/Session2" I: module.c: Unloaded "module-console-kit" (index: #14). I: module.c: Unloading "module-position-event-sounds" (index: #15). I: module.c: Unloaded "module-position-event-sounds" (index: #15). I: module.c: Unloading "module-cork-music-on-phone" (index: #16). I: module.c: Unloaded "module-cork-music-on-phone" (index: #16). I: main.c: Daemon terminated.

I believe the relevant part is this:

E: module.c: Failed to open module "module-combine-sink": file not found
E: main.c: Module load failed.
E: main.c: Failed to initialize daemon.

I tried uninstalling and reinstalling pulseaudio, and I tried to find a way to install module-combine-sink. Nothing worked. I'm on a Debian Squeeze 32-bit machine. What can I do to fix this?
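
From what I can tell, module-combine-sink only exists in PulseAudio 1.0 and later, while Squeeze ships the 0.9.x series, where the module is still called module-combine; so my next guess is that some configuration line is requesting a module this build simply does not have. A sketch of the check I intend to run (the ~/.pulse/default.pa path is an assumption; the stray line could equally live in /etc/pulse/default.pa):

    # Find whichever configuration line asks PulseAudio to load the missing module...
    grep -rn "module-combine-sink" /etc/pulse ~/.pulse 2>/dev/null
    # ...and, if found, point it at the module name this release actually ships:
    sed -i 's/module-combine-sink/module-combine/' ~/.pulse/default.pa
    # Confirm which combine module is installed, then restart the daemon:
    ls /usr/lib/pulse-*/modules/ | grep combine
    pulseaudio --start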

    Read the article

  • Error while installing libmemcached

    - by Ahmet vardar
    I get this while installing libmemcached:

root@server [/libmemcached]# make
make all-am make[1]: Entering directory `/libmemcached' if /bin/sh ./libtool --tag=CXX --mode=compile g++ -DHAVE_CONFIG_H -I. -I. -I. -I. -I. -ggdb -DBUILDING_HASHKIT -MT libhashkit/libhashkit_libhashkit_la-aes.lo -MD -MP -MF "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo" -c -o libhashkit/libhashkit_libhashkit_la-aes.lo `test -f 'libhashkit/aes.cc' || echo './'`libhashkit/aes.cc; \ then mv -f "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo" "libhashkit/.deps/libhashkit_libhashkit_la-aes.Plo"; else rm -f "libhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo"; exit 1; fi ./libtool: line 866: X--tag=CXX: command not found ./libtool: line 899: libtool: ignoring unknown tag : command not found ./libtool: line 866: X--mode=compile: command not found ./libtool: line 1032: *** Warning: inferring the mode of operation is deprecated.: command not found ./libtool: line 1033: *** Future versions of Libtool will require --mode=MODE be specified.: command not found ./libtool: line 1176: Xg++: command not found ./libtool: line 1176: X-DHAVE_CONFIG_H: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-I.: command not found ./libtool: line 1176: X-ggdb: command not found ./libtool: line 1176: X-DBUILDING_HASHKIT: command not found ./libtool: line 1176: X-MT: command not found ./libtool: line 1176: Xlibhashkit/libhashkit_libhashkit_la-aes.lo: No such file or directory ./libtool: line 1176: X-MD: command not found ./libtool: line 1176: X-MP: command not found ./libtool: line 1176: X-MF: command not found ./libtool: line 1176: Xlibhashkit/.deps/libhashkit_libhashkit_la-aes.Tpo: No such file or directory ./libtool: line 1176: X-c: command not found ./libtool: line 1228: Xlibhashkit/libhashkit_libhashkit_la-aes.lo: No such file or directory ./libtool: line 1233: libtool: compile: cannot determine name of library object from `': command not found make[1]: *** [libhashkit/libhashkit_libhashkit_la-aes.lo] Error 1 make[1]: Leaving directory `/libmemcached' make: *** [all] Error 2

Output of ./configure:

checking build system type... x86_64-unknown-linux-gnu checking host system type... x86_64-unknown-linux-gnu checking target system type... x86_64-unknown-linux-gnu checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for gawk... gawk checking whether make sets $(MAKE)... yes checking for style of include used by make... GNU checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking dependency style of gcc... gcc3 checking dependency style of gcc... (cached) gcc3 checking how to run the C preprocessor... gcc -E checking for grep that handles long lines and -e... /bin/grep checking for egrep... /bin/grep -E checking for ANSI C header files... yes checking for sys/types.h... yes checking for sys/stat.h... yes checking for stdlib.h... yes checking for string.h... yes checking for memory.h... yes checking for strings.h... yes checking for inttypes.h...
yes checking for stdint.h... yes checking for unistd.h... yes checking minix/config.h usability... no checking minix/config.h presence... no checking for minix/config.h... no checking whether it is safe to define __EXTENSIONS__... yes checking for isainfo... no checking for g++... g++ checking whether we are using the GNU C++ compiler... yes checking whether g++ accepts -g... yes checking dependency style of g++... gcc3 checking dependency style of g++... (cached) gcc3 checking whether gcc and cc understand -c and -o together... yes checking how to create a ustar tar archive... gnutar checking whether __SUNPRO_C is declared... no checking whether __ICC is declared... no checking "C Compiler version--yes"... "gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)" checking "C++ Compiler version"... "g++ (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)" checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for size_t... yes checking for special C compiler options needed for large files... no checking for _FILE_OFFSET_BITS value needed for large files... no checking for library containing clock_gettime... -lrt checking sys/socket.h usability... yes checking sys/socket.h presence... yes checking for sys/socket.h... yes checking size of off_t... 8 checking size of size_t... 8 checking size of long long... 8 checking if time_t is unsigned... no checking for setsockopt... yes checking for bind... yes checking whether the compiler provides atomic builtins... yes checking assert.h usability... yes checking assert.h presence... yes checking for assert.h... yes checking whether to enable assertions... yes checking whether it is safe to use -fdiagnostics-show-option... yes checking whether it is safe to use -floop-parallelize-all... no checking whether it is safe to use -Wextra... yes checking whether it is safe to use -Wformat... yes checking whether it is safe to use -Wconversion... no checking whether it is safe to use -Wmissing-declarations from C++... no checking whether it is safe to use -Wframe-larger-than... no checking whether it is safe to use -Wlogical-op... no checking whether it is safe to use -Wredundant-decls from C++... yes checking whether it is safe to use -Wattributes from C++... no checking whether it is safe to use -Wno-attributes... no checking for perl... perl checking for dpkg-gensymbols... no checking for lcov... no checking for genhtml... no checking for sphinx-build... no checking for working -pipe... yes checking for bison... bison checking for flex... flex checking how to print strings... printf checking for a sed that does not truncate output... /bin/sed checking for fgrep... /bin/grep -F checking for ld used by gcc... /usr/bin/ld checking if the linker (/usr/bin/ld) is GNU ld... yes checking for BSD- or MS-compatible name lister (nm)... /usr/bin/nm -B checking the name lister (/usr/bin/nm -B) interface... BSD nm checking whether ln -s works... yes checking the maximum length of command line arguments... 98304 checking whether the shell understands some XSI constructs... yes checking whether the shell understands "+="... yes checking how to convert x86_64-unknown-linux-gnu file names to x86_64-unknown-linux-gnu format... func_convert_file_noop checking how to convert x86_64-unknown-linux-gnu file names to toolchain format... func_convert_file_noop checking for /usr/bin/ld option to reload object files... -r checking for objdump... objdump checking how to recognize dependent libraries... 
pass_all checking for dlltool... no checking how to associate runtime and link libraries... printf %s\n checking for ar... ar checking for archiver @FILE support... @ checking for strip... strip checking for ranlib... ranlib checking command to parse /usr/bin/nm -B output from gcc object... ok checking for sysroot... no checking for mt... no checking if : is a manifest tool... no checking for dlfcn.h... yes checking for objdir... .libs checking if gcc supports -fno-rtti -fno-exceptions... no checking for gcc option to produce PIC... -fPIC -DPIC checking if gcc PIC flag -fPIC -DPIC works... yes checking if gcc static flag -static works... yes checking if gcc supports -c -o file.o... yes checking if gcc supports -c -o file.o... (cached) yes checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking whether -lc should be explicitly linked in... no checking dynamic linker characteristics... GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether stripping libraries is possible... yes checking if libtool supports shared libraries... yes checking whether to build shared libraries... yes checking whether to build static libraries... yes checking how to run the C++ preprocessor... g++ -E checking for ld used by g++... /usr/bin/ld -m elf_x86_64 checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking for g++ option to produce PIC... -fPIC -DPIC checking if g++ PIC flag -fPIC -DPIC works... yes checking if g++ static flag -static works... yes checking if g++ supports -c -o file.o... yes checking if g++ supports -c -o file.o... (cached) yes checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes checking dynamic linker characteristics... (cached) GNU/Linux ld.so checking how to hardcode library paths into programs... immediate checking whether the -Werror option is usable... yes checking for simple visibility declarations... yes checking for ISO C++ 98 include files... checking whether memcached executable path has been provided... no checking for memcached... /usr/local/bin/memcached checking whether memcached_sasl executable path has been provided... no checking for memcached_sasl... no checking whether gearmand executable path has been provided... no checking for gearmand... no checking libgearman/gearmand.h usability... no checking libgearman/gearmand.h presence... no checking for libgearman/gearmand.h... no checking for library containing getopt_long... none required checking for library containing gethostbyname... none required checking for the pthreads library -lpthreads... no checking whether pthreads work without any flags... yes checking for joinable pthread attribute... PTHREAD_CREATE_JOINABLE checking if more special flags are required for pthreads... no checking for PTHREAD_PRIO_INHERIT... yes checking the location of cstdint... configure: WARNING: Could not find a cstdint header. <stdint.h> checking the location of cinttypes... configure: WARNING: Could not find a cinttypes header. <inttypes.h> checking whether byte ordering is bigendian... no checking for htonll... no checking for working SO_SNDTIMEO... yes checking for working SO_RCVTIMEO... yes checking for supported struct padding... yes checking for alarm... yes checking for dup2... yes checking for getline... yes checking for gettimeofday... yes checking for memchr... yes checking for memmove... 
yes checking for memset... yes checking for pipe2... no checking for select... yes checking for setenv... yes checking for socket... yes checking for sqrt... yes checking for strcasecmp... yes checking for strchr... yes checking for strdup... yes checking for strerror... yes checking for strtol... yes checking for strtoul... yes checking for strtoull... yes checking arpa/inet.h usability... yes checking arpa/inet.h presence... yes checking for arpa/inet.h... yes checking fcntl.h usability... yes checking fcntl.h presence... yes checking for fcntl.h... yes checking libintl.h usability... yes checking libintl.h presence... yes checking for libintl.h... yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking malloc.h usability... yes checking malloc.h presence... yes checking for malloc.h... yes checking netdb.h usability... yes checking netdb.h presence... yes checking for netdb.h... yes checking netinet/in.h usability... yes checking netinet/in.h presence... yes checking for netinet/in.h... yes checking stddef.h usability... yes checking stddef.h presence... yes checking for stddef.h... yes checking sys/time.h usability... yes checking sys/time.h presence... yes checking for sys/time.h... yes checking execinfo.h usability... yes checking execinfo.h presence... yes checking for execinfo.h... yes checking cxxabi.h usability... yes checking cxxabi.h presence... yes checking for cxxabi.h... yes checking sys/sysctl.h usability... yes checking sys/sysctl.h presence... yes checking for sys/sysctl.h... yes checking umem.h usability... no checking umem.h presence... no checking for umem.h... no checking for C++ compiler vendor... gnu checking for working alloca.h... yes checking for alloca... yes checking for error_at_line... yes checking for pid_t... yes checking vfork.h usability... no checking vfork.h presence... no checking for vfork.h... no checking for fork... yes checking for vfork... yes checking for working fork... yes checking for working vfork... (cached) yes checking for stdlib.h... (cached) yes checking for GNU libc compatible malloc... yes checking for stdlib.h... (cached) yes checking for GNU libc compatible realloc... yes checking whether strerror_r is declared... yes checking for strerror_r... yes checking whether strerror_r returns char *... yes checking for stdbool.h that conforms to C99... yes checking for _Bool... no checking for int16_t... yes checking for int32_t... yes checking for int64_t... yes checking for int8_t... yes checking for off_t... yes checking for pid_t... (cached) yes checking for ssize_t... yes checking for uint16_t... yes checking for uint32_t... yes checking for uint64_t... yes checking for uint8_t... yes checking whether byte ordering is bigendian... (cached) no checking for an ANSI C-conforming const... yes checking for inline... inline checking for working volatile... yes checking for C/C++ restrict keyword... __restrict checking whether the compiler supports GCC C++ ABI name demangling... yes checking sasl/sasl.h usability... no checking sasl/sasl.h presence... no checking for sasl/sasl.h... no checking uuid/uuid.h usability... yes checking uuid/uuid.h presence... yes checking for uuid/uuid.h... yes checking for main in -luuid... yes checking for clock_gettime in -lrt... yes checking for floor in -lm... yes checking for sigignore... yes checking atomic.h usability... no checking atomic.h presence... no checking for atomic.h... no checking for setppriv... no checking for winsock2.h... 
no checking for poll.h... yes checking for sys/wait.h... yes checking for fnmatch.h... yes checking for MSG_NOSIGNAL... yes checking for MSG_DONTWAIT... yes checking for MSG_MORE... yes checking event.h usability... yes checking event.h presence... yes checking for event.h... yes checking for main in -levent... yes checking for endianness... little configure: creating ./config.status config.status: creating Makefile config.status: creating docs/conf.py config.status: creating libhashkit-1.0/configure.h config.status: creating libmemcached-1.0/configure.h config.status: creating libmemcached-1.2/configure.h config.status: creating libmemcached-2.0/configure.h config.status: creating support/libmemcached.pc config.status: creating support/libmemcached.spec config.status: creating support/libmemcached-fc.spec config.status: creating libtest/version.h config.status: creating config.h config.status: config.h is unchanged config.status: executing depfiles commands config.status: executing libtool commands

--- Configuration summary for libmemcached version 1.0.6

* Installation prefix: /usr/local
* System type: unknown-linux-gnu
* Host CPU: x86_64
* C Compiler: gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52)
* Assertions enabled: yes
* Debug enabled: no
* Warnings as failure: no
* SASL support:

---

Does anyone know how to solve this?
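
From what I can tell, errors like ./libtool: line 866: X--tag=CXX: command not found usually point at a broken or stale generated ./libtool script (for example, one produced by a mismatched autotools installation) rather than at a missing compiler, so one thing to try is regenerating the build system. A sketch, assuming the autoconf, automake, and libtool packages are installed (package names vary by distribution):

    cd /libmemcached
    make distclean    # throw away the generated Makefiles and the stale ./libtool
    autoreconf -fvi   # regenerate configure and ltmain.sh from the local autotools
    ./configure       # produce a fresh ./libtool that matches this system
    make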

    Read the article

  • Garbled text in Screen [closed]

    - by Prabin Dahal
    The graphical interface on my system is garbled with some text. At first I thought it was due to the Java and Tomcat I had installed, but after removing Java and Tomcat it is still the same. I am using Ubuntu Server, and I have installed the Xfce desktop environment with the Onboard soft keyboard. I have added my dmesg output to this message. What is the problem here? I am not able to figure it out. Thank you for your help. Prabin

[ 0.390936] usbcore: registered new interface driver usbfs [ 0.391006] usbcore: registered new interface driver hub [ 0.391147] usbcore: registered new device driver usb [ 0.391580] PCI: Using ACPI for IRQ routing [ 0.400509] PCI: pci_cache_line_size set to 64 bytes [ 0.400669] reserve RAM buffer: 000000000009ec00 - 000000000009ffff [ 0.400681] reserve RAM buffer: 000000007f597000 - 000000007fffffff [ 0.400699] reserve RAM buffer: 000000007f6f0000 - 000000007fffffff [ 0.401135] NetLabel: Initializing [ 0.401155] NetLabel: domain hash size = 128 [ 0.401168] NetLabel: protocols = UNLABELED CIPSOv4 [ 0.401212] NetLabel: unlabeled traffic allowed by default [ 0.401466] HPET: 3 timers in total, 0 timers will be used for per-cpu timer [ 0.401494] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 [ 0.401520] hpet0: 3 comparators, 64-bit 14.318180 MHz counter [ 0.408228] Switching to clocksource hpet [ 0.434341] AppArmor: AppArmor Filesystem Enabled [ 0.434447] pnp: PnP ACPI init [ 0.434531] ACPI: bus type pnp registered [ 0.434784] pnp 00:00: [bus 00-ff] [ 0.434794] pnp 00:00: [io 0x0cf8-0x0cff] [ 0.434804] pnp 00:00: [io 0x0000-0x0cf7 window] [ 0.434813] pnp 00:00: [io 0x0d00-0xffff window] [ 0.434822] pnp 00:00: [mem 0x000a0000-0x000bffff window] [ 0.434831] pnp 00:00: [mem 0x00000000 window] [ 0.434840] pnp 00:00: [mem 0x80000000-0xffffffff window] [ 0.435018] pnp 00:00: Plug and Play ACPI device, IDs PNP0a08 PNP0a03 (active) [ 0.435526] pnp 00:01: [mem 0xe0000000-0xefffffff] [ 0.435537] pnp 00:01: [mem 0x7f700000-0x7f7fffff] [ 0.435545] pnp 00:01: [mem 0x7f800000-0x7fffffff] [ 0.435554] pnp 00:01: [mem 0xfee00000-0xfeefffff] [ 0.435727] system 00:01: [mem 0xe0000000-0xefffffff] has been reserved [ 0.435754] system 00:01: [mem 0x7f700000-0x7f7fffff] has been reserved [ 0.435775] system 00:01: [mem 0x7f800000-0x7fffffff] has been reserved [ 0.435796] system 00:01: [mem 0xfee00000-0xfeefffff] has been reserved [ 0.435818] system 00:01: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.436233] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436245] pnp 00:02: [io 0x0000-0xffffffffffffffff disabled] [ 0.436414] system 00:02: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.436512] pnp 00:03: [io 0x0060] [ 0.436521] pnp 00:03: [io 0x0064] [ 0.436548] pnp 00:03: [irq 1] [ 0.436682] pnp 00:03: Plug and Play ACPI device, IDs PNP0303 PNP030b (active) [ 0.436825] pnp 00:04: [irq 12] [ 0.436958] pnp 00:04: Plug and Play ACPI device, IDs PNP0f03 PNP0f13 (active) [ 0.437835] pnp 00:05: [io 0x03f8-0x03ff] [ 0.437861] pnp 00:05: [irq 4] [ 0.437870] pnp 00:05: [dma 0 disabled] [ 0.438142] pnp 00:05: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439014] pnp 00:06: [io 0x02f8-0x02ff] [ 0.439036] pnp 00:06: [irq 3] [ 0.439045] pnp 00:06: [dma 0 disabled] [ 0.439297] pnp 00:06: Plug and Play ACPI device, IDs PNP0501 (active) [ 0.439346] pnp 00:07: [io 0x0000-0x000f] [ 0.439355] pnp 00:07: [io 0x0081-0x0083] [ 0.439363] pnp 00:07: [io 0x0087] [ 0.439371] pnp 00:07: [io 0x0089-0x008b] [ 0.439380] pnp 00:07: [io 0x008f] [ 0.439388] pnp 00:07: [io 0x00c0-0x00df] [ 0.439563] system 00:07:
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.439617] pnp 00:08: [io 0x0070-0x0077] [ 0.439639] pnp 00:08: [irq 8] [ 0.439751] pnp 00:08: Plug and Play ACPI device, IDs PNP0b00 (active) [ 0.439788] pnp 00:09: [io 0x0061] [ 0.439893] pnp 00:09: Plug and Play ACPI device, IDs PNP0800 (active) [ 0.439977] pnp 00:0a: [io 0x0010-0x001f] [ 0.439986] pnp 00:0a: [io 0x0022-0x003f] [ 0.439994] pnp 00:0a: [io 0x0044-0x005f] [ 0.440055] pnp 00:0a: [io 0x0063] [ 0.440063] pnp 00:0a: [io 0x0065] [ 0.440071] pnp 00:0a: [io 0x0067-0x006f] [ 0.440079] pnp 00:0a: [io 0x0072-0x007f] [ 0.440086] pnp 00:0a: [io 0x0080] [ 0.440094] pnp 00:0a: [io 0x0084-0x0086] [ 0.440102] pnp 00:0a: [io 0x0088] [ 0.440109] pnp 00:0a: [io 0x008c-0x008e] [ 0.440117] pnp 00:0a: [io 0x0090-0x009f] [ 0.440125] pnp 00:0a: [io 0x00a2-0x00bf] [ 0.440133] pnp 00:0a: [io 0x00e0-0x00ef] [ 0.440141] pnp 00:0a: [io 0x04d0-0x04d1] [ 0.440150] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440160] pnp 00:0a: [io 0x0000-0xffffffffffffffff disabled] [ 0.440168] pnp 00:0a: [io 0x03f4] [ 0.440175] pnp 00:0a: [io 0x03f5] [ 0.440183] pnp 00:0a: [io 0x0374] [ 0.440190] pnp 00:0a: [io 0x0375] [ 0.440405] system 00:0a: [io 0x04d0-0x04d1] has been reserved [ 0.440432] system 00:0a: [io 0x03f4] has been reserved [ 0.440451] system 00:0a: [io 0x03f5] has been reserved [ 0.440469] system 00:0a: [io 0x0374] has been reserved [ 0.440488] system 00:0a: [io 0x0375] has been reserved [ 0.440508] system 00:0a: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.440550] pnp 00:0b: [io 0x00f0-0x00ff] [ 0.440572] pnp 00:0b: [irq 13] [ 0.440691] pnp 00:0b: Plug and Play ACPI device, IDs PNP0c04 (active) [ 0.440770] pnp 00:0c: [io 0x0810] [ 0.440779] pnp 00:0c: [io 0x0800-0x080f] [ 0.440787] pnp 00:0c: [io 0xffff] [ 0.440947] system 00:0c: [io 0x0810] has been reserved [ 0.440970] system 00:0c: [io 0x0800-0x080f] has been reserved [ 0.440989] system 00:0c: [io 0xffff] has been reserved [ 0.441010] system 00:0c: Plug and Play ACPI device, IDs PNP0c02 (active) [ 0.441620] pnp 00:0d: [io 0x0900-0x097f] [ 0.441630] pnp 00:0d: [io 0x09c0-0x09ff] [ 0.441639] pnp 00:0d: [io 0x0400-0x043f] [ 0.441647] pnp 00:0d: [io 0x0480-0x04bf] [ 0.441656] pnp 00:0d: [mem 0xfec00000-0xfec85fff] [ 0.441664] pnp 00:0d: [mem 0xfed1c000-0xfed1ffff] [ 0.441673] pnp 00:0d: [mem 0x000c0000-0x000dffff] [ 0.441689] pnp 00:0d: [mem 0x000e0000-0x000effff] [ 0.441697] pnp 00:0d: [mem 0x000f0000-0x000fffff] [ 0.441706] pnp 00:0d: [mem 0xff800000-0xffffffff] [ 0.441911] system 00:0d: [io 0x0900-0x097f] has been reserved [ 0.441935] system 00:0d: [io 0x09c0-0x09ff] has been reserved [ 0.441955] system 00:0d: [io 0x0400-0x043f] has been reserved [ 0.441975] system 00:0d: [io 0x0480-0x04bf] has been reserved [ 0.441997] system 00:0d: [mem 0xfec00000-0xfec85fff] could not be reserved [ 0.442019] system 00:0d: [mem 0xfed1c000-0xfed1ffff] has been reserved [ 0.442040] system 00:0d: [mem 0x000c0000-0x000dffff] could not be reserved [ 0.442061] system 00:0d: [mem 0x000e0000-0x000effff] could not be reserved [ 0.442082] system 00:0d: [mem 0x000f0000-0x000fffff] could not be reserved [ 0.442103] system 00:0d: [mem 0xff800000-0xffffffff] has been reserved [ 0.442126] system 00:0d: Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.442308] pnp 00:0e: [mem 0xfed00000-0xfed003ff] [ 0.442454] pnp 00:0e: Plug and Play ACPI device, IDs PNP0103 (active) [ 0.442569] pnp 00:0f: [mem 0x7f6f0000-0x7f6fffff] [ 0.442762] system 00:0f: [mem 0x7f6f0000-0x7f6fffff] has been reserved [ 0.442788] system 00:0f: 
Plug and Play ACPI device, IDs PNP0c01 (active) [ 0.443360] pnp: PnP ACPI: found 16 devices [ 0.443378] ACPI: ACPI bus type pnp unregistered [ 0.443395] PnPBIOS: Disabled by ACPI PNP [ 0.486106] PCI: max bus depth: 3 pci_try_num: 4 [ 0.486189] pci 0000:00:1c.0: PCI bridge to [bus 01-01] [ 0.486217] pci 0000:00:1c.0: bridge window [io 0xe000-0xefff] [ 0.486241] pci 0000:00:1c.0: bridge window [mem 0xd0100000-0xd01fffff] [ 0.486266] pci 0000:00:1c.0: bridge window [mem 0xff700000-0xff7fffff pref] [ 0.486298] pci 0000:03:01.0: PCI bridge to [bus 04-04] [ 0.486319] pci 0000:03:01.0: bridge window [io 0xd000-0xdfff] [ 0.486348] pci 0000:03:01.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486374] pci 0000:03:01.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486406] pci 0000:03:02.0: PCI bridge to [bus 05-05] [ 0.486444] pci 0000:03:03.0: PCI bridge to [bus 06-06] [ 0.486479] pci 0000:02:00.0: PCI bridge to [bus 03-06] [ 0.486499] pci 0000:02:00.0: bridge window [io 0xd000-0xdfff] [ 0.486522] pci 0000:02:00.0: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486545] pci 0000:02:00.0: bridge window [mem 0xff600000-0xff6fffff 64bit pref] [ 0.486575] pci 0000:00:1c.1: PCI bridge to [bus 02-06] [ 0.486593] pci 0000:00:1c.1: bridge window [io 0xd000-0xdfff] [ 0.486615] pci 0000:00:1c.1: bridge window [mem 0xd0000000-0xd00fffff] [ 0.486637] pci 0000:00:1c.1: bridge window [mem 0xff600000-0xff6fffff pref] [ 0.486710] pci 0000:00:1c.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 0.486735] pci 0000:00:1c.0: setting latency timer to 64 [ 0.486774] pci 0000:00:1c.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17 [ 0.486796] pci 0000:00:1c.1: setting latency timer to 64 [ 0.486817] pci 0000:02:00.0: setting latency timer to 64 [ 0.486836] pci 0000:03:01.0: setting latency timer to 64 [ 0.486858] pci 0000:03:02.0: setting latency timer to 64 [ 0.486880] pci 0000:03:03.0: setting latency timer to 64 [ 0.486893] pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7] [ 0.486902] pci_bus 0000:00: resource 5 [io 0x0d00-0xffff] [ 0.486912] pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff] [ 0.486922] pci_bus 0000:00: resource 7 [mem 0x80000000-0xffffffff] [ 0.486932] pci_bus 0000:01: resource 0 [io 0xe000-0xefff] [ 0.486941] pci_bus 0000:01: resource 1 [mem 0xd0100000-0xd01fffff] [ 0.486951] pci_bus 0000:01: resource 2 [mem 0xff700000-0xff7fffff pref] [ 0.486961] pci_bus 0000:02: resource 0 [io 0xd000-0xdfff] [ 0.486970] pci_bus 0000:02: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.486980] pci_bus 0000:02: resource 2 [mem 0xff600000-0xff6fffff pref] [ 0.486989] pci_bus 0000:03: resource 0 [io 0xd000-0xdfff] [ 0.486998] pci_bus 0000:03: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487008] pci_bus 0000:03: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487018] pci_bus 0000:04: resource 0 [io 0xd000-0xdfff] [ 0.487028] pci_bus 0000:04: resource 1 [mem 0xd0000000-0xd00fffff] [ 0.487038] pci_bus 0000:04: resource 2 [mem 0xff600000-0xff6fffff 64bit pref] [ 0.487177] NET: Registered protocol family 2 [ 0.487405] IP route cache hash table entries: 32768 (order: 5, 131072 bytes) [ 0.488397] TCP established hash table entries: 131072 (order: 8, 1048576 bytes) [ 0.489792] TCP bind hash table entries: 65536 (order: 7, 524288 bytes) [ 0.490493] TCP: Hash tables configured (established 131072 bind 65536) [ 0.490525] TCP reno registered [ 0.490551] UDP hash table entries: 512 (order: 2, 16384 bytes) [ 0.490590] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes) [ 0.490898] NET: Registered protocol family 1 [ 
0.490970] pci 0000:00:02.0: Boot video device [ 0.491052] pci 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 0.491092] pci 0000:00:1d.0: PCI INT A disabled [ 0.491134] pci 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 0.491174] pci 0000:00:1d.1: PCI INT B disabled [ 0.491220] pci 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 0.491259] pci 0000:00:1d.2: PCI INT C disabled [ 0.491307] pci 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 0.864431] Freeing initrd memory: 13820k freed [ 2.088042] pci 0000:00:1d.7: EHCI: BIOS handoff failed (BIOS bug?) 01010001 [ 2.088207] pci 0000:00:1d.7: PCI INT D disabled [ 2.088267] PCI: CLS 64 bytes, default 64 [ 2.089248] audit: initializing netlink socket (disabled) [ 2.089287] type=2000 audit(1349363630.084:1): initialized [ 2.144783] highmem bounce pool size: 64 pages [ 2.144808] HugeTLB registered 2 MB page size, pre-allocated 0 pages [ 2.160057] VFS: Disk quotas dquot_6.5.2 [ 2.160232] Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) [ 2.161716] fuse init (API version 7.17) [ 2.161995] msgmni has been set to 1713 [ 2.162925] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253) [ 2.163008] io scheduler noop registered [ 2.163023] io scheduler deadline registered [ 2.163048] io scheduler cfq registered (default) [ 2.163339] pcieport 0000:00:1c.0: setting latency timer to 64 [ 2.163530] pcieport 0000:00:1c.1: setting latency timer to 64 [ 2.163706] pcieport 0000:02:00.0: setting latency timer to 64 [ 2.163873] pcieport 0000:03:01.0: setting latency timer to 64 [ 2.163964] pcieport 0000:03:01.0: irq 40 for MSI/MSI-X [ 2.164193] pcieport 0000:03:02.0: setting latency timer to 64 [ 2.164272] pcieport 0000:03:02.0: irq 41 for MSI/MSI-X [ 2.164453] pcieport 0000:03:03.0: setting latency timer to 64 [ 2.164531] pcieport 0000:03:03.0: irq 42 for MSI/MSI-X [ 2.164783] pcieport 0000:00:1c.0: Signaling PME through PCIe PME interrupt [ 2.164801] pci 0000:01:00.0: Signaling PME through PCIe PME interrupt [ 2.164816] pcie_pme 0000:00:1c.0:pcie01: service driver pcie_pme loaded [ 2.164853] pcieport 0000:00:1c.1: Signaling PME through PCIe PME interrupt [ 2.164867] pcieport 0000:02:00.0: Signaling PME through PCIe PME interrupt [ 2.164880] pcieport 0000:03:01.0: Signaling PME through PCIe PME interrupt [ 2.164892] pci 0000:04:00.0: Signaling PME through PCIe PME interrupt [ 2.164904] pcieport 0000:03:02.0: Signaling PME through PCIe PME interrupt [ 2.164917] pcieport 0000:03:03.0: Signaling PME through PCIe PME interrupt [ 2.164932] pcie_pme 0000:00:1c.1:pcie01: service driver pcie_pme loaded [ 2.164988] pci_hotplug: PCI Hot Plug PCI Core version: 0.5 [ 2.165115] pciehp 0000:00:1c.0:pcie04: HPC vendor_id 8086 device_id 8110 ss_vid 8086 ss_did 8119 [ 2.165177] pciehp 0000:00:1c.0:pcie04: service driver pciehp loaded [ 2.165199] pciehp 0000:00:1c.1:pcie04: HPC vendor_id 8086 device_id 8112 ss_vid 8086 ss_did 8119 [ 2.165260] pciehp 0000:00:1c.1:pcie04: service driver pciehp loaded [ 2.165290] pciehp: PCI Express Hot Plug Controller Driver version: 0.4 [ 2.165488] intel_idle: MWAIT substates: 0x3020220 [ 2.165508] intel_idle: v0.4 model 0x1C [ 2.165513] intel_idle: lapic_timer_reliable_states 0x2 [ 2.165519] Marking TSC unstable due to TSC halts in idle states deeper than C2 [ 2.165779] input: Lid Switch as /devices/LNXSYSTM:00/device:00/PNP0C0D:00/input/input0 [ 2.165855] ACPI: Lid Switch [LID] [ 2.165983] input: Power Button as /devices/LNXSYSTM:00/device:00/PNP0C0C:00/input/input1 [ 2.166005] 
ACPI: Power Button [PWRB] [ 2.173811] thermal LNXTHERM:00: registered as thermal_zone0 [ 2.173829] ACPI: Thermal Zone [TZ00] (48 C) [ 2.174004] thermal LNXTHERM:01: registered as thermal_zone1 [ 2.174018] ACPI: Thermal Zone [TZ01] (34 C) [ 2.174194] thermal LNXTHERM:02: registered as thermal_zone2 [ 2.174207] ACPI: Thermal Zone [TZ02] (34 C) [ 2.174378] thermal LNXTHERM:03: registered as thermal_zone3 [ 2.174392] ACPI: Thermal Zone [TZ03] (34 C) [ 2.174503] ERST: Table is not found! [ 2.174513] GHES: HEST is not enabled! [ 2.174601] isapnp: Scanning for PnP cards... [ 2.176175] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled [ 2.196702] serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.292409] serial8250: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.528909] isapnp: No Plug & Play device found [ 2.588733] 00:05: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A [ 2.624523] 00:06: ttyS1 at I/O 0x2f8 (irq = 3) is a 16550A [ 2.640702] Linux agpgart interface v0.103 [ 2.645138] brd: module loaded [ 2.647452] loop: module loaded [ 2.648149] pata_acpi 0000:00:1f.1: setting latency timer to 64 [ 2.649238] Fixed MDIO Bus: probed [ 2.649315] tun: Universal TUN/TAP device driver, 1.6 [ 2.649327] tun: (C) 1999-2004 Max Krasnyansky <[email protected]> [ 2.649524] PPP generic driver version 2.4.2 [ 2.649824] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver [ 2.649884] ehci_hcd 0000:00:1d.7: PCI INT D -> GSI 23 (level, low) -> IRQ 23 [ 2.649937] ehci_hcd 0000:00:1d.7: setting latency timer to 64 [ 2.649946] ehci_hcd 0000:00:1d.7: EHCI Host Controller [ 2.650082] ehci_hcd 0000:00:1d.7: new USB bus registered, assigned bus number 1 [ 2.650148] ehci_hcd 0000:00:1d.7: debug port 1 [ 2.654045] ehci_hcd 0000:00:1d.7: cache line size of 64 is not supported [ 2.654093] ehci_hcd 0000:00:1d.7: irq 23, io mem 0xd02c4000 [ 2.668035] ehci_hcd 0000:00:1d.7: USB 2.0 started, EHCI 1.00 [ 2.668392] hub 1-0:1.0: USB hub found [ 2.668413] hub 1-0:1.0: 8 ports detected [ 2.668618] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver [ 2.668666] uhci_hcd: USB Universal Host Controller Interface driver [ 2.668726] uhci_hcd 0000:00:1d.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20 [ 2.668751] uhci_hcd 0000:00:1d.0: setting latency timer to 64 [ 2.668759] uhci_hcd 0000:00:1d.0: UHCI Host Controller [ 2.668910] uhci_hcd 0000:00:1d.0: new USB bus registered, assigned bus number 2 [ 2.668981] uhci_hcd 0000:00:1d.0: irq 20, io base 0x0000f040 [ 2.669335] hub 2-0:1.0: USB hub found [ 2.669355] hub 2-0:1.0: 2 ports detected [ 2.669508] uhci_hcd 0000:00:1d.1: PCI INT B -> GSI 21 (level, low) -> IRQ 21 [ 2.669531] uhci_hcd 0000:00:1d.1: setting latency timer to 64 [ 2.669538] uhci_hcd 0000:00:1d.1: UHCI Host Controller [ 2.669675] uhci_hcd 0000:00:1d.1: new USB bus registered, assigned bus number 3 [ 2.669739] uhci_hcd 0000:00:1d.1: irq 21, io base 0x0000f020 [ 2.670099] hub 3-0:1.0: USB hub found [ 2.670118] hub 3-0:1.0: 2 ports detected [ 2.670271] uhci_hcd 0000:00:1d.2: PCI INT C -> GSI 22 (level, low) -> IRQ 22 [ 2.670295] uhci_hcd 0000:00:1d.2: setting latency timer to 64 [ 2.670302] uhci_hcd 0000:00:1d.2: UHCI Host Controller [ 2.670435] uhci_hcd 0000:00:1d.2: new USB bus registered, assigned bus number 4 [ 2.670502] uhci_hcd 0000:00:1d.2: irq 22, io base 0x0000f000 [ 2.670869] hub 4-0:1.0: USB hub found [ 2.670888] hub 4-0:1.0: 2 ports detected [ 2.671186] usbcore: registered new interface driver libusual [ 2.671332] i8042: PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12 [ 2.673408] 
serio: i8042 KBD port at 0x60,0x64 irq 1 [ 2.673437] serio: i8042 AUX port at 0x60,0x64 irq 12 [ 2.673844] mousedev: PS/2 mouse device common for all mice [ 2.674272] rtc_cmos 00:08: RTC can wake from S4 [ 2.674482] rtc_cmos 00:08: rtc core: registered rtc_cmos as rtc0 [ 2.674529] rtc0: alarms up to one year, y3k, 242 bytes nvram, hpet irqs [ 2.674691] device-mapper: uevent: version 1.0.3 [ 2.674903] device-mapper: ioctl: 4.22.0-ioctl (2011-10-19) initialised: [email protected] [ 2.675024] EISA: Probing bus 0 at eisa.0 [ 2.675037] EISA: Cannot allocate resource for mainboard [ 2.675050] Cannot allocate resource for EISA slot 1 [ 2.675061] Cannot allocate resource for EISA slot 2 [ 2.675072] Cannot allocate resource for EISA slot 3 [ 2.675083] Cannot allocate resource for EISA slot 4 [ 2.675094] Cannot allocate resource for EISA slot 5 [ 2.675105] Cannot allocate resource for EISA slot 6 [ 2.675116] Cannot allocate resource for EISA slot 7 [ 2.675127] Cannot allocate resource for EISA slot 8 [ 2.675137] EISA: Detected 0 cards. [ 2.675161] cpufreq-nforce2: No nForce2 chipset. [ 2.675401] cpuidle: using governor ladder [ 2.675786] cpuidle: using governor menu [ 2.675797] EFI Variables Facility v0.08 2004-May-17 [ 2.676429] TCP cubic registered [ 2.676751] NET: Registered protocol family 10 [ 2.678031] NET: Registered protocol family 17 [ 2.678052] Registering the dns_resolver key type [ 2.678107] Using IPI No-Shortcut mode [ 2.678515] PM: Hibernation image not present or could not be loaded. [ 2.678543] registered taskstats version 1 [ 2.701145] Magic number: 0:84:234 [ 2.701312] rtc_cmos 00:08: setting system clock to 2012-10-04 15:13:51 UTC (1349363631) [ 2.702280] BIOS EDD facility v0.16 2004-Jun-25, 0 devices found [ 2.702294] EDD information not available. [ 2.702858] Freeing unused kernel memory: 740k freed [ 2.703630] Write protecting the kernel text: 5816k [ 2.703692] Write protecting the kernel read-only data: 2376k [ 2.703706] NX-protecting the kernel data: 4424k [ 2.751226] udevd[84]: starting version 175 [ 2.980162] usb 1-1: new high-speed USB device number 2 using ehci_hcd [ 3.001394] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.001474] r8169 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 3.001554] r8169 0000:01:00.0: setting latency timer to 64 [ 3.001654] r8169 0000:01:00.0: irq 43 for MSI/MSI-X [ 3.004220] r8169 0000:01:00.0: eth0: RTL8168c/8111c at 0xf8416000, 00:18:92:03:10:46, XID 1c4000c0 IRQ 43 [ 3.004254] r8169 0000:01:00.0: eth0: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.004347] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded [ 3.005085] r8169 0000:04:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18 [ 3.005182] r8169 0000:04:00.0: setting latency timer to 64 [ 3.005292] r8169 0000:04:00.0: irq 44 for MSI/MSI-X [ 3.007187] r8169 0000:04:00.0: eth1: RTL8168c/8111c at 0xf8418000, 00:18:92:03:10:47, XID 1c4000c0 IRQ 44 [ 3.007224] r8169 0000:04:00.0: eth1: jumbo features [frames: 6128 bytes, tx checksumming: ko] [ 3.034417] pata_sch 0000:00:1f.1: version 0.2 [ 3.034518] pata_sch 0000:00:1f.1: setting latency timer to 64 [ 3.036698] scsi0 : pata_sch [ 3.039842] scsi1 : pata_sch [ 3.040913] ata1: PATA max UDMA/100 cmd 0x1f0 ctl 0x3f6 bmdma 0xf060 irq 14 [ 3.040940] ata2: PATA max UDMA/100 cmd 0x170 ctl 0x376 bmdma 0xf068 irq 15 [ 3.131850] Initializing USB Mass Storage driver... [ 3.136405] scsi2 : usb-storage 1-1:1.0 [ 3.136642] usbcore: registered new interface driver usb-storage [ 3.136656] USB Mass Storage support registered. 
[ 3.524465] usb 3-1: new low-speed USB device number 2 using uhci_hcd [ 3.968144] usb 3-2: new full-speed USB device number 3 using uhci_hcd [ 4.137903] scsi 2:0:0:0: Direct-Access TS TS4GUFM-H 1100 PQ: 0 ANSI: 0 CCS [ 4.140067] sd 2:0:0:0: Attached scsi generic sg0 type 0 [ 4.140590] sd 2:0:0:0: [sda] 8028160 512-byte logical blocks: (4.11 GB/3.82 GiB) [ 4.141597] sd 2:0:0:0: [sda] Write Protect is off [ 4.141618] sd 2:0:0:0: [sda] Mode Sense: 43 00 00 00 [ 4.142974] sd 2:0:0:0: [sda] No Caching mode page present [ 4.143000] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.145837] sd 2:0:0:0: [sda] No Caching mode page present [ 4.145858] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.147931] sda: sda1 sda2 < sda5 > [ 4.150972] sd 2:0:0:0: [sda] No Caching mode page present [ 4.151001] sd 2:0:0:0: [sda] Assuming drive cache: write through [ 4.151023] sd 2:0:0:0: [sda] Attached SCSI disk [ 4.249168] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.0/input/input2 [ 4.249579] generic-usb 0003:046A:004B.0001: input,hidraw0: USB HID v1.11 Keyboard [HID 046a:004b] on usb-0000:00:1d.1-1/input0 [ 4.287805] input: HID 046a:004b as /devices/pci0000:00/0000:00:1d.1/usb3/3-1/3-1:1.1/input/input3 [ 4.289235] generic-usb 0003:046A:004B.0002: input,hidraw1: USB HID v1.11 Mouse [HID 046a:004b] on usb-0000:00:1d.1-1/input1 [ 4.297604] input: EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface as /devices/pci0000:00/0000:00:1d.1/usb3/3-2/3-2:1.0/input/input4 [ 4.298913] generic-usb 0003:04E7:0050.0003: input,hidraw2: USB HID v1.00 Pointer [EloTouchSystems,Inc Elo TouchSystems 2216 AccuTouch\xffffffc2\xffffffae\xffffffae USB Touchmonitor Interface] on usb-0000:00:1d.1-2/input0 [ 4.299878] usbcore: registered new interface driver usbhid [ 4.299925] usbhid: USB HID core driver [ 4.352639] EXT4-fs (sda1): INFO: recovery required on readonly filesystem [ 4.352661] EXT4-fs (sda1): write access will be enabled during recovery [ 8.519257] EXT4-fs (sda1): recovery complete [ 8.564389] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null) [ 14.280922] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 14.280944] ADDRCONF(NETDEV_UP): eth1: link is not ready [ 14.310368] udevd[308]: starting version 175 [ 14.353873] Adding 1045500k swap on /dev/sda5. Priority:-1 extents:1 across:1045500k [ 14.428718] lp: driver loaded but no devices found [ 14.521667] EXT4-fs (sda1): re-mounted. Opts: errors=remount-ro [ 15.073459] [drm] Initialized drm 1.1.0 20060810 [ 15.097073] psb_gfx: module is from the staging directory, the quality is unknown, you have been warned. 
[ 15.180630] gma500 0000:00:02.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 15.180648] gma500 0000:00:02.0: setting latency timer to 64 [ 15.182117] Stolen memory information [ 15.182127] base in RAM: 0x7f800000 [ 15.182134] size: 7932K, calculated by (GTT RAM base) - (Stolen base), seems wrong [ 15.182143] the correct size should be: 8M(dvmt mode=3) [ 15.234889] Set up 1983 stolen pages starting at 0x7f800000, GTT offset 0K [ 15.235126] [drm] SGX core id = 0x01130000 [ 15.235135] [drm] SGX core rev major = 0x01, minor = 0x02 [ 15.235143] [drm] SGX core rev maintenance = 0x01, designer = 0x00 [ 15.268796] [Firmware Bug]: ACPI: No _BQC method, cannot determine initial brightness [ 15.269888] acpi device:04: registered as cooling_device2 [ 15.270568] acpi device:05: registered as cooling_device3 [ 15.270947] input: Video Bus as /devices/LNXSYSTM:00/device:00/PNP0A08:00/LNXVIDEO:00/input/input5 [ 15.271238] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no) [ 15.271424] [drm] Supports vblank timestamp caching Rev 1 (10.10.2010). [ 15.271434] [drm] No driver support for vblank timestamp query. [ 15.374694] type=1400 audit(1349363644.167:2): apparmor="STATUS" operation="profile_load" name="/sbin/dhclient" pid=435 comm="apparmor_parser" [ 15.385518] type=1400 audit(1349363644.179:3): apparmor="STATUS" operation="profile_load" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=435 comm="apparmor_parser" [ 15.386369] type=1400 audit(1349363644.179:4): apparmor="STATUS" operation="profile_load" name="/usr/lib/connman/scripts/dhclient-script" pid=435 comm="apparmor_parser" [ 15.677514] r8169 0000:01:00.0: eth0: link down [ 15.694828] ADDRCONF(NETDEV_UP): eth0: link is not ready [ 16.537490] gma500 0000:00:02.0: allocated 800x480 fb [ 16.558066] fbcon: psbfb (fb0) is primary device [ 16.747122] gma500 0000:00:02.0: BL bug: Reg 00000000 save 00000000 [ 16.775550] Console: switching to colour frame buffer device 100x30 [ 16.781804] fb0: psbfb frame buffer device [ 16.781812] drm: registered panic notifier [ 16.870168] [drm] Initialized gma500 1.0.0 2011-06-06 for 0000:00:02.0 on minor 0 [ 16.871166] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871186] snd_hda_intel 0000:00:1b.0: power state changed by ACPI to D0 [ 16.871207] snd_hda_intel 0000:00:1b.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16 [ 16.871284] snd_hda_intel 0000:00:1b.0: setting latency timer to 64 [ 29.338953] r8169 0000:01:00.0: eth0: link up [ 29.339471] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready [ 31.427223] init: failsafe main process (675) killed by TERM signal [ 31.522411] type=1400 audit(1349363660.316:5): apparmor="STATUS" operation="profile_replace" name="/sbin/dhclient" pid=889 comm="apparmor_parser" [ 31.523956] type=1400 audit(1349363660.316:6): apparmor="STATUS" operation="profile_replace" name="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=889 comm="apparmor_parser" [ 31.524882] type=1400 audit(1349363660.320:7): apparmor="STATUS" operation="profile_replace" name="/usr/lib/connman/scripts/dhclient-script" pid=889 comm="apparmor_parser" [ 31.525940] type=1400 audit(1349363660.320:8): apparmor="STATUS" operation="profile_load" name="/usr/sbin/tcpdump" pid=891 comm="apparmor_parser" [ 34.526445] postgres (1003): /proc/1003/oom_adj is deprecated, please use /proc/1003/oom_score_adj instead. [ 40.144048] eth0: no IPv6 routers present

    Read the article

  • Zen and the Art of File and Folder Organization

    - by Mark Virtue
Is your desk a paragon of neatness, or does it look like a paper-bomb has gone off? If you’ve been putting off getting organized because the task is too huge or daunting, or you don’t know where to start, we’ve got 40 tips to get you on the path to zen mastery of your filing system. For all those readers who would like to get their files and folders organized, or, if they’re already organized, better organized—we have compiled a complete guide to getting organized and staying organized, a comprehensive article that will hopefully cover every possible tip you could want. Signs that Your Computer is Poorly Organized If your computer is a mess, you’re probably already aware of it.  But just in case you’re not, here are some tell-tale signs: Your Desktop has over 40 icons on it “My Documents” contains over 300 files and 60 folders, including MP3s and digital photos You use Windows’ built-in search facility whenever you need to find a file You can’t find programs in the out-of-control list of programs in your Start Menu You save all your Word documents in one folder, all your spreadsheets in a second folder, etc Any given file that you’re looking for may be in any one of four different sets of folders But before we start, here are some quick notes: We’re going to assume you know what files and folders are, and how to create, save, rename, copy and delete them The organization principles described in this article apply equally to all computer systems.  However, the screenshots here will reflect how things look on Windows (usually Windows 7).  We will also mention some useful features of Windows that can help you get organized. Everyone has their own favorite methodology of organizing and filing, and it’s all too easy to get into “My Way is Better than Your Way” arguments.  The reality is that there is no perfect way of getting things organized.  When I wrote this article, I tried to keep a generalist and objective viewpoint.  I consider myself to be unusually well organized (to the point of obsession, truth be told), and I’ve had 25 years’ experience in collecting and organizing files on computers.  So I’ve got a lot to say on the subject.  But the tips I have described here are only one way of doing it.  Hopefully some of these tips will work for you too, but please don’t read this as any sort of “right” way to do it. At the end of the article we’ll be asking you, the reader, for your own organization tips. Why Bother Organizing At All? For some, the answer to this question is self-evident. And yet, in this era of powerful desktop search software (the search capabilities built into the Windows Vista and Windows 7 Start Menus, and third-party programs like Google Desktop Search), the question does need to be asked, and answered. I have a friend who puts every file he ever creates, receives or downloads into his My Documents folder and doesn’t bother filing them into subfolders at all.  He relies on the search functionality built into his Windows operating system to help him find whatever he’s looking for.  And he always finds it.  He’s a Search Samurai.  For him, filing is a waste of valuable time that could be spent enjoying life! It’s tempting to follow suit.  On the face of it, why would anyone bother to take the time to organize their hard disk when such excellent search software is available?  
Well, if all you ever want to do with the files you own is to locate and open them individually (for listening, editing, etc), then there’s no reason to ever bother doing one scrap of organization.  But consider these common tasks that are not achievable with desktop search software: Find files manually.  Often it’s not convenient, speedy or even possible to use your desktop search software to find what you want.  It doesn’t work 100% of the time, or you may not even have it installed.  Sometimes it’s just plain faster to go straight to the file you want, if you know it’s in a particular sub-folder, rather than trawling through hundreds of search results. Find groups of similar files (e.g. all your “work” files, all the photos of your Europe holiday in 2008, all your music videos, all the MP3s from Dark Side of the Moon, all the letters you wrote to your wife, all your tax returns).  Clever naming of the files will only get you so far.  Sometimes it’s the date the file was created that’s important, other times it’s the file format, and other times it’s the purpose of the file.  How do you name a collection of files so that they’re easy to isolate based on any of the above criteria?  Short answer, you can’t. Move files to a new computer.  It’s time to upgrade your computer.  How do you quickly grab all the files that are important to you?  Or you decide to have two computers now – one for home and one for work.  How do you quickly isolate only the work-related files to move them to the work computer? Synchronize files to other computers.  If you have more than one computer, and you need to mirror some of your files onto the other computer (e.g. your music collection), then you need a way to quickly determine which files are to be synced and which are not.  Surely you don’t want to synchronize everything? Choose which files to back up.  If your backup regime calls for multiple backups, or requires speedy backups, then you’ll need to be able to specify which files are to be backed up, and which are not.  This is not possible if they’re all in the same folder. Finally, if you’re simply someone who takes pleasure in being organized, tidy and ordered (me! me!), then you don’t even need a reason.  Being disorganized is simply unthinkable. Tips on Getting Organized Here we present our 40 best tips on how to get organized.  Or, if you’re already organized, to get better organized. Tip #1.  Choose Your Organization System Carefully The reason that most people are not organized is that it takes time.  And the first thing that takes time is deciding upon a system of organization.  This is always a matter of personal preference, and is not something that a geek on a website can tell you.  You should always choose your own system, based on how your own brain is organized (which makes the assumption that your brain is, in fact, organized). We can’t instruct you, but we can make suggestions: You may want to start off with a system based on the users of the computer.  i.e. “My Files”, “My Wife’s Files”, “My Son’s Files”, etc.  Inside “My Files”, you might then break it down into “Personal” and “Business”.  You may then realize that there are overlaps.  For example, everyone may want to share access to the music library, or the photos from the school play.  So you may create another folder called “Family”, for the “common” files. You may decide that the highest-level breakdown of your files is based on the “source” of each file.  In other words, who created the files.  
You could have “Files created by ME (business or personal)”, “Files created by people I know (family, friends, etc)”, and finally “Files created by the rest of the world (MP3 music files, downloaded or ripped movies or TV shows, software installation files, gorgeous desktop wallpaper images you’ve collected, etc).”  This system happens to be the one I use myself.  See below:  Mark is for files created by me; VC is for files created by my company (Virtual Creations); Others is for files created by my friends and family; Data is the rest of the world.  Also, Settings is where I store the configuration files and other program data files for my installed software (more on this in tip #34, below). Each folder will present its own particular set of requirements for further sub-organization.  For example, you may decide to organize your music collection into sub-folders based on the artist’s name, while your digital photos might get organized based on the date they were taken.  It can be different for every sub-folder! Another strategy would be based on “currentness”.  Files you have yet to open and look at live in one folder.  Ones that have been looked at but not yet filed live in another place.  Current, active projects live in yet another place.  All other files (your “archive”, if you like) would live in a fourth folder. (And of course, within that last folder you’d need to create a further sub-system based on one of the previous bullet points). Put some thought into this – changing it when it proves incomplete can be a big hassle!  Before you go to the trouble of implementing any system you come up with, examine a wide cross-section of the files you own and see if they will all be able to find a nice logical place to sit within your system. Tip #2.  When You Decide on Your System, Stick to It! There’s nothing more pointless than going to all the trouble of creating a system and filing all your files, and then whenever you create, receive or download a new file, you simply dump it onto your Desktop.  You need to be disciplined – forever!  Every new file you get, spend those extra few seconds to file it where it belongs!  Otherwise, in just a month or two, you’ll be worse off than before – half your files will be organized and half will be disorganized – and you won’t know which is which! Tip #3.  Choose the Root Folder of Your Structure Carefully Every data file (document, photo, music file, etc) that you create, own or that is important to you, no matter where it came from, should be found within one single folder, and that one single folder should be located at the root of your C: drive (as a sub-folder of C:\).  In other words, do not base your folder structure in standard folders like “My Documents”.  If you do, then you’re leaving it up to the operating system engineers to decide what folder structure is best for you.  And every operating system has a different system!  In Windows 7 your files are found in C:\Users\YourName, whilst on Windows XP it was C:\Documents and Settings\YourName\My Documents.  In UNIX systems it’s often /home/YourName. These standard default folders tend to fill up with junk files and folders that are not at all important to you.  “My Documents” is the worst offender.  Every second piece of software you install, it seems, likes to create its own folder in the “My Documents” folder.  These folders usually don’t fit within your organizational structure, so don’t use them!  In fact, don’t even use the “My Documents” folder at all.  
Allow it to fill up with junk, and then simply ignore it.  It sounds heretical, but: Don’t ever visit your “My Documents” folder!  Remove your icons/links to “My Documents” and replace them with links to the folders you created and you care about! Create your own file system from scratch!  Probably the best place to put it would be on your D: drive – if you have one.  This way, all your files live on one drive, while all the operating system and software component files live on the C: drive – simply and elegantly separated.  The benefits of that are profound.  Not only are there obvious organizational benefits (see tip #10, below), but when it comes time to migrate your data to a new computer, you can (sometimes) simply unplug your D: drive and plug it in as the D: drive of your new computer (this implies that the D: drive is actually a separate physical disk, and not a partition on the same disk as C:).  You also get a slight speed improvement (again, only if your C: and D: drives are on separate physical disks). Warning:  From tip #12, below, you will see that it’s actually a good idea to have exactly the same file system structure – including the drive it’s filed on – on all of the computers you own.  So if you decide to use the D: drive as the storage system for your own files, make sure you are able to use the D: drive on all the computers you own.  If you can’t ensure that, then you can still use a clever geeky trick to store your files on the D: drive, but still access them all via the C: drive (see tip #17, below). If you only have one hard disk (C:), then create a dedicated folder that will contain all your files – something like C:\Files.  The name of the folder is not important, but make it a single, brief word. There are several reasons for this: When creating a backup regime, it’s easy to decide what files should be backed up – they’re all in the one folder! If you ever decide to trade in your computer for a new one, you know exactly which files to migrate You will always know where to begin a search for any file If you synchronize files with other computers, it makes your synchronization routines very simple.   It also causes all your shortcuts to continue to work on the other machines (more about this in tip #24, below). Once you’ve decided where your files should go, then put all your files in there – Everything!  Completely disregard the standard, default folders that are created for you by the operating system (“My Music”, “My Pictures”, etc).  In fact, you can actually relocate many of those folders into your own structure (more about that below, in tip #6). The more completely you get all your data files (documents, photos, music, etc) and all your configuration settings into that one folder, the easier it will be to perform all of the above tasks. Once this has been done, and all your files live in one folder, all the other folders in C:\ can be thought of as “operating system” folders, and therefore of little day-to-day interest for us. Here’s a screenshot of a nicely organized C: drive, where all user files are located within the \Files folder:
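If you’d rather script the creation of such a skeleton than click it into existence folder by folder, a few lines of Python will do it.  This is only a sketch – the C:\Files root and the sub-folder names below are illustrative assumptions, not a prescription:

import os

# Illustrative skeleton only – substitute your own organizational system.
ROOT = r"C:\Files"
SUBFOLDERS = [
    r"Me\Personal",
    r"Me\Business",
    r"Family\Photos",
    r"Data\Music",
    r"Inbox",
]

for sub in SUBFOLDERS:
    path = os.path.join(ROOT, sub)
    os.makedirs(path, exist_ok=True)  # creates all intermediate levels too
    print("created", path)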
Tip #4.  Use Sub-Folders This would be our simplest and most obvious tip.  It almost goes without saying.  Any organizational system you decide upon (see tip #1) will require that you create sub-folders for your files.  Get used to creating folders on a regular basis. Tip #5.  Don’t be Shy About Depth Create as many levels of sub-folders as you need.  Don’t be scared to do so.  Every time you notice an opportunity to group a set of related files into a sub-folder, do so.  Examples might include:  All the MP3s from one music CD, all the photos from one holiday, or all the documents from one client. It’s perfectly okay to put files into a folder called C:\Files\Me\From Others\Services\WestCo Bank\Statements\2009.  That’s only seven levels deep.  Ten levels is not uncommon.  Of course, it’s possible to take this too far.  If you notice yourself creating a sub-folder to hold only one file, then you’ve probably become a little over-zealous.  On the other hand, if you simply create a structure with only two levels (for example C:\Files\Work) then you really haven’t achieved any level of organization at all (unless you own only six files!).  Your “Work” folder will have become a dumping ground, just like your Desktop was, with most likely hundreds of files in it. Tip #6.  Move the Standard User Folders into Your Own Folder Structure Most operating systems, including Windows, create a set of standard folders for each of their users.  These folders then become the default location for files such as documents, music files, digital photos and downloaded Internet files.  In Windows 7, the full list is shown below: Some of these folders you may never use nor care about (for example, the Favorites folder, if you’re not using Internet Explorer as your browser).  Those ones you can leave where they are.  But you may be using some of the other folders to store files that are important to you.  Even if you’re not using them, Windows will still often treat them as the default storage location for many types of files.  When you go to save a standard file type, it can become annoying to be automatically prompted to save it in a folder that’s not part of your own file structure. But there’s a simple solution:  Move the folders you care about into your own folder structure!  If you do, then the next time you go to save a file of the corresponding type, Windows will prompt you to save it in the new, moved location. Moving the folders is easy.  Simply drag-and-drop them to the new location.  Here’s a screenshot of the default My Music folder being moved to my custom personal folder (Mark): Tip #7.  Name Files and Folders Intelligently This is another one that almost goes without saying, but we’ll say it anyway:  Do not allow files to be created that have meaningless names like Document1.doc, or folders called New Folder (2).  Take that extra 20 seconds and come up with a meaningful name for the file/folder – one that accurately divulges its contents without repeating the entire contents in the name. Tip #8.  Watch Out for Long Filenames Another way to tell if you have not yet created enough depth to your folder hierarchy is that your files often require really long names.  If you need to call a file Johnson Sales Figures March 2009.xls (which might happen to live in the same folder as Abercrombie Budget Report 2008.xls), then you might want to create some sub-folders so that the first file could be simply called March.xls, and live in the Clients\Johnson\Sales Figures\2009 folder. A well-placed file needs only a brief filename! Tip #9.  Use Shortcuts!  Everywhere! This is probably the single most useful and important tip we can offer.  A shortcut allows a file to be in two places at once. Why would you want that?  Well, the file and folder structure of every popular operating system on the market today is hierarchical.  
This means that all objects (files and folders) always live within exactly one parent folder.  It’s a bit like a tree.  A tree has branches (folders) and leaves (files).  Each leaf, and each branch, is supported by exactly one parent branch, all the way back to the root of the tree (which, incidentally, is exactly why C:\ is called the “root folder” of the C: drive). That hard disks are structured this way may seem obvious and even necessary, but it’s only one way of organizing data.  There are others:  Relational databases, for example, organize structured data entirely differently.  The main limitation of hierarchical filing structures is that a file can only ever be in one branch of the tree – in only one folder – at a time.  Why is this a problem?  Well, there are two main reasons why this limitation is a problem for computer users: The “correct” place for a file, according to our organizational rationale, is very often a very inconvenient place for that file to be located.  Just because it’s correctly filed doesn’t mean it’s easy to get to.  Your file may be “correctly” buried six levels deep in your sub-folder structure, but you may need regular and speedy access to this file every day.  You could always move it to a more convenient location, but that would mean that you would need to re-file it back to its “correct” location every time you’d finished working on it.  Most unsatisfactory. A file may simply “belong” in two or more different locations within your file structure.  For example, say you’re an accountant and you have just completed the 2009 tax return for John Smith.  It might make sense to you to call this file 2009 Tax Return.doc and file it under Clients\John Smith.  But it may also be important to you to have the 2009 tax returns from all your clients together in the one place.  So you might also want to call the file John Smith.doc and file it under Tax Returns\2009.  The problem is, in a purely hierarchical filing system, you can’t put it in both places.  Grrrrr! Fortunately, Windows (and most other operating systems) offers a way for you to do exactly that:  It’s called a “shortcut” (also known as an “alias” on Macs and a “symbolic link” on UNIX systems).  Shortcuts allow a file to exist in one place, and an icon that represents the file to be created and put anywhere else you please.  In fact, you can create a dozen such icons and scatter them all over your hard disk.  Double-clicking on one of these icons/shortcuts opens up the original file, just as if you had double-clicked on the original file itself. Consider the following two icons: The one on the left is the actual Word document, while the one on the right is a shortcut that represents the Word document.  Double-clicking on either icon will open the same file.  There are two main visual differences between the icons: The shortcut will have a small arrow in the lower-left-hand corner (on Windows, anyway) The shortcut is allowed to have a name that does not include the file extension (the “.docx” part, in this case) You can delete the shortcut at any time without losing any actual data.  The original is still intact.  All you lose is the ability to get to that data from wherever the shortcut was. So why are shortcuts so great?  Because they allow us to easily overcome the main limitation of hierarchical file systems, and put a file in two (or more) places at the same time.  You will always have files that don’t play nice with your organizational rationale, and can’t be filed in only one place.  
They demand to exist in two places.  Shortcuts allow this!  Furthermore, they allow you to collect your most often-opened files and folders together in one spot for convenient access.  The cool part is that the original files stay where they are, safe forever in their perfectly organized location. So your collection of most often-opened files can – and should – become a collection of shortcuts! If you’re still not convinced of the utility of shortcuts, consider the following well-known areas of a typical Windows computer: The Start Menu (and all the programs that live within it) The Quick Launch bar (or the Superbar in Windows 7) The “Favorite folders” area in the top-left corner of the Windows Explorer window (in Windows Vista or Windows 7) Your Internet Explorer Favorites or Firefox Bookmarks Each item in each of these areas is a shortcut!  Each of those areas exists for one purpose only:  For convenience – to provide you with a collection of the files and folders you access most often. It should be easy to see by now that shortcuts are designed for one single purpose:  To make accessing your files more convenient.  Each time you double-click on a shortcut, you are saved the hassle of locating the file (or folder, or program, or drive, or control panel icon) that it represents. Shortcuts allow us to invent a golden rule of file and folder organization: “Only ever have one copy of a file – never have two copies of the same file.  Use a shortcut instead” (this rule doesn’t apply to copies created for backup purposes, of course!) There are also lesser rules, like “don’t move a file into your work area – create a shortcut there instead”, and “any time you find yourself frustrated with how long it takes to locate a file, create a shortcut to it and place that shortcut in a convenient location.” So how do we create these massively useful shortcuts?  There are two main ways: “Copy” the original file or folder (click on it and type Ctrl-C, or right-click on it and select Copy):  Then right-click in an empty area of the destination folder (the place where you want the shortcut to go) and select Paste shortcut: Right-drag (drag with the right mouse button) the file from the source folder to the destination folder.  When you let go of the mouse button at the destination folder, a menu pops up: Select Create shortcuts here. Note that when shortcuts are created, they are often named something like Shortcut to Budget Detail.doc (Windows XP) or Budget Detail – Shortcut.doc (Windows 7).   If you don’t like those extra words, you can easily rename the shortcuts after they’re created, or you can configure Windows to never insert the extra words in the first place (see our article on how to do this). And of course, you can create shortcuts to folders too, not just to files! Bottom line: Whenever you have a file that you’d like to access from somewhere else (whether it’s convenience you’re after, or because the file simply belongs in two places), create a shortcut to the original file in the new location.
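If you create shortcuts often enough, the job can also be scripted.  The sketch below drives the same WScript.Shell mechanism Windows uses internally, via the third-party pywin32 package; the two paths are made-up examples:

import win32com.client  # from the pywin32 package

shell = win32com.client.Dispatch("WScript.Shell")

# The original file stays in its "correct" location...
target = r"C:\Files\Work\Clients\Johnson\2009 Tax Return.doc"

# ...and the shortcut goes wherever convenient access is wanted.
shortcut = shell.CreateShortCut(r"C:\Files\Inbox\2009 Tax Return.lnk")
shortcut.TargetPath = target
shortcut.Save()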
Tip #10.  Separate Application Files from Data Files Any digital organization guru will drum this rule into you.  Application files are the components of the software you’ve installed (e.g. Microsoft Word, Adobe Photoshop or Internet Explorer).  Data files are the files that you’ve created for yourself using that software (e.g. Word Documents, digital photos, emails or playlists). Software gets installed, uninstalled and upgraded all the time.  Hopefully you always have the original installation media (or downloaded set-up file) kept somewhere safe, and can thus reinstall your software at any time.  This means that the software component files are of little importance.  Whereas the files you have created with that software are, by definition, important.  It’s a good rule to always separate unimportant files from important files. So when your software prompts you to save a file you’ve just created, take a moment and check out where it’s suggesting that you save the file.  If it’s suggesting that you save the file into the same folder as the software itself, then definitely don’t follow that suggestion.  File it in your own folder!  In fact, see if you can find the program’s configuration option that determines where files are saved by default (if it has one), and change it. Tip #11.  Organize Files Based on Purpose, Not on File Type If you have, for example, a folder called Work\Clients\Johnson, and within that folder you have two sub-folders, Word Documents and Spreadsheets (in other words, you’re separating “.doc” files from “.xls” files), then chances are that you’re not optimally organized.  It makes little sense to organize your files based on the program that created them.  Instead, create your sub-folders based on the purpose of the file.  For example, it would make more sense to create sub-folders called Correspondence and Financials.  It may well be that all the files in a given sub-folder are of the same file-type, but this should be more of a coincidence and less of a design feature of your organization system. Tip #12.  Maintain the Same Folder Structure on All Your Computers In other words, whatever organizational system you create, apply it to every computer that you can.  There are several benefits to this: There’s less to remember.  No matter where you are, you always know where to look for your files If you copy or synchronize files from one computer to another, then setting up the synchronization job becomes very simple Shortcuts can be copied or moved from one computer to another with ease (assuming the original files are also copied/moved).  There’s no need to find the target of the shortcut all over again on the second computer Ditto for linked files (e.g. Word documents that link to data in a separate Excel file), playlists, and any files that reference the exact file locations of other files. This applies even to the drive that your files are stored on.  If your files are stored on C: on one computer, make sure they’re stored on C: on all your computers.  Otherwise all your shortcuts, playlists and linked files will stop working! Tip #13.  Create an “Inbox” Folder Create yourself a folder where you store all files that you’re currently working on, or that you haven’t gotten around to filing yet.  You can think of this folder as your “to-do” list.  You can call it “Inbox” (making it the same metaphor as your email system), or “Work”, or “To-Do”, or “Scratch”, or whatever name makes sense to you.  It doesn’t matter what you call it – just make sure you have one! Once you have finished working on a file, you then move it from the “Inbox” to its correct location within your organizational structure. You may want to use your Desktop as this “Inbox” folder.  Rightly or wrongly, most people do.  It’s not a bad place to put such files, but be careful:  If you do decide that your Desktop represents your “to-do” list, then make sure that no other files find their way there.  
In other words, make sure that your “Inbox”, wherever it is, Desktop or otherwise, is kept free of junk – stray files that don’t belong there. So where should you put this folder, which, almost by definition, lives outside the structure of the rest of your filing system?  Well, first and foremost, it has to be somewhere handy.  This will be one of your most-visited folders, so convenience is key.  Putting it on the Desktop is a great option – especially if you don’t have any other folders on your Desktop:  the folder then becomes supremely easy to find in Windows Explorer: You would then create shortcuts to this folder in convenient spots all over your computer (“Favorite Links”, “Quick Launch”, etc). Tip #14.  Ensure You have Only One “Inbox” Folder Once you’ve created your “Inbox” folder, don’t use any other folder location as your “to-do list”.  Throw every incoming or created file into the Inbox folder as you create/receive it.  This keeps the rest of your computer pristine and free of randomly created or downloaded junk.  The last thing you want to be doing is checking multiple folders to see all your current tasks and projects.  Gather them all together into one folder. Here are some tips to help ensure you only have one Inbox: Set the default “save” location of all your programs to this folder. Set the default “download” location for your browser to this folder. If this folder is not your desktop (recommended) then also see if you can make a point of not putting “to-do” files on your desktop.  This keeps your desktop uncluttered and Zen-like: (the Inbox folder is in the bottom-right corner) Tip #15.  Be Vigilant about Clearing Your “Inbox” Folder This is one of the keys to staying organized.  If you let your “Inbox” overflow (i.e. allow there to be more than, say, 30 files or folders in there), then you’re probably going to start feeling like you’re overwhelmed:  You’re not keeping up with your to-do list.  Once your Inbox gets beyond a certain point (around 30 files, studies have shown), then you’ll simply start to avoid it.  You may continue to put files in there, but you’ll be scared to look at it, fearing the “out of control” feeling that all overworked, chaotic or just plain disorganized people regularly feel. So, here’s what you can do: Visit your Inbox/to-do folder regularly (at least five times per day). Scan the folder regularly for files that you have completed working on and are ready for filing.  File them immediately. Make it a source of pride to keep the number of files in this folder as small as possible.  If you value peace of mind, then make the emptiness of this folder one of your highest (computer) priorities If you know that a particular file has been in the folder for more than, say, six weeks, then admit that you’re not actually going to get around to processing it, and move it to its final resting place. Tip #16.  File Everything Immediately, and Use Shortcuts for Your Active Projects As soon as you create, receive or download a new file, store it away in its “correct” folder immediately.  Then, whenever you need to work on it (possibly straight away), create a shortcut to it in your “Inbox” (“to-do”) folder or your desktop.  That way, all your files are always in their “correct” locations, yet you still have immediate, convenient access to your current, active files.  When you finish working on a file, simply delete the shortcut. Ideally, your “Inbox” folder – and your Desktop – should contain no actual files or folders.  They should simply contain shortcuts. 
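Keeping an eye on the Inbox is also easy to automate.  Here is a minimal Python sketch, assuming the Inbox lives at C:\Files\Inbox, that counts what’s in there and flags anything older than the six-week limit suggested above:

import os
import time

INBOX = r"C:\Files\Inbox"      # assumed location – use your own
SIX_WEEKS = 42 * 24 * 60 * 60  # six weeks, in seconds

entries = os.listdir(INBOX)
print(len(entries), "items in the Inbox")

now = time.time()
for name in entries:
    age = now - os.path.getmtime(os.path.join(INBOX, name))
    if age > SIX_WEEKS:
        print("stale – file it or bin it:", name)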
Tip #17.  Use Directory Symbolic Links (or Junctions) to Maintain One Unified Folder Structure Using this tip, we can get around a potential hiccup that we can run into when creating our organizational structure – the issue of having more than one drive on our computer (C:, D:, etc).  We might have files we need to store on the D: drive for space reasons, and yet want to base our organized folder structure on the C: drive (or vice-versa). Your chosen organizational structure may dictate that all your files must be accessed from the C: drive (for example, the root folder of all your files may be something like C:\Files).  And yet you may still have a D: drive and wish to take advantage of the hundreds of spare Gigabytes that it offers.  Did you know that it’s actually possible to store your files on the D: drive and yet access them as if they were on the C: drive?  And no, we’re not talking about shortcuts here (although the concept is very similar). By using the shell command mklink, you can essentially take a folder that lives on one drive and create an alias for it on a different drive (you can do lots more than that with mklink – for a full rundown on this program’s capabilities, see our dedicated article).  These aliases are called directory symbolic links (a close relative of the older junctions).  You can think of them as “virtual” folders.  They function exactly like regular folders, except they’re physically located somewhere else. For example, you may decide that your entire D: drive contains your complete organizational file structure, but that you need to reference all those files as if they were on the C: drive, under C:\Files.  If that was the case you could create C:\Files as a directory symbolic link – a link to D:, as follows: mklink /d c:\files d:\ Or it may be that the only files you wish to store on the D: drive are your movie collection.  You could locate all your movie files in the root of your D: drive, and then link it to C:\Files\Media\Movies, as follows: mklink /d c:\files\media\movies d:\ (Needless to say, you must run these commands from a command prompt – click the Start button, type cmd and press Enter)
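mklink is the tool used above; if you prefer scripting, Python can create the same kind of directory symbolic link through os.symlink.  A sketch, reusing the example paths from above as assumptions – like mklink, this needs an elevated prompt (or Developer Mode on recent Windows), and the link name must not already exist:

import os

# Equivalent in spirit to:  mklink /d c:\files\media\movies d:\
# C:\Files\Media must already exist; C:\Files\Media\Movies must not.
os.symlink("D:\\", r"C:\Files\Media\Movies", target_is_directory=True)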
Tip #18.  Customize Your Folder Icons This is not strictly speaking an organizational tip, but having unique icons for each folder does allow you to more quickly visually identify which folder is which, and thus saves you time when you’re finding files.  An example is below (from my folder that contains all files downloaded from the Internet): To learn how to change your folder icons, please refer to our dedicated article on the subject. Tip #19.  Tidy Your Start Menu The Windows Start Menu is usually one of the messiest parts of any Windows computer.  Every program you install seems to adopt a completely different approach to placing icons in this menu.  Some simply put a single program icon.  Others create a folder based on the name of the software.  And others create a folder based on the name of the software manufacturer.  It’s chaos, and can make it hard to find the software you want to run. Thankfully we can avoid this chaos with useful operating system features like Quick Launch, the Superbar or pinned start menu items. Even so, it would make a lot of sense to get into the guts of the Start Menu itself and give it a good once-over.  All you really need to decide is how you’re going to organize your applications.  A structure based on the purpose of the application is an obvious candidate.  Below is an example of one such structure: In this structure, Utilities means software whose job it is to keep the computer itself running smoothly (configuration tools, backup software, Zip programs, etc).  Applications refers to any productivity software that doesn’t fit under the headings Multimedia, Graphics, Internet, etc. In case you’re not aware, every icon in your Start Menu is a shortcut and can be manipulated like any other shortcut (copied, moved, deleted, etc). With the Windows Start Menu (all versions of Windows), Microsoft has decided that there should be two parallel folder structures to store your Start Menu shortcuts.  One for you (the logged-in user of the computer) and one for all users of the computer.  Having two parallel structures is often redundant:  If you are the only user of the computer, the second structure serves no purpose at all.  Even if you have several users that regularly log into the computer, most of your installed software will need to be made available to all users, and should thus be moved out of the “just you” version of the Start Menu and into the “all users” area. To take control of your Start Menu, so you can start organizing it, you’ll need to know how to access the actual folders and shortcut files that make up the Start Menu (both versions of it).  To find these folders and files, click the Start button and then right-click on the All Programs text (Windows XP users should right-click on the Start button itself): The Open option refers to the “just you” version of the Start Menu, while the Open All Users option refers to the “all users” version.  Click on the one you want to organize. A Windows Explorer window then opens with your chosen version of the Start Menu selected.  From there it’s easy.  Double-click on the Programs folder and you’ll see all your folders and shortcuts.  Now you can delete/rename/move until it’s just the way you want it. Note:  When you’re reorganizing your Start Menu, you may want to have two Explorer windows open at the same time – one showing the “just you” version and one showing the “all users” version.  You can drag-and-drop between the windows. Tip #20.  Keep Your Start Menu Tidy Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure. So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software: Check whether the software was installed into the “just you” area of the Start Menu, or the “all users” area, and then move it to the correct area. Remove all the unnecessary icons (like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc) Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word Move the icon(s) into the correct folder based on your Start Menu organizational structure And don’t forget:  when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.
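To see at a glance what lives in the “just you” and “all users” versions before you start moving icons around, a few lines of Python will list every shortcut in both.  The folder locations below are the Windows Vista/7 defaults – an assumption, so adjust them for other versions:

import os

# Default Start Menu locations on Windows Vista/7.
per_user = os.path.join(os.environ["APPDATA"],
                        r"Microsoft\Windows\Start Menu\Programs")
all_users = os.path.join(os.environ["PROGRAMDATA"],
                         r"Microsoft\Windows\Start Menu\Programs")

for label, root in (("just you", per_user), ("all users", all_users)):
    print("---", label, "version:", root)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".lnk"):  # every Start Menu entry is a shortcut
                print(" ", os.path.join(dirpath, name))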
Tip #20.  Keep Your Start Menu Tidy

Once you have a perfectly organized Start Menu, try to be a little vigilant about keeping it that way.  Every time you install a new piece of software, the icons that get created will almost certainly violate your organizational structure.

So to keep your Start Menu pristine and organized, make sure you do the following whenever you install a new piece of software:

Check whether the software was installed into the “just you” area of the Start Menu, or the “all users” area, and then move it to the correct area.
Remove all the unnecessary icons, like the “Read me” icon, the “Help” icon (you can always open the help from within the software itself when it’s running), the “Uninstall” icon, the link(s) to the manufacturer’s website, etc.
Rename the main icon(s) of the software to something brief that makes sense to you.  For example, you might like to rename Microsoft Office Word 2010 to simply Word.
Move the icon(s) into the correct folder based on your Start Menu organizational structure.

And don’t forget:  when you uninstall a piece of software, the software’s uninstall routine is no longer going to be able to remove the software’s icon from the Start Menu (because you moved and/or renamed it), so you’ll need to remove that icon manually.

Tip #21.  Tidy C:\

The root of your C: drive (C:\) is a common dumping ground for files and folders – both by the users of your computer and by the software that you install on your computer.  It can become a mess.

There’s almost no software these days that requires itself to be installed in C:\.  99% of the time it can and should be installed into C:\Program Files.  And as for your own files, well, it’s clear that they can (and almost always should) be stored somewhere else.

In an ideal world, your C:\ folder should look like this (on Windows 7):

Note that there are some system files and folders in C:\ that are usually and deliberately “hidden” (such as the Windows virtual memory file pagefile.sys, the boot loader file bootmgr, and the System Volume Information folder).  Hiding these files and folders is a good idea, as they need to stay where they are and almost never need to be opened or even seen by you, the user.  Hiding them prevents you from accidentally messing with them, and enhances your sense of order and well-being when you look at your C: drive folder.
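If you’re curious about what’s really sitting in the root of your C: drive, hidden entries included, a directory listing with the “all attributes” switch reveals everything:

dir c:\ /a

This lists every file and folder regardless of its hidden or system attributes – handy for spotting genuine clutter among the legitimately hidden system files.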
Tip #22.  Tidy Your Desktop

The Desktop is probably the most abused part of a Windows computer (from an organizational point of view).  It usually serves as a dumping ground for all incoming files, as well as holding icons for oft-used applications, plus some regularly opened files and folders.  It often ends up becoming an uncontrolled mess.  See if you can avoid this.  Here’s why…

Application icons (Word, Internet Explorer, etc.) are often found on the Desktop, but it’s unlikely that this is the optimum place for them.  The “Quick Launch” bar (or the Superbar in Windows 7) is always visible and so represents a perfect location to put your icons.  You’ll only be able to see the icons on your Desktop when all your programs are minimized.  It might be time to get your application icons off your desktop…

You may have decided that the Inbox/To-do folder on your computer (see tip #13, above) should be your Desktop.  If so, then enough said.  Simply be vigilant about clearing it and preventing it from being polluted by junk files (see tip #15, above).  On the other hand, if your Desktop is not acting as your “Inbox” folder, then there’s no reason for it to have any data files or folders on it at all, except perhaps a couple of shortcuts to often-opened files and folders (either ongoing or current projects).  Everything else should be moved to your “Inbox” folder.

In an ideal world, it might look like this:

Tip #23.  Move Permanent Items on Your Desktop Away from the Top-Left Corner

When files/folders are dragged onto your desktop in a Windows Explorer window, or when shortcuts are created on your Desktop from Internet Explorer, those icons are always placed in the top-left corner – or as close to it as they can get.  If you have other files, folders or shortcuts that you keep on the Desktop permanently, then it’s a good idea to separate these permanent icons from the transient ones, so that you can quickly identify which ones are the transients.  An easy way to do this is to move all your permanent icons to the right-hand side of your Desktop.  That should keep them separated from incoming items.

Tip #24.  Synchronize

If you have more than one computer, you’ll almost certainly want to share files between them.  If the computers are permanently attached to the same local network, then there’s no need to store multiple copies of any one file or folder – shortcuts will suffice.  However, if the computers are not always on the same network, then you will at some point need to copy files between them.  For files that need to live permanently on both computers, the ideal approach is to synchronize the files, as opposed to simply copying them.

We only have room here to write a brief summary of synchronization, not a full article.  In short, there are several different types of synchronization:

Where the contents of one folder are accessible anywhere, such as with Dropbox
Where the contents of any number of folders are accessible anywhere, such as with Windows Live Mesh
Where any files or folders from anywhere on your computer are synchronized with exactly one other computer, such as with the Windows “Briefcase”, Microsoft SyncToy, or (much more powerful, yet still free) SyncBack from 2BrightSparks.  This only works when both computers are on the same local network, at least temporarily.

A great advantage of synchronization solutions is that once you’ve got one configured the way you want it, the sync process happens automatically, every time.  Click a button (or schedule it to happen automatically) and all your files are automagically put where they’re supposed to be.

And if you maintain the same file and folder structure on both computers, then files that depend upon the correct location of other files – shortcuts, playlists, and office documents that link to other office documents – will still work on the other computer after a sync!

Tip #25.  Hide Files You Never Need to See

If you have your files well organized, you will often be able to tell if a file is out of place just by glancing at the contents of a folder (for example, it should be pretty obvious if you look in a folder that contains all the MP3s from one music CD and see a Word document in there).  This is a good thing – it allows you to determine if there are files out of place with a quick glance.  Yet sometimes there are files in a folder that seem out of place but actually need to be there, such as the “folder art” JPEGs in music folders, and various files in the root of the C: drive.  If such files never need to be opened by you, then a good idea is to simply hide them.  Then, the next time you glance at the folder, you won’t have to remember whether that file was supposed to be there or not, because you won’t see it at all!

To hide a file, simply right-click on it and choose Properties:

Then simply tick the Hidden tick-box:
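The same thing can be done from a command prompt with the attrib command.  For example, to hide the folder-art JPEG mentioned above (the filename is just an illustration – use your own):

attrib +h "folder.jpg"

Running attrib -h on the same file unhides it again, and attrib accepts wildcards (e.g. attrib +h *.jpg), which is quicker than the Properties dialog when you have several files to hide at once.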
Tip #26.  Keep Every Setup File

These days most software is downloaded from the Internet.  Whenever you download a piece of software, keep it.  You never know when you’ll need to reinstall the software.

Further, keep with it an Internet shortcut that links back to the website where you originally downloaded it, in case you ever need to check for updates.  See tip #33 below for a full description of the excellence of organizing your setup files.

Tip #27.  Try to Minimize the Number of Folders that Contain Both Files and Sub-folders

Some of the folders in your organizational structure will contain only files.  Others will contain only sub-folders.  And you will also have some folders that contain both files and sub-folders.  You will notice slight improvements in how long it takes you to locate a file if you try to avoid this third type of folder.  It’s not always possible, of course – you’ll always have some of these folders – but see if you can avoid it.

One way of doing this is to take all the leftover files that didn’t end up getting stored in a sub-folder and create a special “Miscellaneous” or “Other” folder for them.

Tip #28.  Starting a Filename with an Underscore Brings it to the Top of a List

Further to the previous tip, if you name that “Miscellaneous” or “Other” folder in such a way that its name begins with an underscore (“_”), then it will appear at the top of the list of files/folders.

The screenshot below is an example of this.  Each folder in the list contains a set of digital photos.  The folder at the top of the list, _Misc, contains random photos that didn’t deserve their own dedicated folder:

Tip #29.  Clean Up those CD-ROMs and (shudder!) Floppy Disks

Have you got a pile of CD-ROMs stacked on a shelf in your office?  Old photos, or files you archived off onto CD-ROM (or even worse, floppy disks!) because you didn’t have enough disk space at the time?  In the meantime, have you upgraded your computer and now have 500 gigabytes of space you don’t know what to do with?  If so, isn’t it time you tidied up that stack of disks and filed them into your gorgeous new folder structure?

So what are you waiting for?  Bite the bullet, copy them all back onto your computer, file them in their appropriate folders, and then back the whole lot up onto a shiny new 1000-gig external hard drive!

Useful Folders to Create

This next section suggests some useful folders that you might want to create within your folder structure.  I’ve personally found them to be indispensable.

The first three are all about convenience – handy folders to create and then put somewhere that you can always access instantly.  For each one, it’s not so important where the actual folder is located, but it’s very important where you put the shortcut(s) to the folder.  You might want to locate the shortcuts:

On your Desktop
In your “Quick Launch” area (or pinned to your Windows 7 Superbar)
In your Windows Explorer “Favorite Links” area

Tip #30.  Create an “Inbox” (“To-Do”) Folder

This has already been mentioned in depth (see tip #13), but we wanted to reiterate its importance here.  This folder contains all the recently created, received or downloaded files that you have not yet had a chance to file away properly, and it may also contain files that you have yet to process.  In effect, it becomes a sort of “to-do list”.  It doesn’t have to be called “Inbox” – you can call it whatever you want.

Tip #31.  Create a Folder where Your Current Projects are Collected

Rather than going hunting for them all the time, or dumping them all on your desktop, create a special folder where you put links (or work folders) for each of the projects you’re currently working on.

You can locate this folder in your “Inbox” folder, on your desktop, or anywhere at all – just so long as there’s a way of getting to it quickly, such as putting a link to it in Windows Explorer’s “Favorite Links” area:

Tip #32.  Create a Folder for Files and Folders that You Regularly Open

You will always have a few files that you open regularly, whether it be a spreadsheet of your current accounts or a favorite playlist.  These are not necessarily “current projects”; rather, they’re simply files that you always find yourself opening.  Typically such files would be located on your desktop (or even better, shortcuts to those files would be).  Why not collect all such shortcuts together and put them in their own special folder?
As with the “Current Projects” folder (above), you would want to locate that folder somewhere convenient.  Below is an example of a folder called “Quick links”, with about seven files (shortcuts) in it, that is accessible through the Windows Quick Launch bar:

See tip #37 below for a full explanation of the power of the Quick Launch bar.

Tip #33.  Create a “Set-ups” Folder

A typical computer has dozens of applications installed on it.  For each piece of software, there are often many different pieces of information you need to keep track of, including:

The original installation setup file(s).  This can be anything from a simple 100KB setup.exe file you downloaded from a website, all the way up to a 4GB ISO file that you copied from a DVD-ROM that you purchased.
The home page of the software manufacturer (in case you need to look up something on their support pages, their forum or their online help)
The page containing the download link for your actual file (in case you need to re-download it, or download an upgraded version)
The serial number
Your proof-of-purchase documentation
Any other template files, plug-ins, themes, etc. that also need to get installed

For each piece of software, it’s a great idea to gather all of these files together and put them in a single folder.  The folder can take the name of the software (plus possibly a very brief description of what it’s for – in case you can’t remember what the software does based on its name).  Then you would gather all of these folders together into one place, and call it something like “Software” or “Setups”.

If you have enough of these folders (I have several hundred, being a geek, collected over 20 years), then you may want to categorize them further.  My own categorization structure is based on “platform” (operating system):

The last seven folders each represent one platform/operating system, while _Operating Systems contains set-up files for installing the operating systems themselves.  _Hardware contains ROMs for hardware I own, such as routers.

Within the Windows folder (above), you can see the beginnings of the vast library of software I’ve compiled over the years:

An example of a typical application folder looks like this:

Tip #34.  Have a “Settings” Folder

We all know that our documents are important.  So are our photos and music files.  We save all of these files into folders, and then locate them afterwards and double-click on them to open them.  But there are many files that are important to us that can’t be saved into folders, then searched for and double-clicked later on.  These files certainly contain important information that we need, but they are often created internally by an application, and saved wherever that application feels is appropriate.

A good example of this is the “PST” file that Outlook creates for us and uses to store all our emails, contacts, appointments and so forth.  Another example would be the collection of bookmarks that Firefox stores on your behalf.  And yet another example would be the customized settings and configuration files of all our software.  Granted, most Windows programs store their configuration in the Registry, but there are still many programs that use configuration files to store their settings.

Imagine if you lost all of the above files!  And yet, when people are backing up their computers, they typically only back up the files they know about – those that are stored in the “My Documents” folder, etc.
If they had a hard disk failure, or their computer was lost or stolen, their backups would not include some of the most vital files they owned.  Also, when migrating to a new computer, it’s vital to ensure that these files make the journey.

It can be a very useful idea to create a folder to store all your “settings” – files that are important to you but which you never actually search for by name and double-click on to open.  Otherwise, next time you go to set up a new computer just the way you want it, you’ll need to spend hours recreating the configuration of your previous computer!

So how do we get our important files into this folder?  Well, we have a few options:

Some programs (such as Outlook and its PST files) allow you to place these files wherever you want.  If you delve into the program’s options, you will find a setting somewhere that controls the location of the important settings files (or “personal storage” – PST – when it comes to Outlook).
Some programs do not allow you to change such locations in any easy way, but if you get into the Registry, you can sometimes find a registry key that refers to the location of the file(s).  Simply move the file into your Settings folder and adjust the registry key to refer to the new location.
Some programs stubbornly refuse to allow their settings files to be placed anywhere other than where they stipulate.  When faced with programs like these, you have three choices:  (1) you can ignore those files, (2) you can copy the files into your Settings folder (let’s face it – settings don’t change very often), or (3) you can use synchronization software, such as the Windows Briefcase, to make synchronized copies of all your files in your Settings folder.  All you then have to do is remember to run your sync software periodically (perhaps just before you run your backup software!).

There are some other things you may decide to locate inside this new “Settings” folder:

Exports of registry keys (from the many applications that store their configurations in the Registry).  This is useful for backup purposes or for migrating to a new computer – a quick way to create these exports is shown below.
Notes you’ve made about all the specific customizations you have made to a particular piece of software (so that you’ll know how to do it all again on your next computer)
Shortcuts to webpages that detail how to tweak certain aspects of your operating system or applications so they are just the way you like them (such as how to remove the words “Shortcut to” from the beginning of newly created shortcuts).  In other words, you’d want to create shortcuts to half the pages on the How-To Geek website!
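Creating those registry exports is a one-liner with the built-in reg command.  The key path below is purely an example – substitute the key belonging to whichever application you’re capturing:

reg export "HKCU\Software\SomeApplication" "C:\Files\Settings\SomeApplication-settings.reg"

Double-clicking the resulting .reg file on the new (or rebuilt) computer merges those settings straight back into the Registry.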
Here’s an example of a “Settings” folder:

Windows Features that Help with Organization

This section details some of the features of Microsoft Windows that are a boon to anyone hoping to stay optimally organized.

Tip #35.  Use the “Favorite Links” Area to Access Oft-Used Folders

Once you’ve created your great new filing system, work out which folders you access most regularly, or which serve as great starting points for locating the rest of the files in your folder structure, and then put links to those folders in the “Favorite Links” area on the left-hand side of the Windows Explorer window (simply called “Favorites” in Windows 7):

Some ideas for folders you might want to add there include:

Your “Inbox” folder (or whatever you’ve called it) – most important!
The base of your filing structure (e.g. C:\Files)
A folder containing shortcuts to often-accessed folders on other computers around the network (shown above as Network Folders)
A folder containing shortcuts to your current projects (unless that folder is in your “Inbox” folder)

Getting folders into this area is very simple – just locate the folder you’re interested in and drag it there!

Tip #36.  Customize the Places Bar in the File/Open and File/Save Boxes

Consider the screenshot below:

The highlighted icons (collectively known as the “Places Bar”) can be customized to refer to any folder locations you want, allowing instant access to any part of your organizational structure.

Note:  These File/Open and File/Save boxes have been superseded by newer versions that use the Windows Vista/Windows 7 “Favorite Links”, but the older versions (shown above) are still used by a surprisingly large number of applications.

The easiest way to customize these icons is to use the Group Policy Editor, but not everyone has access to this program.  If you do, open it up and navigate to:

User Configuration > Administrative Templates > Windows Components > Windows Explorer > Common Open File Dialog

If you don’t have access to the Group Policy Editor, then you’ll need to get into the Registry.  Navigate to:

HKEY_CURRENT_USER \ Software \ Microsoft \ Windows \ CurrentVersion \ Policies \ comdlg32 \ Placesbar

It should then be easy to make the desired changes.  Log off and log on again to allow the changes to take effect.
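For the Registry route, the Placesbar key holds up to five values named Place0 through Place4, each containing a folder path.  As a sketch (the key may not exist yet on your machine, and C:\Files is just our running example):

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar" /v Place0 /t REG_SZ /d "C:\Files" /f

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\comdlg32\Placesbar" /v Place1 /t REG_SZ /d "C:\Files\Inbox" /f

reg add creates the key if it doesn’t already exist, and /f overwrites any existing value without prompting.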
Tip #37.  Use the Quick Launch Bar as an Application and File Launcher

That Quick Launch bar (to the right of the Start button) is a lot more useful than people give it credit for.  Most people simply keep half a dozen icons in it, and use it to start just those programs.  But it can actually be used to instantly access just about anything in your filing system:

For complete instructions on how to set this up, visit our dedicated article on this topic.

Tip #38.  Put a Shortcut to Windows Explorer into Your Quick Launch Bar

This is only necessary in Windows Vista and Windows XP.  The Microsoft boffins finally got wise and added it to the Windows 7 Superbar by default.

Windows Explorer – the program used for managing your files and folders – is one of the most useful programs in Windows.  Anyone who considers themselves serious about being organized needs instant access to this program at all times.  A great place to create a shortcut to this program is the Windows XP and Windows Vista “Quick Launch” bar:

To get it there, locate it in your Start Menu (usually under “Accessories”) and then right-drag it down into your Quick Launch bar (and create a copy).

Tip #39.  Customize the Starting Folder for Your Windows 7 Explorer Superbar Icon

If you’re on Windows 7, your Superbar will include a Windows Explorer icon.  Clicking on the icon will launch Windows Explorer (of course), and will start you off in your “Libraries” folder.  Libraries may be fine as a starting point, but if you have created yourself an “Inbox” folder, then it would probably make more sense to start off in that folder every time you launch Windows Explorer.

To change this default/starting folder location, first right-click the Explorer icon in the Superbar, then right-click the Windows Explorer entry in the jump list that appears, and choose Properties.  Then, in the Target field of the Windows Explorer Properties box that appears, type %windir%\explorer.exe followed by the path of the folder you wish to start in.  For example:

%windir%\explorer.exe C:\Files

If that folder happened to be on the Desktop (and called, say, “Inbox”), then you would use the following cleverness:

%windir%\explorer.exe shell:desktop\Inbox

Then click OK and test it out.

Tip #40.  Ummmmm….

No, that’s it.  I can’t think of another one.  That’s all of the tips I can come up with.  I only created this one because 40 is such a nice round number…

Case Study – An Organized PC

To finish off the article, I have included a few screenshots of my (main) computer (running Vista).  The aim here is twofold:

To give you a sense of what it looks like when the above, sometimes abstract, tips are applied to a real-life computer, and
To offer some ideas about folders and structure that you may want to steal for use on your own PC.

Let’s start with the C: drive itself.  Very minimal.  All my files are contained within C:\Files.  I’ll confine the rest of the case study to this folder:

That folder contains the following:

Mark:  My personal files
VC:  My business (Virtual Creations, Australia)
Others:  Files created by friends and family
Data:  Files from the rest of the world (can be thought of as “public” files, usually downloaded from the Net)
Settings:  Described above in tip #34

The Data folder contains the following sub-folders:

Audio:  Radio plays, audio books, podcasts, etc.
Development:  Programmer and developer resources, sample source code, etc. (see below)
Humour:  Jokes, funnies (those emails that we all receive)
Movies:  Downloaded and ripped movies (all legal, of course!), their scripts, DVD covers, etc.
Music:  (see below)
Setups:  Installation files for software (explained in full in tip #33)
System:  (see below)
TV:  Downloaded TV shows
Writings:  Books, instruction manuals, etc. (see below)

The Music folder contains the following sub-folders:

Album covers:  JPEG scans
Guitar tabs:  Text files of guitar sheet music
Lists:  e.g. “Top 1000 songs of all time”
Lyrics:  Text files
MIDI:  Electronic music files
MP3 (representing 99% of the Music folder):  MP3s, either ripped from CDs or downloaded, sorted by artist/album name
Music Video:  Video clips
Sheet Music:  Usually PDFs

The Data\Writings folder contains the following sub-folders (all pretty self-explanatory):

The Data\Development folder contains the following sub-folders (again, all pretty self-explanatory – if you’re a geek):

The Data\System folder contains the following sub-folders:

These are usually themes, plug-ins and other downloadable program-specific resources.

The Mark folder contains the following sub-folders:

From Others:  Usually letters that other people (friends, family, etc.) have written to me
For Others:  Letters and other things I have created for other people
Green Book:  None of your business
Playlists:  M3U files that I have compiled of my favorite songs (plus one M3U playlist file for every album I own)
Writing:  Fiction, philosophy and other musings of mine
Mark Docs:  Shortcut to C:\Users\Mark
Settings:  Shortcut to C:\Files\Settings\Mark

The Others folder contains the following sub-folders:

The VC (Virtual Creations, my business – I develop websites) folder contains the following sub-folders:

And again, all of those are pretty self-explanatory.

Conclusion

These tips have saved my sanity and helped keep me a productive geek, but what about you?  What tips and tricks do you have to keep your files organized?  Please share them with us in the comments.
Come on, don’t be shy…