Search Results

Search found 3240 results on 130 pages for 'groupwise maximum'.


  • Mobile Friendly Websites with CSS Media Queries

    - by dwahlin
    In a previous post the concept of CSS media queries was introduced and I discussed the fundamentals of how they can be used to target different screen sizes. I showed how they could be used to convert a 3-column wide page into a more vertical view of data that displays better on devices such as an iPhone:     In this post I'll provide an additional look at how CSS media queries can be used to mobile-enable a sample site called "Widget Masters" without having to change any server-side code or HTML code. The site that will be discussed is shown next:     This site has some of the standard items shown in most websites today including a title area, menu bar, and sections where data is displayed. Without including CSS media queries the site is readable but has to be zoomed out to see everything on a mobile device, cuts-off some of the menu items, and requires horizontal scrolling to get to additional content. The following image shows what the site looks like on an iPhone. While the site works on mobile devices it's definitely not optimized for mobile.     Let's take a look at how CSS media queries can be used to override existing styles in the site based on different screen widths. Adding CSS Media Queries into a Site The Widget Masters Website relies on standard CSS combined with HTML5 elements to provide the layout shown earlier. For example, to layout the menu bar shown at the top of the page the nav element is used as shown next. A standard div element could certainly be used as well if desired.   <nav> <ul class="clearfix"> <li><a href="#home">Home</a></li> <li><a href="#products">Products</a></li> <li><a href="#aboutus">About Us</a></li> <li><a href="#contactus">Contact Us</a></li> <li><a href="#store">Store</a></li> </ul> </nav>   This HTML is combined with the CSS shown next to add a CSS3 gradient, handle the horizontal orientation, and add some general hover effects.   nav { width: 100%; } nav ul { border-radius: 6px; height: 40px; width: 100%; margin: 0; padding: 0; background: rgb(125,126,125); /* Old browsers */ background: -moz-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(125,126,125,1)), color-stop(100%,rgba(14,14,14,1))); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* IE10+ */ background: linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#7d7e7d', endColorstr='#0e0e0e',GradientType=0 ); /* IE6-9 */ } nav ul > li { list-style: none; float: left; margin: 0; padding: 0; } nav ul > li:first-child { margin-left: 8px; } nav ul > li > a { color: #ccc; text-decoration: none; line-height: 2.8em; font-size: 0.95em; font-weight: bold; padding: 8px 25px 7px 25px; font-family: Arial, Helvetica, sans-serif; } nav ul > li a:hover { background-color: rgba(0, 0, 0, 0.1); color: #fff; }   When mobile devices hit the site the layout of the menu items needs to be adjusted so that they're all visible without having to swipe left or right to get to them. This type of modification can be accomplished using CSS media queries by targeting specific screen sizes. 
To start, a media query can be added into the site's CSS file as shown next:

@media screen and (max-width:320px) {
    /* CSS style overrides for this screen width go here */
}

This media query targets screens that have a maximum width of 320 pixels. Additional types of queries can also be added – refer to my previous post for more details as well as resources that can be used to test media queries on different devices. In that post I emphasize (and I'll emphasize again) that CSS media queries only modify the overall layout and look and feel of a site. They don't optimize the site as far as the size of the images or content sent to the device, which is important to keep in mind. To make the navigation menu more accessible on devices such as an iPhone or Android the CSS shown next can be used. This code changes the height of the menu from 40 pixels to 100%, takes off the li element floats, changes the line-height, and changes the margins.

@media screen and (max-width:320px) {
    nav ul { height: 100%; }
    nav ul > li { float: none; }
    nav ul > li a { line-height: 1.5em; }
    nav ul > li:first-child { margin-left: 0px; }
    /* Additional CSS overrides go here */
}

The following image shows an example of what the menu looks like when run on a device with a width of 320 pixels.

Mobile devices with a maximum width of 480 pixels need different CSS styles applied since they have 160 additional pixels of width. This can be done by adding a new CSS media query into the stylesheet as shown next. Looking through the CSS you'll see that only a minimal override is added to adjust the padding of anchor tags since the menu fits by default at this screen width.

@media screen and (max-width: 480px) {
    nav ul > li > a { padding: 8px 10px 7px 10px; }
}

Running the site on a 480-pixel-wide device results in the menu shown next being rendered. Notice that the space between the menu items is much smaller compared to what was shown when the main site loads in a standard browser.

In addition to modifying the menu, the 3 horizontal content sections shown earlier can be changed from a horizontal layout to a vertical layout so that they look good on a variety of smaller mobile devices and are easier to navigate by end users. The HTML5 article and section elements are used as containers for the 3 sections in the site as shown next:

<article class="clearfix">
    <section id="info">
        <header>Why Choose Us?</header>
        <br />
        <img id="mainImage" src="Images/ArticleImage.png" title="Article Image" />
        <p>
            Post emensos insuperabilis expeditionis eventus languentibus partium animis, quas periculorum varietas fregerat et laborum, nondum tubarum cessante clangore vel milite locato per stationes hibernas.
        </p>
    </section>
    <section id="products">
        <header>Products</header>
        <br />
        <img id="gearsImage" src="Images/Gears.png" title="Article Image" />
        <p>
            <ul>
                <li>Widget 1</li>
                <li>Widget 2</li>
                <li>Widget 3</li>
                <li>Widget 4</li>
                <li>Widget 5</li>
            </ul>
        </p>
    </section>
    <section id="FAQ">
        <header>FAQ</header>
        <br />
        <img id="faqImage" src="Images/faq.png" title="Article Image" />
        <p>
            <ul>
                <li>FAQ 1</li>
                <li>FAQ 2</li>
                <li>FAQ 3</li>
                <li>FAQ 4</li>
                <li>FAQ 5</li>
            </ul>
        </p>
    </section>
</article>

To force the sections into a vertical layout for smaller mobile devices the CSS styles shown next can be added into the media queries targeting 320 pixel and 480 pixel widths. Styles to target the display size of the images in each section are also included.
It's important to note that the original image is still being downloaded from the server and isn't being optimized in any way for the mobile device. It's certainly possible for the CSS to include URL information for a mobile-optimized image if desired. @media screen and (max-width:320px) { section { float: none; width: 97%; margin: 0px; padding: 5px; } #wrapper { padding: 5px; width: 96%; } #mainImage, #gearsImage, #faqImage { width: 100%; height: 100px; } } @media screen and (max-width: 480px) { section { float: none; width: 98%; margin: 0px 0px 10px 0px; padding: 5px; } article > section:last-child { margin-right: 0px; float: none; } #bottomSection { width: 99%; } #wrapper { padding: 5px; width: 96%; } #mainImage, #gearsImage, #faqImage { width: 100%; height: 100px; } }   The following images show the site rendered on an iPhone with the CSS media queries in place. Each of the sections now displays vertically making it much easier for the user to access them. Images inside of each section also scale appropriately to fit properly.     CSS media queries provide a great way to override default styles in a website and target devices with different resolutions. In this post you've seen how CSS media queries can be used to convert a standard browser-based site into a site that is more accessible to mobile users. Although much more can be done to optimize sites for mobile, CSS media queries provide a nice starting point if you don't have the time or resources to create mobile-specific versions of sites.

    Read the article

  • Adding Unobtrusive Validation To MVCContrib Fluent Html

    - by srkirkland
    ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements. Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client side validation rules.

A Quick Introduction to Unobtrusive Validation

To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind) [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section]:

public class Order : DomainObject
{
    [DataType(DataType.Date)]
    public virtual DateTime OrderDate { get; set; }

    [Required]
    [StringLength(12)]
    public virtual string ShipAddress { get; set; }

    [Required]
    public virtual Customer OrderedBy { get; set; }
}

Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties. Now let's assume we have a form which can edit this Order class; specifically, let's look at the ShipAddress property:

@Html.LabelFor(x => x.Order.ShipAddress)
@Html.EditorFor(x => x.Order.ShipAddress)
@Html.ValidationMessageFor(x => x.Order.ShipAddress)

Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation HTML attributes. Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results. If we view source on the input box generated by the Html.EditorFor() call, we get the following:

<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress"
    data-val-required="The ShipAddress field is required." data-val-length-max="12"
    data-val-length="The field ShipAddress must be a string with a maximum length of 12."
    data-val="true" class="text-box single-line input-validation-error">

As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, we have the length-max="12"). And of course, if we try to submit the form with an invalid value, we get an error on the client.

Working with MvcContrib's Fluent Html

The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists. Let's look at a few quick examples:

@this.TextBox(x => x.FirstName).Class("required").Label("First Name:")
@this.MultiSelect(x => x.UserId).Options(ViewModel.Users)
@this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle")

@(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName)
    .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "")
    .FirstOption(null, "--Select A Company--")
    .HideFirstOptionWhen(Model.Order.OrderedBy != null)
    .Label("Ordered By:"))

These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!). Of course, the problem we have now is that MvcContrib's fluent html helpers don't know about ASP.NET MVC 3's unobtrusive validation attributes and thus don't take part in client validation on your page. This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of what unobtrusive validation attributes to include when they are rendered.

Extending MvcContrib's Fluent Html

Before posting the code, there are just a few things you need to know. The first is that all Fluent Html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include. With this knowledge we can make quick work of extending fluent html:

public static class FluentHtmlExtensions
{
    public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper)
        where T : MvcContrib.FluentHtml.Elements.IElement
    {
        IDictionary<string, object> validationAttributes = htmlHelper
            .GetUnobtrusiveValidationAttributes(element.GetAttr("name"));

        foreach (var validationAttribute in validationAttributes)
        {
            element.SetAttr(validationAttribute.Key, validationAttribute.Value);
        }

        return element;
    }
}

The code is pretty straightforward – basically we use a passed HtmlHelper to get a list of validation attributes for the current element and then add each of the returned attributes to the element to be rendered.

The Extension In Action

Now let's get back to the earlier ShipAddress example and see what we've accomplished. First we will use a fluent html helper to render out the ship address text input (this is the 'before' case):

@this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name")

And the resulting HTML:

<label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" class="class-name">

Now let's do the same thing except here we'll use the newly written extension method:

@this.TextBox("Order.ShipAddress").Label("Ship Address:")
    .Class("class-name").IncludeUnobtrusiveValidationAttributes(Html)

And the resulting HTML:

<label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress"
    data-val-required="The ShipAddress field is required." data-val-length-max="12"
    data-val-length="The field ShipAddress must be a string with a maximum length of 12."
    data-val="true" class="class-name">

Excellent! Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC's Html helpers or MvcContrib's fluent html helpers interchangeably, and every element will participate in client side validation.

Wrap Up

Overall I'm happy with this solution, although in the best case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (of course, only if it is enabled in the web.config file). I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do. Enjoy!

    Read the article

  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically, making it much easier than previous versions to take advantage of compression that's built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let's take a look at each of the two approaches available:

Static Compression: Compresses static content from the hard disk. IIS can cache this content by compressing the file once, storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn't changed. The overhead for this is minimal and should be aggressively enabled.

Dynamic Compression: Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching.

How Compression is configured

Compression in IIS 7.x is configured with two .config file elements in the <system.webServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the default settings in ApplicationHost.config (in the %windir%\System32\inetsrv\config folder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />
      <dynamicTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/json" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </dynamicTypes>
      <staticTypes>
        <add mimeType="text/*" enabled="true" />
        <add mimeType="message/*" enabled="true" />
        <add mimeType="application/x-javascript" enabled="true" />
        <add mimeType="application/atom+xml" enabled="true" />
        <add mimeType="application/xaml+xml" enabled="true" />
        <add mimeType="*/*" enabled="false" />
      </staticTypes>
    </httpCompression>
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>

You can find documentation on the httpCompression and urlCompression keys here respectively:
http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx
http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx

The httpCompression Element – What and How to compress

Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types, which looks at returned Content-Type headers in HTTP responses. For example, I added the application/json mime type to my dynamic compression types above to allow that content to be compressed as well, since I have quite a bit of AJAX content that gets sent to the client.

The urlCompression Element – Enables and Disables Compression

The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doStaticCompression/doDynamicCompression attributes are the final determining factor for whether compression is enabled, so it's a good idea to be explicit! The default is doDynamicCompression="false", but doStaticCompression="true"!

Static Compression is enabled by Default, Dynamic Compression is not

Because static compression is very efficient in IIS 7 it's enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it's more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn't work. Setting:

<urlCompression doDynamicCompression="true" />

in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you're not likely to want dynamic compression in every application on a server. Rather, dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn't work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression="true" in web.config!!!

How Static Compression works

Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it's fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server's hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at:

%windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\

If you look into the subfolders you'll find compressed files. These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there's very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. You can do this with the staticCompressionLevel setting on the scheme element:

<scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

Other than that the default settings are probably just fine.

Dynamic Compression – not so fast!

By default dynamic compression is disabled and that's actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn't make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Forsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server's performance.

There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server:

dynamicCompressionDisableCpuUsage
dynamicCompressionEnableCpuUsage

By default these are set to 90 and 50, which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power for compression when available and falls off when the threshold has been hit. It's a good way to use some of that extra CPU power on your big servers when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90%, maybe around 70%, to make this a feature that kicks in only if there's lots of power to spare. I'm not really sure how accurate the CPU readings that IIS uses are, as CPU usage on Web servers can spike drastically even during low loads. Don't trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment.

Finally for dynamic compression I tend to add one mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type:

<add mimeType="application/json" enabled="true" />

What about Deflate Compression?

The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to:

<scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" />

to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip, but more HTTP clients (other than browsers) support GZip than Deflate, so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn't show up on the site right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it's worth the slightly less common compression type, but the option at least is available.

IIS 7 finally makes GZip Easy

In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. But once you know the basic settings I've described here and the fact that you can override all of this in your local web.config it's pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression.

Related Content: Code based ASP.NET GZip Caveats; HttpWebRequest and GZip Responses.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET.
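One simple way to verify that compression is actually being applied is to request a page from client code with an Accept-Encoding header and inspect the Content-Encoding header that comes back. The following is a minimal sketch (not from the original post) using .NET's HttpClient; the URL is a placeholder, and automatic decompression is turned off so the header and the compressed size stay visible:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CompressionCheck
{
    static async Task Main()
    {
        // Turn automatic decompression off so the Content-Encoding header
        // and the compressed byte count remain visible to us.
        var handler = new HttpClientHandler { AutomaticDecompression = DecompressionMethods.None };
        using (var client = new HttpClient(handler))
        {
            var request = new HttpRequestMessage(HttpMethod.Get, "http://localhost/MyApp/SomePage.aspx");
            request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("gzip"));
            request.Headers.AcceptEncoding.Add(new StringWithQualityHeaderValue("deflate"));

            var response = await client.SendAsync(request);
            var rawBytes = await response.Content.ReadAsByteArrayAsync();

            // If compression is active you should see "gzip" (or "deflate") here,
            // and the byte count is the compressed size on the wire.
            Console.WriteLine("Content-Encoding: " + string.Join(", ", response.Content.Headers.ContentEncoding));
            Console.WriteLine("Bytes received:   " + rawBytes.Length);
        }
    }
}

If compression is working for the content type in question you should see gzip (or deflate) reported and a noticeably smaller byte count; an empty Content-Encoding usually means the response went out uncompressed.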

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

When a memory allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example the sort operator that executes in parallel divides the memory across all tasks assuming even distribution of rows. Common memory allocating queries are those that perform Sort operations or Hash Match operations like Hash Join, Hash Aggregation or Hash Union.

In reality, how often are column values evenly distributed? Think about an example: are employees working for your company distributed evenly across all the Zip codes or mainly concentrated in the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally or are a few products hot selling items?

One of my customers tested the below example on a 24 core server with various MAXDOP settings and here are the results:
MAXDOP 1: CPU time = 1185 ms, elapsed time = 1188 ms
MAXDOP 4: CPU time = 1981 ms, elapsed time = 1568 ms
MAXDOP 8: CPU time = 1918 ms, elapsed time = 1619 ms
MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
MAXDOP 0: CPU time = 2809 ms, elapsed time = 2721 ms - all 24 cores.
In the above test, when the data was evenly distributed, the elapsed time of parallel query was always lower than serial query.

Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

Well you get the point, let's see an example. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
update Employees set Zip = 2001 where EmployeeID % 50 != 1
go
update statistics Employees with fullscan
go

Let's create the temporary table #FireDrill with all possible Zip codes.

drop table #FireDrill
go
create table #FireDrill (Zip int primary key)
insert into #FireDrill select distinct Zip from Employees
update statistics #FireDrill with fullscan
go

Let's execute the query serially with MAXDOP 1.

--Example provided by www.sqlworkshops.com
--Execute query with uneven Zip code distribution
--First serially with MAXDOP 1
set statistics time on
go
declare @EmployeeID int, @EmployeeName varchar(48), @zip int
select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
      inner join #FireDrill fd on (e.Zip = fd.Zip)
      order by e.Zip
option (maxdop 1)
go

The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler. Now let's execute the query in parallel with MAXDOP 0.
--Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 1912 ms to complete.  The execution plan shows the 79360 KB of memory was granted while the estimated rows were 799624.  The estimated number of rows between serial and parallel plan are the same. The parallel plan has slightly more memory granted due to additional overhead. Sort properties shows the rows are unevenly distributed over the 4 threads.   Sort Warnings in SQL Server Profiler.   Intermediate Summary: The reason for the higher duration with parallel plan was sort spill. This is due to uneven distribution of employees over Zip codes, especially concentration of 49 out of 50 employees in Zip code 2001. Now let’s update the Employees table and distribute employees evenly across all Zip codes.   update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go   The query took 751 ms to complete.  The execution plan shows the 77816 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.   Now let’s execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 661 ms to complete.  The execution plan shows the 79360 KB of memory was granted while the estimated rows were 784707.  Sort properties shows the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.    Intermediate Summary: When employees were distributed unevenly, concentrated on 1 Zip code, parallel sort spilled while serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, parallel sort and serial sort did not spill to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For detailed discussion of memory allocation, refer to webcasts available at www.sqlworkshops.com/webcasts.     Some of you might conclude from the above execution times that parallel query is not faster even when there is no spill. Below you can see when we are joining limited amount of Zip codes, parallel query will be fasted since it can use Bitmap Filtering.   Let’s update the Employees table with 49 out of 50 employees located in Zip code 2001. 
update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1 update Employees set Zip = 2001 where EmployeeID % 50 != 1 go update statistics Employees with fullscan go  Let’s create the temporary table #FireDrill with limited Zip codes. drop table #FireDrill go create table #FireDrill (Zip int primary key) insert into #FireDrill select distinct Zip       from Employees where Zip between 1800 and 2001 update statistics #FireDrill with fullscan go  Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 989 ms to complete.  The execution plan shows the 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 1799 ms to complete.  The execution plan shows the 79360 KB of memory was granted while the estimated rows were 785594.  Sort Warnings in SQL Server Profiler.    The estimated number of rows between serial and parallel plan are the same. The parallel plan has slightly more memory granted due to additional overhead.  Intermediate Summary: The reason for the higher duration with parallel plan even with limited amount of Zip codes was sort spill. This is due to uneven distribution of employees over Zip codes, especially concentration of 49 out of 50 employees in Zip code 2001.   Now let’s update the Employees table and distribute employees evenly across all Zip codes. update Employees set Zip = EmployeeID / 400 + 1 go update statistics Employees with fullscan go Let’s execute the query serially with MAXDOP 1. --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --Serially with MAXDOP 1 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 1) go The query took 250  ms to complete.  The execution plan shows the 9016 KB of memory was granted while the estimated rows were 79973.8.  No Sort Warnings in SQL Server Profiler.  Now let’s execute the query in parallel with MAXDOP 0.  --Example provided by www.sqlworkshops.com --Execute query with uneven Zip code distribution --In parallel with MAXDOP 0 set statistics time on go declare @EmployeeID int, @EmployeeName varchar(48),@zip int select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e       inner join #FireDrill fd on (e.Zip = fd.Zip)       order by e.Zip option (maxdop 0) go The query took 85 ms to complete.  The execution plan shows the 13152 KB of memory was granted while the estimated rows were 784707.  No Sort Warnings in SQL Server Profiler.    
Here you see, parallel query is much faster than serial query since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

Summary: One has to avoid sort spill over tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages, reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills over tempdb and when to execute a query in parallel. I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.

Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011; click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.

Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

R Meyyappan [email protected]
LinkedIn: http://at.linkedin.com/in/rmeyyappan
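As a side note (this harness is not part of the original article): if you prefer to reproduce elapsed-time comparisons like the ones above from application code instead of SSMS, a minimal ADO.NET sketch such as the following works. The connection string is a placeholder, it assumes the Employees table from the workshop script already exists, and #FireDrill is created on the same connection so the temp table stays in scope:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class MaxdopTiming
{
    static void Main()
    {
        // Placeholder connection string - point it at the database that holds the workshop tables.
        var connectionString = "Server=.;Database=Workshop;Integrated Security=true";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Build #FireDrill on this connection so the temp table is visible to the test query.
            var setup = new SqlCommand(
                "create table #FireDrill (Zip int primary key); " +
                "insert into #FireDrill select distinct Zip from Employees;", connection);
            setup.ExecuteNonQuery();

            foreach (var maxdop in new[] { 1, 4, 8, 0 })
            {
                var query =
                    "declare @EmployeeName varchar(48), @zip int; " +
                    "select @EmployeeName = e.EmployeeName, @zip = e.Zip " +
                    "from Employees e inner join #FireDrill fd on (e.Zip = fd.Zip) " +
                    "order by e.Zip option (maxdop " + maxdop + ");";

                var stopwatch = Stopwatch.StartNew();
                using (var command = new SqlCommand(query, connection) { CommandTimeout = 300 })
                {
                    command.ExecuteNonQuery();
                }
                stopwatch.Stop();

                Console.WriteLine("MAXDOP {0}: elapsed {1} ms", maxdop, stopwatch.ElapsedMilliseconds);
            }
        }
    }
}

Elapsed times measured this way include client overhead, so treat them as rough comparisons rather than a replacement for SET STATISTICS TIME.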

    Read the article

  • The Benefits of Smart Grid Business Software

    - by Sylvie MacKenzie, PMP
    Smart Grid Background

What Are Smart Grids?
Smart Grids use computer hardware and software, sensors, controls, and telecommunications equipment and services to:
- Link customers to information that helps them manage consumption and use electricity wisely.
- Enable customers to respond to utility notices in ways that help minimize the duration of overloads, bottlenecks, and outages.
- Provide utilities with information that helps them improve performance and control costs.

What Is Driving Smart Grid Development?

Environmental Impact
Smart Grid development is picking up speed because of the widespread interest in reducing the negative impact that energy use has on the environment. Smart Grids use technology to drive efficiencies in transmission, distribution, and consumption. As a result, utilities can serve customers’ power needs with fewer generating plants, fewer transmission and distribution assets, and lower overall generation. With the possible exception of wind farm sprawl, landscape preservation is one obvious benefit. And because most generation today results in greenhouse gas emissions, Smart Grids reduce air pollution and the potential for global climate change. Smart Grids also more easily accommodate the technical difficulties of integrating intermittent renewable resources like wind and solar into the grid, providing further greenhouse gas reductions.

Costs
The ability to defer the cost of plant and grid expansion is a major benefit to both utilities and customers. Utilities do not need to use as many internal resources for traditional infrastructure project planning and management. Large T&D infrastructure expansion costs are not passed on to customers. Smart Grids will not eliminate capital expansion, of course. Transmission corridors to connect renewable generation with customers will require major near-term expenditures. Additionally, in the future, electricity to satisfy the needs of population growth and additional applications will exceed the capacity reductions available through the Smart Grid. At that point, expansion will resume—but with greater overall T&D efficiency based on demand response, load control, and many other Smart Grid technologies and business processes.

Energy efficiency is a second area of Smart Grid cost saving of particular relevance to customers. The timely and detailed information Smart Grids provide encourages customers to limit waste, adopt energy-efficient building codes and standards, and invest in energy efficient appliances. Efficiency may or may not lower customer bills because customer efficiency savings may be offset by higher costs in generation fuels or carbon taxes. It is clear, however, that bills will be lower with efficiency than without it.

Utility Operations
Smart Grids can serve as the central focus of utility initiatives to improve business processes. Many utilities have long “wish lists” of projects and applications they would like to fund in order to improve customer service or ease staff’s burden of repetitious work, but they have difficulty cost-justifying the changes, especially in the short term. Adding Smart Grid benefits to the cost/benefit analysis frequently tips the scales in favor of the change and can also significantly reduce payback periods. Mobile workforce applications and asset management applications work together to deploy assets and then to maintain, repair, and replace them. Many additional benefits result—for instance, increased productivity and fuel savings from better routing.
Similarly, customer portals that provide customers with near-real-time information can also encourage online payments, thus lowering billing costs. Utilities can and should include these cost and service improvements in the list of Smart Grid benefits. What Is Smart Grid Business Software? Smart Grid business software gathers data from a Smart Grid and uses it improve a utility’s business processes. Smart Grid business software also helps utilities provide relevant information to customers who can then use it to reduce their own consumption and improve their environmental profiles. Smart Grid Business Software Minimizes the Impact of Peak Demand Utilities must size their assets to accommodate their highest peak demand. The higher the peak rises above base demand: The more assets a utility must build that are used only for brief periods—an inefficient use of capital. The higher the utility’s risk profile rises given the uncertainties surrounding the time needed for permitting, building, and recouping costs. The higher the costs for utilities to purchase supply, because generators can charge more for contracts and spot supply during high-demand periods. Smart Grids enable a variety of programs that reduce peak demand, including: Time-of-use pricing and critical peak pricing—programs that charge customers more when they consume electricity during peak periods. Pilot projects indicate that these programs are successful in flattening peaks, thus ensuring better use of existing T&D and generation assets. Direct load control, which lets utilities reduce or eliminate electricity flow to customer equipment (such as air conditioners). Contracts govern the terms and conditions of these turn-offs. Indirect load control, which signals customers to reduce the use of on-premises equipment for contractually agreed-on time periods. Smart Grid business software enables utilities to impose penalties on customers who do not comply with their contracts. Smart Grids also help utilities manage peaks with existing assets by enabling: Real-time asset monitoring and control. In this application, advanced sensors safely enable dynamic capacity load limits, ensuring that all grid assets can be used to their maximum capacity during peak demand periods. Real-time asset monitoring and control applications also detect the location of excessive losses and pinpoint need for mitigation and asset replacements. As a result, utilities reduce outage risk and guard against excess capacity or “over-build”. Better peak demand analysis. As a result: Distribution planners can better size equipment (e.g. transformers) to avoid over-building. Operations engineers can identify and resolve bottlenecks and other inefficiencies that may cause or exacerbate peaks. As above, the result is a reduction in the tendency to over-build. Supply managers can more closely match procurement with delivery. As a result, they can fine-tune supply portfolios, reducing the tendency to over-contract for peak supply and reducing the need to resort to spot market purchases during high peaks. Smart Grids can help lower the cost of remaining peaks by: Standardizing interconnections for new distributed resources (such as electricity storage devices). Placing the interconnections where needed to support anticipated grid congestion. Smart Grid Business Software Lowers the Cost of Field Services By processing Smart Grid data through their business software, utilities can reduce such field costs as: Vegetation management. 
Smart Grids can pinpoint momentary interruptions and tree-caused outages. Spatial mash-up tools leverage GIS models of tree growth for targeted vegetation management. This reduces the cost of unnecessary tree trimming. Service vehicle fuel. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary “truck rolls.” Similarly, crews use far less fuel when Smart Grid sensors can pinpoint a problem and mobile workforce applications can then route them directly to it. Smart Grid Business Software Ensures Regulatory Compliance Smart Grids can ensure compliance with private contracts and with regional, national, or international requirements by: Monitoring fulfillment of contract terms. Utilities can use one-hour interval meters to ensure that interruptible (“non-core”) customers actually reduce or eliminate deliveries as required. They can use the information to levy fines against contract violators. Monitoring regulations imposed on customers, such as maximum use during specific time periods. Using accurate time-stamped event history derived from intelligent devices distributed throughout the smart grid to monitor and report reliability statistics and risk compliance. Automating business processes and activities that ensure compliance with security and reliability measures (e.g. NERC-CIP 2-9). Grid Business Software Strengthens Utilities’ Connection to Customers While Reducing Customer Service Costs During outages, Smart Grid business software can: Identify outages more quickly. Software uses sensors to pinpoint outages and nested outage locations. They also permit utilities to ensure outage resolution at every meter location. Size outages more accurately, permitting utilities to dispatch crews that have the skills needed, in appropriate numbers. Provide updates on outage location and expected duration. This information helps call centers inform customers about the timing of service restoration. Smart Grids also facilitates display of outage maps for customer and public-service use. Smart Grids can significantly reduce the cost to: Connect and disconnect customers. Meters capable of remote disconnect can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property or disconnect customers for nonpayment. Resolve reports of voltage fluctuation. Smart Grids gather and report voltage and power quality data from meters and grid sensors, enabling utilities to pinpoint reported problems or resolve them before customers complain. Detect and resolve non-technical losses (e.g. theft). Smart Grids can identify illegal attempts to reconnect meters or to use electricity in supposedly vacant premises. They can also detect theft by comparing flows through delivery assets with billed consumption. Smart Grids also facilitate outreach to customers. By monitoring and analyzing consumption over time, utilities can: Identify customers with unusually high usage and contact them before they receive a bill. They can also suggest conservation techniques that might help to limit consumption. This can head off “high bill” complaints to the contact center. Note that such “high usage” or “additional charges apply because you are out of range” notices—frequently via text messaging—are already common among mobile phone providers. Help customers identify appropriate bill payment alternatives (budget billing, prepayment, etc.). 
Help customers find and reduce causes of over-consumption. There’s no waiting for bills in the mail before they even understand there is a problem. Utilities benefit not just through improved customer relations but also through limiting the size of bills from customers who might struggle to pay them. Where permitted, Smart Grids can open the doors to such new utility service offerings as: Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents. Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart Grids permit utilities to alert owners or managers to a need for maintenance or replacement. Facilitating home and small-business networks. Smart Grids can provide a gateway to equipment networks that automate control or let owners access equipment remotely. They also facilitate net metering, offering some utilities a path toward involvement in small-scale solar or wind generation. Prepayment plans that do not need special meters. Smart Grid Business Software Helps Customers Control Energy Costs There is no end to the ways Smart Grids help both small and large customers control energy costs. For instance: Multi-premises customers appreciate having all meters read on the same day so that they can more easily compare consumption at various sites. Customers in competitive regions can match their consumption profile (detailed via Smart Grid data) with specific offerings from competitive suppliers. Customers seeing inexplicable consumption patterns and power quality problems may investigate further. The result can be discovery of electrical problems that can be resolved through rewiring or maintenance—before more serious fires or accidents happen. Smart Grid Business Software Facilitates Use of Renewables Generation from wind and solar resources is a popular alternative to fossil fuel generation, which emits greenhouse gases. Wind and solar generation may also increase energy security in regions that currently import fossil fuel for use in generation. Utilities face many technical issues as they attempt to integrate intermittent resource generation into traditional grids, which traditionally handle only fully dispatchable generation. Smart Grid business software helps solves many of these issues by: Detecting sudden drops in production from renewables-generated electricity (wind and solar) and automatically triggering electricity storage and smart appliance response to compensate as needed. Supporting industry-standard distributed generation interconnection processes to reduce interconnection costs and avoid adding renewable supplies to locations already subject to grid congestion. Facilitating modeling and monitoring of locally generated supply from renewables and thus helping to maximize their use. Increasing the efficiency of “net metering” (through which utilities can use electricity generated by customers) by: Providing data for analysis. Integrating the production and consumption aspects of customer accounts. During non-peak periods, such techniques enable utilities to increase the percent of renewable generation in their supply mix. During peak periods, Smart Grid business software controls circuit reconfiguration to maximize available capacity. Conclusion Utility missions are changing. Yesterday, they focused on delivery of reasonably priced energy and water. 
Tomorrow, their missions will expand to encompass sustainable use and environmental improvement. Smart Grids are key to helping utilities achieve this expanded mission. But they come at a relatively high price. Utilities will need to invest heavily in new hardware, software, business process development, and staff training. Customer investments in home area networks and smart appliances will be large. Learning to change the energy and water consumption habits of a lifetime could ultimately prove an even more formidable task. Smart Grid business software can ease the cost and difficulties inherent in a needed transition to a more flexible, reliable, responsive electricity grid. Justifying its implementation, however, requires a full understanding of the benefits it brings—benefits that can ultimately help customers, utilities, communities, and the world address global issues like energy security and climate change while minimizing costs and maximizing customer convenience. This white paper is available for download here. For further information about Oracle's Primavera Solutions for Utilities, please read our Utilities e-book.


  • Update records in OAF Page

    - by PRajkumar
1. Create a Search Page To create a page, please go through the following link https://blogs.oracle.com/prajkumar/entry/create_oaf_search_page   2. Implement Update Action in SearchPG Right click on ResultTable in SearchPG > New > Item   Set following properties for New Item   Attribute Property ID UpdateAction Item Style image Image URI updateicon_enabled.gif Attribute Set /oracle/apps/fnd/attributesets/Buttons/Update Prompt Update Additional Text Update record Height 24 Width 24 Action Type fireAction Event update Submit True Parameters Name – PColumn1 Value -- ${oa.SearchVO1.Column1} Name – PColumn2 Value -- ${oa.SearchVO1.Column2}   3. Create an Update Page Right click on SearchDemo > New > Web Tier > OA Components > Page Name – UpdatePG Package – prajkumar.oracle.apps.fnd.searchdemo.webui   4. Select the UpdatePG and go to the structure pane where a default region has been created   5. Select region1 and set the following properties:   Attribute Property ID PageLayoutRN Region Style PageLayout AM Definition prajkumar.oracle.apps.fnd.searchdemo.server.SearchAM Window Title Update Page Window Title Update Page Auto Footer True   6. Create the Second Region (Main Content Region) Select PageLayoutRN right click > New > Region ID – MainRN Region Style – messageComponentLayout   7. Create first Item (Empty Field) MainRN > New > messageTextInput   Attribute Property ID Column1 Style Property messageTextInput Prompt Column1 Data Type VARCHAR2 Length 20 Maximum Length 100 View Instance SearchVO1 View Attribute Column1   8. Create second Item (Empty Field) MainRN > New > messageTextInput   Attribute Property ID Column2 Style Property messageTextInput Prompt Column2 Data Type VARCHAR2 Length 20 Maximum Length 100 View Instance SearchVO1 View Attribute Column2   9. Create a container Region for Apply and Cancel Button in UpdatePG Select MainRN of UpdatePG MainRN > messageLayout   Attribute Property Region ButtonLayout   10. Create Apply Button Select ButtonLayout > New > Item   Attribute Property ID Apply Item Style submitButton Attribute /oracle/apps/fnd/attributesets/Buttons/Apply   11. Create Cancel Button Select ButtonLayout > New > Item   Attribute Property ID Cancel Item Style submitButton Attribute /oracle/apps/fnd/attributesets/Buttons/Cancel   12. 
Add Page Controller for SearchPG Right Click on PageLayoutRN of SearchPG > Set New Controller Name – SearchCO Package -- prajkumar.oracle.apps.fnd.searchdemo.webui   Add Following code in Search Page controller SearchCO    import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.apps.fnd.framework.webui.OAWebBeanConstants; import oracle.apps.fnd.framework.webui.beans.layout.OAQueryBean; public void processRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processRequest(pageContext, webBean);  OAQueryBean queryBean = (OAQueryBean)webBean.findChildRecursive("QueryRN");  queryBean.clearSearchPersistenceCache(pageContext); }   public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);     if ("update".equals(pageContext.getParameter(EVENT_PARAM)))  {   pageContext.setForwardURL("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/UpdatePG",                                     null,                                     OAWebBeanConstants.KEEP_MENU_CONTEXT,                                                                 null,                                                                                        null,                                     true,                                                                 OAWebBeanConstants.ADD_BREAD_CRUMB_NO,                                     OAWebBeanConstants.IGNORE_MESSAGES);  }  } 13. Add Page Controller for UpdatePG Right Click on PageLayoutRN of UpdatePG > Set New Controller Name – UpdateCO Package -- prajkumar.oracle.apps.fnd.searchdemo.webui   Add Following code in Update Page controller UpdateCO    import oracle.apps.fnd.framework.webui.OAPageContext; import oracle.apps.fnd.framework.webui.beans.OAWebBean; import oracle.apps.fnd.framework.webui.OAWebBeanConstants; import oracle.apps.fnd.framework.OAApplicationModule; import java.io.Serializable;  public void processRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processRequest(pageContext, webBean);  OAApplicationModule am = pageContext.getApplicationModule(webBean);  String Column1 = pageContext.getParameter("PColumn1");  String Column2 = pageContext.getParameter("PColumn2");  Serializable[] params = { Column1, Column2 };  am.invokeMethod("updateRow", params); } public void processFormRequest(OAPageContext pageContext, OAWebBean webBean) {  super.processFormRequest(pageContext, webBean);  OAApplicationModule am = pageContext.getApplicationModule(webBean);         if (pageContext.getParameter("Apply") != null)  {    am.invokeMethod("apply");   pageContext.forwardImmediately("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/SearchPG",                                          null,                                          OAWebBeanConstants.KEEP_MENU_CONTEXT,                                          null,                                          null,                                          false, // retain AM                                          OAWebBeanConstants.ADD_BREAD_CRUMB_NO);  }  else if (pageContext.getParameter("Cancel") != null)  {    am.invokeMethod("rollback");   pageContext.forwardImmediately("OA.jsp?page=/prajkumar/oracle/apps/fnd/searchdemo/webui/SearchPG",                                          null,                                          OAWebBeanConstants.KEEP_MENU_CONTEXT,                                          null,                                          null,                      
                    false, // retain AM                                          OAWebBeanConstants.ADD_BREAD_CRUMB_NO);  } }   14. Add the following code in SearchAMImpl   import oracle.apps.fnd.framework.server.OAApplicationModuleImpl; import oracle.apps.fnd.framework.server.OAViewObjectImpl;     public void updateRow(String Column1, String Column2) {  SearchVOImpl vo = (SearchVOImpl)getSearchVO1();  vo.initQuery(Column1, Column2); }     public void apply() {  getTransaction().commit(); } public void rollback() {  getTransaction().rollback(); }   15. Add the following code in SearchVOImpl   import oracle.apps.fnd.framework.server.OAViewObjectImpl;     public void initQuery(String Column1, String Column2) {  if ((Column1 != null) && (!("".equals(Column1.trim()))))  {   setWhereClause("column1 = :1 AND column2 = :2");   setWhereClauseParams(null); // Always reset   setWhereClauseParam(0, Column1);   setWhereClauseParam(1, Column2);   executeQuery();  } }   16. Congratulations, you have successfully finished. Run your Search page and test your work.


  • 64-bit Archives Needed

    - by user9154181
    A little over a year ago, we received a question from someone who was trying to build software on Solaris. He was getting errors from the ar command when creating an archive. At that time, the ar command on Solaris was a 32-bit command. There was more than 2GB of data, and the ar command was hitting the file size limit for a 32-bit process that doesn't use the largefile APIs. Even in 2011, 2GB is a very large amount of code, so we had not heard this one before. Most of our toolchain was extended to handle 64-bit sized data back in the 1990's, but archives were not changed, presumably because there was no perceived need for it. Since then of course, programs have continued to get larger, and in 2010, the time had finally come to investigate the issue and find a way to provide for larger archives. As part of that process, I had to do a deep dive into the archive format, and also do some Unix archeology. I'm going to record what I learned here, to document what Solaris does, and in the hope that it might help someone else trying to solve the same problem for their platform. Archive Format Details Archives are hardly cutting edge technology. They are still used of course, but their basic form hasn't changed in decades. Other than to fix a bug, which is rare, we don't tend to touch that code much. The archive file format is described in /usr/include/ar.h, and I won't repeat the details here. Instead, here is a rough overview of the archive file format, implemented by System V Release 4 (SVR4) Unix systems such as Solaris: Every archive starts with a "magic number". This is a sequence of 8 characters: "!<arch>\n". The magic number is followed by 1 or more members. A member starts with a fixed header, defined by the ar_hdr structure in/usr/include/ar.h. Immediately following the header comes the data for the member. Members must be padded at the end with newline characters so that they have even length. The requirement to pad members to an even length is a dead giveaway as to the age of the archive format. It tells you that this format dates from the 1970's, and more specifically from the era of 16-bit systems such as the PDP-11 that Unix was originally developed on. A 32-bit system would have required 4 bytes, and 64-bit systems such as we use today would probably have required 8 bytes. 2 byte alignment is a poor choice for ELF object archive members. 32-bit objects require 4 byte alignment, and 64-bit objects require 64-bit alignment. The link-editor uses mmap() to process archives, and if the members have the wrong alignment, we have to slide (copy) them to the correct alignment before we can access the ELF data structures inside. The archive format requires 2 byte padding, but it doesn't prohibit more. The Solaris ar command takes advantage of this, and pads ELF object members to 8 byte boundaries. Anything else is padded to 2 as required by the format. The archive header (ar_hdr) represents all numeric values using an ASCII text representation rather than as binary integers. This means that an archive that contains only text members can be viewed using tools such as cat, more, or a text editor. The original designers of this format clearly thought that archives would be used for many file types, and not just for objects. Things didn't turn out that way of course — nearly all archives contain relocatable objects for a single operating system and machine, and are used primarily as input to the link-editor (ld). 
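To make the layout described above concrete, here is a minimal Python sketch of an archive walker. It is an illustration only, not Solaris code; the field widths follow the common SVR4 ar_hdr layout (16-character name, 12-character date, 6-character uid and gid, 8-character mode, 10-character size, 2-character trailing magic), and the archive name passed in at the end is hypothetical.

import struct

ARMAG = b"!<arch>\n"                          # 8-character archive magic number
AR_HDR = struct.Struct("16s12s6s6s8s10s2s")   # ar_name, ar_date, ar_uid, ar_gid, ar_mode, ar_size, ar_fmag

def walk_archive(path):
    """Print the name and size of every member, special or regular, in an SVR4 archive."""
    with open(path, "rb") as f:
        if f.read(len(ARMAG)) != ARMAG:
            raise ValueError("%s: bad archive magic number" % path)
        while True:
            raw = f.read(AR_HDR.size)
            if len(raw) < AR_HDR.size:        # no more members
                break
            name, _date, _uid, _gid, _mode, size, fmag = AR_HDR.unpack(raw)
            if fmag != b"`\n":                # per-member trailing magic from ar.h
                raise ValueError("corrupt member header")
            size = int(size)                  # numeric fields are ASCII text, not binary
            f.seek(size + (size & 1), 1)      # skip the data plus the newline pad to even length
            print(name.decode().rstrip(), size)

walk_archive("libexample.a")                  # hypothetical archive name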
Archives can have special members that are created by the ar command rather than being supplied by the user. These special members are all distinguished by having a name that starts with the slash (/) character. This is an unambiguous marker that says that the user could not have supplied it. The reason for this is that regular archive members are given the plain name of the file that was inserted to create them, and any path components are stripped off. Slash is the delimiter character used by Unix to separate path components, and as such cannot occur within a plain file name. The ar command hides the special members from you when you list the contents of an archive, so most users don't know that they exist. There are only two possible special members: A symbol table that maps ELF symbols to the object archive member that provides it, and a string table used to hold member names that exceed 15 characters. The '/' convention for tagging special members provides room for adding more such members should the need arise. As I will discuss below, we took advantage of this fact to add an alternate 64-bit symbol table special member which is used in archives that are larger than 4GB. When an archive contains ELF object members, the ar command builds a special archive member known as the symbol table that maps all ELF symbols in the object to the archive member that provides it. The link-editor uses this symbol table to determine which symbols are provided by the objects in that archive. If an archive has a symbol table, it will always be the first member in the archive, immediately following the magic number. Unlike member headers, symbol tables do use binary integers to represent offsets. These integers are always stored in big-endian format, even on a little endian host such as x86. The archive header (ar_hdr) provides 15 characters for representing the member name. If any member has a name that is longer than this, then the real name is written into a special archive member called the string table, and the member's name field instead contains a slash (/) character followed by a decimal representation of the offset of the real name within the string table. The string table is required to precede all normal archive members, so it will be the second member if the archive contains a symbol table, and the first member otherwise. The archive format is not designed to make finding a given member easy. Such operations move through the archive from front to back examining each member in turn, and run in O(n) time. This would be bad if archives were commonly used in that manner, but in general, they are not. Typically, the ar command is used to build an new archive from scratch, inserting all the objects in one operation, and then the link-editor accesses the members in the archive in constant time by using the offsets provided by the symbol table. Both of these operations are reasonably efficient. However, listing the contents of a large archive with the ar command can be rather slow. Factors That Limit Solaris Archive Size As is often the case, there was more than one limiting factor preventing Solaris archives from growing beyond the 32-bit limits of 2GB (32-bit signed) and 4GB (32-bit unsigned). These limits are listed in the order they are hit as archive size grows, so the earlier ones mask those that follow. The original Solaris archive file format can handle sizes up to 4GB without issue. However, the ar command was delivered as a 32-bit executable that did not use the largefile APIs. 
As such, the ar command itself could not create a file larger than 2GB. One can solve this by building ar with the largefile APIs which would allow it to reach 4GB, but a simpler and better answer is to deliver a 64-bit ar, which has the ability to scale well past 4GB. Symbol table offsets are stored as 32-bit big-endian binary integers, which limits the maximum archive size to 4GB. To get around this limit requires a different symbol table format, or an extension mechanism to the current one, similar in nature to the way member names longer than 15 characters are handled in member headers. The size field in the archive member header (ar_hdr) is an ASCII string capable of representing a 32-bit unsigned value. This places a 4GB size limit on the size of any individual member in an archive. In considering format extensions to get past these limits, it is important to remember that very few archives will require the ability to scale past 4GB for many years. The old format, while no beauty, continues to be sufficient for its purpose. This argues for a backward compatible fix that allows newer versions of Solaris to produce archives that are compatible with older versions of the system unless the size of the archive exceeds 4GB. Archive Format Differences Among Unix Variants While considering how to extend Solaris archives to scale to 64-bits, I wanted to know how similar archives from other Unix systems are to those produced by Solaris, and whether they had already solved the 64-bit issue. I've successfully moved archives between different Unix systems before with good luck, so I knew that there was some commonality. If it turned out that there was already a viable defacto standard for 64-bit archives, it would obviously be better to adopt that rather than invent something new. The archive file format is not formally standardized. However, the ar command and archive format were part of the original Unix from Bell Labs. Other systems started with that format, extending it in various often incompatible ways, but usually with the same common shared core. Most of these systems use the same magic number to identify their archives, despite the fact that their archives are not always fully compatible with each other. It is often true that archives can be copied between different Unix variants, and if the member names are short enough, the ar command from one system can often read archives produced on another. In practice, it is rare to find an archive containing anything other than objects for a single operating system and machine type. Such an archive is only of use on the type of system that created it, and is only used on that system. This is probably why cross platform compatibility of archives between Unix variants has never been an issue. Otherwise, the use of the same magic number in archives with incompatible formats would be a problem. I was able to find information for a number of Unix variants, described below. These can be divided roughly into three tribes, SVR4 Unix, BSD Unix, and IBM AIX. Solaris is a SVR4 Unix, and its archives are completely compatible with those from the other members of that group (GNU/Linux, HP-UX, and SGI IRIX). AIX AIX is an exception to rule that Unix archive formats are all based on the original Bell labs Unix format. It appears that AIX supports 2 formats (small and big), both of which differ in fundamental ways from other Unix systems: These formats use a different magic number than the standard one used by Solaris and other Unix variants. 
They include support for removing archive members from a file without reallocating the file, marking dead areas as unused, and reusing them when new archive items are inserted. They have a special table of contents member (File Member Header) which lets you find out everything that's in the archive without having to actually traverse the entire file. Their symbol table members are quite similar to those from other systems though. Their member headers are doubly linked, containing offsets to both the previous and next members. Of the Unix systems described here, AIX has the only format I saw that will have reasonable insert/delete performance for really large archives. Everyone else has O(n) performance, and are going to be slow to use with large archives. BSD BSD has gone through 4 versions of archive format, which are described in their manpage. They use the same member header as SVR4, but their symbol table format is different, and their scheme for long member names puts the name directly after the member header rather than into a string table. GNU/Linux The GNU toolchain uses the SVR4 format, and is compatible with Solaris. HP-UX HP-UX seems to follow the SVR4 model, and is compatible with Solaris. IRIX IRIX has 32 and 64-bit archives. The 32-bit format is the standard SVR4 format, and is compatible with Solaris. The 64-bit format is the same, except that the symbol table uses 64-bit integers. IRIX assumes that an archive contains objects of a single ELFCLASS/MACHINE, and any archive containing ELFCLASS64 objects receives a 64-bit symbol table. Although they only use it for 64-bit objects, nothing in the archive format limits it to ELFCLASS64. It would be perfectly valid to produce a 64-bit symbol table in an archive containing 32-bit objects, text files, or anything else. Tru64 Unix (Digital/Compaq/HP) Tru64 Unix uses a format much like ours, but their symbol table is a hash table, making specific symbol lookup much faster. The Solaris link-editor uses archives by examining the entire symbol table looking for unsatisfied symbols for the link, and not by looking up individual symbols, so there would be no benefit to Solaris from such a hash table. The Tru64 ld must use a different approach in which the hash table pays off for them. Widening the existing SVR4 archive symbol tables rather than inventing something new is the simplest path forward. There is ample precedent for this approach in the ELF world. When ELF was extended to support 64-bit objects, the approach was largely to take the existing data structures, and define 64-bit versions of them. We called the old set ELF32, and the new set ELF64. My guess is that there was no need to widen the archive format at that time, but had there been, it seems obvious that this is how it would have been done. The Implementation of 64-bit Solaris Archives As mentioned earlier, there was no desire to improve the fundamental nature of archives. They have always had O(n) insert/delete behavior, and for the most part it hasn't mattered. AIX made efforts to improve this, but those efforts did not find widespread adoption. For the purposes of link-editing, which is essentially the only thing that archives are used for, the existing format is adequate, and issues of backward compatibility trump the desire to do something technically better. Widening the existing symbol table format to 64-bits is therefore the obvious way to proceed. For Solaris 11, I implemented that, and I also updated the ar command so that a 64-bit version is run by default. 
This eliminates the 2 most significant limits to archive size, leaving only the limit on an individual archive member. We only generate a 64-bit symbol table if the archive exceeds 4GB, or when the new -S option to the ar command is used. This maximizes backward compatibility, as an archive produced by Solaris 11 is highly likely to be less than 4GB in size, and will therefore employ the same format understood by older versions of the system. The main reason for the existence of the -S option is to allow us to test the 64-bit format without having to construct huge archives to do so. I don't believe it will find much use outside of that. Other than the new ability to create and use extremely large archives, this change is largely invisible to the end user. When reading an archive, the ar command will transparently accept either form of symbol table. Similarly, the ELF library (libelf) has been updated to understand either format. Users of libelf (such as the link-editor ld) do not need to be modified to use the new format, because these changes are encapsulated behind the existing functions provided by libelf. As mentioned above, this work did not lift the limit on the maximum size of an individual archive member. That limit remains fixed at 4GB for now. This is not because we think objects will never get that large, for the history of computing says otherwise. Rather, this is based on an estimation that single relocatable objects of that size will not appear for a decade or two. A lot can change in that time, and it is better not to overengineer things by writing code that will sit and rot for years without being used. It is not too soon however to have a plan for that eventuality. When the time comes when this limit needs to be lifted, I believe that there is a simple solution that is consistent with the existing format. The archive member header size field is an ASCII string, like the name, and as such, the overflow scheme used for long names can also be used to handle the size. The size string would be placed into the archive string table, and its offset in the string table would then be written into the archive header size field using the same format "/ddd" used for overflowed names.
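The overflow idea sketched in the last paragraph is easy to model. The following Python fragment is only an illustration of that proposal, not shipped code; the "/\n" terminator for string-table entries is my assumption, mirroring the convention used for overflowed names.

def encode_size(size, string_table):
    """Return the 10-character ASCII size field, spilling large sizes into the string table."""
    if size < 2 ** 32:                        # fits under the current 4GB member limit
        return str(size).ljust(10), string_table
    offset = len(string_table)
    string_table += str(size) + "/\n"         # assumed terminator, as with long member names
    return ("/" + str(offset)).ljust(10), string_table

def decode_size(field, string_table):
    field = field.strip()
    if field.startswith("/"):                 # "/ddd": offset of the real size in the string table
        offset = int(field[1:])
        return int(string_table[offset:string_table.index("/\n", offset)])
    return int(field)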


  • AIX Checklist for stable obiee deployment

    - by user554629
Common AIX configuration issues     ( last updated 27 Aug 2012 ) OBIEE is a complicated system with many moving parts and connection points. The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators. The information in this article is time sensitive, and updated as I discover new issues or details. What makes OBIEE different? When Tech Support suggests AIX component upgrades to a stable, locked-down production AIX environment, it is common to get "push back".  "Why is this necessary?  Why aren't we seeing issues with other software?" It's a fair question that I have often struggled to answer; here are the talking points: OBIEE is memory intensive.  It is the entire purpose of the software to trade memory for repetitive, more expensive database requests across a network. OBIEE is implemented in C++ and is very dependent on the C++ runtime to behave correctly. OBIEE is aggressively thread efficient;  if atomic operations on a particular architecture do not work correctly, the software crashes. OBIEE dynamically loads third-party database client libraries directly into the nqsserver process.  If the library is not thread-safe, or corrupts process memory, the OBIEE crash happens in an unrelated part of the code.  These are extremely difficult bugs to find. OBIEE software uses 99% common source across multiple platforms:  Windows, Linux, AIX, Solaris and HPUX.  If a crash happens on only one platform, we begin to suspect other factors: load intensity, system differences, configuration choices, hardware failures.  It is rare to have a single product require so many diverse technical skills.   My role in support is to understand system configurations, performance issues, and crashes.   An analyst trained in Business Analytics can't be expected to know AIX internals in the depth required to make configuration choices.  Here are some guidelines. AIX C++ Runtime must be at version 11.1.0.4. Check with: $ lslpp -L | grep xlC.aix. OBIEE software will crash if xlC.aix.rte is downlevel; this is not a "try it" suggestion. The Nov 2011 11.1.0.4 version is appropriate for all AIX versions ( 5, 6, 7 ). Download from here: https://www-304.ibm.com/support/docview.wss?uid=swg24031426 No reboot is necessary to install; it can even be installed while applications are using the current version. Restart the apps, and they will pick up the latest version. AIX 5.3 Technology Level 12 is required when running on Power5, 6, or 7 processors. AIX 6.1 was introduced with the newer Power chips, and we have seen no issues with 6.1 or 7.1 versions. Customers with an unstable deployment and dozens of unexplained crashes became stable after the upgrade. If your AIX system is 5.3, the minimum TL level should be at or higher than this: $ oslevel -s  5300-12-03-1107. IBM typically supports only the two latest versions of AIX ( 6.1 and 7.1, for example).  AIX 5.3 is still supported and popular running in an LPAR. obiee userid limits: $ ulimit -Ha ( hard limits ), $ ulimit -a ( default limits ): core file size (blocks) unlimited; data seg size (kbytes) unlimited; file size (blocks) unlimited; max memory size (kbytes) unlimited; open files 10240; cpu time (seconds) unlimited; virtual memory (kbytes) unlimited. It is best to establish the values in /etc/security/limits; the root user is needed to observe and modify this file. If you modify a limit, you will need to log back in to change it again.  
For example: $ ulimit -c 0 ; $ ulimit -c 2097151 (cannot modify limit: Operation not permitted) ; $ ulimit -c unlimited ; $ ulimit -c (output: 0). There are only two meaningful values for ulimit -c: zero or unlimited. Anything else is likely to produce a truncated core file that cannot be analyzed. Deploy 32-bit or 64-bit? Early versions of OBIEE offered a 32-bit or 64-bit choice to AIX customers. The 32-bit choice was needed if a database vendor did not supply a 64-bit client library. That's no longer an issue and beginning with OBIEE 11, 32-bit code is no longer shipped. A common error that leads to "out of memory" conditions is to accept the 32-bit memory configuration choices on 64-bit deployments.  The significant configuration choices are: Maximum process data (heap) size is in an AIX environment variable: LDR_CNTRL=IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB@MAXDATA=0x... Two thread stack sizes are set in obiee NQSConfig.INI: [ SERVER ] SERVER_THREAD_STACK_SIZE = 0; DB_GATEWAY_THREAD_STACK_SIZE = 0; Sort memory in NQSConfig.INI: [ GENERAL ] SORT_MEMORY_SIZE = 4 MB ; SORT_BUFFER_INCREMENT_SIZE = 256 KB ; Choosing a value for MAXDATA (a small sketch illustrating this arithmetic appears at the end of this checklist): 0x080000000  2GB Default maximum 32-bit heap size ( 8 with 7 zeros ); 0x100000000  4GB 64-bit breaking even with 32-bit ( 1 with 8 zeros ); 0x200000000  8GB 64-bit double 32-bit max; 0x400000000 16GB 64-bit safety. Using a 2GB heap size for a 64-bit process will almost certainly lead to an out-of-memory situation. Registers are twice as big ... consume twice as much memory in the heap. Upgrading to a 4GB heap for a 64-bit process is just "breaking even" with 32-bit. A 32-bit process is constrained by the 32-bit virtual addressing limits.  Heap memory is used for dynamic requirements of obiee software, thread stacks for each of the configured threads, and sometimes for shared libraries. 64-bit processes are not constrained in this way;  extra heap space can be configured for safety against a query that might create a sudden requirement for excessive storage.  If the storage is not available, this query might crash the whole server and disrupt existing users. There is no performance penalty on AIX for configuring more memory than required;  extra memory can be configured for safety.  If there are no other considerations, start with 8GB. Choosing a value for Thread Stack size: zero is the value documented to select an appropriate default for thread stack size.  My preference is to change this to an absolute value, even if you intend to use the documented default;  it provides better documentation and removes the "surprise" factor. There are two thread types that can be configured. GATEWAY is used by a thread pool to call a database client library to establish a DB connection. The default size is 256KB;  many customers raise this to 512KB ( no performance penalty for over-configuring ). This value must be set to 1 MB if Teradata connections are used. SERVER threads are used to run queries.  OBIEE uses recursive algorithms during the analysis of query structures which can consume significant thread stack storage.  It's difficult to provide guidance on a value that depends on data and complexity.  The general notion is to provide more space than you think you need,  "double down" and increase the value if you run out, otherwise inspect the query to understand why it is too complex for the thread stack.  There are protections built into the software to abort a single user query that is too complex, but the algorithms don't cover all situations. 256 KB  The default 32-bit stack size.  
Many customers increased this to 512KB on 32-bit.  A 64-bit server is very likely to crash with this value;  the stack contains mostly register values, which are twice as big.512 KB  The documented 64-bit default.  Some early releases of obiee didn't set this correctly, resulting in 256KB stacks.1 MB  The recommended 64-bit setting.  If your system only ever uses 512KB of stack space, there is no performance penalty for using 1MB stack size.2 MB  Many large customers use this value for safety.  No performance penalty.nqscheduler does not use the NQSConfig.INI file to set thread stack size.If this process crashes because the thread stack is too small, use this to set 2MB:export OBI_BACKGROUND_STACK_SIZE=2048 Shared libraries are not (shared) When application libraries are loaded at run-time, AIX makes a decision on whether to load the libraries in a "public" memory segment.  If the filesystem library permissions do not have the "Read-Other" permission bit, AIX loads the library into private process memory with two significant side-effects:* The libraries reduce the heap storage available.      Might be significant in 32-bit processes;  irrelevant in 64-bit processes.* Library code is loaded into multiple real pages for execution;  one copy for each process.Multiple execution images is a significant issue for both 32- and 64-bit processes.The "real memory pages" saved by using public memory segments is a minor concern.  Today's machines typically have plenty of real memory.The real problem with private copies of libraries is that they consume processor cache blocks, which are limited.   The same library instructions executing in different real pages will cause memory delays as the i-cache ( instruction cache 128KB blocks) are refreshed from real memory.   Performance loss because instructions are delayed is something that is difficult to measure without access to low-level cache fault data.   The machine just appears to be running slowly for no observable reason.This is an easy problem to detect, and an easy problem to correct.Detection:  "genld -l" AIX command produces a list of the libraries used by each process and the AIX memory address where they are loaded.32-bit public segment is 13 ( "dxxxxxxx" ).   private segments are 2-a.64-bit public segment is 9 ( "9xxxxxxxxxxxxxxx") ; private segment is 8.genld -l | grep -v ' d| 9' | sort +2provides a list of privately loaded libraries. Repair: chmod o+r <libname>AIX shared libraries will have a suffix of ".so" or ".a".Another technique is to change all libraries in a selected directory to repair those that might not be currently loaded.   The usual directories that need repair are obiee code, httpd code and plugins, database client libraries and java.chmod o+r /shr/dir/*.a /shr/dir/*.so Configure your system for diagnosticsProduction systems shouldn't crash, and yet bad things happen to good software.If obiee software crashes and produces a core, you should configure your system for reliable transfer of the failing conditions to Oracle Tech Support.  Here's what we need to be able to diagnose a core file from your system.* fullcore enabled. chdev -lsys0 -a fullcore=true* core naming enabled. chcore -n on -d* ulimit must not truncate core. see item 3.* pstack.sh is used to capture core documentation.* obidoc is used to capture current AIX configuration.* snapcore  AIX utility captures core and libraries. Use the proper syntax. $ snapcore -r corename executable-fullpath   /tmp/snapcore will contain the .pax.Z output file.  
It is compressed.* If cores are directed to a common directory, ensure obiee userid can write to the directory.  ( chcore -p /cores -d ; chmod 777 /cores )The filesystem must have sufficient space to hold a crashing obiee application.Use:  df -k  Check the "Free" column ( not "% Used" )  8388608 is 8GB. Disable Oracle Client Library signal handlingThe Oracle DB Client Library is frequently distributed with the sqlplus development kit.By default, the library enables a signal handler, which will document a call stack if the application crashes.   The signal handler is not needed, and definitely disruptive to obiee diagnostics.   It needs to be disabled.   sqlnet.ora is typically located at:   $ORACLE_HOME/network/admin/sqlnet.oraAdd this line at the top of the file:   DIAG_SIGHANDLER_ENABLED=FALSE Disable async query in the RPD connection pool.This might be an obiee 10.1.3.4 issue only ( still checking  )."async query" must be disabled in the connection pools.It was designed to enable query cancellation to a database, and turned out to have too many edge conditions in normal communication that produced random corruption of data and crashes.  Please ensure it is turned off in the RPD. Check AIX error report (errpt).Errors external to obiee applications can trigger crashes.  $ /bin/errpt -aHardware errors ( firmware, adapters, disks ) should be reported to IBM support.All application core files are recorded by AIX;  the most recent ones are listed first. Reserved for something important to say.
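As a quick illustration of the MAXDATA arithmetic discussed earlier in this checklist, the small Python helper below builds the LDR_CNTRL string for a given heap size in GB. It is a sketch only; the helper name is mine, and the flag list simply echoes the example quoted above.

def ldr_cntrl(heap_gb, flags="IGNOREUNLOAD@LOADPUBLIC@PREREAD_SHLIB"):
    """Return an LDR_CNTRL value whose MAXDATA field encodes heap_gb gigabytes."""
    maxdata = "0x%09X" % (heap_gb * 1024 ** 3)   # e.g. 8 GB -> 0x200000000
    return "LDR_CNTRL=%s@MAXDATA=%s" % (flags, maxdata)

for gb in (2, 4, 8, 16):                         # reproduces the table of values quoted above
    print("%2d GB -> %s" % (gb, ldr_cntrl(gb)))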


  • CodePlex Daily Summary for Friday, November 18, 2011

    CodePlex Daily Summary for Friday, November 18, 2011Popular ReleasesDelta Engine: Delta Engine Beta Preview v0.9.1: v0.9.1 beta release with lots of refactoring, fixes, new samples and support for iOS, Android and WP7 (you need a Marketplace account however). If you want a binary release for the games (like v0.9.0), just say so in the Forum or here and we will quickly prepare one. It is just not much different from v0.9.0, so I left it out this time. See http://DeltaEngine.net/Wiki.Roadmap for details.Scrum Task Board Card Creator: TaskCardCreator 2.5.1.0: What's New: Fix of Work Item 484 Fix of Work Item 480 Fix of Work Item 478 Supported Templates: Microsoft Visual Studio Scrum 1.0 Product Backlog Item, Task, Impediment, and Bug MSF for Agile Software Development v5.0 User Story, Task, and BugAllNewsManager.NET: AllNewsManager.NET 1.5: AllNewsManager.NET 1.5. This new version provide several new features, minor/major improvements and bug fixes. Some new features: Comment Report. CkFinder integration with CkEditor. If you are upgrading or making a new installation, please take a look here.ASP.net Awesome Samples (Web-Forms): 1.0 samples: Full Demo VS2008 Very Simple Demo VS2010 (demos for the ASP.net Awesome jQuery Ajax Controls)SharpMap - Geospatial Application Framework for the CLR: SharpMap-0.9-AnyCPU-Trunk-2011.11.17: This is a build of SharpMap from the 0.9 development trunk as per 2011-11-17 For most applications the AnyCPU release is the recommended, but in case you need an x86 build that is included to. For some dataproviders (GDAL/OGR, SqLite, PostGis) you need to also referense the SharpMap.Extensions assembly For SqlServer Spatial you need to reference the SharpMap.SqlServerSpatial assemblySQL Monitor - tracking sql server activities: SQLMon 4.1 alpha 5: 1. added basic schema support 2. added server instance name and process id 3. fixed problem with object search index out of range 4. improved version comparison with previous/next difference navigation 5. remeber main window spliter and object explorer spliter positionAJAX Control Toolkit: November 2011 Release: AJAX Control Toolkit Release Notes - November 2011 Release Version 51116November 2011 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 - Binary – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 - Binary – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found h...MVC Controls Toolkit: Mvc Controls Toolkit 1.5.5: Added: Now the DateRanteAttribute accepts complex expressions containing "Now" and "Today" as static minimum and maximum. Menu, MenuFor helpers capable of handling a "currently selected element". The developer can choose between using a standard nested menu based on a standard SimpleMenuItem class or specifying an item template based on a custom class. Added also helpers to build the tree structure containing all data items the menu takes infos from. Improved the pager. Now the developer ...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.7: Reworked API to be more consistent. See Supported formats table. Added some more helper methods - e.g. OpenEntryStream (RarArchive/RarReader does not support this) Fixed up testsSilverlight Toolkit: Windows Phone Toolkit - Nov 2011 (7.1 SDK): This release is coming soon! 
What's new ListPicker once again works in a ScrollViewer LongListSelector bug fixes around OutOfRange exceptions, wrong ordering of items, grouping issues, and scrolling events. ItemTuple is now refactored to be the public type LongListSelectorItem to provide users better access to the values in selection changed handlers. PerformanceProgressBar binding fix for IsIndeterminate (item 9767 and others) There is no longer a GestureListener dependency with the C...DotNetNuke® Community Edition: 06.01.01: Major Highlights Fixed problem with the core skin object rendering CSS above the other framework inserted files, which caused problems when using core style skin objects Fixed issue with iFrames getting removed when content is saved Fixed issue with the HTML module removing styling and scripts from the content Fixed issue with inserting the link to jquery after the header of the page Security Fixesnone Updated Modules/Providers ModulesHTML version 6.1.0 ProvidersnoneDotNetNuke Performance Settings: 01.00.00: First release of DotNetNuke SQL update queries to set the DNN installation for optimimal performance. Please review and rate this release... (stars are welcome)SCCM Client Actions Tool: SCCM Client Actions Tool v0.8: SCCM Client Actions Tool v0.8 is currently the latest version. It comes with following changes since last version: Added "Wake On LAN" action. WOL.EXE is now included. Added new action "Get all active advertisements" to list all machine based advertisements on remote computers. Added new action "Get all active user advertisements" to list all user based advertisements for logged on users on remote computers. Added config.ini setting "enablePingTest" to control whether ping test is ru...QuickGraph, Graph Data Structures And Algorithms for .Net: 3.6.61116.0: Portable library build that allows to use QuickGraph in any .NET environment: .net 4.0, silverlight 4.0, WP7, Win8 Metro apps.Devpad: 4.7: Whats new for Devpad 4.7: New export to Rich Text New export to FlowDocument Minor Bug Fix's, improvements and speed upsC.B.R. : Comic Book Reader: CBR 0.3: New featuresAdd magnifier size and scale New file info view in the backstage Add dynamic properties on book and settings Sorting and grouping in the explorer with new design Rework on conversion : Images, PDF, Cbr/rar, Cbz/zip, Xps to the destination formats Images, Cbz and XPS ImprovmentsSuppress MainViewModel and ExplorerViewModel dependencies Add view notifications and Messages from MVVM Light for ViewModel=>View notifications Make thread better on open catalog, no more ihm freeze, less t...Desktop Google Reader: 1.4.2: This release remove the like and the broadcast buttons as Google Reader stopped supporting them (no, we don't like this decission...) 
Additionally and to have at least a small plus: the login window now automaitcally logs you in if you stored username and passwort (no more extra click needed) Finally added WebKit .NET to the about window and removed Awesomium MD5-Hash: 5fccf25a2fb4fecc1dc77ebabc8d3897 SHA-Hash: d44ff788b123bd33596ad1a75f3b9fa74a862fdbFluent Validation for .NET: 3.2: Changes since 3.1: Fixed issue #7084 (NotEmptyValidator does not work with EntityCollection<T>) Fixed issue #7087 (AbstractValidator.Custom ignores RuleSets and always runs) Removed support for WP7 for now as it doesn't support co/contravariance without crashing.Rawr: Rawr 4.2.7: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr AddonWe now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including bag and bank items) like Char...VidCoder: 1.2.2: Updated Handbrake core to svn 4344. Fixed the 6-channel discrete mixdown option not appearing for AAC encoders. Added handling for possible exceptions when copying to the clipboard, added retries and message when it fails. Fixed issue with audio bitrate UI not appearing sometimes when switching audio encoders. Added extra checks to protect against reported crashes. Added code to upgrade encoding profiles on old queued items.New Projects3D Image Analysis: This is a technology development project. Obective is to create a intelligent Machine Vision system. This will be making use of Microsoft Kinect and PCL. ASP.net Awesome Samples (Web-Forms): samples for ASP.net Awesome jQuery Ajax Controls ( www.aspnetawesome.com ) Demonstrating the following controls: AjaxDropdown, Lookup, MultiLookup, AjaxRadioList, AjaxCheckboxList and AjaxRadioListCocoon: Cocoon is a framework to support the development of .Net Windows 8 Metro-style applications, in particular those that link to web services. It simplifies accessing, displaying and editing data using standard Metro controls, and allows easy application of the MVVM pattern.Dagens: Windows Phone 7.5 application that locates places where you can get a good lunch at A fixed price. The product will be localised for Swedish, Danish and Norwegian traditional lunchtime market. DataSift: DataSift API This is the official C# library for accessing the DataSift API. See the example projects for some simple example usage. See https://github.com/datasift/datasift-csharp for the most up to date revision DynaCache: A small C# library that allows you to autmatically cache the output from functions. No longer will you have to write boilerplate code to retrieve or store results!Forgotten Runes - A community based, motion controlled fantasy RPG: It's a motion controlled 3D fantasy RPG, powered by CryEngine 3 FreeSDK. PS Move is used for motion control, but Kinect support might be included later. It's community based, everybody can join and help us. The smallest ideas are welcome too! ;)FujiyBlog: A simple Open Source Blog using ASP.NET MVC 3, jQuery, Entity Framework Code First and SQL CE 4 or SQL Server 2008. 
Features: -Multi-author support -Widgets -Themes -Comments (with moderation) -BlogML import -Tags -Categories -WebFarm Support (using SQL Server) -Multi-Language support The main motivation for creating this blog is to analyze the latest technologies.Gemcraft Labyrinth Summon Helper: Gemcraft Labyrinth Summon Helper is a calculator created to help GemCraft Labyrinth players to choose the best gem grade (ie: the one with the maximum mana profit) to throw at a wave stone to summon monsters.gkom: GKOMLoA: PL: Podstawa gry bez tekstur, modeli, map i skryptów. EN: Base game without textures, models, maps and scripts.Mickey: A project to explore building Domain objects for certain industriesNAVI - Navigational Aids for the Visually Impaired: NAVI is a navigational aid for visually impaired based on Microsoft's Kinect.Office 2007 Multiple windows: A simple application to allow Office 2007 applications to appear in multiple windows.Picture Organizer: This project will focus on a nice, clean and fast interface to organize albums and pictures on Facebook. Upload pictures on facebook, create new facebook albums, resize pictures before sending them to facebook. Project will be based on the Caliburn.Micro framework in c#.NET WPF!PixelGuess: Guess the pixels-game.Service Validation Libraries Orchard module: Contains libraries that can be used by other modules to help validation in service classesWP7 Demos: Wp7 Demos is a small project where many Windows Phone 7 features are combined in an easy to browse application. Newcomers can see many of the great features of WP7 in one place. Coded in XAML / C#. You can also download the source code directly from petestockley.com/wp7demosYet Another System Monitor: Distributed (agent based) system monitoring system with a web (MVC3) based dashboard. Includes: - Performance (CPU / RAM / Disk Space) monitors - URL Monitoring from any node - Service Monitoring????: ?????????,???????????、?????,???????????。 ??????????,????????。 ???????,???????,?????,?????。


  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
MySQL Enterprise Backup (MEB) does hot backup of innodb data and log files. Up to MEB 3.6.1, the user backs up only the innodb tables in a 3-step process: STEP 1. Take backup using --only-innodb option STEP 2. Temporarily make the table read only by executing “FLUSH TABLES WITH READ LOCK” STEP 3. Manually copy the .frm files of innodb tables to the destination directory where backup is stored. MEB 3.7.0 has an enhancement to innodb file copying. The .frm files get copied along with the hot backup done for innodb files. I would like to make the blog a little interactive by explaining the feature as answers: 1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup. 2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to do a copy of .frm files without locking the tables during the backup operation itself. This is to reduce the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files for InnoDB tables during the backup operation. 3. How is data consistency ensured? MEB does validation of the .frm files after copying by comparing with the server directory to see if the timestamp of any of the .frm files is greater than the saved system time (check .frm time).  This change in timestamp of the .frm files will show if a table is altered during the process of backup. The total number of .frm files in the server directory is also verified against the copied contents. If the number of .frm files is less compared to the server directory, it shows that table/tables have been dropped during the process of backup. If the number of .frm files is more compared to the server directory, it shows that new table/tables have been created during the backup operation. 4. How does MEB handle data inconsistency? MEB copies the .frm files through several iterations, does the validation and throws a WARNING if there is any inconsistency found in the .frm files at the end of the backup operation. This means the user is warned of some DDL operations that had occurred during the backup operation, and has to manually copy the .frm files or do a backup again. 5. What is the option and explain its usage? The option introduced is --only-innodb-with-frm which does an optimistic copy of .frm files without locking. This can be used when the user wants to back up only innodb tables along with .frm files. The option can take one of the 2 values: all | related. --only-innodb-with-frm=all does a copy of all .frm files of all innodb tables. --only-innodb-with-frm=related works in conjunction with the --include option. This is to allow partial backup of .frm files corresponding to the tables specified in --include. Let me show the usage with example output: ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll        --only-innodb-with-frm=all backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". 
--------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          =  /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb/  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrmAll/datadir  innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:36:22 mysqlbackup: INFO: Copying log... 120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format). 120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/' 120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/ mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:36:25 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrmAll/datadir/innodb1/ table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd Here the backup directory contains all the .frm files of all the innodb tables. ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup MySQL Enterprise Backup version 3.7.1 [2012/06/05] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved. INFO: Starting with following command line ... 
./mysqlbackup -uroot --backup-dir=/logs/backup371frm        --include=innodb1.table3.* --only-innodb-with-frm=related backup INFO: Got some server configuration information from running server. IMPORTANT: Please check that mysqlbackup run completes successfully.            At the end of a successful 'backup' run mysqlbackup            prints "mysqlbackup completed OK!". --------------------------------------------------------------------                       Server Repository Options: --------------------------------------------------------------------  datadir                          = /mysql/trydb/  innodb_data_home_dir             =    innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /mysql/trydb  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 --------------------------------------------------------------------                       Backup Config Options: --------------------------------------------------------------------  datadir                          =  /logs/backupWithFrm/datadir  innodb_data_home_dir             =  /logs/backupWithFrm/datadir  innodb_data_file_path            =  ibdata1:10M:autoextend  innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir  innodb_log_files_in_group        =  2  innodb_log_file_size             =  5242880 mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162 mysqlbackup: INFO: Uses posix_fadvise() for performance optimization. mysqlbackup: INFO: The --include option specified: innodb1.table3.* mysqlbackup: INFO: System tablespace file format is Antelope. mysqlbackup: INFO: Found checkpoint at lsn 1656792. mysqlbackup: INFO: Starting log scan from lsn 1656320. 120817 15:25:47 mysqlbackup: INFO: Copying log... 120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.          We wait 1 second before starting copying the data files... 120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format). 120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format). mysqlbackup: INFO: Opening backup source directory '/mysql/trydb' 120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb mysqlbackup: INFO: Copying innodb data and logs during final stage ... mysqlbackup: INFO: A copied database page was modified at 1656792.          (This is the highest lsn found on page)          Scanned log up to lsn 1656792.          Was able to parse the log up to lsn 1656792.          Maximum page number for a log record 0 mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds 120817 15:25:51 mysqlbackup: INFO: Full backup completed! mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm' -------------------------------------------------------------   Parameters Summary          -------------------------------------------------------------   Start LSN                  : 1656320   End LSN                    : 1656792 ------------------------------------------------------------- mysqlbackup completed OK! bash$ ls /logs/backupWithFrm/datadir/innodb1/ table3.frm table3.ibd Thus the backup directory contains only the .frm file matching the innodb table name specified in --include option. In a nutshell, we present our great new option --only-innodb-with-frm which is a true hot InnoDB-only backup with .frm files, but with an additional check, if any DDL happened during the backup. 
If DDL did happen, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who keep all of their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html
STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.

    Read the article

  • Increase Max Pool Size ERROR when using SYBASE ASE ADO.NET data provider

    - by Brani
    I wrote a program in VB.NET (Visual Studio 2003) that connects to a Sybase ASE database using the ADO.NET data provider. Recently, after a hard disk failure, I restored the program's code from a (rather old) backup. Now the connection fails with a message I have never seen before. Here is the code and the error message: Dim cn As New AseConnection("Data Source='my_server';Port='5000';UID='sa';PWD='my_pwd';Database='my_db';") cn.Open() Error message: Sybase.Data.AseClient.AseException - Cannot allocate more connections. Connection pool is at maximum. Increase Max Pool Size Can anybody help me?

    Read the article

  • JQuery UI Autocomplete TextBox in ASP.NET C# with ArrayList

    - by Avishek Kumar
    Hello, I am an ASP.NET coder looking for the simplest possible tutorial on how to build a jQuery autocomplete in ASP.NET C# with an ArrayList, because I'm tired of looking up tutorials that teach me nothing. I'm referring to the http://jqueryui.com/demos/autocomplete/ library for the autocomplete. Here is exactly what I want: 1) an ASP.NET text box with autocomplete added to it; 2) it fetches records from "search.aspx?q=searchtext" and gets back a maximum of 5 matching results built from a C# ArrayList; 3) it shows those 5 matching records below the text box, as on the jQuery UI demo page; 4) it keeps doing the autocomplete work for every new or changed value in the text box. What I don't want to see in the example: JSON, XML, .ASMX, LINQ, or other theory-heavy material. I would be thankful to anyone who can help me out.
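    A minimal jQuery sketch of one way to wire this up, assuming search.aspx returns its (at most five) suggestions as plain text, one per line; the page name, the text-box id, and the response format are assumptions to adapt, not code from the question:

    // Sketch only: assumes jQuery and jQuery UI are loaded, and that search.aspx
    // answers "?q=term" with up to 5 matching suggestions, one per line.
    $(function () {
        $("#txtSearch").autocomplete({
            minLength: 1,   // query again on every keystroke after the first character
            source: function (request, response) {
                $.get("search.aspx", { q: request.term }, function (data) {
                    // Turn the newline-separated text into the array the widget expects.
                    response($.trim(data).split("\n").slice(0, 5));
                });
            }
        });
    });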

    Read the article

  • Ruby on Rails deployment, on "thin" server with lot of attachments

    - by Horace Ho
    A lot of PDFs are stored inside MySQL as a BLOB field, one per PDF file, with an average file size of 500K. The Rails app streams the :binary data as a file download when a user clicks the download link. Assuming a maximum of 5 users downloading 5 PDFs concurrently, what deployment parameters should I be aware of? For example, in the case of thin: thin start --servers 3 Is --servers 3 good enough (or are 5 or more needed) for the above example? The second question is whether 'thin' is a capable solution at all. Thanks!

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
    I currently set up a script to restart my http servers + php5 fpm but can't get it to work. I have googled and have found that mostly permissions are the problems of my error but can't figure it out. I start my script using /usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http This is the output in my syslog on the node I want to restart Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028 Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts Jun 27 06:29:35 bart nrpe[8926]: Handling the connection... Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run... Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output: Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed. If I run the command myself it runs fine (but asks for a password) (nagios user) This are the script permission and the script contents. -rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart #!/bin/bash echo "ok" /etc/init.d/nginx stop /etc/init.d/nginx start /etc/init.d/php5-fpm stop /etc/init.d/php5-fpm start echo "done" I also added this line to visudo nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ My local nagios nrpe.cfg ############################################################################# # Sample NRPE Config File # Written by: Ethan Galstad ([email protected]) # # # NOTES: # This is a sample configuration file for the NRPE daemon. It needs to be # located on the remote host that is running the NRPE daemon, not the host # from which the check_nrpe client is being executed. ############################################################################# # LOG FACILITY # The syslog facility that should be used for logging purposes. log_facility=daemon # PID FILE # The name of the file in which the NRPE daemon should write it's process ID # number. The file is only written if the NRPE daemon is started by the root # user and is running in standalone mode. pid_file=/var/run/nagios/nrpe.pid # PORT NUMBER # Port number we should wait for connections on. # NOTE: This must be a non-priviledged port (i.e. > 1024). # NOTE: This option is ignored if NRPE is running under either inetd or xinetd server_port=5666 # SERVER ADDRESS # Address that nrpe should bind to in case there are more than one interface # and you do not want nrpe to bind on all interfaces. # NOTE: This option is ignored if NRPE is running under either inetd or xinetd #server_address=127.0.0.1 # NRPE USER # This determines the effective user that the NRPE daemon should run as. # You can either supply a username or a UID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_user=nagios # NRPE GROUP # This determines the effective group that the NRPE daemon should run as. # You can either supply a group name or a GID. # # NOTE: This option is ignored if NRPE is running under either inetd or xinetd nrpe_group=nagios # ALLOWED HOST ADDRESSES # This is an optional comma-delimited list of IP address or hostnames # that are allowed to talk to the NRPE daemon. # # Note: The daemon only does rudimentary checking of the client's IP # address. I would highly recommend adding entries in your /etc/hosts.allow # file to allow only the specified host to connect to the port # you are running this daemon on. 
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!

    Read the article

  • jQuery input mask length

    - by Tim
    Hey all, I have an input field and I want to limit it to alphanumeric characters (A-Z, a-z, 0-9) only, with a MINIMUM field length of 5 and a maximum length of 15 characters. Does anyone know how I can do this using jQuery? I'm trying to use the jQuery input mask by digitalBush - http://digitalbush.com/projects/masked-input-plugin/ The problem is that UNLESS I enter 15 characters, the field goes blank. If I enter "012345678912345" (15 characters) in the input field then the value is stored and it's fine. But if I enter "12345" as a username, when that input box loses focus its value goes back to being blank. Is there something that I can change in the "definitions" or options somewhere that fixes this? Many thanks for your help :) Tim
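    If memory serves for the digitalBush plugin, * in a mask matches an alphanumeric character and everything after a ? is optional, so a mask of five required plus ten optional characters may do what you want; treat those character classes and the #username selector as assumptions to check against the plugin docs:

    // Sketch only: assumes the digitalBush masked-input plugin is loaded, that '*'
    // means [A-Za-z0-9], and that '?' marks the start of the optional portion.
    $("#username").mask("*****?**********");   // 5 required + up to 10 optional characters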

    Read the article

  • jquery - check length of input field?

    - by KnockKnockWhosThere
    The code below is intended to enable the submit button once the user clicks in the textarea field. It works, but I'm trying to also make it so that it's only enabled if there's at least one character in the field. I tried wrapping it in: if($(this).val().length > 1) { } But, that didn't seem to work... Any ideas? $("#fbss").focus(function(){ $(this).select(); if($(this).val()=="Default text") { $(this).val(""); $("input[id=fbss-submit]").removeClass(); $("input[id=fbss-submit]").attr('disabled',false); $("input[id= fbss-submit]").attr('class','.enableSubmit'); if($('.charsRemaining')) { $('.charsRemaining').remove(); $("textarea[id=fbss]").maxlength({ maxCharacters: 190, status: true, statusClass: 'charsRemaining', statusText: 'characters left', notificationClass: 'notification', showAlert: false, alertText: 'You have exceeded the maximum amount of characters', slider: false }); } });
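    One hedged way to approach this is to re-check the value on every keyup/change instead of only on focus, and note that .length > 1 only passes once there are at least two characters; use > 0 for "at least one". The sketch below reuses the #fbss and #fbss-submit ids from the snippet above but is not drop-in code:

    // Sketch: enable the submit button whenever the textarea has real content.
    function toggleSubmit() {
        var text = $.trim($("#fbss").val());
        var hasContent = text.length > 0 && text !== "Default text";
        $("#fbss-submit").attr("disabled", !hasContent);
    }
    $("#fbss").bind("keyup change", toggleSubmit);
    toggleSubmit();   // set the initial state on page load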

    Read the article

  • The width of items in select box is different in IE 8, Firefox and Google Chrome

    - by Purushotham
    I have a problem with a select box in my application. I specified a width for the select box, but some of the items in it are wider than that width, and each browser treats this differently. In IE 8 the items are shrunk to fit the select box, whereas Google Chrome and Firefox expand the select box to the width of the widest item. I want to control the width of the select box. Is there any way to do this?
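    A common workaround, sketched below rather than taken from the question, is to give the select a fixed width and let IE 8 temporarily expand it while the list is open; the id and width are placeholders:

    // Sketch: keep the select at a fixed width, but let IE 8 show the full option
    // text while the dropdown is open. Assumes a select element with id="mySelect".
    $("#mySelect")
        .css("width", "200px")   // the fixed width you want
        .bind("focus mouseover", function () { $(this).css("width", "auto"); })
        .bind("blur change", function () { $(this).css("width", "200px"); });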

    Read the article

  • ASP.NET 4 - IIS 7 - Request timed out - Request timed out

    - by sharru
    My website is running on ASP.NET v4, IIS 7, Windows Server 2008. The CPU is running at 20-30% and the site is responding quickly. Every 2-5 minutes I'm receiving the following error: Event code: 3001 Event message: The request has been aborted. Exception type: HttpException Exception message: Request timed out. Request information: Request URL: http://www.xxxx.com/Services/AxRefresh.asmx/AxUpdate Request path: /Services/AxRefresh.asmx/AxUpdate User host address: 84.110.251.198 User: Is authenticated: False Authentication Type: Thread account name: NT AUTHORITY\NETWORK SERVICE I read that the error is related to the maximum concurrent requests limit (http://support.microsoft.com/kb/821268), but then I found out that this limitation changed on IIS 7 and is no longer relevant (http://msdn.microsoft.com/en-us/library/dd560842(VS.100).aspx). Any other ideas what the problem could be or where to start looking? Thanks!

    Read the article

  • SSIS Send Mail task limited to 255 characters in address?

    - by CodeByMoonlight
    Is this a bug, or some hidden limit I can't find any documentation about? When creating a Send Mail Task in SSIS 2008, the TO, CC and BCC fields seem to have a hidden limit of 255 characters. I'm aware this is the standard limit for individual email addresses, but all three are commonly used for multiple addresses and the comment for the To field even says "separate the recipients with semicolons". But nevertheless, it truncates the address to a maximum of 255 characters. Bug, non-obvious standard, or something I'm missing? Any way around this? We were trying to build a CC list dynamically, but this has caused a rethink.

    Read the article

  • How to record an iPad screencast

    - by hgpc
    How do you record an iPad screencast at full scale? I have an iMac with maximum resolution 1680x1050 and the simulator doesn't fit the screen in portrait orientation. It does fit in landscape orientation. Reducing the scale to 50% is not an option because the end result is too small. If the scale could be reduced slightly it would be fine, but not 50%. Is it possible to put the simulator in landscape orientation and still keep the app in portrait mode? Then I could simply rotate the resulting video to get a portrait screencast.

    Read the article

  • Disable scrolling in an iPhone web application?

    - by Stefan Kendall
    Is there any way to completely disable web page scrolling in an iPhone web app? I've tried numerous things posted on google, but none seem to work. Here's my current header setup: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0; user-scalable=no;"/> <meta name="apple-mobile-web-app-capable" content="yes"/> document.body.addEventListener('touchmove', function(e){ e.preventDefault(); }); doesn't seem to work.
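    Two things are worth checking with that handler (a hedged sketch, not a guaranteed fix): document.body is null if the script runs in the head before the body exists, so attach the listener once the DOM is ready, and newer browsers treat touch listeners as passive by default, so preventDefault() is ignored unless { passive: false } is passed:

    // Sketch: block page panning once the DOM is ready. Older browsers simply
    // treat the { passive: false } options object as a truthy useCapture flag.
    document.addEventListener('DOMContentLoaded', function () {
        document.body.addEventListener('touchmove', function (e) {
            e.preventDefault();   // stop the page itself from scrolling
        }, { passive: false });
    });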

    Read the article

  • django admin: Add a "remove file" field for Image- or FileFields

    - by w-
    I was hunting around the net for a way to easily let users blank out ImageFields/FileFields they have set in the admin. I found this: http://www.djangosnippets.org/snippets/894/ What was really interesting to me was the code posted in the comment by rfugger: remove_the_file = forms.BooleanField(required=False) def save(self, *args, **kwargs): object = super(self.__class__, self).save(*args, **kwargs) if self.cleaned_data.get('remove_the_file'): object.the_file = '' return object When I tried to use this in my own form, I basically added this to my admin.py, which already had a BlahAdmin: class BlahModelForm(forms.ModelForm): class Meta: model = Blah remove_img01 = forms.BooleanField(required=False) def save(self, *args, **kwargs): object = super(self.__class__, self).save(*args, **kwargs) if self.cleaned_data.get('remove_img01'): object.img01 = '' return object When I run it I get this error: maximum recursion depth exceeded while calling a Python object at this line: object = super(self.__class__, self).save(*args, **kwargs) When I think about it for a bit, it seems obvious that it is just infinitely calling itself, which causes the error. My problem is that I can't figure out the correct way to do this. Any suggestions? Thanks
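    For what it is worth, the usual culprit with that pattern is super(self.__class__, self): the admin generates a subclass of the form, so self.__class__ is no longer BlahModelForm and the call keeps re-entering the same save(). A sketch with the class named explicitly, where the model import path is an assumption:

    from django import forms
    from myapp.models import Blah   # assumed import path for the Blah model

    class BlahModelForm(forms.ModelForm):
        remove_img01 = forms.BooleanField(required=False)

        class Meta:
            model = Blah

        def save(self, *args, **kwargs):
            # Name the parent class explicitly; with super(self.__class__, self)
            # the admin-generated subclass makes this call recurse into itself.
            obj = super(BlahModelForm, self).save(*args, **kwargs)
            if self.cleaned_data.get('remove_img01'):
                obj.img01 = ''   # blank out the ImageField
            return obj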

    Read the article

  • django select max field from mysql when column is varchar

    - by doza
    Hi, Using Django 1.1, I am trying to select the maximum value from a varchar column (in MySQL.) The data stored in the column looks like: 9001 9002 9017 9624 10104 11823 (In reality, the numbers are much bigger than this.) This worked until the numbers incremented above 10000: Feedback.objects.filter(est__pk=est_id).aggregate(sid=Max('sid')) Now, that same line would return 9624 instead of 11823. I'm able to run a query directly in the DB that gives me what I need, but I can't figure out the best way to do this in Django. The query would be: select max(sid+0) from Feedback; Any help would be much appreciated. Thanks!
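    Since the varchar column compares lexically, one hedged option on Django 1.1 is to drop to raw SQL for just this value and reuse the max(sid+0) trick from the question; the table and column names below are guesses at what the ORM would generate:

    from django.db import connection

    def max_numeric_sid(est_id):
        # Assumes the Feedback model maps to a myapp_feedback table with an
        # est_id column; adjust both names to the real schema.
        cursor = connection.cursor()
        cursor.execute(
            "SELECT MAX(sid + 0) FROM myapp_feedback WHERE est_id = %s", [est_id])
        return cursor.fetchone()[0]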

    Read the article

  • How to set viewport meta for iPhone that handles rotation properly?

    - by Caffeine Coma
    So I've been using: <meta name="viewport" content="width=device-width; initial-scale=1.0; maximum-scale=1.0;"/> to get my HTML content to display nicely on the iPhone. It works great until the user rotates the device into landscape mode, where the display remains constrained to 320px. Is there a simple way to specify a viewport that changes in response to the user changing the device orientation? Or must I resort to Javascript to handle that?
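    If it does come down to JavaScript, a minimal sketch is to rewrite the meta tag's content on rotation; the 320/480 widths and the selector are assumptions for older iPhones, not a tested fix:

    // Sketch: refresh the viewport width when the device rotates so landscape
    // is no longer stuck at the 320px portrait width.
    var viewport = document.querySelector('meta[name="viewport"]');
    window.addEventListener('orientationchange', function () {
        var landscape = Math.abs(window.orientation) === 90;
        viewport.setAttribute('content',
            'width=' + (landscape ? 480 : 320) + '; initial-scale=1.0; maximum-scale=1.0;');
    });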

    Read the article

  • Horizontal Scrolling Flash Game/Large Horizontal Scene

    - by Nathan
    Hello, I'm currently learning Flash (CS4, AS3) and am creating a game. I currently have one .fla file with 4 scenes; I move from left to right within a scene, then on to scene 2 and again go from left to right. It's the kind of game where items pop up that need to be clicked on for points. Is there any way I can combine these into one scene? Flash only allows a maximum stage width of 2880px. The reason I ask is that the transition between the scenes is rubbish, and my ActionScript is not working correctly between scenes (it loses values). Any help would be greatly appreciated! Nathan

    Read the article
